
Weight Distribution Estimation in Lower-Limb Exoskeletons via Deep Learning: Methods

by Exoself, July 3rd, 2024

Too Long; Didn't Read

We validated our proposed method on a commercially available lower-limb exoskeleton (ExoMotus-X2, Fourier Intelligence, Singapore). In this work, we calculated the weight distribution as the ratio of the vertical GRF of each foot, similar to [7]. The method is implemented on the CANOpen Robot Controller (CORC) [15] platform.

Authors:

(1) Clement Lhos, Legs and Walking Lab of Shirley Ryan AbilityLab, Chicago, IL, USA;

(2) Emek Barış Küçüktabak, Legs and Walking Lab of Shirley Ryan AbilityLab, Chicago, IL, USA and Center for Robotics and Biosystems, Northwestern University, Evanston, IL, USA;

(3) Lorenzo Vianello, Legs and Walking Lab of Shirley Ryan AbilityLab, Chicago, IL, USA;

(4) Lorenzo Amato, Legs and Walking Lab of Shirley Ryan AbilityLab, Chicago, IL, USA and The Biorobotics Institute, Scuola Superiore Sant’Anna, 56025 Pontedera, Italy and Department of Excellence in Robotics & AI, Scuola Superiore Sant’Anna, 56127 Pisa, Italy;

(5) Matthew R. Short, Legs and Walking Lab of Shirley Ryan AbilityLab, Chicago, IL, USA and Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA;

(6) Kevin Lynch, Center for Robotics and Biosystems, Northwestern University, Evanston, IL, USA;

(7) Jose L. Pons, Legs and Walking Lab of Shirley Ryan AbilityLab, Chicago, IL, USA, Center for Robotics and Biosystems, Northwestern University, Evanston, IL, USA and Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA.

Abstract and I Introduction

II Methods

III Results

IV Discussion

V. Conclusion, Acknowledgment, and References

II. METHODS

A. Exoskeleton


We validated our proposed method on a commercially available lower-limb exoskeleton (ExoMotus-X2, Fourier Intelligence, Singapore). The exoskeleton has four active degrees of freedom (DoFs) at the hips and knees, and two passive DoFs at the ankles (Fig. 1B). It was modified to include strain gauges to estimate joint torques, and an Inertial Measurement Unit (IMU) on the backpack to measure the orientation of the trunk [7].


In this study, to measure GRFs, and therefore calculate weight distribution, two distinct devices were employed: a sensorized, split-belt treadmill and force-sensitive resistor (FSR) footplates. The treadmill is equipped with eight 3-DoF force plates (9047B, Kistler), offering highly accurate readings and a simple calibration process to zero the measured forces. However, its use restricts the practical applications of the exoskeleton and prevents its integration into overground scenarios. To address this limitation, FSR footplates were installed beneath the soles of the exoskeleton, facilitating the measurement of forces between the user-exoskeleton couple and the ground. Each footplate is equipped with 16 force-sensitive resistors, amounting to a total of 32 sensors across both footplates. These sensors interface with the ground through a rigid aluminum sole and rubber bearings, adding a degree of compliance. Although these sensors are more complex to calibrate and set up in comparison to the treadmill, they allow the exoskeleton to be used during overground walking.
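As a rough illustration of how a per-foot vertical GRF can be assembled from these readings, the sketch below sums the 16 calibrated channels of one footplate; the affine calibration model and the numeric values are assumptions for illustration, not the actual X2 footplate interface.

import numpy as np

def foot_grf_from_fsr(raw_counts, gains, offsets):
    # Convert raw FSR counts to forces with a per-channel affine calibration
    # (this calibration model is an assumption) and sum over the 16 channels.
    forces = gains * (np.asarray(raw_counts, dtype=float) - offsets)
    forces = np.clip(forces, 0.0, None)  # FSRs only sense compression
    return float(forces.sum())

# Illustrative values: 16 channels per footplate, 32 sensors in total
gains = np.full(16, 2.5)     # newtons per count (assumed)
offsets = np.full(16, 10.0)  # zero-load reading (assumed)
left_fz = foot_grf_from_fsr(np.full(16, 150.0), gains, offsets)
right_fz = foot_grf_from_fsr(np.full(16, 40.0), gains, offsets)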


GRFs enable the calculation of weight distribution in the user-exoskeleton couple. This allows an accurate and smooth transition between the left and right stance dynamic models of the exoskeleton (Fig. 1A). In this work, we calculated the weight distribution as the ratio of the vertical GRF of each foot, similar to [7].
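A minimal form consistent with this description, assuming the convention that α = 1 corresponds to full weight bearing on the left foot, is

\[
\alpha = \frac{F_{z}^{L}}{F_{z}^{L} + F_{z}^{R}},
\]

where F_z^L and F_z^R denote the vertical GRF under the left and right foot; α then equals 1 in left single stance, 0 in right single stance, and varies continuously during double stance.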



Communication with the motors and sensors is achieved using the CANOpen protocol. The controller is implemented on an open-source ROS- and C++-based platform called the CANOpen Robot Controller (CORC) [15].


B. Weight Distribution via Deep-Learning Regression




The model was trained using the Adam optimizer [18]. We used the MSE loss function to measure and minimize the discrepancy between the predicted (α̂) and actual (α) values obtained from treadmill measurements. Training was conducted over 10 epochs in batches of 256 samples, allowing efficient weight updates while ensuring comprehensive coverage of the data. The LSTM model was trained with the TensorFlow library (v2.15.0, Google), and we employed TensorFlow Lite (v2.13.0, Google) to make predictions in real time. The converted Lite model is called by a Python script that runs in parallel to CORC, which is written in C++; communication between these two components is achieved via ROS.
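As a hedged sketch of this training and conversion pipeline (the layer sizes, input window length, and feature count below are placeholders, not the values used in this work):

import tensorflow as tf

WINDOW = 30      # time steps per input window (placeholder)
N_FEATURES = 12  # kinematic features per time step (placeholder)

# Small LSTM regressor mapping a window of kinematics to the scalar factor alpha_hat
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # alpha is bounded in [0, 1]
])

# Training setup described above: Adam optimizer, MSE loss, 10 epochs, batches of 256
model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
# model.fit(x_train, y_train, epochs=10, batch_size=256)  # x/y come from the treadmill recordings

# Convert the trained model to TensorFlow Lite for real-time use alongside CORC
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("alpha_lstm.tflite", "wb") as f:
    f.write(tflite_model)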



After offline training and validation of its prediction capability, we evaluated the LSTM network for closed-loop prediction of the stance interpolation factor α̂ in two experiments.
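A minimal sketch of the real-time prediction side, assuming a ROS 1 node in Python; the topic name, message type, and publishing rate are illustrative assumptions, and the buffering of the kinematic window (get_latest_window) is a hypothetical helper:

import numpy as np
import rospy
import tensorflow as tf
from std_msgs.msg import Float64

interpreter = tf.lite.Interpreter(model_path="alpha_lstm.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def predict_alpha(window):
    # window: float32 array of shape (WINDOW, N_FEATURES) of recent kinematics
    interpreter.set_tensor(inp["index"], window[np.newaxis, ...].astype(np.float32))
    interpreter.invoke()
    alpha_hat = float(interpreter.get_tensor(out["index"])[0, 0])
    return min(max(alpha_hat, 0.0), 1.0)  # keep the estimate in [0, 1]

rospy.init_node("alpha_predictor")
pub = rospy.Publisher("/exo/alpha_hat", Float64, queue_size=1)  # topic name assumed
rate = rospy.Rate(100)  # illustrative rate; CORC consumes the estimate in its own loop
while not rospy.is_shutdown():
    window = get_latest_window()  # hypothetical helper buffering recent joint states
    pub.publish(Float64(predict_alpha(window)))
    rate.sleep()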


In the first experiment, three users wore the exoskeleton and walked for one minute on a sensorized treadmill in each mode (i.e., haptic transparency and rendering). The stance interpolation factor for the exoskeleton controller was either calculated from the treadmill force plates as ground truth (α) or estimated with the LSTM model (α̂). The measured or estimated value was used as an input to the WECC controller for interaction torque estimation and closed-loop compensation control in real time [7]. The performance of the proposed model was assessed by comparing the spatiotemporal gait features and the resultant interaction torque errors. Interaction torque errors reported and visualized in the plots are calculated using the ground-truth α obtained from the treadmill force plates and divided by each user's weight. We report all performance values as the mean ± standard deviation across users and steps unless otherwise specified. To assess the network's ability to generalize to unobserved walking conditions and different controllers, we evaluated the model at three speeds (0.14 m/s, 0.25 m/s, and 0.47 m/s) and in two exoskeleton control modes (haptic transparency and rendering). Moreover, two of the three test users were not in the training set. Inter-subject variability is removed from the presented box plots by demeaning the data of each condition/subject pair and then adding back the overall mean of each condition, computed across all three subjects.
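As a small sketch of this demeaning step, assuming the per-step values are collected in a pandas DataFrame with subject, condition, and torque_error columns (the column names are illustrative):

import pandas as pd

def remove_intersubject_variability(df, value="torque_error"):
    # Demean each subject/condition pair, then add back the overall mean of the
    # condition computed across all subjects, as described in the text.
    pair_mean = df.groupby(["subject", "condition"])[value].transform("mean")
    cond_mean = df.groupby("condition")[value].transform("mean")
    out = df.copy()
    out[value] = df[value] - pair_mean + cond_mean
    return out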


In the second experiment, two users who were not featured in the training set performed trials of overground walking. Because the deep-learning approach was trained on a dataset of healthy individuals walking on a treadmill, this experiment aims to demonstrate the model's ability to generalize to more naturalistic walking behaviors. In this experiment, α was calculated using the FSR footplates, while α̂ was estimated using the deep-learning approach. The users walked 10 meters at a self-selected speed, repeated three times for each condition; interaction torques and the resultant joint angles were used to assess performance.


TABLE I: Prediction accuracy of the LSTM model (R² on the training and testing sets) with a 300 ms time window and with instantaneous values. We performed a cross-user generalization evaluation by iteratively training on all users (USi) except one and testing on the excluded user.
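The cross-user evaluation in Table I can be sketched as a leave-one-user-out loop; build_model() and load_user_windows() below are hypothetical helpers, the first standing in for the LSTM regressor sketched earlier and the second for assembling input windows and ground-truth α from a set of users' recordings:

from sklearn.metrics import r2_score

users = ["US1", "US2", "US3", "US4"]  # illustrative identifiers
r2_per_user = {}
for held_out in users:
    X_tr, y_tr = load_user_windows([u for u in users if u != held_out])  # hypothetical
    X_te, y_te = load_user_windows([held_out])                           # hypothetical
    model = build_model()  # hypothetical: LSTM regressor as sketched above
    model.fit(X_tr, y_tr, epochs=10, batch_size=256, verbose=0)
    r2_per_user[held_out] = r2_score(y_te, model.predict(X_te, verbose=0))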


This paper is available under the CC BY 4.0 DEED license.

