Abstract
High-level spinal cord injury (SCI) in humans causes paralysis below the neck. Functional electrical stimulation (FES) technology applies electrical current to nerves and muscles to restore movement, and controllers for upper-extremity FES neuroprostheses calculate the stimulation patterns needed to produce desired arm movements. However, currently available FES controllers have yet to restore natural movements. Reinforcement learning (RL) is a reward-driven control technique that can employ user-generated rewards, allowing human preferences to be incorporated into training. To test this concept with FES, we conducted simulation experiments using computer-generated 'pseudo-human' rewards. Rewards with varying properties were used to train an actor-critic RL controller for a planar, two-degree-of-freedom biomechanical human arm model performing reaching movements. Results demonstrate that sparse, delayed pseudo-human rewards permit stable and effective RL controller learning. Learning success scales with reward frequency, and human-scale sparse rewards permit greater learning than exclusively automated rewards. The diversity of the training task set did not affect learning, and trained controllers remained stable over the long term. Human-generated rewards may therefore be useful for training RL controllers for upper-extremity FES systems. Our findings represent progress toward human-machine teaming in the control of upper-extremity FES systems, combining the human user's movement preferences with the learning capabilities of RL algorithms to produce more natural arm movements.
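
To make the training setup concrete, the following is a minimal Python sketch, under stated assumptions, of an actor-critic-style controller trained with sparse, delayed pseudo-human rewards on a toy kinematic two-joint arm. The environment, feature construction, and constants (e.g., `REWARD_PROBABILITY`, `EPISODE_STEPS`, segment lengths) are illustrative assumptions; they do not reproduce the paper's biomechanical arm model or FES stimulation interface.

```python
"""
Minimal sketch (assumptions labeled): an actor-critic-style controller with a
Gaussian policy, trained on a toy planar 2-DOF kinematic arm using sparse,
delayed "pseudo-human" rewards. The action is a joint-velocity command, a
hypothetical stand-in for FES stimulation patterns.
"""
import numpy as np

L1, L2 = 0.3, 0.3          # upper-arm / forearm segment lengths (m), assumed
DT = 0.05                  # control time step (s), assumed
EPISODE_STEPS = 60         # delayed reward: feedback only after the reach ends
REWARD_PROBABILITY = 0.3   # sparse reward: pseudo-human grades ~30% of trials

def forward_kinematics(q):
    """Hand position of the planar 2-link arm for joint angles q = (q1, q2)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def features(q, target):
    """Simple state features: joint angles, target, hand-to-target error, bias."""
    err = target - forward_kinematics(q)
    return np.concatenate([q, target, err, [1.0]])

def pseudo_human_reward(q, target, rng):
    """Sparse, delayed grade (0..1) of the completed reach, given only sometimes."""
    if rng.random() > REWARD_PROBABILITY:
        return None                              # no feedback on this trial
    dist = np.linalg.norm(target - forward_kinematics(q))
    return float(np.clip(1.0 - dist / (L1 + L2), 0.0, 1.0))

def train(episodes=3000, alpha_actor=1e-3, alpha_critic=1e-2, sigma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = 7                                   # length of features(...)
    w_critic = np.zeros(n_feat)                  # linear value estimate V(s)
    W_actor = np.zeros((2, n_feat))              # mean of Gaussian policy over 2 joint velocities

    for _ in range(episodes):
        q = rng.uniform(0.2, 1.2, size=2)        # random start posture
        target = forward_kinematics(rng.uniform(0.2, 1.2, size=2))  # reachable target
        trajectory = []                          # store (features, action noise) for the delayed update

        for _ in range(EPISODE_STEPS):
            phi = features(q, target)
            noise = rng.normal(0.0, sigma, size=2)
            u = W_actor @ phi + noise            # exploratory joint-velocity command
            trajectory.append((phi, noise))
            q = q + DT * u                       # kinematic "plant" update (assumption)

        reward = pseudo_human_reward(q, target, rng)
        if reward is None:
            continue                             # unreviewed trials produce no parameter update

        # Delayed, episode-level update: every step of the reach shares one reward.
        for phi, noise in trajectory:
            td_error = reward - w_critic @ phi   # reward vs. critic's estimate of this state
            w_critic += alpha_critic * td_error * phi
            W_actor += alpha_actor * td_error * np.outer(noise / sigma**2, phi)

    return W_actor, w_critic

if __name__ == "__main__":
    W_actor, w_critic = train()
    print("trained actor weights:\n", W_actor)
```

The design point this sketch mirrors from the abstract is reward sparsity and delay: only a fraction of reaches receive a reward, the reward arrives only after the movement finishes, and unreviewed trials contribute nothing to learning.
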
| Original language | English |
| --- | --- |
| Article number | 7478097 |
| Pages (from-to) | 723-733 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Human-Machine Systems |
| Volume | 46 |
| Issue number | 5 |
| DOIs | |
| State | Published - Oct 2016 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2016 IEEE.
Funding
This work was supported by the National Institutes of Health fellowship #TRN030167, Veterans Administration Rehabilitation Research and Development predoctoral fellowship "Reinforcement Learning Control for an Upper-Extremity Neuroprosthesis," NIH Training Grant T32 EB004314, and Ardiem Medical Arm Control Device Grant #W81XWH0720044.
| Funders | Funder number |
| --- | --- |
| Ardiem Medical Arm Control Device | W81XWH0720044 |
| Veterans Administration Rehabilitation Research and Development | |
| National Institutes of Health | TRN030167, T32 EB004314 |
Keywords
- control
- functional electrical stimulation (FES)
- human-machine teaming
- modeling
- rehabilitation
- reinforcement learning (RL)
- simulation
- upper extremity