Human-Like Rewards to Train a Reinforcement Learning Controller for Planar Arm Movement

Kathleen M. Jagodnik, Philip S. Thomas, Antonie J. van den Bogert, Michael S. Branicky, Robert F. Kirsch

Research output: Contribution to journal › Article › peer-review

20 Scopus citations

Abstract

High-level spinal cord injury (SCI) causes paralysis below the neck. Functional electrical stimulation (FES) applies electrical current to nerves and muscles to restore movement, and controllers for upper-extremity FES neuroprostheses compute the stimulation patterns needed to produce desired arm movements. However, currently available FES controllers have yet to restore natural movement. Reinforcement learning (RL) is a reward-driven control technique that can employ user-generated rewards, allowing human preferences to shape training. To test this concept for FES, we conducted simulation experiments using computer-generated 'pseudo-human' rewards. Rewards with varying properties were used to train an actor-critic RL controller for a planar, two-degree-of-freedom biomechanical model of the human arm performing reaching movements. The results demonstrate that sparse, delayed pseudo-human rewards permit stable and effective RL controller learning; that learning success increases with reward frequency; and that sparse rewards delivered at a rate feasible for a human trainer permit greater learning than exclusively automated rewards. The diversity of the training task set did not affect learning, and trained controllers remained stable over the long term. These findings suggest that human-generated rewards are a viable means of training RL controllers for upper-extremity FES systems, and they represent progress toward human-machine teaming in which FES control draws on both the human user's movement preferences and the RL algorithm's learning capability.
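
To make the training setup described in the abstract concrete, below is a minimal actor-critic sketch in Python with NumPy. It is an illustration under stated assumptions, not the paper's controller: the muscle-driven biomechanical arm is reduced to direct joint-velocity commands, the link lengths, state features, learning rates, and target tolerance are assumed values, and the human rater is emulated by a sparse, delayed reward granted only when the hand finishes a reach near the target.

```python
import numpy as np

# Minimal actor-critic sketch for a planar two-degree-of-freedom reaching
# task trained with a sparse, delayed "pseudo-human" reward. Simplifying
# assumptions (not from the paper): joint-velocity control instead of
# muscle-driven dynamics, linear function approximation, and illustrative
# constants throughout.

L1_LEN, L2_LEN = 0.3, 0.3          # link lengths (m), assumed
DT, STEPS = 0.05, 40               # control period (s) and episode length
ALPHA_ACTOR, ALPHA_CRITIC, GAMMA = 1e-3, 1e-2, 0.98
SIGMA = 0.2                        # exploration noise (rad/s)

def hand_position(q):
    """Forward kinematics of the two-link planar arm."""
    x = L1_LEN * np.cos(q[0]) + L2_LEN * np.cos(q[0] + q[1])
    y = L1_LEN * np.sin(q[0]) + L2_LEN * np.sin(q[0] + q[1])
    return np.array([x, y])

def features(q, target):
    """State features: joint-angle encoding, hand-to-target error, bias."""
    err = target - hand_position(q)
    return np.concatenate([np.cos(q), np.sin(q), err, [1.0]])

def pseudo_human_reward(q, target, tol=0.05):
    """Sparse stand-in for a human rater: reward only for ending on target."""
    return 1.0 if np.linalg.norm(target - hand_position(q)) < tol else 0.0

rng = np.random.default_rng(0)
N_FEAT = 7
W_actor = np.zeros((2, N_FEAT))    # mean of a linear-Gaussian policy
w_critic = np.zeros(N_FEAT)        # linear state-value estimate

for episode in range(2000):
    q = rng.uniform(0.2, 1.0, size=2)                      # start posture
    target = hand_position(rng.uniform(0.2, 1.0, size=2))  # reachable goal
    for t in range(STEPS):
        phi = features(q, target)
        mu = W_actor @ phi
        u = mu + SIGMA * rng.standard_normal(2)            # explore
        q_next = q + DT * u                                # simplified dynamics
        done = (t == STEPS - 1)
        # Delayed reward: zero everywhere except the end of the reach.
        r = pseudo_human_reward(q_next, target) if done else 0.0
        v = w_critic @ phi
        v_next = 0.0 if done else w_critic @ features(q_next, target)
        delta = r + GAMMA * v_next - v                     # TD error
        w_critic += ALPHA_CRITIC * delta * phi
        # Policy-gradient step for the Gaussian policy mean.
        W_actor += ALPHA_ACTOR * delta * np.outer((u - mu) / SIGMA**2, phi)
        q = q_next
```

The sketch captures only the sparse-and-delayed character of the pseudo-human reward; the paper's experiments additionally varied reward properties such as frequency to compare human-scale feedback against exclusively automated reward signals.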

Original language: English
Article number: 7478097
Pages (from-to): 723-733
Number of pages: 11
Journal: IEEE Transactions on Human-Machine Systems
Volume: 46
Issue number: 5
DOIs
State: Published - Oct 2016
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2016 IEEE.

Funding

This work was supported by the National Institutes of Health fellowship #TRN030167, Veterans Administration Rehabilitation Research and Development predoctoral fellowship "Reinforcement Learning Control for an Upper-Extremity Neuroprosthesis," NIH Training Grant T32 EB004314, and Ardiem Medical Arm Control Device Grant #W81XWH0720044.

Funders and funder numbers:
• Ardiem Medical Arm Control Device: W81XWH0720044
• Veterans Administration Rehabilitation Research and Development
• National Institutes of Health: TRN030167, T32 EB004314

Keywords

• Control
• functional electrical stimulation (FES)
• human-machine teaming
• modeling
• rehabilitation
• reinforcement learning (RL)
• simulation
• upper extremity
