TY - JOUR
T1 - Training an Actor-Critic Reinforcement Learning Controller for Arm Movement Using Human-Generated Rewards
AU - Jagodnik, Kathleen M.
AU - Thomas, Philip S.
AU - Van Den Bogert, Antonie J.
AU - Branicky, Michael S.
AU - Kirsch, Robert F.
N1 - Publisher Copyright:
© 2001-2011 IEEE.
PY - 2017/10
Y1 - 2017/10
N2 - Functional Electrical Stimulation (FES) employs neuroprostheses to apply electrical current to the nerves and muscles of individuals paralyzed by spinal cord injury to restore voluntary movement. Neuroprosthesis controllers calculate stimulation patterns to produce desired actions. To date, no existing controller is able to efficiently adapt its control strategy to the wide range of possible physiological arm characteristics, reaching movements, and user preferences that vary over time. Reinforcement learning (RL) is a control strategy that can incorporate human reward signals as inputs to allow human users to shape controller behavior. In this paper, ten neurologically intact human participants assigned subjective numerical rewards to train RL controllers, evaluating animations of goal-oriented reaching tasks performed using a planar musculoskeletal human arm simulation. The RL controller learning achieved using human trainers was compared with learning accomplished using human-like rewards generated by an algorithm; metrics included success at reaching the specified target; time required to reach the target; and target overshoot. Both sets of controllers learned efficiently and with minimal differences, significantly outperforming standard controllers. Reward positivity and consistency were found to be unrelated to learning success. These results suggest that human rewards can be used effectively to train RL-based FES controllers.
KW - Artificial intelligence
KW - Functional Electrical Stimulation
KW - human-machine teaming
KW - rehabilitation
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85032944991&partnerID=8YFLogxK
U2 - 10.1109/TNSRE.2017.2700395
DO - 10.1109/TNSRE.2017.2700395
M3 - Article
C2 - 28475063
AN - SCOPUS:85032944991
SN - 1534-4320
VL - 25
SP - 1892
EP - 1905
JO - IEEE Transactions on Neural Systems and Rehabilitation Engineering
JF - IEEE Transactions on Neural Systems and Rehabilitation Engineering
IS - 10
M1 - 7917366
ER -