The seeing-eye robot grand challenge: Rethinking automated care

Reuth Mirsky, Peter Stone

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

6 Scopus citations

Abstract

Automated care systems are becoming more tangible than ever: recent breakthroughs in robotics and machine learning can help address the need for automated care created by a growing aging population. However, building such systems requires overcoming several technological, ethical, and social challenges. One inspirational manifestation of these challenges can be observed in the training of seeing-eye dogs for visually impaired people. A seeing-eye dog is trained not just to obey its handler, but also to "intelligently disobey": if given an unsafe command, it is taught to refuse it or even to insist on a different course of action. This paper proposes the challenge of building a seeing-eye robot as a thought-provoking use case that helps identify the challenges to be faced when creating behaviors for robot assistants in general. Through this challenge, this paper delineates the prerequisites that an automated care system needs in order to perform intelligent disobedience and to serve as a true agent for its handler.
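To make the notion of intelligent disobedience concrete, here is a minimal, hypothetical Python sketch; it is not from the paper, and all names (Command, assess, act, hazard_ahead) are illustrative assumptions. The agent vets each handler command against a toy safety check, obeys when the command is safe, and otherwise refuses and insists on a safer alternative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    action: str  # e.g., "forward", "left", "right", "stop"

@dataclass
class Assessment:
    safe: bool
    alternative: Optional[str] = None  # safer action to insist on, if any

def assess(command: Command, hazard_ahead: bool) -> Assessment:
    """Toy safety model (an assumption for illustration): a 'forward'
    command is unsafe when a hazard, such as oncoming traffic or a
    drop-off, is detected in the robot's path."""
    if command.action == "forward" and hazard_ahead:
        return Assessment(safe=False, alternative="stop")
    return Assessment(safe=True)

def act(command: Command, hazard_ahead: bool) -> str:
    """Obey safe commands; 'intelligently disobey' unsafe ones by
    refusing them and insisting on a safer course of action."""
    verdict = assess(command, hazard_ahead)
    if verdict.safe:
        return command.action
    return verdict.alternative or "stop"

# Handler says "forward" while a car is approaching: the robot refuses.
print(act(Command("forward"), hazard_ahead=True))   # -> "stop"
print(act(Command("forward"), hazard_ahead=False))  # -> "forward"
```

A real system would replace the boolean hazard flag with perception and risk estimation, and the single alternative action with negotiation between handler and robot, which is precisely where the challenges the paper identifies arise.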

Original language: English
Title of host publication: 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021
Publisher: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages: 28-33
Number of pages: 6
ISBN (Electronic): 9781713832621
State: Published - 2021
Externally published: Yes
Event: 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021 - Virtual, Online
Duration: 3 May 2021 - 7 May 2021

Publication series

Name: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Volume: 1
ISSN (Print): 1548-8403
ISSN (Electronic): 1558-2914

Conference

Conference: 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021
City: Virtual, Online
Period: 3/05/21 - 7/05/21

Bibliographical note

Publisher Copyright:
© 2021 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.

Funding

The authors thank Scott Niekum and Aaron Steinfeld for helpful discussions on the ideas presented in this paper. This work has taken place in the Learning Agents Research Group (LARG) at UT Austin. LARG research is supported in part by NSF (CPS-1739964, IIS-1724157, NRI-1925082), ONR (N00014-18-2243), FLI (RFP2-000), ARO (W911NF-19-2-0333), DARPA, Lockheed Martin, GM, and Bosch. Peter Stone serves as the Executive Director of Sony AI America and receives financial compensation for this work. The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research.

Funders and funder numbers:
FLI: RFP2-000
National Science Foundation: IIS-1724157, NRI-1925082, CPS-1739964
Office of Naval Research: N00014-18-2243
Army Research Office: W911NF-19-2-0333
Defense Advanced Research Projects Agency
University of Texas at Austin
Robert Bosch (Australia) Pty

Keywords

• Automated care
• Grand challenge
• Service robots
• Surrogacy
