Abstract
People are proficient at communicating their intentions in order to avoid conflicts when navigating in narrow, crowded environments. Mobile robots, on the other hand, often lack both the ability to interpret human intentions and the ability to clearly communicate their own intentions to people sharing their space. This work addresses the second of these points, leveraging insights about how people implicitly communicate with each other through gaze to enable mobile robots to more clearly signal their navigational intentions. We present a human study measuring the importance of gaze in coordinating people's navigation. Informed by this study, we develop a virtual agent head that is mounted on a mobile robot platform. A comparison between a robot equipped with the virtual agent head and one equipped with an LED turn signal demonstrates that the gaze cue influences people's navigational choices and that people interpret it more easily than the LED turn signal.
| Original language | English |
|---|---|
| Title of host publication | Social Robotics - 12th International Conference, ICSR 2020, Proceedings |
| Editors | Alan R. Wagner, David Feil-Seifer, Kerstin S. Haring, Silvia Rossi, Thomas Williams, Hongsheng He, Shuzhi Sam Ge |
| Publisher | Springer Science and Business Media Deutschland GmbH |
| Pages | 320-331 |
| Number of pages | 12 |
| ISBN (Print) | 9783030620554 |
| DOIs | |
| State | Published - 2020 |
| Externally published | Yes |
| Event | 12th International Conference on Social Robotics, ICSR 2020 - Golden, United States, 14 Nov 2020 → 18 Nov 2020 |
Publication series
| Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
|---|---|
| Volume | 12483 LNAI |
| ISSN (Print) | 0302-9743 |
| ISSN (Electronic) | 1611-3349 |
Conference
| Conference | 12th International Conference on Social Robotics, ICSR 2020 |
|---|---|
| Country/Territory | United States |
| City | Golden |
| Period | 14/11/20 → 18/11/20 |
Bibliographical note
Publisher Copyright: © 2020, Springer Nature Switzerland AG.
Funding
This work has taken place in the Learning Agents Research Group (LARG) at UT Austin. LARG research is supported in part by NSF (CPS-1739964, IIS-1724157, NRI-1925082), ONR (N00014-18-2243), FLI (RFP2-000), ARO (W911NF-19-2-0333), DARPA, Lockheed Martin, GM, and Bosch. Peter Stone serves as the Executive Director of Sony AI America and receives financial compensation for this work. The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research. Studies in this work were approved under University of Texas at Austin IRB study numbers 2015-06-0058 and 2019-03-0139.
| Funders | Funder number |
|---|---|
| FLI | RFP2-000 |
| National Science Foundation | CPS-1739964, IIS-1724157, NRI-1925082 |
| Office of Naval Research | N00014-18-2243 |
| Army Research Office | W911NF-19-2-0333 |
| Defense Advanced Research Projects Agency | |
| University of Texas at Austin (IRB) | 2015-06-0058, 2019-03-0139 |
| Robert Bosch | |
Keywords
- Gaze
- Human-robot interaction
- Social navigation