People are proficient at communicating their intentions in order to avoid conflicts when navigating in narrow, crowded environments. Mobile robots, on the other hand, often lack both the ability to interpret human intentions and the ability to clearly communicate their own intentions to people sharing their space. This work addresses the second of these points, leveraging insights about how people implicitly communicate with each other through gaze to enable mobile robots to more clearly signal their navigational intentions. We present a human study measuring the importance of gaze in coordinating people’s navigation. This study is followed by the development of a virtual agent head, which is added to a mobile robot platform. A comparison between a robot with the virtual agent head and one with an LED turn signal demonstrates that the gaze cue influences people’s navigational choices and is more easily interpreted than the LED turn signal.
|Title of host publication: Social Robotics - 12th International Conference, ICSR 2020, Proceedings
|Editors: Alan R. Wagner, David Feil-Seifer, Kerstin S. Haring, Silvia Rossi, Thomas Williams, Hongsheng He, Shuzhi Sam Ge
|Publisher: Springer Science and Business Media Deutschland GmbH
|Published: 2020
|Event: 12th International Conference on Social Robotics, ICSR 2020 - Golden, United States
|Duration: 14 Nov 2020 → 18 Nov 2020
|Publication series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
|Bibliographical note: Publisher Copyright © 2020, Springer Nature Switzerland AG.
|Keywords:
- Human-robot interaction
- Social navigation