Layer-Based Auditory Displays Of Robots’ Actions And Intentions

University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

Abstract: Unintentional encounters between robots and humans will increase in the future, requiring concepts for communicating the robots’ internal states. Auditory displays can convey the relevant information to people who share public spaces with social robots. Based on data gathered in a participatory design workshop with robot experts, a layer-based approach for real-time generated audio feedback is introduced, in which the information to be displayed is mapped to specific audio parameters. Initial exploratory sound designs were created and evaluated in an online study. The results indicate which audio parameter mappings should be examined further for displaying certain internal states, such as mapping amplitude modulation to the robot’s speed or enhancing alarm frequencies to indicate urgent tasks. Features such as speed, urgency, and large size were correctly identified in more than 50% of evaluations, while information about the robot’s interactivity or its small size was not comprehensible to the participants.
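One of the mappings named above, amplitude modulation driven by the robot's speed, can be sketched as real-time-style audio synthesis. The following is a minimal illustrative sketch only, not the sound design from the essay: the function name, parameter ranges, and the linear speed-to-modulation-rate mapping are all assumptions chosen for demonstration.

```python
import numpy as np

def am_beacon(speed, duration=1.0, sr=16000, carrier_hz=440.0,
              min_mod_hz=1.0, max_mod_hz=8.0, max_speed=2.0):
    """Hypothetical sketch: map a robot's speed (m/s) onto the
    amplitude-modulation (tremolo) rate of a carrier tone, so that
    faster motion is heard as faster pulsing.

    All names and parameter values here are illustrative assumptions,
    not taken from the essay's sound designs.
    """
    t = np.arange(int(duration * sr)) / sr
    # Clamp speed to [0, max_speed] and map it linearly onto the
    # modulation-rate range [min_mod_hz, max_mod_hz].
    frac = min(max(speed / max_speed, 0.0), 1.0)
    mod_hz = min_mod_hz + frac * (max_mod_hz - min_mod_hz)
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    # Modulator scaled into [0, 1] acts as a tremolo envelope.
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * mod_hz * t))
    return carrier * envelope

slow = am_beacon(0.2)  # slow robot -> sluggish pulsing
fast = am_beacon(1.8)  # fast robot -> rapid pulsing
```

A layer-based display in this spirit would sum several such streams, one per internal state, each mapped to its own audio parameter (modulation rate, spectral content, alarm frequencies, and so on).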
