FJA@ICSR2019 > Discussion Panel 2

Discussion Session

What happens when things go wrong 


Objectives and Motivating Questions

Joint actions demand the deployment of a wide variety of cognitive elements, including the appropriate motivation to interact and remain engaged in the interaction, the capacity for coordinating behavior to bring about a particular result, skills for anticipating the partner's course of behavior or the expected results of the combined actions, and, of course, psychological mechanisms to reliably understand others' intentions. This array of interlocking elements makes joint action a complex phenomenon. This complexity can translate into different types of failures or errors which, in turn, can lead to the complete breakdown of the joint activity. In a nutshell, during joint activities there are many things that may go wrong. For example, both the robotic and the human agent can perform a wrong course of action or execute an unexpected behavior. Moreover, the human can hold unrealistic expectations about the robot's capabilities, which may result in wrong predictions or misunderstandings of its actions. The robot can also misunderstand the human's social cues due to cultural variability or a mere technical (software or hardware) error. Finally, such coordination failures can be reinforced by the human's lack of motivation to interact with the robot, either because she perceives it as untrustworthy or because of unconscious negative attitudes toward robotic agents.

The general aim of this discussion is to explore the different communicative strategies that humans and robots can exploit to better understand each other and to repair failures when the aforementioned problems arise. Some questions that may help us achieve this general goal are:

➔   What do partners need to understand about the task to detect failures?

➔   What human communicative signals should the robot understand? Only speech? Gazes? Pointing gestures? Hesitations?

➔   What communicative signals should the robot use to attract human attention to a failure?

➔   Should we classify the different categories of failures?

➔   How can the human be motivated to repair the failure?

➔   What communicative signals should the robot use to signify that it has spotted the failure and will fix it?

➔   How should the robot fix the failure? Should it negotiate with the human? Prevent the failure before acting? Divide the roles? Ask the human to do it?

➔   Does the human have to repair the failure as soon as they notice it? Should they warn the robot? Explain their repair action?

➔   What happens if the robot does not understand the failure? Or is unable to fix it?

➔   What is needed in the robot's behavior to avoid failures?

➔   How and when should a robot ask for help?



For the purpose of this session, consider again the scenario we proposed for the Session “Desiderata for Communication during Joint Action”, where a human and a robot share the goal of assembling a piece of furniture. They should assemble the three parts (A, B, C) of the piece together using screws. The collective task requires the robot and the human to share a general goal (assemble the piece), decide the order of sub-goals (should we start by assembling part A to B, or C to A?), and coordinate their actions to perform certain sub-tasks (one agent must hold part B while the other screws it into part C). Both agents have the parts accessible in front of them and participate in the task by assembling the piece of furniture.


Now, consider the following eventualities that may appear during the interaction and the possible questions they may raise:

  • The human and the robot have to decide how to start. Should the robot propose how to proceed, or wait for the human to start and act accordingly? If it decides to wait, should it communicate this?
  • Should the human and the robot decide on the roles of each partner in the task?
  • The robot picks part A and expects the human to pick B because it is closer to the human. Should the robot signal this expectation? How?
  • Now, imagine that the human picks C instead of B. What must the robot do? Should it recalculate and change its course of action? Should it signal the error? If so, how? If the robot decides to recalculate, should it be able to communicate this to the human to justify a plausible delay?
  • The robot does not recognize the human’s intention or action. How should it react?
  • If the robot needs the human to hold part B so that it can screw it into part C, how should it communicate this? Should the robot ask for help, or prefer another action it can perform alone?
  • Does the robot have to understand communicative signals other than the human’s verbal interventions (e.g. a request for help)? If so, which ones?
  • If the human does not act, when should the robot intervene?



In order to make the panel more efficient, we recommend the following readings, which participants may find interesting and illuminating for the topic of the session:


Honig, S., & Oron-Gilad, T. (2018). Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development. Frontiers in Psychology, 9, 861. doi: 10.3389/fpsyg.2018.00861

Sources of Problems: 

Michael, J., & Pacherie, E. (2015). On commitments and other uncertainty reduction tools in joint action. Journal of Social Ontology, 1(1), 89-120.

Recovering by asking for help: 

Knepper, R. A., Tellex, S., Li, A., Roy, N., & Rus, D. (2015). Recovering from failure by asking for help. Autonomous Robots, 39(3), 347-362.

Bajones, M., Weiss, A., & Vincze, M. (2016). Help, Anyone? A user study for modeling robotic behavior to mitigate malfunctions with the help of the user. 5th International Symposium on New Frontiers in Human-Robot Interaction.




