FJA@ICSR2019 > Invited Speakers


Nikolaos MAVRIDIS

Interactive Robots and Media Lab (IRML)

TITLE: From Symbol Grounding, Situation Models, and the latest state-of-the-art tools, towards the next levels of fluid Human-Robot Joint Action

Towards the next generation of collaborative robots (CoBots), which will be able to engage in a wide variety of tasks alongside humans (and potentially also other robots), in environments that are not only industrial but also extend to public, personal, and natural spaces, it is important for robots to be equipped with the capabilities necessary for fluid Human-Robot Joint Action. Beyond the traditional dimensional decompositions of such abilities, such as the three axes of Motor Precision / Coordination / Anticipatory Planning (Cochet and Guidetti, 2018), there are a number of philosophical and psychological concepts that play a central role in most actual or projected real-world systems aiming to support fluid Human-Robot Joint Action. Two such concepts are “symbol grounding” (Harnad, 1990) and “situation models” (as per Zwaan’s work), which so far have been accounted for in only a few computational implementations embodied in interactive robots. These concepts, however, through their accommodation of important aspects of second-order beliefs and second-order percepts, their intricate relation to theory of mind (as per Wellman, 1992), and especially when coupled with joint attention and communication mechanisms, as well as with generalized notions such as mixed speech-act/motor-act planning, can play a very important role towards fluid joint action. In this talk, after introducing these concepts and discussing relevant existing work and systems, we will provide a plausible roadmap towards the next levels of fluid Human-Robot Joint Action, and also discuss where the latest developments in relevant state-of-the-art theories and tools might not only fill existing gaps but also yield future performance that might surpass human capabilities.


Dr. Nikolaos Mavridis, who received his PhD from the Massachusetts Institute of Technology in 2007, is an academic and consultant specializing in Intelligent Robotics. He has served in various academic positions, including at NYU Poly and AD, Innopolis University, and UAEU, up to the rank of Full Professor, and is the founding director of the Interactive Robots and Media Lab (IRML). Nikolaos is a member of the MIT Educational Council and a mentor for the MIT Enterprise Forum. He has been a four-time TEDx speaker, as well as a speaker at Singularity University, and is also a pro-bono contributor to a number of organizations.

Lucia Maria SACHELI

University of Milano-Bicocca, Milano.

TITLE: Motor representations in motor interactions and their role in signaling strategies


What sets interactive actions apart from non-interactive ones? Are the same cognitive mechanisms responsible for coding the observed movements of others, independently of whether we need to coordinate with them? Clear-cut evidence is lacking on what singles out the perception of the actions of an interactive partner. Similarly, it is unclear whether the social context (e.g., interactive vs. non-interactive) modulates the cognitive mechanisms controlling one’s motor behavior. Nevertheless, a detailed description of the specific cognitive features that characterize collaborative motor exchanges might facilitate the detection of what differentiates “successful” from “unsuccessful” interactions from a cognitive point of view, possibly providing useful insights on how to model the former and implement them in artificial agents.

In a series of experiments, we demonstrated that what characterizes motor planning during collaborative interactions is the ability to integrate (predictions about) the partner’s behavior into a unitary motor representation that we called a “Dyadic Motor Plan” (DMP). DMPs describe the specific common goal that interactive partners have to achieve together (e.g., the melody that we aim to play) and the most likely contribution that each of them is expected to provide in order to achieve that common goal (e.g., the specific notes that each of us will play to create the melody, and the movements that we will perform in order to produce them). On the one hand, we showed that the presence of DMPs prunes our prediction space, making the partner’s actions more easily predictable and the interaction more efficient. On the other hand, I will discuss the hypothesis that DMPs also support the ability to understand what information the partner might lack and thus need in order to fulfill the task more efficiently. As such, DMPs would enable agents to implement “helping” motor behaviors aimed at facilitating the partner’s task, e.g., by means of sensorimotor communication.


Lucia Maria Sacheli is an Experimental Psychologist and works as a Post-Doc Researcher at the University of Milano-Bicocca, Italy. She obtained her Ph.D. in Cognitive, Social and Affective Neuroscience in 2013 at the Sapienza University of Rome, working on joint action. Her research interests revolve around the cognitive processes supporting interpersonal coordination during collaborative interactions, and their modulations due to social and emotional factors. She is also interested in studying the development of the neurocognitive bases of motor interactions across the whole lifespan, from early childhood to aging.


John MICHAEL

The University of Warwick

TITLE: The Sense of Commitment in Human-Robot Interaction


In this talk I spell out the rationale for developing means of manipulating and measuring people’s sense of commitment to robot interaction partners. A sense of commitment may lead people to be patient when a robot is not working smoothly, to remain vigilant when a robot is working so smoothly that a task becomes boring, and to increase their willingness to invest effort in teaching a robot. I then present a study in which we tested the hypothesis that if a robot invests physical effort in adapting to a human partner, the human partner will reciprocate by investing more effort and patience in interacting with the robot. To test this hypothesis, we devised a scenario in which the iCub and a human participant alternated in teaching each other new skills. In the high-effort condition of the robot teaching phase, the iCub slowed down its movements when repeating a demonstration for the human learner, whereas in the low-effort condition it sped the movements up when repeating the demonstration. In a subsequent participant teaching phase, human participants were asked to give the robot a demonstration, and then to repeat it if the robot had not understood. The results reveal that in the high-effort condition, participants invested more effort to slow down their movements and to increase segmentation when repeating their demonstration. This was especially true when participants experienced the high-effort condition after the low-effort condition rather than before, indicating that participants were particularly sensitive to the change in the iCub’s level of commitment over the course of the experiment.


John Michael completed his PhD in philosophy at the University of Vienna in 2010, and subsequently worked as a Postdoc at Aarhus University, Copenhagen University, and the Central European University in Budapest. He is currently Associate Professor of Philosophy at the University of Warwick and an Affiliated Researcher at the Department of Cognitive Science of the Central European University. His research interests include the sense of commitment, self-control, cooperation, and joint action. He currently holds an ERC Starting Grant investigating the sense of commitment in joint action.

