Welcome to ROBOT JAM!

FJA@ROMAN2014: Program

 

 

 

9:00 – 9:10 Welcome

 

Psychological and philosophical foundations of joint action

9:10 – 9:50

Cognitive Mechanisms Supporting Human Joint Action

Cordula Vesper

Department of Cognitive Science, Central European University, Budapest, Hungary

How do people perform actions together, such as lifting a heavy box, playing ensemble music or building a complex object (such as a brick tower)? A variety of recent empirical studies in psychology, cognitive science and neuroscience show that planning and performing joint actions is based on specific cognitive mechanisms that support coordination with other people. These include representing others’ actions, predicting and monitoring what others do, and adapting one’s own actions in ways that facilitate interpersonal coordination. In this talk, I will discuss these mechanisms as well as supporting evidence to provide a short overview of the state of the art in human joint action research and to identify links to human-robot interaction.

 

9:50 – 10:30

Levels of coordination in joint action

Elisabeth Pacherie

Institut Jean Nicod, CNRS-EHESS-ENS, Paris, France

I describe and motivate a dynamic hierarchical model of intention and action specification, and use it to propose an analysis of the different levels of coordination involved in joint action and of the means at our disposal to achieve coordination.

 

10:30 – 11:00 Coffee break

 

Joint action in robotics

 

11:00 – 11:30

On Human-Aware Planning Abilities for a Teammate Robot

Rachid Alami

CNRS-LAAS, Université de Toulouse, Toulouse, France

In this talk, I will consider several key decisional issues that arise for a cognitive robot that shares space and tasks with a human. We have adopted a constructive approach based on the identification and effective implementation of a set of decisional processes. I will first give a broad view of a conceptual robot control architecture specially designed to provide a framework for these decisional processes. The abilities covered include geometric reasoning and situation assessment based essentially on perspective-taking and affordances, management and exploitation of each agent’s (human and robot) knowledge in a separate cognitive model, human-aware task and motion planning, and interleaved plan achievement by human and robot.

 

11:30 – 12:00

Human-Robot Team Training

Julie Shah

Massachusetts Institute of Technology

In this talk, I discuss the design of new models for robot planning, which use insights and data derived from the planning and execution strategies employed by successful human teams, to support more seamless robot participation in human work practices. This includes models for human-robot team training, which involves hands-on practice to clarify sequencing and timing of actions, and for team planning, which includes communication to negotiate and clarify allocation and sequencing of work. The aim is to support both the human and robot workers in co-developing a common understanding of task responsibilities and information requirements, to produce more effective human-robot partnerships.

 

12:00 – 12:30

Discussion

 

12:30 – 13:30 Lunch break

Contributions

 

13:30 – 13:45

(How) Can Robots Make Commitments? -- A Pragmatic Approach

John Michael (Central European University (CEU)), Alessandro Salice (Copenhagen University (KU))

Commitment is a fundamental building block of social reality. In particular, commitments seem to play a fundamental role in human social interaction. In this paper, we discuss the possibility of designing robots that engage in commitments, are motivated to honor commitments, and expect others also to be so motivated. We identify several challenges that such a project would likely confront, and consider possibilities for meeting these challenges.


13:45 – 14:00

Human Robot Collaboration in production environments

Christoph Strassmair (Heriot-Watt University (UNITED KINGDOM) and Technische Hochschule Ingolstadt (THI)), Nick Taylor (Technische Hochschule Ingolstadt (THI))

Human Robot Collaboration has great potential for the manufacturing domain. A necessary precondition for this type of joint action, however, is the human’s acceptance of their robotic co-worker. Recent research indicates that this acceptance is heavily influenced by the flexibility granted to the worker and the efficiency of the collaboration. Current systems aim for efficiency but neglect the spatial constraints of the collaboration; furthermore, they restrict the worker’s flexibility.
This paper presents an approach that extends prior art by considering spatial constraints and granting more flexibility to the worker. It thus facilitates a more efficient collaboration and higher worker acceptance in manufacturing environments. The application of the approach to the workshop example is sketched.

 

14:00 – 14:15

Role Distribution in Synchronous Human-Robot Joint Action

Tariq Iqbal, Laurel Riek (Department of Computer Science and Engineering, University of Notre Dame)

Robots are becoming a part of our daily lives, and humans and robots are beginning to work together as teams to achieve common goals. In a human-robot interaction (HRI) scenario, it is important to assign roles to both the human and the robot, as these role assignments may affect the fluidity and the effectiveness of the interaction. In this paper, we discuss different role distribution models to assign roles among humans and robots in the context of synchronous joint action. We employed the leader-follower model in a human-robot collaborative task using a method from our previous work to detect synchronous actions of team members. Our results support our choice of the leader-follower model, and suggest that our method is capable of measuring synchronous joint action in an HRI scenario. These results are encouraging for future work aimed at the development of adept human-robot teamwork.

 

14:15 – 14:30

Joint action and joint attention: what could be the interface between developmental psychology and robotics to better frame human-robot joint activity

Michèle Guidetti (Unité de Recherche Interdisciplinaire Octogone (EA 4156)  (URI Octogone), Université Toulouse le Mirail - Toulouse II)

The aim of the proposed talk is to present the perspective on joint attention and joint action of a developmental psychologist who is interested in comparisons between typical and pathological development, who believes that “development is the key to understanding developmental disorders” (Karmiloff-Smith, 1998), and who, while knowing nothing about robots, is interested in exchanges and collaborations with roboticists in order to better frame human-robot joint activity.

 

14:30 – 14:45

Language Use in Joint Action: Demonstratives and Nouns used in Japanese Conversation Doing a Collaborative Task

Harumi Kobayashi (Tokyo Denki university), Tetsuya Yasuda (Jumonji University), Hiroshi Igarashi, Satoshi Suzuki (Tokyo Denki university)

The English demonstratives “this,” “that,” “here,” and “there” are used to quickly and precisely establish joint attention on objects in the environment with other people. People may say, “Look at this,” or “I want to ride on that bus,” to jointly look at the same object with one or more people. Interestingly, Japanese has no articles, so it is completely grammatical to use nouns without articles. If someone wants to establish joint attention on a specific clock, the person must use a demonstrative, as in “Kono tokei wo kaitai” (“I want to buy this clock”). In this study, we focused on the use of Japanese demonstratives such as here (“koko”), there (“asoko”), this (“kore,” “kono”), and that (“sore,” “sono,” “are,” “ano”) using corpus data, and analyzed conversation to examine whether the use of these words changes when people execute a collaborative task ten times. The task took place in a virtual space displayed on computer monitors. Each group consisted of three adult volunteers, and there were four such groups. The task was to collaboratively move boxes to predetermined target panels in the virtual space. We extracted demonstratives and nouns and examined the use of these words over the ten trials. The results showed that the ratio of nouns decreased whereas the ratio of demonstratives increased in three of the four groups’ data. People may regulate joint actions through effective use of demonstratives, in addition to actions, when they cooperate with other people. The study also suggests that the demonstrative “this” might be a powerful tool to signal one’s own intention to execute the next task. By using the demonstrative “this,” robots and humans may be able to quickly communicate who will move the next block in the workshop example.

 

14:45 – 15:00

Movement coordination in repetitive joint action tasks: Considerations on human-human and human-robot interaction

Tamara Lorenz (Technische Universität München (TUM), Ludwig-Maximilians-Universität [München]  (LMU)), Sandra Hirche (Technische Universität München  (TUM))
 
It is a common idea that robotic design for human-robot interaction can benefit from approaches taken from human joint action research. In this position paper, a pre-defined example is operationalized to shed light on recent findings on human movement coordination and turn-taking in repetitive tasks. Both topics are also considered for human-robot interaction. The paper closes with a discussion of open questions and unsolved problems that are not considered in psychological research on humans but play a major role when a transfer to robotics is intended.

 

15:00 – 15:30 Coffee break

 

15:30 – 15:45

Using Virtual Characters in the Study of Mimicry and Joint Action

Xueni Pan, Harry Farmer, Joanna Hale, Antonia Hamilton (Institute of Cognitive Neuroscience, UCL)

In recent years, the use of Virtual Characters in experimental studies has opened new research avenues in social psychology and neuroscience. In this paper, we review the literature on the use of Virtual Reality in the study of joint action and mimicry, and then present the design and implementation of three case studies in this area using Virtual Characters. Our preliminary results support the hypothesis that congruency and mimicry effects exist in interactions between human participants and Virtual Characters. Finally, we discuss different types of co-action as well as the benefits of using Virtual Reality in this area.

 

15:45 – 16:00

Exploring Joint Action for Alternative Finding: Proposal of a Human-Human Study to Inform Human-Robot Collaboration

Astrid Weiss, Markus Vincze (Automation and Control Institute, Vienna University of Technology (ACIN, TUW))

This paper proposes a study to explore how human collaborators monitor perceptual regions in unsolvable task situations (e.g., when the right piece cannot be found to build the tower together) and how the collaborators (in our case a director and a builder) use this information in the course of collaboration. The interesting aspect of this study is to see which grounding techniques participants use to identify that the task is unsolvable (i.e., needs an alternative solution) and which of these techniques can be transferred to Human-Robot Collaboration. We propose human-robot grounding performed by the human, i.e., enabling the robot to “explain itself” in unsolvable task situations so that the human can establish common ground and find an informed alternative solution. In this position paper, we present the study design and preliminary results, as we are currently in the middle of the data analysis.

 

16:00 – 17:00

General Discussion

Discussion involving all speakers and workshop attendees

 

 

 
