Organizational Unit:
Institute for Robotics and Intelligent Machines (IRIM)

Publication Search Results

  • Item
    Joint attention in human-robot interaction
    (Georgia Institute of Technology, 2010-07-07) Huang, Chien-Ming
    Joint attention, a crucial component of interaction and an important milestone in human development, has recently drawn considerable attention from the robotics community. Robotics researchers have studied and implemented joint attention for robots in order to achieve natural human-robot interaction and to facilitate social learning. Most previous work on realizing joint attention in robots has focused only on responding to joint attention and/or initiating joint attention. Responding to joint attention is the ability to follow another's direction of gaze and gestures in order to share a common experience. Initiating joint attention is the ability to direct another's attention to a focus of interest in order to share an experience. A third important component of joint attention is ensuring, whereby the initiator verifies that the responders have shifted their attention. However, to the best of our knowledge, no work explicitly addresses the ability of a robot to ensure that joint attention is reached by the interacting agents. We refer to this ability as ensuring joint attention and recognize its importance in human-robot interaction. We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This modular decomposition is supported by psychological findings and matches the human developmental timeline. Infants start with the skill of following a caregiver's gaze, and they later exhibit imperative and declarative pointing gestures to get a caregiver's attention. Importantly, as children age and their social skills mature, initiating actions often come with an ensuring behavior: looking back and forth between the caregiver and the referred object to check whether the caregiver is attending to it. We conducted two experiments to investigate joint attention in human-robot interaction. The first experiment explored the effects of responding to joint attention. We hypothesized that humans would find robots that respond to joint attention more transparent, more competent, and more socially interactive. Transparency helps people understand a robot's intentions, facilitating better human-robot interaction, and positive perception of a robot improves the human-robot relationship. Our hypotheses were supported by quantitative data, questionnaire results, and behavioral observations. The second experiment studied the importance of ensuring joint attention. The results confirmed our hypotheses that robots that ensure joint attention yield better performance in interactive human-robot tasks and that ensuring behaviors are perceived by humans as natural. These findings suggest that social robots should employ ensuring joint attention behaviors.
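
    For illustration only, the following hypothetical Python sketch shows one way the three-part decomposition described in this abstract (responding to, initiating, and ensuring joint attention) might be organized. It is not the thesis implementation; every class, method, and variable name is an assumption, and attention targets are reduced to plain strings.

    import random

    class JointAttentionModel:
        # Hypothetical decomposition of joint attention into three skills.

        def __init__(self):
            self.own_gaze = None  # what the robot is currently looking at

        def respond(self, partner_gaze):
            # Responding: follow the partner's gaze to share the same target.
            self.own_gaze = partner_gaze
            return self.own_gaze

        def initiate(self, target):
            # Initiating: direct the partner's attention to a target of interest.
            self.own_gaze = target

        def ensure(self, target, read_partner_gaze, max_checks=3):
            # Ensuring: look back and forth between the partner and the target
            # until the partner is observed attending to the target.
            for _ in range(max_checks):
                self.own_gaze = "partner"
                if read_partner_gaze() == target:
                    return True
                self.own_gaze = target  # re-cue the target
            return False

    robot = JointAttentionModel()
    robot.respond(partner_gaze="red block")
    robot.initiate(target="blue block")
    reached = robot.ensure(
        "blue block",
        read_partner_gaze=lambda: random.choice(["red block", "blue block"]),
    )
    print("joint attention ensured:", reached)

    In this sketch, ensuring is the only part that loops, mirroring the back-and-forth checking behavior the abstract attributes to maturing infants.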
  • Item
    Task transparency in learning by demonstration : gaze, pointing, and dialog
    (Georgia Institute of Technology, 2010-07-07) dePalma, Nicholas Brian
    This body of work explores an emerging aspect of human-robot interaction: transparency. Work in socially guided machine learning has shown that highly interactive robot behaviors yield better performance and shorter training times than less interactive behaviors. While other work explores transparency in learning by demonstration through non-verbal cues that point out the importance of, or a user's preference toward, particular behaviors, my work follows this argument and extends it by offering cues to the robot's internal task representation. I show that task transparency, the ability to connect with and discuss the task in a fluent way, encourages the user to shape and correct the learned goal in ways that may be impossible with other present-day learning by demonstration methods. Additionally, some participants are shown to prefer task-transparent robots that appear capable of "introspection," in that they can modify the learned goal through methods other than demonstration alone.