Organizational Unit:
Institute for Robotics and Intelligent Machines (IRIM)

Publication Search Results

Now showing 1 - 6 of 6
  • Item
    Gestural Behavioral Implementation on a Humanoid Robotic Platform for Effective Social Interaction
    (Georgia Institute of Technology, 2014-08) Brown, LaVonda ; Howard, Ayanna M.
    The role of emotions in social scenarios is to provide an inherent mode of communication between two parties. When emotions are properly employed and understood, people are able to respond appropriately, which further enhances the social interaction. Ultimately, effective emotion execution in social settings can build rapport, improve engagement, optimize learning, provide comfort, and increase overall likability. In this paper, we discuss associating the dominant emotions of effective social interaction with gestural behaviors on a humanoid robotic platform. Studies with 13 participants interacting with the robot show that, by integrating key principles related to the characteristics of happy and sad emotions, the intended emotions are perceived across all participants with 95.19% and 94.23% sensitivity, respectively.
  • Item
    A Real-Time Model to Assess Student Engagement During Interaction with Intelligent Educational Agents
    (Georgia Institute of Technology, 2014-06) Brown, LaVonda ; Howard, Ayanna M.
    Adaptive learning is an educational method that utilizes computers as an interactive teaching device. Intelligent tutoring systems, or educational agents, use adaptive learning techniques to adapt to each student’s needs and learning styles in order to individualize learning. Effective educational agents should accomplish two essential goals during the learning process – 1) monitor engagement of the student during the interaction and 2) apply behavioral strategies to maintain the student’s attention when engagement decreases. In this paper, we focus on the first objective of monitoring student engagement. Most educational agents do not monitor engagement explicitly, but rather assume engagement and adapt their interaction based on the student’s responses to questions and tasks. A few advanced methods have begun to incorporate models of engagement through vision-based algorithms that assess behavioral cues such as eye gaze, head pose, gestures, and facial expressions. Unfortunately, these methods impose a heavy computational load, significant memory/storage demands, and high power consumption. In addition, these behavioral cues do not correlate well with achievement on high-level cognitive tasks, as we will discuss in this paper. As an alternative, our proposed model of engagement uses physical events, such as keyboard and mouse events. This approach requires fewer resources and less power, making it well suited for mobile educational agents such as handheld tablets and robotic platforms. In this paper, we discuss our engagement model, which uses techniques that determine behavioral user state and correlates these findings with mouse and keyboard events. In particular, we observe three event processes: total time required to answer a question, accuracy of responses, and proper function executions (an illustrative sketch of this event-based scoring appears after this listing). We evaluate the correctness of our model through an investigation involving a middle-school after-school program in which a 15-question math exam that varies in cognitive difficulty is used for assessment. Eye gaze and head pose techniques are referenced as the baseline metric of engagement. We then conclude the investigation with a survey to gather the subjects’ perspectives on their mental state throughout the exam. We found that our model of engagement is comparable to the eye gaze and head pose techniques. When high-level cognitive thinking is required, our model is more accurate than the eye gaze and head pose techniques, because students use outside aids for assistance and avert their gaze during questions requiring deep thought. The long periods without eye contact between the student and the computer screen cause the aforementioned algorithms to incorrectly classify the subjects as disengaged. Furthermore, the speed and validity of responses help to determine how well the student understands the material, which is confirmed through the survey responses and video observations. This additional information will be used in the future to better integrate instructional scaffolding and adaptation with the educational agent.
  • Item
    Assessment of Engagement for Intelligent Educational Agents: A Pilot Study with Middle School Students
    (Georgia Institute of Technology, 2014) Brown, LaVonda ; Howard, Ayanna M.
    Adaptive learning is an educational method that utilizes computers as an interactive teaching device. Intelligent tutoring systems, or educational agents, use adaptive learning techniques to adapt to each student’s needs and learning styles in order to individualize learning. Effective educational agents should accomplish two essential goals during the learning process – 1) monitor engagement of the student during the interaction and 2) apply behavioral strategies to maintain the student’s attention when engagement decreases. In this paper, we focus on the first objective of monitoring student engagement. Most educational agents do not monitor engagement explicitly, but rather assume engagement and adapt their interaction based on the student’s responses to questions and tasks. A few advanced methods have begun to incorporate models of engagement through vision-based algorithms that assess behavioral cues such as eye gaze, head pose, gestures, and facial expressions. Unfortunately, these methods typically impose a heavy computational load, significant memory/storage demands, and high power consumption. In addition, these behavioral cues do not correlate well with achievement on high-level cognitive tasks. As an alternative, our proposed model of engagement uses physical events, such as keyboard and mouse events. This approach requires fewer resources and less power, making it well suited for mobile educational agents such as handheld tablets and robotic platforms. In this paper, we discuss our engagement model, which uses techniques that determine behavioral user state and correlates these findings with mouse and keyboard events. In particular, we observe three event processes: total time required to answer a question, accuracy of responses, and proper function executions. We evaluate the correctness of our model through an investigation involving a middle-school after-school program in which a 15-question math exam that varies in cognitive difficulty is used for assessment. Eye gaze and head pose techniques are referenced as the baseline metric of engagement. We conclude the investigation with a survey to gather the subjects’ perspectives on their mental state after the exam. We found that our model of engagement is comparable to the eye gaze and head pose techniques for low-level cognitive tasks. When high-level cognitive thinking is required, our model is more accurate than the eye gaze and head pose techniques, because students avert their gaze during questions requiring deep thought or use outside aids for assistance, such as counting on their fingers. The long periods without eye contact between the student and the computer screen cause the aforementioned algorithms to incorrectly classify the subjects as disengaged. Furthermore, the speed and validity of responses help to determine how well the student understands the material, which is confirmed through the survey responses and video observations. This information will be used later to integrate instructional scaffolding and adaptation with the educational agent.
  • Item
    Engaging Children in Play Therapy: The Coupling of Virtual Reality (VR) Games With Social Robotics
    (Georgia Institute of Technology, 2014-01) García-Vergara, Sergio ; Brown, LaVonda ; Park, Hae Won ; Howard, Ayanna M.
    Individuals who have impairments in their motor skills typically engage in rehabilitation protocols to improve the recovery of their motor functions. In general, engaging in physical therapy can be tedious and difficult, which can demotivate the individual. This is especially true for children, who are more susceptible to frustration. Thus, different virtual reality environments and play therapy systems have been developed with the goal of increasing the motivation of individuals engaged in physical therapy. However, although previously developed systems have proven to be effective for the general population, the majority of these systems are not focused on engaging children. Given this motivation, we discuss two technologies that have been shown to positively engage children who are undergoing physical therapy. The first is the Super Pop VR™ game, a virtual reality environment that not only increases the child’s motivation to continue with his/her therapy exercises, but also provides feedback and tracking of patient performance during game play. The second technology integrates robotics into the virtual gaming scenario through social engagement in order to further maintain the child’s attention when engaged with the system. Results from preliminary studies with typically developing children have shown their effectiveness. In this chapter, we discuss the functions and advantages of these technologies and their potential for being integrated into the child’s intervention protocol.
  • Item
    Engaging children in math education using a socially interactive humanoid robot
    (Georgia Institute of Technology, 2013-10) Brown, LaVonda ; Howard, Ayanna M.
    Studies have shown that teaching processes that incorporate robot-based engagement methods can approach the effectiveness of human tutors. Not only have these socially engaging robots been used in education, but also as weight-loss coaches, play partners, and companions. As such, in this paper we investigate the process of embedding social interaction within a humanoid-student learning scenario in order to reengage children during high-demand cognitive tasks. We discuss the overall system approach and the forms of multi-modal verbal and nonverbal (i.e., gestural) cues used by the robotic agent. Results derived from 20 children, ages 13 through 18, engaging with the robot during a tablet-based algebra exam show that, while various forms of social interaction increase test performance, combinations of verbal cues result in a slightly better outcome with respect to test completion time.
  • Item
    Applying behavioral strategies for student engagement using a robotic educational agent
    (Georgia Institute of Technology, 2013-10) Brown, LaVonda ; Kerwin, Ryan ; Howard, Ayanna M.
    Adaptive learning is an educational method that utilizes computers as an interactive teaching device. Intelligent tutoring systems, or educational agents, use adaptive learning techniques to adapt to each student’s needs and learning styles in order to individualize learning. Effective educational agents should accomplish two essential goals during the learning process – 1) monitor engagement of the student during the interaction and 2) apply behavioral strategies to maintain the student’s attention when engagement decreases. This paper focuses on the second objective of reengaging students using various behavioral strategies through the utilization of a robotic educational agent. Details are provided on the overall system approach and the forms of verbal and nonverbal cues used by the robotic agent. Results derived from 24 students engaging with the robot during a computer-based math test show that, while various forms of behavioral strategies increase test performance, combinations of verbal cues result in a slightly better outcome.
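
The two engagement-model abstracts above ("A Real-Time Model to Assess Student Engagement..." and its pilot-study companion) describe estimating engagement from keyboard and mouse events using three event processes: total time required to answer a question, accuracy of the response, and proper function executions. The papers do not publish the scoring rule itself, so the sketch below is only a hedged illustration under assumed weights and thresholds; the class QuestionEvents, the function engagement_score, and every numeric constant are hypothetical and are not taken from the cited works.

from dataclasses import dataclass


@dataclass
class QuestionEvents:
    """Low-level events logged while a student works on one question (hypothetical schema)."""
    elapsed_s: float        # total time spent before submitting an answer
    expected_s: float       # expected time for this question's difficulty level
    correct: bool           # whether the submitted answer was correct
    proper_executions: int  # task-relevant actions (typing in the answer box, submitting)
    stray_executions: int   # off-task actions (random clicks, empty submits)


def engagement_score(q: QuestionEvents) -> float:
    """Combine the three event processes into a 0..1 engagement estimate.

    The 0.4/0.3/0.3 weights and the timing penalty are illustrative choices,
    not values reported in the papers.
    """
    # 1) Timing: penalize answers far faster or slower than the expected duration.
    ratio = q.elapsed_s / max(q.expected_s, 1e-6)
    timing = max(0.0, 1.0 - abs(ratio - 1.0) / 2.0)

    # 2) Accuracy: a correct answer is treated as weak evidence of on-task behavior.
    accuracy = 1.0 if q.correct else 0.3

    # 3) Function executions: fraction of logged actions that were task-relevant.
    total = q.proper_executions + q.stray_executions
    functions = q.proper_executions / total if total else 0.0

    return 0.4 * timing + 0.3 * accuracy + 0.3 * functions


if __name__ == "__main__":
    sample = QuestionEvents(elapsed_s=42.0, expected_s=60.0,
                            correct=True, proper_executions=5, stray_executions=1)
    print(f"estimated engagement: {engagement_score(sample):.2f}")

A tutoring agent built along these lines would compute such a score per question and trigger re-engagement strategies when it falls below a threshold; the abstracts note that this event-based signal avoids the computational and power cost of camera-based eye-gaze and head-pose tracking.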