Organizational Unit:
Institute for Robotics and Intelligent Machines (IRIM)


Publication Search Results

Now showing 1 - 6 of 6
  • Item
    Assessment of Engagement for Intelligent Educational Agents: A Pilot Study with Middle School Students
    (Georgia Institute of Technology, 2014) Brown, LaVonda ; Howard, Ayanna M.
Adaptive learning is an educational method that uses computers as an interactive teaching device. Intelligent tutoring systems, or educational agents, use adaptive learning techniques to adapt to each student's needs and learning style in order to individualize learning. Effective educational agents should accomplish two essential goals during the learning process: 1) monitor the student's engagement during the interaction, and 2) apply behavioral strategies to maintain the student's attention when engagement decreases. In this paper, we focus on the first objective, monitoring student engagement. Most educational agents do not monitor engagement explicitly, but rather assume engagement and adapt their interaction based on the student's responses to questions and tasks. A few advanced methods have begun to incorporate models of engagement through vision-based algorithms that assess behavioral cues such as eye gaze, head pose, gestures, and facial expressions. Unfortunately, these methods typically impose a heavy computational load, large memory/storage requirements, and high power consumption. In addition, these behavioral cues do not correlate well with achievement of high-level cognitive tasks. As an alternative, our proposed model of engagement uses physical events, such as keyboard and mouse events. This approach requires fewer resources and less power, making it well suited for mobile educational agents such as handheld tablets and robotic platforms. In this paper, we discuss our engagement model, which determines behavioral user state and correlates it with mouse and keyboard events. In particular, we observe three event processes: the total time required to answer a question, the accuracy of responses, and proper function executions.
We evaluate the correctness of our model through an investigation involving a middle-school after-school program, in which a 15-question math exam varying in cognitive difficulty is used for assessment. Eye-gaze and head-pose techniques provide the baseline metric of engagement. We conclude the investigation with a survey to gather each subject's perspective on their mental state after the exam. We found that our model of engagement is comparable to the eye-gaze and head-pose techniques for low-level cognitive tasks. When high-level cognitive thinking is required, our model is more accurate than the eye-gaze and head-pose techniques, because students' gazes become unfocused during questions that require deep thought, or they use outside aids such as counting on their fingers. The long intervals without eye contact between the student and the computer screen cause the aforementioned algorithms to incorrectly declare the subjects disengaged. Furthermore, the speed and validity of responses can help determine how well the student understands the material, which is confirmed by the survey responses and video observations. This information will later be used to integrate instructional scaffolding and adaptation into the educational agent.
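    The three event processes above (response time, response accuracy, and proper function executions) can be combined into a simple engagement estimate. The sketch below is illustrative only: the weights, timing model, and threshold are assumptions, not values from the paper.

    ```python
    from dataclasses import dataclass

    @dataclass
    class QuestionEvents:
        """Keyboard/mouse events logged while a student answers one question."""
        response_time_s: float   # total time required to answer the question
        correct: bool            # accuracy of the response
        valid_actions: int       # proper function executions
        invalid_actions: int     # stray or erroneous interactions

    def engagement_score(ev: QuestionEvents, expected_time_s: float) -> float:
        """Combine the three event processes into a 0..1 engagement estimate.

        The weights (0.4 / 0.4 / 0.2) are hypothetical placeholders.
        """
        # Timeliness: answering near the expected time scores high; very
        # long delays suggest distraction.
        timeliness = min(1.0, expected_time_s / max(ev.response_time_s, 1e-6))
        # Accuracy contributes directly.
        accuracy = 1.0 if ev.correct else 0.0
        # Fraction of interactions that were proper function executions.
        total = ev.valid_actions + ev.invalid_actions
        validity = ev.valid_actions / total if total else 0.0
        return 0.4 * timeliness + 0.4 * accuracy + 0.2 * validity

    def is_engaged(ev: QuestionEvents, expected_time_s: float,
                   threshold: float = 0.5) -> bool:
        """Flag a question interaction as engaged or disengaged."""
        return engagement_score(ev, expected_time_s) >= threshold
    ```

    A quick, correct answer with only valid interactions scores near 1.0, while a very slow, incorrect answer with stray clicks falls below the threshold.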
  • Item
    Terrain Reconstruction of Glacial Surfaces via Robotic Surveying Techniques
    (Georgia Institute of Technology, 2012-12) Williams, Stephen ; Parker, Lonnie T. ; Howard, Ayanna M.
The capability to monitor natural phenomena using mobile sensing benefits the Earth science community, given the potentially large impact that we, as humans, can have on naturally occurring processes. Observable phenomena in this category of interest range from static to dynamic in both time and space (e.g., temperature, humidity, and elevation). Such phenomena can be readily monitored using networks of mobile sensor nodes that scientists task to regions of interest. In our work, we home in on a very specific domain, elevation changes in glacial surfaces, to demonstrate a concept applicable to any spatially distributed phenomenon. Our work leverages sensing from a vision-based SLAM odometry system and the design of robotic surveying navigation rules to reconstruct scientific areas of interest, with the goal of monitoring elevation changes in glacial regions. We validate the output of our methodology and provide results showing that the reconstructed terrain error complies with accepted mapping standards in the scientific community.
  • Item
    Using Haptic and Auditory Interaction Tools to Engage Students with Visual Impairments in Robot Programming Activities
    (Georgia Institute of Technology, 2012-01) Howard, Ayanna M. ; Park, Chung Hyuk ; Remy, Sekou
The robotics field represents the integration of multiple facets of computer science and engineering. Robotics-based activities have been shown to encourage K-12 students to consider careers in computing and have even been adopted as part of the core computer-science curriculum at a number of universities. Unfortunately, for students with visual impairments, there are still inadequate opportunities for teaching basic computing concepts using a robotics-based curriculum. This outcome is generally due to the scarcity of accessible interfaces to educational robots and the unfamiliarity of teachers with alternative (e.g., nonvisual) teaching methods. As such, in this paper, we discuss the use of alternative interface modalities to engage students with visual impairments in robotics-based programming activities. We provide an overview of the interaction system and results from a pilot study that engaged nine middle-school students with visual impairments during a two-week summer camp.
  • Item
    An Integrated Sensing Approach for Entry, Descent, and Landing of a Robotic Spacecraft
    (Georgia Institute of Technology, 2011-01) Howard, Ayanna M. ; Jones, Brandon M. ; Serrano, Navid
We present an integrated sensing approach for enabling autonomous landing of a robotic spacecraft on a hazardous terrain surface; this approach is active throughout the spacecraft's descent profile. The methodology incorporates an image transformation algorithm to interpret temporal terrain imagery, perform real-time detection and avoidance of terrain hazards that may impede safe landing, and increase the accuracy of landing at a desired site of interest using landmark localization techniques. By integrating a linguistic rule-based engine with linear algebra and computer vision techniques, the approach addresses the inherent uncertainty in the hazard assessment process while ensuring the computational simplicity needed for real-time implementation during spacecraft descent. The proposed approach is able to identify new hazards as they emerge and also remember the locations of past hazards that might impede spacecraft landing. We provide details of the methodology in this paper and present simulation results of the approach applied to a representative Mars landing descent profile.
  • Item
    Calibration and Validation of Earth-Observing Sensors Using Deployable Surface-Based Sensor Networks
    (Georgia Institute of Technology, 2010-12) Williams, Stephen ; Parker, Lonnie T. ; Howard, Ayanna M.
Satellite-based instruments are now routinely used to map the surface of the globe or monitor weather conditions. However, these orbital measurements of ground-based quantities are heavily influenced by external factors, such as air moisture content or surface emissivity. Detailed atmospheric models are created to compensate for these factors, but the satellite system must still be tested over a wide variety of surface conditions to validate the instrumentation and correction model. Validation and correction are particularly important for arctic environments, as the unique surface properties of packed snow and ice are poorly modeled by any other terrain type. Currently, this process is human-intensive, requiring the coordinated collection of surface measurements over a number of years. A decentralized, autonomous sensor network is proposed that allows the collection of ground-based environmental measurements at a location and resolution optimal for the specific on-orbit sensor under investigation. A prototype sensor network has been constructed and fielded on a glacier in Alaska, illustrating the ability of such systems to properly collect and log sensor measurements, even in harsh arctic environments.
  • Item
    Developing Monocular Visual Odometry and Pose Estimation for Arctic Environments
    (Georgia Institute of Technology, 2010-03) Williams, Stephen ; Howard, Ayanna M.
Arctic regions present one of the harshest environments on Earth for people or mobile robots, yet many important scientific studies, particularly those involving climate change, require measurements from these areas. For the successful deployment of mobile sensors in the Arctic, a high-quality localization system is required. Although a global positioning system can provide coarse positioning (within several meters), it cannot provide any orientation information. A single-camera pose-estimation system is presented, based on visual odometry techniques, that is capable of operating in the feature-poor environments of the Arctic. To validate the system, a prototype rover was developed and fielded on a glacier in Alaska. The resulting pose estimates compare favorably to values obtained by hand-registering the same video sequence. Although pose errors do accumulate over time, these errors are typical of a standard odometry system but are obtained in an environment where standard odometry is not practical.
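    The drift behavior noted in the last abstract follows from how any odometry system, visual or wheel-based, builds its global pose: per-frame relative motion estimates are chained by matrix multiplication, so each step's error is carried into every later pose. A minimal planar (SE(2)) sketch of that integration, with hypothetical function names, is:

    ```python
    import numpy as np

    def se2(dx, dy, dtheta):
        """Homogeneous 2-D transform for one frame-to-frame motion estimate."""
        c, s = np.cos(dtheta), np.sin(dtheta)
        return np.array([[c, -s, dx],
                         [s,  c, dy],
                         [0.0, 0.0, 1.0]])

    def integrate_odometry(steps):
        """Chain per-frame (dx, dy, dtheta) estimates into a global pose.

        Returns (x, y, theta). Because each step is composed by
        multiplication, any error in one step propagates to all later
        poses, which is why odometry drift accumulates over time.
        """
        pose = np.eye(3)
        for dx, dy, dtheta in steps:
            pose = pose @ se2(dx, dy, dtheta)
        x, y = pose[0, 2], pose[1, 2]
        theta = np.arctan2(pose[1, 0], pose[0, 0])
        return x, y, theta
    ```

    With perfect per-frame estimates (e.g., four unit steps each followed by a 90-degree turn) the integrated pose returns exactly to the origin; with noisy estimates, the residual is the accumulated drift that the hand-registered video serves to measure.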