Person:
Howard, Ayanna M.


Publication Search Results

Now showing 1 - 10 of 28
  • Item
    Assessment of Engagement for Intelligent Educational Agents: A Pilot Study with Middle School Students
    (Georgia Institute of Technology, 2014) Brown, LaVonda ; Howard, Ayanna M.
    Adaptive learning is an educational method that uses computers as an interactive teaching device. Intelligent tutoring systems, or educational agents, use adaptive learning techniques to adapt to each student's needs and learning styles in order to individualize learning. Effective educational agents should accomplish two essential goals during the learning process: 1) monitor the student's engagement during the interaction, and 2) apply behavioral strategies to maintain the student's attention when engagement decreases. In this paper, we focus on the first objective, monitoring student engagement. Most educational agents do not monitor engagement explicitly, but rather assume engagement and adapt their interaction based on the student's responses to questions and tasks. A few advanced methods have begun to incorporate models of engagement through vision-based algorithms that assess behavioral cues such as eye gaze, head pose, gestures, and facial expressions. Unfortunately, these methods typically impose a heavy computational load, significant memory/storage requirements, and high power consumption. In addition, these behavioral cues do not correlate well with achievement of high-level cognitive tasks. As an alternative, our proposed model of engagement uses physical events, such as keyboard and mouse events. This approach requires fewer resources and lower power consumption, making it ideally suited for mobile educational agents such as handheld tablets and robotic platforms. In this paper, we discuss our engagement model, which uses techniques that determine behavioral user state and correlate these findings with mouse and keyboard events. In particular, we observe three event processes: the total time required to answer a question, the accuracy of responses, and proper function executions.
    We evaluate the correctness of our model through an investigation involving a middle-school after-school program, in which a 15-question math exam varying in cognitive difficulty is used for assessment. Eye-gaze and head-pose techniques serve as the baseline metric of engagement. We conclude the investigation with a survey to gather each subject's perspective on their mental state after the exam. We found that our model of engagement is comparable to the eye-gaze and head-pose techniques for low-level cognitive tasks. When high-level cognitive thinking is required, our model is more accurate than the eye-gaze and head-pose techniques because students' gazes become unfocused during questions requiring deep thought, or they use outside aids such as counting on their fingers. The long intervals without eye contact between the student and the computer screen cause the aforementioned algorithms to incorrectly declare the subjects disengaged. Furthermore, the speed and validity of responses help determine how well the student understands the material, which is confirmed through the survey responses and video observations. This information will later be used to integrate instructional scaffolding and adaptation into the educational agent.
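The three event processes above can be combined into a single engagement estimate. The sketch below is illustrative only, not the authors' implementation: the weighting, thresholds, and the `QuestionEvents` structure are assumptions made for the example.

```python
# Hypothetical sketch of an engagement estimate built from keyboard/mouse
# event features: response time, answer accuracy, and proper function
# executions. Weights and thresholds are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class QuestionEvents:
    response_time_s: float   # total time taken to answer the question
    correct: bool            # accuracy of the response
    proper_executions: int   # on-task UI actions (e.g., answer clicks)
    total_executions: int    # all UI actions, including off-task ones

def engagement_score(ev: QuestionEvents, expected_time_s: float) -> float:
    """Return a 0..1 engagement estimate for one question."""
    # Answers at or under the expected time score high; slow answers score low.
    time_factor = min(1.0, expected_time_s / max(ev.response_time_s, 1e-6))
    accuracy_factor = 1.0 if ev.correct else 0.5
    on_task = ev.proper_executions / max(ev.total_executions, 1)
    # Equal weighting is an arbitrary choice for illustration.
    return (time_factor + accuracy_factor + on_task) / 3.0
```

A real system would calibrate the expected time per question from pilot data, as the study's exam varies in cognitive difficulty.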
  • Item
    Terrain Reconstruction of Glacial Surfaces via Robotic Surveying Techniques
    (Georgia Institute of Technology, 2012-12) Williams, Stephen ; Parker, Lonnie T. ; Howard, Ayanna M.
    The capability to monitor natural phenomena using mobile sensing benefits the Earth science community, given the potentially large impact that we, as humans, can have on naturally occurring processes. Observable phenomena of interest range from static to dynamic in both time and space (e.g., temperature, humidity, and elevation). Such phenomena can be readily monitored using networks of mobile sensor nodes that scientists task to regions of interest. In our work, we home in on a very specific domain, elevation changes in glacial surfaces, to demonstrate a concept applicable to any spatially distributed phenomenon. Our work leverages the sensing of a vision-based SLAM odometry system and the design of robotic surveying navigation rules to reconstruct scientific areas of interest, with the goal of monitoring elevation changes in glacial regions. We validate the output of our methodology and present results showing that the reconstructed terrain error complies with accepted mapping standards in the scientific community.
  • Item
    Using Haptic and Auditory Interaction Tools to Engage Students with Visual Impairments in Robot Programming Activities
    (Georgia Institute of Technology, 2012-01) Howard, Ayanna M. ; Park, Chung Hyuk ; Remy, Sekou
    The robotics field represents the integration of multiple facets of computer science and engineering. Robotics-based activities have been shown to encourage K-12 students to consider careers in computing and have even been adopted as part of the core computer-science curriculum at a number of universities. Unfortunately, for students with visual impairments, there are still too few opportunities for learning basic computing concepts through a robotics-based curriculum. This shortfall is generally due to the scarcity of accessible interfaces to educational robots and teachers' unfamiliarity with alternative (e.g., nonvisual) teaching methods. As such, in this paper, we discuss the use of alternative interface modalities to engage students with visual impairments in robotics-based programming activities. We provide an overview of the interaction system and results from a pilot study that engaged nine middle-school students with visual impairments during a two-week summer camp.
  • Item
    An Integrated Sensing Approach for Entry, Descent, and Landing of a Robotic Spacecraft
    (Georgia Institute of Technology, 2011-01) Howard, Ayanna M. ; Jones, Brandon M. ; Serrano, Navid
    We present an integrated sensing approach for enabling autonomous landing of a robotic spacecraft on a hazardous terrain surface; the approach is active throughout the spacecraft's descent profile. The methodology incorporates an image transformation algorithm to interpret temporal imagery of the terrain, perform real-time detection and avoidance of terrain hazards that may impede safe landing, and increase the accuracy of landing at a desired site of interest using landmark localization techniques. By integrating a linguistic rule-based engine with linear algebra and computer vision techniques, the approach addresses the inherent uncertainty in the hazard assessment process while ensuring the computational simplicity needed for real-time implementation during descent. The proposed approach can identify new hazards as they emerge and remember the locations of past hazards that might impede landing. We provide details of the methodology and present simulation results of the approach applied to a representative Mars landing descent profile.
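A linguistic rule-based hazard assessment of the kind described above can be sketched in a few lines. The abstract does not give the actual rules or membership functions, so the inputs (slope, roughness), thresholds, and the max-as-fuzzy-OR combination below are illustrative assumptions only.

```python
# Hedged sketch of a linguistic rule-based hazard assessment. Each terrain
# cell's slope and roughness are fuzzified into a "high" membership value
# and combined into a single hazard score. All thresholds are invented.
def fuzzy_high(x: float, lo: float, hi: float) -> float:
    """Membership in 'high', rising linearly from lo to hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def hazard_score(slope_deg: float, roughness_m: float) -> float:
    """Rule: a cell is hazardous if slope is high OR roughness is high."""
    high_slope = fuzzy_high(slope_deg, 10.0, 30.0)
    high_rough = fuzzy_high(roughness_m, 0.1, 0.5)
    return max(high_slope, high_rough)  # fuzzy OR via max
```

This style of rule keeps the per-cell cost to a handful of comparisons, which is why linguistic engines suit real-time evaluation during descent.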
  • Item
    Prospects of Implementing a Vhand Glove as a Robotic Controller
    (Georgia Institute of Technology, 2011) Chidi, Christopher ; Howard, Ayanna M.
    There are numerous approaches and systems for implementing a robot controller. This project investigates the potential of using the VHand Motion Capturing Glove, developed by DGTech, as a means of controlling a programmable robot. A GUI-based application was used to identify and then display the extended or closed state of each finger on the gloved hand. A calibration algorithm was added to the existing application source code to increase the precision of recognizing extended or closed finger positions and to improve the efficiency of the hand-signal interpretation. Furthermore, the scan rate and sample size of the bit signal coming from the glove were adjusted to improve the accuracy of recognizing dynamic hand signals, or defined signals containing sequential finger positions. An attempt was made to synchronize the VHand glove signals with a Scribbler robot by writing the recognized hand signals to a text file that was simultaneously read by a Python-based application. The Python application then transmitted commands to the Scribbler robot via a Bluetooth serial link. However, there was difficulty achieving real-time communication between the VHand glove and the Scribbler robot, most likely due to unidentified runtime errors in the VHand signal-interpretation code.
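The file-mediated bridge described above can be sketched as follows. This is a hypothetical reconstruction, not the project's code: the signal names, command strings, and file path are invented, and the actual Bluetooth transport to the Scribbler is stubbed out.

```python
# Hypothetical sketch of the Python bridge: poll the text file written by
# the glove application and map each recognized hand signal to a Scribbler
# motion command. Names are illustrative; the Bluetooth link is omitted.
from typing import Optional

SIGNAL_TO_COMMAND = {
    "fist": "stop",
    "open_hand": "forward",
    "point_left": "turn_left",
    "point_right": "turn_right",
}

def read_latest_signal(path: str) -> Optional[str]:
    """Return the most recently written hand signal, if any."""
    try:
        with open(path) as f:
            lines = [ln.strip() for ln in f if ln.strip()]
        return lines[-1] if lines else None
    except FileNotFoundError:
        return None

def signal_to_command(signal: str) -> str:
    # Unknown or misrecognized signals map to a safe stop.
    return SIGNAL_TO_COMMAND.get(signal, "stop")
```

Polling a shared file adds latency and race conditions, which is consistent with the real-time difficulties the project reports; a direct serial or socket connection would avoid the intermediate file.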
  • Item
    Calibration and Validation of Earth-Observing Sensors Using Deployable Surface-Based Sensor Networks
    (Georgia Institute of Technology, 2010-12) Williams, Stephen ; Parker, Lonnie T. ; Howard, Ayanna M.
    Satellite-based instruments are now routinely used to map the surface of the globe and monitor weather conditions. However, these orbital measurements of ground-based quantities are heavily influenced by external factors, such as air moisture content and surface emissivity. Detailed atmospheric models are created to compensate for these factors, but the satellite system must still be tested over a wide variety of surface conditions to validate the instrumentation and the correction model. Validation and correction are particularly important for arctic environments, as the unique surface properties of packed snow and ice are poorly modeled by any other terrain type. Currently, this process is human-intensive, requiring the coordinated collection of surface measurements over a number of years. We propose a decentralized, autonomous sensor network that allows the collection of ground-based environmental measurements at a location and resolution optimal for the specific on-orbit sensor under investigation. A prototype sensor network has been constructed and fielded on a glacier in Alaska, illustrating the ability of such systems to properly collect and log sensor measurements, even in harsh arctic environments.
  • Item
    Developing Monocular Visual Odometry and Pose Estimation for Arctic Environments
    (Georgia Institute of Technology, 2010-03) Williams, Stephen ; Howard, Ayanna M.
    Arctic regions present one of the harshest environments on Earth for people or mobile robots, yet many important scientific studies, particularly those involving climate change, require measurements from these areas. Successful deployment of mobile sensors in the Arctic requires a high-quality localization system. Although a global positioning system can provide coarse positioning (within several meters), it cannot provide any orientation information. A single-camera pose-estimation system is presented, based on visual odometry techniques, that is capable of operating in the feature-poor environments of the Arctic. To validate the system, a prototype rover was developed and fielded on a glacier in Alaska. The resulting pose estimates compare favorably to values obtained by hand-registering the same video sequence. Although pose errors do accumulate over time, these errors are typical of a standard odometry system, but here they are obtained in an environment where standard odometry is not practical.
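At the core of any visual odometry pipeline is the accumulation of frame-to-frame motion estimates into a global pose, which also explains why errors grow over time: each relative estimate's error compounds through the chain. The sketch below shows only that accumulation step; the per-frame rotations and translations are assumed inputs (in practice they would come from feature matching, e.g. OpenCV's `findEssentialMat`/`recoverPose`), and monocular odometry recovers translation only up to an external scale factor.

```python
# Minimal sketch (NumPy only) of the pose-accumulation step of a
# visual-odometry pipeline. Frame-to-frame (R, t) estimates are assumed
# given; monocular translation is up to scale, hence the scale argument.
import numpy as np

def rot_z(theta: float) -> np.ndarray:
    """Rotation by theta radians about the camera's z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def accumulate_poses(rel_motions, scale: float = 1.0):
    """Compose frame-to-frame (R, t) estimates into global positions.

    rel_motions: iterable of (R, t), with R a 3x3 rotation and t a unit
    translation direction expressed in the previous camera frame.
    Returns the list of global camera positions.
    """
    R_g = np.eye(3)
    t_g = np.zeros(3)
    positions = [t_g.copy()]
    for R, t in rel_motions:
        t_g = t_g + scale * (R_g @ t)  # move along the current heading
        R_g = R_g @ R                  # then update the orientation
        positions.append(t_g.copy())
    return positions
```

Because each step multiplies into the running pose, a small bias in every frame-to-frame estimate produces the slow drift typical of odometry systems.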
  • Item
    Probabilistic Analysis of Market-Based Algorithms for Initial Robotic Formations
    (Georgia Institute of Technology, 2009-06) Viguria Jimenez, Luis Antidio ; Howard, Ayanna M.
    In this paper, we present a probabilistic approach for analyzing market-based algorithms applied to the initial formation problem. These algorithms determine an assignment scheme for associating individual robots with the goal positions needed to achieve a desired formation while minimizing an objective function. The main contribution of this paper is a method that calculates the expected value of the objective function, which allows us to theoretically estimate and compare the performance of two task allocation algorithms. This probabilistic analysis is applied in different runtime scenarios. We validate our approach through both simulations and experiments with real robots.
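The quantity being analyzed, the expected value of the objective function under a given allocation scheme, can also be estimated empirically. The sketch below is not the paper's analytical method: it Monte Carlo-estimates the expected total travel distance of a simple greedy single-item auction with robots and goals drawn uniformly in the unit square, all of which are assumptions for illustration.

```python
# Illustrative Monte Carlo estimate of the expected objective value
# (total travel distance) of a greedy auction-style assignment. The
# unit-square setup and greedy scheme are assumptions for the example.
import random

def greedy_assignment_cost(robots, goals):
    """Greedy auction: repeatedly award the cheapest robot-goal pair."""
    robots, goals = list(robots), list(goals)
    total = 0.0
    while goals:
        r, g, d = min(
            ((r, g, ((r[0] - g[0]) ** 2 + (r[1] - g[1]) ** 2) ** 0.5)
             for r in robots for g in goals),
            key=lambda x: x[2],
        )
        total += d
        robots.remove(r)
        goals.remove(g)
    return total

def expected_cost(n_robots: int, n_trials: int = 2000, seed: int = 0):
    """Average assignment cost over random robot/goal configurations."""
    rng = random.Random(seed)
    rand_pts = lambda n: [(rng.random(), rng.random()) for _ in range(n)]
    return sum(
        greedy_assignment_cost(rand_pts(n_robots), rand_pts(n_robots))
        for _ in range(n_trials)
    ) / n_trials
```

An analytical expression for this expectation, as the paper derives, removes the sampling cost and lets two allocation schemes be compared without running either one.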
  • Item
    Automatic Generation of Persistent Formations for Multi-Agent Networks Under Range Constraints
    (Georgia Institute of Technology, 2009-06) Smith, Brian Stephen ; Egerstedt, Magnus B. ; Howard, Ayanna M.
    In this paper we present a collection of graph-based methods for determining whether a team of mobile robots, subject to sensor and communication range constraints, can persistently achieve a specified formation. By this we mean that the formation, once achieved, will be preserved by the direct maintenance of the smallest subset of all possible pairwise inter-agent distances. In this context, formations are defined by sets of points separated by distances corresponding to desired inter-agent distances. Further, we provide graph operations to describe agent interactions that implement a given formation, as well as an algorithm that, given a persistent formation, automatically generates a sequence of such operations. Experimental results are presented that illustrate the operation of the proposed methods on real robot platforms.
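One concrete consequence of maintaining the smallest sufficient subset of pairwise distances: in the plane, a minimally persistent formation on n ≥ 2 agents maintains exactly 2n - 3 distance constraints. The helper below checks only that necessary edge count for a candidate constraint set; it is a small illustration, not the paper's persistence-checking or operation-generation algorithm.

```python
# Illustrative check of the 2n - 3 edge-count condition that a minimally
# persistent planar formation graph satisfies. This is necessary but not
# sufficient for persistence; it is not the paper's algorithm.
def has_minimal_constraint_count(n_agents: int, constraints) -> bool:
    """constraints: iterable of (i, j) pairs of agent indices."""
    if n_agents < 2:
        return False
    # De-duplicate, treating (i, j) and (j, i) as the same constraint.
    unique = {tuple(sorted(c)) for c in constraints}
    return len(unique) == 2 * n_agents - 3
```

For three agents this gives the familiar triangle (3 constraints); adding a fourth agent requires exactly two more constraints to keep the count minimal.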
  • Item
    Automatic Formation Deployment of Decentralized Heterogeneous Multiple-Robot Networks with Limited Sensing Capabilities
    (Georgia Institute of Technology, 2009-05) Smith, Brian Stephen ; Wang, Jiuguang ; Egerstedt, Magnus B. ; Howard, Ayanna M.
    Heterogeneous multi-robot networks require novel tools for applications that involve achieving and maintaining formations, as is the case when distributing sensing devices with heterogeneous mobile sensor networks. Here, we consider a heterogeneous network of mobile robots that have a limited range in which they can estimate the relative positions of other network members. The network is also heterogeneous in that only a subset of the robots has localization ability. We develop a method for automatically configuring the heterogeneous network to deploy a desired formation at a desired location. This method guarantees that network members without localization are deployed to the correct locations in the environment for the sensor placement.