Organizational Unit:
Institute for Robotics and Intelligent Machines (IRIM)


Publication Search Results

Now showing 1 - 10 of 40
  • Item
    A Practical Approach for Recognizing Eating Moments With Wrist-Mounted Inertial Sensing
    (Georgia Institute of Technology, 2015) Thomaz, Edison ; Essa, Irfan ; Abowd, Gregory D.
    Recognizing when eating activities take place is one of the key challenges in automated food intake monitoring. Despite progress over the years, most proposed approaches have been largely impractical for everyday usage, requiring multiple on-body sensors or specialized devices such as neck collars for swallow detection. In this paper, we describe the implementation and evaluation of an approach for inferring eating moments based on 3-axis accelerometry collected with a popular off-the-shelf smartwatch. Trained with data collected in a semi-controlled laboratory setting with 20 subjects, our system recognized eating moments in two free-living condition studies (7 participants, 1 day; 1 participant, 31 days), with F-scores of 76.1% (66.7% Precision, 88.8% Recall) and 71.3% (65.2% Precision, 78.6% Recall). This work represents a contribution towards the implementation of a practical, automated system for everyday food intake monitoring, with applicability in areas ranging from health research to food journaling.
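    As a quick check, the reported F-scores follow from the stated precision and recall values via the standard harmonic-mean formula; a minimal sketch in Python, using the numbers taken from the abstract above:
        def f1_score(precision, recall):
            # F-score as the harmonic mean of precision and recall
            return 2 * precision * recall / (precision + recall)

        print(f1_score(0.667, 0.888))  # ~0.762, reported as 76.1% (likely computed from unrounded values)
        print(f1_score(0.652, 0.786))  # ~0.713, reported as 71.3%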
  • Item
    Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study
    (Georgia Institute of Technology, 2015) Thomaz, Edison ; Zhang, Cheng ; Essa, Irfan ; Abowd, Gregory D.
    Dietary self-monitoring has been shown to be an effective method for weight loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in the wild where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device during one day for an average of 5 hours while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation, and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities, in contrast to systems for automated dietary assessment based on specialized sensors.
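    The person-independent evaluation mentioned above corresponds to a leave-one-subject-out protocol; the sketch below illustrates that protocol with scikit-learn, using placeholder features, labels, and classifier rather than the paper's actual pipeline:
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 13))            # placeholder audio features per window
        y = rng.integers(0, 2, size=200)          # 1 = meal eating, 0 = other activity
        subjects = rng.integers(0, 20, size=200)  # 20 participants, as in the study

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut(), scoring="f1")
        print("person-independent F1: %.3f +/- %.3f" % (scores.mean(), scores.std()))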
  • Item
    Inferring Object Properties from Incidental Contact with a Tactile-Sensing Forearm
    (Georgia Institute of Technology, 2014-09) Bhattacharjee, Tapomayukh ; Rehg, James M. ; Kemp, Charles C.
    Whole-arm tactile sensing enables a robot to sense properties of contact across its entire arm. By using this large sensing area, a robot has the potential to acquire useful information from incidental contact that occurs while performing a task. Within this paper, we demonstrate that data-driven methods can be used to infer mechanical properties of objects from incidental contact with a robot’s forearm. We collected data from a tactile-sensing forearm as it made contact with various objects during a simple reaching motion. We then used hidden Markov models (HMMs) to infer two object properties (rigid vs. soft and fixed vs. movable) based on low-dimensional features of time-varying tactile sensor data (maximum force, contact area, and contact motion). A key issue is the extent to which data-driven methods can generalize to robot actions that differ from those used during training. To investigate this issue, we developed an idealized mechanical model of a robot with a compliant joint making contact with an object. This model provides intuition for the classification problem. We also conducted tests in which we varied the robot arm’s velocity and joint stiffness. We found that, in contrast to our previous methods [1], multivariate HMMs achieved high cross-validation accuracy and successfully generalized what they had learned to new robot motions with distinct velocities and joint stiffnesses.
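    The classification scheme described above can be sketched as one HMM per object class, with an unseen contact sequence assigned to the class whose model gives the highest log-likelihood; the snippet below (using the hmmlearn package, with arbitrary state counts) is an illustration of that idea rather than the authors' implementation:
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        def train_class_hmm(sequences, n_states=3):
            # sequences: list of (T_i, 3) arrays of [max force, contact area, contact motion]
            X = np.vstack(sequences)
            lengths = [len(s) for s in sequences]
            model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            model.fit(X, lengths)
            return model

        def classify(models, sequence):
            # models: dict mapping class label (e.g. "rigid-fixed") -> trained HMM
            return max(models, key=lambda label: models[label].score(sequence))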
  • Item
    Trend and Bounds for Error Growth in Controlled Lagrangian Particle Tracking
    (Georgia Institute of Technology, 2012-12-18) Szwaykowska, Klementyna ; Zhang, Fumin
    This paper establishes the method of controlled Lagrangian particle tracking (CLPT) to analyse the offsets between the physical positions of marine robots in the ocean and the simulated positions of controlled particles in an ocean model. The offset, which we term the CLPT error, exhibits distinct characteristics not previously seen in drifters and floats that cannot be actively controlled. The CLPT error growth over time is exponential until it reaches a turning point that depends only on the resolution of the ocean model. After this turning point, the error growth slows significantly to polynomial functions of time. In the ideal case, a theoretical upper bound on the exponential growth of the CLPT error can be derived. These characteristics are proved theoretically, verified via simulation, and justified with ocean experimental data. The method of CLPT may be applied to improve the accuracy of ocean circulation models and the performance of navigation algorithms for marine robots.
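    The growth behaviour described above can be read as a piecewise model, exponential up to a turning time and polynomial afterwards; the sketch below is an illustrative rendering of that description, with the rate, exponent, and turning time as placeholder parameters rather than values from the paper:
        import numpy as np

        def clpt_error(t, e0=0.1, a=0.5, p=1.5, t_star=10.0):
            # Exponential regime for t < t_star, polynomial regime afterwards,
            # matched at t_star so the curve is continuous (all parameters assumed).
            t = np.asarray(t, dtype=float)
            e_star = e0 * np.exp(a * t_star)
            return np.where(t < t_star, e0 * np.exp(a * t), e_star * (t / t_star) ** p)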
  • Item
    Keyframe-based Learning from Demonstration Method and Evaluation
    (Georgia Institute of Technology, 2012-06) Akgun, Baris ; Cakmak, Maya ; Jiang, Karl ; Thomaz, Andrea L.
    We present a framework for learning skills from novel types of demonstrations that have been shown to be desirable from a human-robot interaction perspective. Our approach, Keyframe-based Learning from Demonstration (KLfD), takes demonstrations that consist of keyframes: a sparse set of points in the state space that produces the intended skill when visited in sequence. Conventional trajectory demonstrations, or a hybrid of the two, are also handled by KLfD through a conversion to keyframes. Our method produces a skill model that consists of an ordered set of keyframe clusters, which we call Sequential Pose Distributions (SPD). The skill is reproduced by splining between clusters. We present results from two domains: mouse gestures in 2D and scooping, pouring, and placing skills on a humanoid robot. KLfD has performance similar to existing LfD techniques when applied to conventional trajectory demonstrations. Additionally, we demonstrate that KLfD may be preferable when the demonstration type is suited to the skill.
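    The "spline between keyframe clusters" reproduction step can be pictured as clustering the demonstrated keyframes, ordering the cluster means, and fitting a spline through them; the sketch below (scikit-learn plus SciPy, with arbitrary cluster counts) illustrates that idea and is not the authors' implementation:
        import numpy as np
        from sklearn.cluster import KMeans
        from scipy.interpolate import CubicSpline

        def reproduce_skill(keyframes, n_clusters=5, n_samples=100):
            # keyframes: (N, D) array of demonstrated keyframe poses, roughly in temporal order
            km = KMeans(n_clusters=n_clusters, n_init=10).fit(keyframes)
            # Order cluster centers by the mean position of their member keyframes in the demonstrations
            order = np.argsort([np.mean(np.where(km.labels_ == c)[0]) for c in range(n_clusters)])
            centers = km.cluster_centers_[order]
            # Spline through the ordered cluster centers to obtain a smooth trajectory
            spline = CubicSpline(np.linspace(0, 1, n_clusters), centers, axis=0)
            return spline(np.linspace(0, 1, n_samples))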
  • Item
    Multi-Cue Contingency Detection
    (Georgia Institute of Technology, 2012-04) Lee, Jinhan ; Chao, Crystal ; Bobick, Aaron F. ; Thomaz, Andrea L.
    The ability to detect a human's contingent response is an essential skill for a social robot attempting to engage new interaction partners or maintain ongoing turn-taking interactions. Prior work on contingency detection focuses on single cues from isolated channels, such as changes in gaze, motion, or sound. We propose a framework that integrates multiple cues for detecting contingency from multimodal sensor data in human-robot interaction scenarios. We describe three levels of integration and discuss our method for performing sensor fusion at each of these levels. We perform a Wizard-of-Oz data collection experiment in a turn-taking scenario in which our humanoid robot plays the turn-taking imitation game “Simon says” with human partners. Using this data set, which includes motion and body pose cues from a depth and color image and audio cues from a microphone, we evaluate our contingency detection module with the proposed integration mechanisms and show gains in accuracy of our multi-cue approach over single-cue contingency detection. We show the importance of selecting the appropriate level of cue integration as well as the implications of varying the referent event parameter.
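    One of the integration levels discussed above is decision-level fusion of per-cue contingency scores; the sketch below shows a weighted combination of that kind, with cue names, weights, and threshold as placeholders rather than the paper's values:
        def fuse_contingency(scores, weights=None, threshold=0.5):
            # scores: dict mapping cue name -> contingency probability in [0, 1],
            # e.g. {"motion": 0.8, "body_pose": 0.6, "audio": 0.3}
            if weights is None:
                weights = {cue: 1.0 for cue in scores}
            total = sum(weights[cue] for cue in scores)
            fused = sum(weights[cue] * scores[cue] for cue in scores) / total
            return fused >= threshold, fused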
  • Item
    Monitoring Dressing Activity Failures Through RFID and Video
    (Georgia Institute of Technology, 2012) Matic, Aleksandar ; Mehta, P. ; Rehg, James M. ; Osmani, Venet ; Mayora, Oscar
  • Item
    Automated Macular Pathology Diagnosis in Retinal OCT Images Using Multi-Scale Spatial Pyramid and Local Binary Patterns in Texture and Shape Encoding
    (Georgia Institute of Technology, 2011-10) Liu, Yu-Ying ; Chen, Mei ; Ishikawa, Hiroshi ; Wollstein, Gadi ; Schuman, Joel S. ; Rehg, James M.
    We address a novel problem domain in the analysis of optical coherence tomography (OCT) images: the diagnosis of multiple macular pathologies in retinal OCT images. The goal is to identify the presence of normal macula and each of three types of macular pathologies, namely, macular edema, macular hole, and age-related macular degeneration, in the OCT slice centered at the fovea. We use a machine learning approach based on global image descriptors formed from a multi-scale spatial pyramid. Our local features are dimension-reduced Local Binary Pattern histograms, which are capable of encoding texture and shape information in retinal OCT images and their edge maps, respectively. Our representation operates at multiple spatial scales and granularities, leading to robust performance. We use 2-class Support Vector Machine classifiers to identify the presence of normal macula and each of the three pathologies. To further discriminate sub-types within a pathology, we also build a classifier to differentiate full-thickness holes from pseudo-holes within the macular hole category. We conduct extensive experiments on a large dataset of 326 OCT scans from 136 subjects. The results show that the proposed method is very effective (all AUC > 0.93).
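    The feature pipeline described above, LBP histograms pooled over a multi-scale spatial pyramid and classified with a 2-class SVM, can be sketched as follows; the LBP parameters, pyramid depth, and classifier settings here are arbitrary illustrations (scikit-image and scikit-learn), not the paper's configuration:
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def lbp_pyramid_features(image, levels=3, n_points=8, radius=1):
            # Uniform LBP map of a grayscale OCT slice, then per-cell histograms
            # over 1x1, 2x2, 4x4, ... grids, concatenated into one descriptor.
            lbp = local_binary_pattern(image, n_points, radius, method="uniform")
            n_bins = n_points + 2
            feats = []
            for level in range(levels):
                cells = 2 ** level
                for block in np.array_split(lbp, cells, axis=0):
                    for cell in np.array_split(block, cells, axis=1):
                        hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins), density=True)
                        feats.append(hist)
            return np.concatenate(feats)

        # X = np.stack([lbp_pyramid_features(img) for img in slices]); clf = SVC(kernel="linear").fit(X, y)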
  • Item
    Dynamic Chess: Strategic Planning for Robot Motion
    (Georgia Institute of Technology, 2011-05) Kunz, Tobias ; Kingston, Peter ; Stilman, Mike ; Egerstedt, Magnus B.
    We introduce and experimentally validate a novel algorithmic model for physical human-robot interaction with hybrid dynamics. Our computational solutions are complementary to passive and compliant hardware. We focus on the case where human motion can be predicted. In these cases, the robot can select optimal motions in response to human actions and maximize safety. By representing the domain as a Markov Game, we enable the robot to not only react to the human but also to construct an infinite horizon optimal policy of actions and responses. Experimentally, we apply our model to simulated robot sword defense. Our approach enables a simulated 7-DOF robot arm to block known attacks in any sequence. We generate optimized blocks and apply game theoretic tools to choose the best action for the defender in the presence of an intelligent adversary.
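    The infinite horizon optimal policy over a Markov Game mentioned above is typically computed by value iteration over joint robot and human actions; the sketch below uses a pure-strategy maximin backup for simplicity (a full Markov-game solution allows mixed strategies via a linear program) and is a generic illustration, not the paper's sword-defense model:
        import numpy as np

        def markov_game_value_iteration(n_states, n_robot, n_human, P, R, gamma=0.95, iters=500):
            # P[s][a][d]: list of (prob, next_state) pairs; R[s][a][d]: robot reward
            V = np.zeros(n_states)
            for _ in range(iters):
                V_new = np.empty(n_states)
                for s in range(n_states):
                    Q = np.zeros((n_robot, n_human))
                    for a in range(n_robot):
                        for d in range(n_human):
                            Q[a, d] = R[s][a][d] + gamma * sum(p * V[s2] for p, s2 in P[s][a][d])
                    V_new[s] = Q.min(axis=1).max()  # robot maximizes its worst case over human actions
                V = V_new
            return V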
  • Item
    Biodynamic Feedthrough Compensation and Experimental Results Using a Backhoe
    (Georgia Institute of Technology, 2011-03) Humphreys, Heather C. ; Book, Wayne J. ; Huggins, James D.
    In some operator-controlled machines, motion of the controlled machine excites motion of the human operator, which is fed back into the control device, causing unwanted input and sometimes instability; this phenomenon is termed biodynamic feedthrough. In the operation of backhoes and excavators, biodynamic feedthrough degrades control performance. This work utilizes a previously developed advanced backhoe user interface that provides coordinated position control with haptic feedback through a SensAble Omni six degree-of-freedom haptic display device. Feedback from backhoe user interface designers and our own experiments indicate that biodynamic feedthrough produces undesirable oscillations in output with conventionally controlled backhoes and excavators, and that it is an even greater problem with this advanced user interface. Results indicate that the coordinated control provides more intuitive operation and that the haptic feedback relays meaningful information back to the user, but the biodynamic feedthrough problem must be overcome for this improved interface to be practical. To reduce model complexity, the system is limited to a single degree of freedom, using fore-aft motion only. This paper investigates which controller-based methods of compensating for biodynamic feedthrough are most effective in backhoe operation, and how they can be implemented and tested with human operators.
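    One simple controller-side illustration of attenuating biodynamic feedthrough is to notch-filter the operator's fore-aft command around the dominant oscillation frequency before it reaches the hydraulics; the sketch below is a hypothetical example of that general idea (the frequency, Q factor, and sample rate are assumed), not the compensation method developed in the paper:
        from scipy.signal import iirnotch, lfilter

        def notch_command_filter(command, f_osc=2.0, fs=100.0, q=5.0):
            # command: sampled joystick/haptic-device displacement signal
            # f_osc:   assumed dominant biodynamic oscillation frequency in Hz
            # fs:      control-loop sample rate in Hz
            b, a = iirnotch(f_osc, q, fs=fs)
            return lfilter(b, a, command)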