Organizational Unit:
Institute for Robotics and Intelligent Machines (IRIM)

Publication Search Results

Now showing 1 - 10 of 1093
  • Item
    Pilot Study For Examining Human-Robot Trust In Healthcare Interventions Involving Sensitive Personal Information
    (Georgia Institute of Technology, 2017-07) Xu, Jin ; Howard, Ayanna M.
    Socially interactive humanoid robots have been widely used in physical therapy and rehabilitation for children with motor disabilities. Previous studies have shown that embedding human-like behavior in a robotic playmate improves the efficacy of physical therapy through corrective feedback. Understanding trust in such scenarios is especially important, since the robot's behavior affects the outcome of the interaction through changes in trust, and thus affects rehabilitation performance. The objective of this pilot study was to examine aspects of trust between humans and socially interactive humanoid robots when the robots provide incorrect personal information about them. A between-subject experiment was conducted with eight participants. Each participant was randomly assigned to one of two conditions: 1) reliable robot or 2) faulty robot. Survey responses about trust were collected after interacting with the robot. Results indicate a trend showing that humans will trust a socially interactive robot with their personal information, even if the robot makes a mistake. These results can provide insights into the development of a robotic therapy coach and also motivate future studies examining elements of human-robot trust in different healthcare scenarios.
  • Item
    The benefits of other-oriented robot deception in human-robot interaction
    (Georgia Institute of Technology, 2017-04-04) Shim, Jaeeun
    Deception is an essential social behavior for humans, and we can observe human deceptive behaviors in a variety of contexts including sports, culture, education, war, and everyday life. Deception is also used for survival by animals and even by plants. These observations suggest that deception is a general and essential behavior across species, which raises an interesting research question: can deception be an essential capability for robots, especially social robots? Motivated by this question, this dissertation develops a robot's deception capabilities, especially in human-robot interaction (HRI) situations. Specifically, the goal of this dissertation is to develop a social robot's deceptive behaviors that can produce benefits for the deceived humans (other-oriented robot deception). To achieve other-oriented robot deception, several scientific contributions were made in this dissertation. A novel taxonomy of robot deception was defined, and a general computational model for a robot's deceptive behaviors was developed based on criminological law. Appropriate HRI contexts in which a robot's other-oriented deception can generate benefits were explored, a methodology for evaluating a robot's other-oriented deception in those contexts was designed, and studies were conducted with human subjects. Finally, the ethical implications of other-oriented robot deception were also explored and thoughtfully discussed.
  • Item
    Selfie-Presentation in Everyday Life: A Large-scale Characterization of Selfie Contexts on Instagram
    (Georgia Institute of Technology, 2017) Deeb-Swihart, Julia ; Polack, Christopher ; Gilbert, Eric ; Essa, Irfan
    Carefully managing the presentation of self via technology is a core practice on all modern social media platforms. Recently, selfies have emerged as a new, pervasive genre of identity performance. In many ways unique, selfies bring us full-circle to Goffman—blending the online and offline selves together. In this paper, we take an empirical, Goffman-inspired look at the phenomenon of selfies. We report a large-scale, mixed-method analysis of the categories in which selfies appear on Instagram—an online community comprising over 400M people. Applying computer vision and network analysis techniques to 2.5M selfies, we present a typology of emergent selfie categories which represent emphasized identity statements. To the best of our knowledge, this is the first large-scale, empirical research on selfies. We conclude, contrary to common portrayals in the press, that selfies are really quite ordinary: they project identity signals such as wealth, health and physical attractiveness common to many online media, and to offline life.
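    As a rough illustration of the mixed computer-vision and quantitative analysis described above, the hypothetical sketch below clusters per-image feature vectors and then summarizes each emergent cluster by its most frequent tags. The feature source, the tag data, and the cluster count are invented for demonstration and are not the paper's actual method.

    # Hypothetical pipeline sketch: cluster image features, describe clusters by tags.
    import numpy as np
    from collections import Counter
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    features = rng.normal(size=(1000, 128))   # stand-in for per-image CNN features
    tags = [["#gym", "#fitness"] if i % 2 else ["#ootd", "#style"] for i in range(1000)]

    kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(features)

    # Characterize each emergent cluster by the tags that co-occur with it most often.
    for c in range(kmeans.n_clusters):
        cluster_tags = Counter(t for i, ts in enumerate(tags)
                               if kmeans.labels_[i] == c for t in ts)
        print(c, cluster_tags.most_common(2))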
  • Item
    Towards Using Visual Attributes to Infer Image Sentiment of Social Events
    (Georgia Institute of Technology, 2017) Ahsan, Unaiza ; De Choudhury, Munmun ; Essa, Irfan
    Widespread and pervasive adoption of smartphones has led to instant sharing of photographs that capture events ranging from mundane to life-altering happenings. We propose to capture the sentiment of such social event images by leveraging their visual content. Our method extracts an intermediate visual representation of social event images based on the visual attributes that occur in the images, going beyond sentiment-specific attributes. We map the top predicted attributes to sentiments and extract the dominant emotion associated with a picture of a social event. Unlike recent approaches, our method generalizes to a variety of social events and even to unseen events, which are not available at training time. We demonstrate the effectiveness of our approach on a challenging social event image dataset, and our method outperforms state-of-the-art approaches for classifying complex event images into sentiments.
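    The hypothetical sketch below illustrates the attribute-to-sentiment mapping step described above: take the top predicted visual attributes for an event image and vote them into a dominant emotion. The attribute names, the attribute-to-sentiment lookup, and the example scores are illustrative assumptions, not the paper's trained model.

    # Illustrative sketch: map top predicted visual attributes to a dominant sentiment.
    from collections import Counter

    # Hypothetical attribute -> sentiment lookup table.
    ATTRIBUTE_SENTIMENT = {
        "smiling_faces": "joy",
        "balloons": "joy",
        "candles": "calm",
        "crowd": "excitement",
        "rubble": "sadness",
        "dark_scene": "fear",
    }

    def dominant_sentiment(attribute_scores, top_k=3):
        """Pick the sentiment implied by the top-k predicted attributes.

        attribute_scores: dict mapping attribute name -> predicted probability,
        e.g. the output of an attribute classifier run on one image.
        """
        top = sorted(attribute_scores, key=attribute_scores.get, reverse=True)[:top_k]
        votes = Counter(ATTRIBUTE_SENTIMENT[a] for a in top if a in ATTRIBUTE_SENTIMENT)
        return votes.most_common(1)[0][0] if votes else "neutral"

    # Example: a birthday-party image whose classifier favors joyful attributes.
    scores = {"smiling_faces": 0.91, "balloons": 0.84, "crowd": 0.40, "rubble": 0.02}
    print(dominant_sentiment(scores))  # -> "joy"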
  • Item
    Haptic Simulation for Robot-Assisted Dressing
    (Georgia Institute of Technology, 2017) Yu, Wenhao ; Kapusta, Ariel ; Tan, Jie ; Kemp, Charles C. ; Turk, Greg ; Liu, C. Karen
    There is a considerable need for assistive dressing among people with disabilities, and robots have the potential to fulfill this need. However, training such a robot would require extensive trials in order to learn the skills of assistive dressing. Such training would be time-consuming and require considerable effort to recruit participants and conduct trials. In addition, for cases that might cause injury to the person being dressed, performing such trials is impractical and unethical. In this work, we focus on a representative dressing task of pulling the sleeve of a hospital gown onto a person’s arm. We present a system that learns a haptic classifier for the outcome of the task given only a few (2-3) real-world trials with one person. Our system first optimizes the parameters of a physics simulator using real-world data. Using the optimized simulator, the system then simulates more haptic sensory data with noise models that account for randomness in the experiment. We then train hidden Markov Models (HMMs) on the simulated haptic data. The trained HMMs can then be used to classify and predict the outcome of the assistive dressing task based on haptic signals measured by a real robot’s end effector. This system achieves 92.83% accuracy in classifying the outcome of the robot-assisted dressing task with people not included in the simulation optimization. We compare our classifiers to those trained on real-world data. We show that the classifiers from our system can categorize the dressing task outcomes more accurately than classifiers trained on ten times more real data.
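    A minimal sketch of the classification idea described above, assuming the hmmlearn package and random stand-ins for the simulated haptic sequences: one HMM is fit per task outcome, and a new sequence is labeled by whichever HMM assigns it the higher log-likelihood. The outcome names, state count, and data shapes are assumptions.

    # Sketch: per-outcome HMMs over haptic sequences, classification by log-likelihood.
    import numpy as np
    from hmmlearn import hmm

    def fit_outcome_hmm(sequences, n_states=4):
        """sequences: list of (T_i, n_features) arrays of haptic readings."""
        X = np.concatenate(sequences)
        lengths = [len(s) for s in sequences]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                n_iter=50, random_state=0)
        model.fit(X, lengths)
        return model

    def classify(sequence, models):
        """Return the outcome label whose HMM scores the sequence highest."""
        return max(models, key=lambda label: models[label].score(sequence))

    # Hypothetical usage with synthetic force/torque sequences for two outcomes.
    rng = np.random.default_rng(0)
    sim_data = {
        "success": [rng.normal(0.0, 1.0, size=(100, 6)) for _ in range(20)],
        "caught":  [rng.normal(2.0, 1.0, size=(100, 6)) for _ in range(20)],
    }
    models = {label: fit_outcome_hmm(seqs) for label, seqs in sim_data.items()}
    print(classify(rng.normal(2.0, 1.0, size=(100, 6)), models))  # likely "caught"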
  • Item
    Formal Performance Guarantees for Behavior-Based Localization Missions
    (Georgia Institute of Technology, 2016-11) Lyons, Damian M. ; Arkin, Ronald C.
    Localization and mapping algorithms can allow a robot to navigate well in an unknown environment. However, whether such algorithms enhance any specific robot mission is currently a matter for empirical validation. In this paper we apply our MissionLab/VIPARS mission design and verification approach to an autonomous robot mission that uses probabilistic localization software. Two approaches to modeling probabilistic localization for verification are presented: a high-level approach, and a sample-based approach which allows run-time code to be embedded in verification. Verification and experimental validation results are presented for two waypoint missions using each method, demonstrating the accuracy of verification, and both are compared with verification of an odometry-only mission, to show the mission-specific benefit of localization.
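    The toy Monte Carlo sketch below conveys the flavor of the sample-based approach: sampled uncertainty is propagated through a waypoint mission to estimate its probability of ending within a goal tolerance. It is not the MissionLab/VIPARS toolchain, and the odometry-noise and localization-correction models are invented for illustration.

    # Toy sample-based check of a waypoint mission's probability of success.
    import math
    import random

    def run_mission(waypoints, odom_sigma=0.05, loc_correction=0.5):
        """Simulate one mission: noisy motion to each waypoint, then a partial
        localization correction that shrinks the remaining position error."""
        x, y = 0.0, 0.0
        for wx, wy in waypoints:
            x = wx + random.gauss(0.0, odom_sigma)      # noisy arrival near the waypoint
            y = wy + random.gauss(0.0, odom_sigma)
            x = wx + (x - wx) * (1.0 - loc_correction)  # localization reduces the error
            y = wy + (y - wy) * (1.0 - loc_correction)
        return math.hypot(x - waypoints[-1][0], y - waypoints[-1][1])

    def estimate_success(waypoints, tolerance=0.05, n_samples=10000):
        hits = sum(run_mission(waypoints) <= tolerance for _ in range(n_samples))
        return hits / n_samples

    print(estimate_success([(1.0, 0.0), (1.0, 1.0)]))  # estimated mission success rate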
  • Item
    Grasp selection strategies for robot manipulation using a superquadric-based object representation
    (Georgia Institute of Technology, 2016-07-29) Huaman, Ana Consuelo
    This thesis presents work on the implementation of a robotic system targeted to perform a set of basic manipulation tasks instructed by a human user. The core motivation in developing this system was to enable our robot to achieve these tasks reliably, in a time-efficient manner, and under mildly realistic constraints. Robot manipulation as a field has grown rapidly in recent years, presenting us with a vast array of robots exhibiting skills as sophisticated as preparing dinner, making an espresso, or operating a drill. These complex tasks are in general achieved by using equally complex frameworks that assume extensive pre-existing knowledge, such as perfect environment knowledge, sizable amounts of training data, or availability of crowdsourcing resources. In this work we postulate that elementary tasks, such as pick-up, pick-and-place, and pouring, can be realized with online algorithms and a limited knowledge of the objects to be manipulated. The presented work shows a fully implemented pipeline where each module is designed to meet the core requirements specified above. We present a number of experiments involving a database of 10 household objects used in 3 selected elementary manipulation tasks. Our contributions are distributed across the modules of our pipeline: (1) We demonstrate that superquadrics are primitive shapes suitable for representing, on the fly, a considerable number of convex household objects; their parametric nature (three axis parameters and two shape parameters) is shown to be helpful for representing simple semantic labels for objects (e.g., for a pouring task) that are useful for grasp and motion planning. (2) We introduce a hand-and-arm metric that considers both grasp robustness and arm end-comfort to select grasps for simple pick-up tasks. We show with real and simulated results that considering both the hand and arm aspects of the manipulation task helps to select grasps that are easier to execute in real environments without sacrificing grasp stability in the process. (3) We present grasp selection and planning strategies that exploit task constraints to select the most appropriate grasp to carry out a manipulation task in an online and efficient manner (in terms of planning and execution time).
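    A small sketch of two ingredients mentioned above: the standard superquadric inside-outside function built from three axis parameters and two shape parameters, and a combined hand-and-arm score for ranking candidate grasps. The weights and the robustness/comfort measures are placeholders, not the thesis' exact metric.

    # Superquadric surface test plus a toy combined grasp score.
    def superquadric_F(x, y, z, a1, a2, a3, e1, e2):
        """Standard superquadric inside-outside function.
        F < 1: point inside; F == 1: on the surface; F > 1: outside."""
        xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
        return xy + abs(z / a3) ** (2.0 / e1)

    def grasp_score(grasp_robustness, arm_comfort, w_hand=0.6, w_arm=0.4):
        """Combine a grasp-quality term and an arm end-comfort term (both assumed
        normalized to [0, 1]); higher is better. The weights are hypothetical."""
        return w_hand * grasp_robustness + w_arm * arm_comfort

    # A point on the unit sphere (a1 = a2 = a3 = 1, e1 = e2 = 1) evaluates to ~1.
    print(superquadric_F(1.0, 0.0, 0.0, 1, 1, 1, 1.0, 1.0))
    # Rank two candidate grasps given (robustness, comfort) pairs.
    candidates = {"top_grasp": (0.9, 0.3), "side_grasp": (0.8, 0.8)}
    print(max(candidates, key=lambda g: grasp_score(*candidates[g])))  # "side_grasp"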
  • Item
    Time Dependent Control Lyapunov Functions and Hybrid Zero Dynamics for Stable Robotic Locomotion
    (Georgia Institute of Technology, 2016-07) Kolathaya, Shishir ; Hereid, Ayonga ; Ames, Aaron D.
    Implementing state-based parameterized periodic trajectories on complex robotic systems, e.g., humanoid robots, can lead to instability due to sensor noise exacerbated by dynamic movements. As a means of understanding this phenomenon, and motivated by field testing on the humanoid robot DURUS, this paper presents sufficient conditions for the boundedness of hybrid periodic orbits (i.e., boundedness of walking gaits) for time-dependent control Lyapunov functions. In particular, this paper considers virtual constraints that yield hybrid zero dynamics with desired outputs that are a function of time or a state-based phase variable. If the difference between the phase variable and time is bounded, we establish exponential boundedness to the zero dynamics surface. These results are extended to hybrid dynamical systems, establishing exponential boundedness of hybrid periodic orbits, i.e., we show that stable walking can be achieved through time-based implementations of state-based virtual constraints. These results are verified on the bipedal humanoid robot DURUS both in simulation and experimentally; it is demonstrated that a close match between time-based tracking and state-based tracking can be achieved as long as there is a close match between the time-based and phase-based desired output trajectories.
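    For reference, the standard rapidly exponentially stabilizing CLF (RES-CLF) conditions that this line of work builds on are sketched below in LaTeX; the paper's specific time-dependent boundedness theorem is not restated here, and the constants shown are the usual generic ones.

    % Standard RES-CLF conditions (generic constants c1, c2, c3 > 0, epsilon > 0):
    \begin{align}
      c_1 \|x\|^2 \;\le\; V_\varepsilon(x) \;\le\; \frac{c_2}{\varepsilon^2}\,\|x\|^2, \\
      \inf_{u}\Big[\, L_f V_\varepsilon(x) + L_g V_\varepsilon(x)\,u
          + \frac{c_3}{\varepsilon}\, V_\varepsilon(x) \Big] \;\le\; 0.
    \end{align}
    % Any Lipschitz feedback satisfying the second inequality yields rapid
    % exponential convergence of the transverse (output) dynamics:
    \begin{equation}
      \|x(t)\| \;\le\; \frac{1}{\varepsilon}\sqrt{\frac{c_2}{c_1}}\;
          e^{-\frac{c_3}{2\varepsilon} t}\, \|x(0)\|.
    \end{equation}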
  • Item
    Unification of Locomotion Pattern Generation and Control Lyapunov Function-Based Quadratic Programs
    (Georgia Institute of Technology, 2016-07) Chao, Kenneth Y. ; Powell, Matthew J. ; Ames, Aaron D. ; Hur, Pilwon
    This paper presents a novel method of combining real-time walking pattern generation and constrained nonlinear control to achieve robotic walking under Zero-Moment Point (ZMP) and torque constraints. The proposed method leverages the fact that existing solutions to both walking pattern generation and constrained nonlinear control have been independently constructed as Quadratic Programs (QPs) and that these constructions can be related through an equality constraint on the instantaneous acceleration of the center of mass. Specifically, the proposed method solves a single Quadratic Program which incorporates elements from Model Predictive Control (MPC) based center of mass planning methods and from rapidly exponentially stabilizing control Lyapunov function (RES-CLF) methods. The resulting QP-based controller simultaneously solves for a COM trajectory that satisfies ZMP constraints over a future horizon while also producing joint torques consistent with instantaneous acceleration, torque, ZMP and RES-CLF constraints. The method is developed for simulation and experimental study on a seven-link, planar robot.
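    The toy cvxpy sketch below illustrates the "single QP" idea: one quadratic program over joint accelerations and torques that couples a planned center-of-mass (COM) acceleration to the whole-body dynamics while enforcing torque and CLF-type constraints. All matrices and numbers are placeholders, not the paper's model.

    # Toy single-QP sketch combining a COM-tracking equality with CLF and torque constraints.
    import numpy as np
    import cvxpy as cp

    n = 7                          # seven-link planar robot (joint-space dimension)
    M = np.eye(n)                  # placeholder inertia matrix
    h = np.zeros(n)                # placeholder Coriolis/gravity vector
    J_com = np.ones((1, n)) / n    # placeholder COM Jacobian row (horizontal motion)
    tau_max = 50.0                 # torque limit
    ddx_com_des = 0.2              # desired COM acceleration from the ZMP/MPC planner
    V, LfV, LgV = 0.5, -3.0, np.ones((1, n))   # placeholder CLF value and Lie derivatives
    gamma = 2.0                    # CLF convergence rate

    ddq = cp.Variable(n)           # joint accelerations
    tau = cp.Variable(n)           # joint torques

    constraints = [
        M @ ddq + h == tau,                 # toy rigid-body dynamics
        cp.abs(tau) <= tau_max,             # torque limits
        LfV + LgV @ tau + gamma * V <= 0,   # RES-CLF-style stability constraint
        J_com @ ddq == ddx_com_des,         # couple joint motion to the planned COM motion
    ]
    problem = cp.Problem(cp.Minimize(cp.sum_squares(tau) + cp.sum_squares(ddq)), constraints)
    problem.solve()
    print(problem.status, np.round(ddq.value, 3))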
  • Item
    Towards Real-Time Parameter Optimization for Feasible Nonlinear Control with Applications to Robot Locomotion
    (Georgia Institute of Technology, 2016-07) Powell, Matthew J. ; Ames, Aaron D.
    This paper considers the application of classical control methods, designed for unconstrained nonlinear systems, to systems with nontrivial input constraints. As shown throughout the literature, unconstrained classical methods can be used to stabilize constrained systems; however, without modification these unconstrained methods are not guaranteed to work for a general control problem. In this paper, we propose conditions under which classical unconstrained methods can be guaranteed to exponentially stabilize constrained systems, which we term "feasibility" conditions, and we provide examples of how to construct explicitly feasible controllers. The control design methods leverage control Lyapunov functions (CLFs) describing the "desired behavior" of the system, and we claim that in the event that a system's input constraints prevent the production of an exponentially stabilizing input for a particular CLF, a new, locally feasible CLF must be produced. To this end, we propose a novel hybrid feasibility controller consisting of a continuous-time controller which implements a CLF and a discrete parameter update law which finds feasible controller parameters as needed. Simulation results suggest that the proposed method can be used to overcome certain catastrophic infeasibility events encountered in robot locomotion.
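    The toy sketch below conveys the hybrid scheme on a scalar system: a continuous min-norm CLF controller paired with a discrete update law that relaxes the required decay rate whenever the input bound would otherwise be violated. The system, the bound, and the update rule are invented for illustration, not taken from the paper.

    # Toy hybrid "feasibility" controller: CLF input plus a discrete parameter update.
    U_MAX = 1.0  # input constraint |u| <= U_MAX

    def clf_min_norm_u(x, lam):
        """For xdot = x + u with V = 0.5*x**2, requiring Vdot <= -lam*V gives the
        min-norm input u = -(1 + lam/2) * x."""
        return -(1.0 + 0.5 * lam) * x

    def feasible_lambda(x, lam, shrink=0.5, lam_min=1e-3):
        """Discrete update law: shrink the required decay rate until the min-norm
        CLF input respects the input bound."""
        while abs(clf_min_norm_u(x, lam)) > U_MAX and lam > lam_min:
            lam *= shrink
        return lam

    # Forward-Euler simulation of the hybrid scheme.
    x, lam, dt = 0.8, 4.0, 0.01
    for _ in range(500):
        lam = feasible_lambda(x, lam)                         # discrete parameter update
        u = max(-U_MAX, min(U_MAX, clf_min_norm_u(x, lam)))   # continuous CLF input
        x += dt * (x + u)
    print(round(x, 4))  # the state is driven toward the origin despite the input bound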