Organizational Unit: Institute for Robotics and Intelligent Machines (IRIM)


Publication Search Results

Now showing 1 - 10 of 1267
  • Item
    Transferring Embodied Concepts Between Perceptually Heterogeneous Robots
    (Georgia Institute of Technology, 2009) Kira, Zsolt ; Georgia Institute of Technology. College of Computing ; Georgia Institute of Technology. Mobile Robot Laboratory
    This paper explores methods and representations that allow two perceptually heterogeneous robots, each of which represents concepts via grounded properties, to transfer knowledge despite their differences. This is an important issue, as robots will increasingly need to communicate and share knowledge effectively to speed up learning as they become more ubiquitous. We use Gärdenfors’ conceptual spaces to represent objects as a fuzzy combination of properties such as color and texture, where properties themselves are represented as Gaussian Mixture Models in a metric space. We then use confusion matrices, built from instances that each robot obtains in a shared context, to learn mappings between the properties of each robot. These mappings are then used to transfer a concept from one robot to another, where the receiving robot was not previously trained on instances of the objects. We show in a 3D simulation environment that these models can be successfully learned and that concepts can be transferred between a ground robot and an aerial quadrotor robot.
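    A minimal sketch of the property-mapping step described above, assuming scikit-learn Gaussian mixtures and synthetic feature data; the dimensions, component counts, and normalization are illustrative assumptions, not the paper's implementation.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Hypothetical features: both robots observe the same objects in a shared
      # context, but through different sensors (different feature spaces).
      rng = np.random.default_rng(0)
      obs_a = rng.normal(size=(200, 3))   # robot A, e.g. color features
      obs_b = rng.normal(size=(200, 2))   # robot B, e.g. texture features

      # Each robot models its properties as a Gaussian mixture and labels every
      # shared observation with its most likely component (property).
      props_a = GaussianMixture(n_components=4, random_state=0).fit(obs_a)
      props_b = GaussianMixture(n_components=4, random_state=0).fit(obs_b)
      labels_a, labels_b = props_a.predict(obs_a), props_b.predict(obs_b)

      # Confusion matrix over shared instances: how often property i of robot A
      # co-occurs with property j of robot B; row-normalizing yields a mapping
      # that can be used to translate a concept to the receiving robot.
      conf = np.zeros((4, 4))
      for i, j in zip(labels_a, labels_b):
          conf[i, j] += 1
      mapping = conf / conf.sum(axis=1, keepdims=True)
      print(mapping)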
  • Item
    Continuity and Smoothness Properties of Nonlinear Optimization-Based Feedback Controllers
    (Georgia Institute of Technology, 2015) Morris, Benjamin J. ; Powell, Matthew J. ; Ames, Aaron D. ; Georgia Institute of Technology. Institute for Robotics and Intelligent Machines ; Georgia Institute of Technology. School of Electrical and Computer Engineering ; Georgia Institute of Technology. School of Mechanical Engineering ; Matrix Computing
    Online optimization-based controllers are becoming increasingly prevalent as a means to control complex high-dimensional nonlinear systems, e.g., bipedal and humanoid robots, due to their ability to balance multiple control objectives subject to input constraints. Motivated by these applications, the goal of this paper is to explore the continuity and smoothness properties of feedback controllers that are formulated as quadratic programs (QPs). We begin by drawing connections between these optimization-based controllers and a family of perturbed nonlinear programming problems commonly studied in operations research. With a view towards robotic systems, some existing results on perturbed nonlinear programming problems are extended and specialized to address conditions that arise when quadratic programs are used to enforce the convergence of control Lyapunov functions (CLFs). The main result of this paper is a novel set of conditions on the continuity of QPs that can be used when a subset of the constraints vanishes. A simulation study of position regulation in the compass gait biped demonstrates how the new conditions of this paper can be applied to more complex robotic systems.
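    As a concrete toy instance of the QP-based controllers analyzed above, the sketch below solves a relaxed CLF-QP for a scalar system with cvxpy; the dynamics, decay rate, penalty weight, and input bound are all illustrative assumptions rather than the paper's formulation.

      import cvxpy as cp

      # Assumed scalar system x_dot = f(x) + g(x) u with f(x) = x, g(x) = 1,
      # and candidate CLF V(x) = 0.5 x^2, so LfV = x * f(x), LgV = x * g(x).
      x = 1.0
      f_x, g_x = x, 1.0
      V = 0.5 * x ** 2
      LfV, LgV = x * f_x, x * g_x
      gamma = 2.0    # desired CLF decay rate (assumed)
      p = 100.0      # penalty on relaxing the CLF constraint (assumed)

      u = cp.Variable()                  # control input
      delta = cp.Variable(nonneg=True)   # relaxation keeping the QP feasible

      # Minimize control effort plus relaxation, subject to the CLF decrease
      # condition and a box bound on the input.
      problem = cp.Problem(
          cp.Minimize(cp.square(u) + p * cp.square(delta)),
          [LfV + LgV * u + gamma * V <= delta,
           cp.abs(u) <= 10.0])
      problem.solve()
      print("u* =", u.value, "delta* =", delta.value)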
  • Item
    Non-contact versus contact-based sensing methodologies for in-home upper arm robotic rehabilitation
    (Georgia Institute of Technology, 2013-06) Howard, Ayanna M. ; Brooks, Douglas Antwonne ; Brown, Edward ; Gebregiorgis, Adey ; Chen, Yu-ping ; Georgia Institute of Technology. Human-Automation Systems Lab ; Georgia Institute of Technology. School of Electrical and Computer Engineering ; Georgia Institute of Technology. Institute for Robotics and Intelligent Machines ; Rochester Institute of Technology. Dept. of Electrical Engineering ; Georgia State University. Dept. of Physical Therapy
    In recent years, robot-assisted rehabilitation has gained momentum as a viable means for improving outcomes for therapeutic interventions. Such therapy experiences allow controlled and repeatable trials and quantitative evaluation of mobility metrics. Typically, though, these robotic devices have been focused on rehabilitation within a clinical setting. In these traditional robot-assisted rehabilitation studies, participants are required to perform goal-directed movements with the robot during a therapy session. This requires physical contact between the participant and the robot to enable precise control of the task, as well as a means to collect relevant performance data. On the other hand, non-contact means of robot interaction can provide a safe methodology for extracting the control data needed for in-home rehabilitation. As such, in this paper we discuss contact-based and non-contact-based methods for upper-arm rehabilitation exercises that enable quantification of upper-arm movements. We evaluate our methodology on upper-arm abduction/adduction movements and discuss the advantages and limitations of each approach as applied to an in-home rehabilitation scenario.
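    To illustrate the kind of movement quantification mentioned above, the snippet below computes a shoulder abduction/adduction angle from 3D joint positions such as a non-contact skeleton tracker might report; the joint coordinates and the angle definition are hypothetical, not the authors' pipeline.

      import numpy as np

      def abduction_angle(shoulder, elbow, hip):
          """Angle (degrees) between the upper arm and the trunk's downward axis."""
          upper_arm = np.asarray(elbow) - np.asarray(shoulder)
          trunk_down = np.asarray(hip) - np.asarray(shoulder)
          cos_a = np.dot(upper_arm, trunk_down) / (
              np.linalg.norm(upper_arm) * np.linalg.norm(trunk_down))
          return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

      # Hypothetical joint coordinates in meters (e.g., from a depth-camera skeleton).
      print(abduction_angle(shoulder=[0.0, 1.4, 0.0],
                            elbow=[0.3, 1.3, 0.0],
                            hip=[0.0, 1.0, 0.0]))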
  • Item
    A Robotic System for Reaching in Dense Clutter that Integrates Model Predictive Control, Learning, Haptic Mapping, and Planning
    (Georgia Institute of Technology, 2014-09) Bhattacharjee, Tapomayukh ; Grice, Phillip M. ; Kapusta, Ariel ; Killpack, Marc D. ; Park, Daehyung ; Kemp, Charles C. ; Georgia Institute of Technology. Institute for Robotics and Intelligent Machines ; Georgia Institute of Technology. Healthcare Robotics Lab ; Brigham Young University. Department of Mechanical Engineering
    We present a system that enables a robot to reach locations in dense clutter using only haptic sensing. Our system integrates model predictive control [1], learned initial conditions [2], tactile recognition of object types [3], haptic mapping, and geometric planning to efficiently reach locations using whole-arm tactile sensing [4]. We motivate our work, present a system architecture, summarize each component of the system, and present results from our evaluation of the system reaching to target locations in dense artificial foliage.
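    A minimal sketch of the haptic-mapping idea referenced above, assuming a simple 2D grid updated from contact events sensed along the arm; the grid size, resolution, and thresholds are illustrative assumptions, not the system described in the paper.

      import numpy as np

      RES = 0.02                  # assumed cell size in meters
      grid = np.zeros((50, 50))   # accumulated contact evidence per cell

      def record_contact(grid, x, y, force):
          """Accumulate evidence at a sensed contact location (meters, newtons)."""
          i, j = int(y / RES), int(x / RES)
          if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
              grid[i, j] += force   # stronger contacts add more evidence

      # Hypothetical contacts reported by whole-arm tactile sensing while reaching.
      for x, y, f in [(0.30, 0.42, 1.5), (0.31, 0.44, 0.8), (0.70, 0.10, 2.0)]:
          record_contact(grid, x, y, f)

      # Cells with enough evidence could be treated as obstacles by a planner.
      print(np.argwhere(grid > 1.0))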
  • Item
    Noise Maps for Acoustically Sensitive Navigation
    (Georgia Institute of Technology, 2004) Martinson, Eric ; Arkin, Ronald C. ; Georgia Institute of Technology. College of Computing
    More and more robotic applications are equipping robots with microphones to improve the sensory information available to them. However, in most applications the auditory task is very low-level, only processing data and providing auditory event information to higher-level navigation routines. If the robot, and therefore the microphone, ends up in a bad acoustic location, then the results from that sensor will remain noisy and potentially useless for accomplishing the required task. There are at least two possible solutions to this problem. The first is to provide bigger and more complex filters, which is the traditional signal processing approach. The alternative is to move the robot itself to locations that provide better audition. This work follows the second approach by introducing noise maps as a tool for acoustically sensitive navigation. A noise map is a guide to noise in the environment, pinpointing locations that would most likely interfere with auditory sensing. A traditional noise map, in an acoustic sense, is a graphical display of the average sound pressure level at any given location. An area with a high sound pressure level corresponds to high ambient noise that could interfere with an auditory application. Such maps can be created either by hand or by allowing the robot to first explore the environment. Converted into a potential field, a noise map then becomes a useful tool for reducing the interference from ambient noise. Preliminary results with a real robot on the creation and use of noise maps are presented.
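    A small sketch of the final step described above, turning a noise map into a repulsive potential that the robot descends toward quieter locations; the map values, grid size, and step size are illustrative assumptions.

      import numpy as np

      # Hypothetical noise map: average sound pressure level (dB) per grid cell,
      # with a loud source near the center of a 20 x 20 map.
      y, x = np.mgrid[0:20, 0:20]
      noise_map = 40.0 + 50.0 * np.exp(-((x - 10) ** 2 + (y - 10) ** 2) / 20.0)

      # Treat the map as a repulsive potential and follow its negative gradient.
      gy, gx = np.gradient(noise_map)
      pos = np.array([12.0, 13.0])   # current cell (row, col)
      for _ in range(30):
          r, c = int(round(pos[0])), int(round(pos[1]))
          pos = np.clip(pos - 0.5 * np.array([gy[r, c], gx[r, c]]), 0, 19)
      print("quieter cell:", pos.round(1))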
  • Item
    Development of a Mobile Arctic Sensor Node for Earth-Science Data Collection Applications
    (Georgia Institute of Technology, 2009-04) Williams, Stephen ; Hurst, Michael ; Howard, Ayanna M. ; Georgia Institute of Technology. Human-Automation Systems Lab ; Georgia Institute of Technology. School of Electrical and Computer Engineering ; Georgia Institute of Technology. Center for Robotics and Intelligent Machines
  • Item
    Educational Value of Experiments on Life Support Systems with Ground-Based Aquatic Habitats
    (Georgia Institute of Technology, 2012-07) Drayer, Gregorio E. ; Howard, Ayanna M. ; Georgia Institute of Technology. Human-Automation Systems Lab ; Georgia Institute of Technology. School of Electrical and Computer Engineering ; Georgia Institute of Technology. Center for Robotics and Intelligent Machines
    On April 15, 2010, at the Kennedy Space Center, President Barack Obama delivered his “Remarks on Space Exploration in the 21st Century,” in which he included closed-loop life support systems (LSS) as a technology that “can help improve daily lives of people here on Earth, as well as testing and improving upon capabilities in space.” A challenge to enabling research on LSS is the need for educational capacities that may open up opportunities for teachers and students to teach, learn, and experiment with small-scale versions of these systems. Such is the case in higher-education institutions with programs in the life sciences and engineering: these may have educational platforms available in their laboratories to, for example, study attributes of robustness or optimality in controllers driving servomechanisms and electric motors, but there is no small-scale platform available to study the ecophysiological performance of higher plants in an isolated artificial ecosystem. This paper presents aquatic habitats as educational platforms for experiments in closed-loop LSS, and the lessons learned while working with undergraduate students at the Human-Automation Systems Lab of the Georgia Institute of Technology. It presents the challenges that these systems pose to students in engineering and the sciences, and highlights the opportunities to support higher-education-level teaching and learning of concepts in science, technology, engineering, and mathematics (STEM) fields.
  • Item
    Reactive Tuning of Target Estimate Accuracy in Multi-Sensor Data Fusion
    (Georgia Institute of Technology, 2007-01) Xiong, Ning ; Christensen, Henrik I. ; Svensson, Per ; Georgia Institute of Technology. College of Computing ; Swedish Defense Research Agency. Dept. of Data and Information Fusion ; Georgia Institute of Technology. Center for Robotics and Intelligent Machines ; Kungl. Tekniska Högskolan. Centrum för Autonoma System
    Dealing with conflicting and target-specific requirements is an important issue in multi-sensor, multi-target tracking. This paper aims to allocate sensing resources among various targets in reaction to individual information requests. The proposed approach is to introduce, for every relevant target, an agent responsible for its tracking. Such agents are expected to bargain with each other for a division of resources. A bilateral negotiation model is established for resource allocation in two-target tracking. The application of agent negotiation to target covariance tuning is illustrated, together with simulation results. Moreover, we suggest a way of organizing simultaneous one-to-one negotiations, so that our negotiation model remains applicable when tracking more than two targets.
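    A toy sketch of a bilateral, alternating-offers negotiation of the kind outlined above, in which two tracking agents split a fixed sensing budget; the aspiration levels, concession rule, and acceptance criterion are illustrative assumptions, not the paper's model.

      # Two tracking agents negotiate shares of a fixed sensing budget. Each
      # starts from a demanding offer and concedes a little every round until
      # one offer leaves the other agent at least its current aspiration level.
      BUDGET = 1.0

      def negotiate(aspiration_a=0.8, aspiration_b=0.7, concession=0.05, rounds=50):
          for _ in range(rounds):
              if BUDGET - aspiration_a >= aspiration_b:   # B accepts A's offer
                  return aspiration_a, BUDGET - aspiration_a
              if BUDGET - aspiration_b >= aspiration_a:   # A accepts B's offer
                  return BUDGET - aspiration_b, aspiration_b
              aspiration_a -= concession                  # no deal: both concede
              aspiration_b -= concession
          return None

      print(negotiate())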
  • Item
    RF vision: RFID receive signal strength indicator (RSSI) images for sensor fusion and mobile manipulation
    (Georgia Institute of Technology, 2009-10) Deyle, Travis ; Nguyen, Hai ; Reynolds, Matt S. ; Kemp, Charles C. ; Georgia Institute of Technology. Healthcare Robotics Lab ; Georgia Institute of Technology. Center for Robotics and Intelligent Machines
    In this work we present a set of integrated methods that enable an RFID-enabled mobile manipulator to approach and grasp an object to which a self-adhesive passive (battery-free) UHF RFID tag has been affixed. Our primary contribution is a new mode of perception that produces images of the spatial distribution of received signal strength indication (RSSI) for each of the tagged objects in an environment. The intensity of each pixel in the 'RSSI image' is the measured RF signal strength for a particular tag in the corresponding direction. We construct these RSSI images by panning and tilting an RFID reader antenna while measuring the RSSI value at each bearing. Additionally, we present a framework for estimating a tagged object's 3D location using fused ID-specific features derived from an RSSI image, a camera image, and a laser range finder scan. We evaluate these methods using a robot with actuated, long-range RFID antennas and finger-mounted short-range antennas. The robot first scans its environment to discover which tagged objects are within range, creates a user interface, orients toward the user-selected object using RF signal strength, estimates the 3D location of the object using an RSSI image with sensor fusion, approaches and grasps the object, and uses its finger-mounted antennas to confirm that the desired object has been grasped. In our tests, the sensor fusion system with an RSSI image correctly located the requested object in 17 out of 18 trials (94.4%), an 11.1% improvement over the system's performance when not using an RSSI image. The robot correctly oriented to the requested object in 8 out of 9 trials (88.9%), and in 3 out of 3 trials the entire system successfully grasped the object selected by the user.
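    A minimal sketch of how an RSSI image might be assembled by panning and tilting a reader antenna and taking the bearing of maximum signal strength, as described above; the reader query, scan resolution, and signal model are hypothetical placeholders.

      import numpy as np

      def read_rssi(tag_id, pan_deg, tilt_deg):
          """Placeholder for querying an RFID reader at a given antenna bearing."""
          # Hypothetical model: strongest return when pointing at (30, -10) degrees.
          return -60.0 - 0.05 * ((pan_deg - 30) ** 2 + (tilt_deg + 10) ** 2)

      pans = np.arange(-90, 91, 5)    # pan angles (degrees)
      tilts = np.arange(-30, 31, 5)   # tilt angles (degrees)

      # Each pixel of the RSSI image is the signal strength at one (tilt, pan) bearing.
      rssi_image = np.array([[read_rssi("tag_42", p, t) for p in pans] for t in tilts])

      # The brightest pixel gives the bearing toward the tagged object.
      ti, pi = np.unravel_index(np.argmax(rssi_image), rssi_image.shape)
      print("best bearing: pan", pans[pi], "tilt", tilts[ti])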
  • Item
    Factor Graphs and GTSAM: A Hands-on Introduction
    (Georgia Institute of Technology, 2012-09) Dellaert, Frank ; Georgia Institute of Technology. Center for Robotics and Intelligent Machines
    In this document I provide a hands-on introduction to both factor graphs and GTSAM. Factor graphs are graphical models (Koller and Friedman, 2009) that are well suited to modeling complex estimation problems, such as Simultaneous Localization and Mapping (SLAM) or Structure from Motion (SFM). You might be familiar with another often-used graphical model, Bayes networks, which are directed acyclic graphs. A factor graph, however, is a bipartite graph consisting of factors connected to variables. The variables represent the unknown random variables in the estimation problem, whereas the factors represent probabilistic information on those variables, derived from measurements or prior knowledge. In the following sections I will show many examples from both robotics and vision. The GTSAM toolbox (GTSAM stands for “Georgia Tech Smoothing and Mapping”) is a BSD-licensed C++ library based on factor graphs, developed at the Georgia Institute of Technology by me, many of my students, and collaborators. It provides state-of-the-art solutions to the SLAM and SFM problems, but can also be used to model and solve both simpler and more complex estimation problems. It also provides a MATLAB interface that allows for rapid prototype development, visualization, and user interaction. GTSAM exploits sparsity to be computationally efficient. Typically, measurements only provide information on the relationship between a handful of variables, and hence the resulting factor graph will be sparsely connected. This is exploited by the algorithms implemented in GTSAM to reduce computational complexity. Even when graphs are too dense to be handled efficiently by direct methods, GTSAM provides iterative methods that are quite efficient regardless. You can download the latest version of GTSAM at http://tinyurl.com/gtsam.
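    A brief sketch in the spirit of the document's introductory odometry example, assuming the GTSAM Python wrapper (the pip-installable gtsam package): a prior on the first pose plus two odometry factors, optimized with Levenberg-Marquardt.

      import numpy as np
      import gtsam

      # Factor graph for a planar robot that moves 2 m forward twice along x.
      graph = gtsam.NonlinearFactorGraph()
      prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.3, 0.3, 0.1]))
      odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

      # A prior factor anchors the first pose; between factors encode odometry.
      graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
      graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))
      graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))

      # Deliberately imperfect initial estimates; the optimizer corrects them.
      initial = gtsam.Values()
      initial.insert(1, gtsam.Pose2(0.5, 0.0, 0.2))
      initial.insert(2, gtsam.Pose2(2.3, 0.1, -0.2))
      initial.insert(3, gtsam.Pose2(4.1, 0.1, 0.1))

      result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
      print(result)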