Organizational Unit:
Institute for Robotics and Intelligent Machines (IRIM)


Publication Search Results

Now showing 1 - 10 of 92
  • Item
    The benefits of other-oriented robot deception in human-robot interaction
    (Georgia Institute of Technology, 2017-04-04) Shim, Jaeeun
Deception is an essential social behavior for humans, and we can observe human deceptive behaviors in a variety of contexts including sports, culture, education, war, and everyday life. Deception is also used for the purpose of survival in animals and even in plants. From these findings, it is clear that deception is a general and essential behavior across species, which raises an interesting research question: can deception be an essential characteristic for robots, especially social robots? Motivated by this question, this dissertation aimed to develop a robot's deception capabilities, especially in human-robot interaction (HRI) situations. Specifically, the goal of this dissertation was to develop a social robot's deceptive behaviors that can produce benefits for the deceived humans (other-oriented robot deception). To achieve other-oriented robot deception, several scientific contributions were made in this dissertation. A novel taxonomy of robot deception was defined, and a general computational model for a robot's deceptive behaviors was developed based on criminological law. Appropriate HRI contexts in which a robot's other-oriented deception can generate benefits were explored; a methodology for evaluating a robot's other-oriented deception in those contexts was designed, and studies were conducted with human subjects. Finally, the ethical implications of other-oriented robot deception were explored and thoughtfully discussed.
  • Item
    Grasp selection strategies for robot manipulation using a superquadric-based object representation
    (Georgia Institute of Technology, 2016-07-29) Huaman, Ana Consuelo
This thesis presents work on the implementation of a robotic system targeted to perform a set of basic manipulation tasks instructed by a human user. The core motivation for developing this system was to enable our robot to achieve these tasks reliably, in a time-efficient manner, and under mildly realistic constraints. Robot manipulation as a field has grown rapidly in recent years, presenting us with a vast array of robots exhibiting skills as sophisticated as preparing dinner, making an espresso, or operating a drill. These complex tasks are in general achieved by using equally complex frameworks that assume extensive pre-existing knowledge, such as perfect environment knowledge, sizable amounts of training data, or the availability of crowdsourcing resources. In this work we postulate that elementary tasks, such as pick-up, pick-and-place and pouring, can be realized with online algorithms and limited knowledge of the objects to be manipulated. The presented work shows a fully implemented pipeline where each module is designed to meet the core requirements specified above. We present a number of experiments involving a database of 10 household objects used in 3 selected elementary manipulation tasks. Our contributions are distributed across the modules of our pipeline: (1) We demonstrate that superquadrics are useful primitive shapes suitable for representing, on the fly, a considerable number of convex household objects; their parametric nature (3 axis lengths and 2 shape parameters) is shown to be helpful for attaching simple semantic labels to objects (e.g., for a pouring task) that are useful for grasp and motion planning. (2) We introduce a hand-and-arm metric that considers both grasp robustness and arm end-comfort to select grasps for simple pick-up tasks.
We show with real and simulation results that considering both the hand and arm aspects of the manipulation task helps to select grasps that are easier to execute in real environments without sacrificing grasp stability in the process. (3) We present grasp selection and planning strategies that exploit task constraints to select the most appropriate grasp to carry out a manipulation task in an online and efficient manner (in terms of planning and execution time).
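Contribution (1) rests on the standard superquadric inside-outside function, which tells whether a point lies inside, on, or outside the shape defined by the 3 axis lengths and 2 shape exponents. The following is a minimal sketch of that function (parameter names are ours, not the thesis's notation):

```python
def superquadric_F(x, y, z, a1, a2, a3, e1, e2):
    """Inside-outside function of a superquadric centered at the origin.

    F < 1: point is inside; F == 1: on the surface; F > 1: outside.
    a1, a2, a3 are the axis lengths; e1, e2 the two shape exponents
    (e1 = e2 = 1 gives an ellipsoid; small values give box-like shapes).
    """
    # Symmetry lets us evaluate in the positive octant.
    term_xy = (abs(x) / a1) ** (2.0 / e2) + (abs(y) / a2) ** (2.0 / e2)
    return term_xy ** (e2 / e1) + (abs(z) / a3) ** (2.0 / e1)

# Sanity check with the unit sphere (a1 = a2 = a3 = 1, e1 = e2 = 1):
center = superquadric_F(0.0, 0.0, 0.0, 1, 1, 1, 1, 1)   # well inside
surface = superquadric_F(1.0, 0.0, 0.0, 1, 1, 1, 1, 1)  # on the surface
outside = superquadric_F(2.0, 0.0, 0.0, 1, 1, 1, 1, 1)  # outside
```

Fitting the 5 parameters to a point cloud (e.g., by least squares on F) is what allows the on-the-fly object representation the abstract describes.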
  • Item
    Self-reconfigurable multi-robot systems
    (Georgia Institute of Technology, 2016-04-12) Pickem, Daniel
Self-reconfigurable robotic systems are variable-morphology machines capable of changing their overall structure by rearranging the modules they are composed of. Individual modules are capable of connecting to and disconnecting from one another, which allows the robot to adapt to changing environments. Optimally reconfiguring such systems is computationally prohibitive, and thus self-reconfiguration approaches generally aim at approximating optimal solutions. Nonetheless, even for approximate solutions, centralized methods scale poorly in the number of modules. Therefore, the objective of this research is the development of decentralized self-reconfiguration methods for modular robotic systems. Building on completeness results of the centralized algorithms in this work, decentralized methods are developed that guarantee stochastic convergence to a given target shape. A game-theoretic approach lays the theoretical foundation of a novel potential game-based formulation of the self-reconfiguration problem. Furthermore, two extensions to the basic game-theoretic algorithm are proposed that enable agents to modify the algorithm's parameters during runtime and improve convergence times. The flexibility in the choice of utility functions together with runtime adaptability makes the presented approach and the underlying theory suitable for a range of problems that rely on decentralized local control to guarantee global, emergent properties. The experimental evaluation of the presented algorithms relies on a newly developed multi-robot testbed called the "Robotarium", which is equipped with custom-designed miniature robots, the "GRITSBots". The Robotarium provides hardware validation of self-reconfiguration on robots but, more importantly, introduces a novel paradigm for remote accessibility of multi-agent testbeds with the goal of lowering the barrier to entry into the field of multi-robot research and education.
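Stochastic convergence results for potential games are typically obtained with log-linear (Boltzmann) action selection: each module picks among its candidate moves with probability proportional to exp(utility / T), becoming greedy as the temperature T goes to zero. This sketch shows that standard noise model, not the thesis's specific utility functions:

```python
import math
import random

def choose_move(utilities, temperature):
    """Log-linear (Boltzmann) choice over a module's candidate moves.

    Each move i is selected with probability proportional to
    exp(utilities[i] / temperature). High temperature explores;
    low temperature almost always takes the best move, which is
    the noise model used in potential-game convergence proofs.
    """
    weights = [math.exp(u / temperature) for u in utilities]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for move, w in enumerate(weights):
        acc += w
        if r <= acc:
            return move
    return len(utilities) - 1  # numerical fallback

random.seed(0)
# At a low temperature, the highest-utility move dominates:
picks = [choose_move([0.1, 0.9, 0.3], temperature=0.05) for _ in range(100)]
```

Runtime adaptation of the temperature (one of the abstract's two extensions, as we read it) would amount to changing `temperature` between rounds to trade exploration against convergence speed.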
  • Item
    Planning in constraint space for multi-body manipulation tasks
    (Georgia Institute of Technology, 2016-04-05) Erdogan, Can
    Robots are inherently limited by physical constraints on their link lengths, motor torques, battery power and structural rigidity. To thrive in circumstances that push these limits, such as in search and rescue scenarios, intelligent agents can use the available objects in their environment as tools. Reasoning about arbitrary objects and how they can be placed together to create useful structures such as ramps, bridges or simple machines is critical to push beyond one's physical limitations. Unfortunately, the solution space is combinatorial in the number of available objects and the configuration space of the chosen objects and the robot that uses the structure is high dimensional. To address these challenges, we propose using constraint satisfaction as a means to test the feasibility of candidate structures and adopt search algorithms in the classical planning literature to find sufficient designs. The key idea is that the interactions between the components of a structure can be encoded as equality and inequality constraints on the configuration spaces of the respective objects. Furthermore, constraints that are induced by a broadly defined action, such as placing an object on another, can be grouped together using logical representations such as Planning Domain Definition Language (PDDL). Then, a classical planning search algorithm can reason about which set of constraints to impose on the available objects, iteratively creating a structure that satisfies the task goals and the robot constraints. To demonstrate the effectiveness of this framework, we present both simulation and real robot results with static structures such as ramps, bridges and stairs, and quasi-static structures such as lever-fulcrum simple machines.
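The key idea above — an action induces equality and inequality constraints on object configurations — can be made concrete with a toy feasibility test for a "place A on B" action over axis-aligned boxes. The geometry and field names here are our own illustration, not the thesis's encoding:

```python
def on_constraint(obj, support, tol=1e-6):
    """Feasibility test for a hypothetical 'place obj on support' action.

    Boxes are dicts with center coordinates (x, y, z) and sizes
    (sx, sy, sz). The action induces one equality constraint (obj's
    bottom face rests on support's top face) and two inequality
    constraints (obj's footprint stays inside support's footprint).
    """
    equality = abs((obj["z"] - obj["sz"] / 2)
                   - (support["z"] + support["sz"] / 2)) <= tol
    inside_x = abs(obj["x"] - support["x"]) + obj["sx"] / 2 <= support["sx"] / 2
    inside_y = abs(obj["y"] - support["y"]) + obj["sy"] / 2 <= support["sy"] / 2
    return equality and inside_x and inside_y

table = {"x": 0, "y": 0, "z": 0.4, "sx": 1.0, "sy": 1.0, "sz": 0.8}
block = {"x": 0.1, "y": 0.0, "z": 0.85, "sx": 0.1, "sy": 0.1, "sz": 0.1}
resting = on_constraint(block, table)
floating = on_constraint({**block, "z": 1.2}, table)
```

A PDDL-style planner would then search over which such constraint sets to impose, calling a feasibility check like this one to prune infeasible structures.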
  • Item
    Navigation behavior design and representations for a people aware mobile robot system
    (Georgia Institute of Technology, 2016-01-15) Cosgun, Akansel
There are millions of robots in operation around the world today, and almost all of them operate on factory floors in isolation from people. However, it is now becoming clear that robots can provide much more value assisting people in daily tasks in human environments. Perhaps the most fundamental capability for a mobile robot is navigating from one location to another. Advances in mapping and motion planning research in the past decades have made indoor navigation a commodity for mobile robots. Yet, questions remain about how robots should move around humans. This thesis advocates the use of semantic maps and spatial rules of engagement to enable non-expert users to effortlessly interact with and control a mobile robot. A core concept explored in this thesis is the Tour Scenario, where the task is to familiarize a mobile robot with a new environment after it is first shipped and unpacked in a home or office setting. During the tour, the robot follows the user and creates a semantic representation of the environment. The user labels objects, landmarks and locations by performing pointing gestures and using the robot's user interface. The spatial semantic information is meaningful to humans, as it allows providing commands to the robot such as "bring me a cup from the kitchen table". While the robot is navigating towards the goal, it should not treat nearby humans as obstacles and should move in a socially acceptable manner. Three main navigation behaviors are studied in this work. The first behavior is point-to-point navigation. The navigation planner presented in this thesis borrows ideas from human-human spatial interactions, and takes into account personal spaces as well as the reactions of people who are in close proximity to the trajectory of the robot. The second navigation behavior is person following. After the description of a basic following behavior, a user study on person following for telepresence robots is presented.
Additionally, situation awareness for person following is demonstrated, where the robot facilitates tasks by predicting the intent of the user and utilizing the semantic map. The third behavior is person guidance. A tour-guide robot is presented with a particular application for visually impaired users.
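Planners that respect personal space commonly do so by adding a Gaussian penalty centered on each detected person to the usual path-length cost, so detours around people become cheaper than cutting through them. This sketch illustrates that idea; the weights and the ~45 cm radius are illustrative proxemics-style values, not the thesis's tuned parameters:

```python
import math

def social_cost(path, people, sigma=0.45, weight=5.0):
    """Path cost = path length + Gaussian personal-space penalties.

    path:   list of (x, y) waypoints
    people: list of (x, y) person positions
    sigma:  personal-space radius in meters (illustrative value)
    """
    cost = 0.0
    for i in range(1, len(path)):
        (x0, y0), (x1, y1) = path[i - 1], path[i]
        cost += math.hypot(x1 - x0, y1 - y0)  # length term
    for (x, y) in path:
        for (px, py) in people:
            d2 = (x - px) ** 2 + (y - py) ** 2
            cost += weight * math.exp(-d2 / (2 * sigma ** 2))
    return cost

person = [(1.0, 0.2)]
through = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]   # passes close to the person
around = [(0.0, 0.0), (1.0, -1.0), (2.0, 0.0)]   # detours around them
```

Under this cost, the longer detour scores lower than the direct path that intrudes on the person's space, which is the behavior a socially acceptable planner should prefer.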
  • Item
    A control theoretic perspective on learning in robotics
    (Georgia Institute of Technology, 2015-12-16) O'Flaherty, Rowland Wilde
For robotic systems to continue to move towards ubiquity, robots need to be more autonomous. More autonomy dictates that robots need to be able to make better decisions. Control theory and machine learning are two fields within robotics that focus on the decision-making process. However, each of these fields implements decision making at different levels of abstraction and at different time scales. Control theory defines low-level decisions at high rates, while machine learning defines high-level decisions at low rates. The objective of this research is to integrate tools from both machine learning and control theory to solve higher dimensional, complex problems, and to optimize the decision-making process. Throughout this research, multiple algorithms were created that use concepts from both control theory and machine learning, providing new tools for robots to make better decisions. One algorithm enables a robot to learn how to optimally explore an unknown space and autonomously decide when to explore for new information or exploit its current information. Another algorithm enables a robot to learn how to locomote with complex dynamics. These algorithms are evaluated both in simulation and on real robots. The results and analysis of these experiments are presented, demonstrating the utility of the algorithms introduced in this work. Additionally, a new notion of “learnability” is introduced to define and determine when a given dynamical system has the ability to gain knowledge to optimize a given objective function.
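The explore-versus-exploit decision mentioned above is classically formalized with an upper-confidence-bound (UCB) rule: rarely tried options receive a large exploration bonus, while well-sampled options are judged mostly on their observed reward. This is a textbook stand-in to make the trade-off concrete, not the thesis's learned policy:

```python
import math

def ucb_choice(mean_rewards, counts, total, c=1.4):
    """UCB1-style rule for the explore/exploit decision.

    mean_rewards[i]: observed average reward of option i
    counts[i]:       how many times option i has been tried
    total:           total number of trials so far
    c:               exploration weight
    """
    best, best_score = None, float("-inf")
    for i, (mu, n) in enumerate(zip(mean_rewards, counts)):
        if n == 0:
            return i  # always try an untested option first
        score = mu + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = i, score
    return best

# A mediocre but barely-sampled option can beat a well-known good one:
explore_pick = ucb_choice([0.8, 0.5], counts=[100, 2], total=102)
# Once both are well sampled, the higher mean wins:
exploit_pick = ucb_choice([0.8, 0.5], counts=[100, 100], total=200)
```

The bonus term shrinks as an option's count grows, so the rule smoothly shifts from exploration to exploitation without an explicit switch.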
  • Item
Towards a terradynamics of legged locomotion on homogeneous and heterogeneous granular media through robophysical approaches
    (Georgia Institute of Technology, 2015-11-16) Qian, Feifei
    The objective of this research is to discover principles of ambulatory locomotion on homogeneous and heterogeneous granular substrates and create models of animal and robot interaction within such environments. Since interaction with natural substrates is too complicated to model, we take a robophysics approach – we create a terrain generation system where properties of heterogeneous multi-component substrates can be systematically varied to emulate a wide range of natural terrain properties such as compaction, orientation, obstacle shape/size/distribution, and obstacle mobility within the substrate. A schematic of the proposed system is discussed in detail in the body of this dissertation. Control of such substrates will allow for the systematic exploration of parameters of substrate properties, particularly substrate stiffness and heterogeneities. With this terrain creation system, we systematically explore locomotor strategies of simplified laboratory robots when traversing over different terrain properties. A key feature of this proposed work is the ability to generate general interaction models of locomotor appendages with such complex substrates. These models will aid in the design and control of future robots with morphologies and control strategies that allow for effective navigation on a large diversity of terrains, expanding the scope of terramechanics from large tracked and treaded vehicles on homogeneous ground to arbitrarily shaped and actuated locomotors moving on complex heterogeneous terrestrial substrates.
  • Item
    Machine learning and dynamic programming algorithms for motion planning and control
    (Georgia Institute of Technology, 2015-11-16) Arslan, Oktay
Robot motion planning is one of the central problems in robotics, and has received a considerable amount of attention not only from roboticists but also from the control and artificial intelligence (AI) communities. Despite the different types of applications and physical properties of robotic systems, many high-level tasks of autonomous systems can be decomposed into subtasks that require point-to-point navigation while avoiding infeasible regions due to obstacles in the workspace. This dissertation aims at developing a new class of sampling-based motion planning algorithms that are fast, efficient and asymptotically optimal by employing ideas from Machine Learning (ML) and Dynamic Programming (DP). First, we interpret the robot motion planning problem as a form of machine learning problem, since the underlying search space is not known a priori, and utilize random geometric graphs to compute consistent discretizations of the underlying continuous search space. Then, we integrate existing DP algorithms and ML algorithms into the framework of sampling-based algorithms for better exploitation and exploration, respectively. We introduce a novel sampling-based algorithm, called RRT#, which improves upon the well-known RRT* algorithm by leveraging value and policy iteration methods as new information is collected. The proposed algorithms yield provable guarantees on correctness, completeness and asymptotic optimality. We also develop an adaptive sampling strategy by considering exploration as a classification (or regression) problem, and use online machine learning algorithms to learn the relevant region of a query, i.e., the region that contains the optimal solution, without significant computational overhead. We then extend the application of sampling-based algorithms to a class of stochastic optimal control problems and problems with differential constraints.
Specifically, we introduce the Path Integral-RRT algorithm for solving the optimal control of stochastic systems, and the CL-RRT# algorithm, which uses closed-loop prediction for trajectory generation in differential systems. One of the key benefits of CL-RRT# is that, given a low-level tracking controller, differential constraints are easier to handle for many systems, so the complex steering procedures required by most existing kinodynamic sampling-based algorithms are not needed. Implementation results of sampling-based planners for route planning of a full-scale autonomous helicopter under the Autonomous Aerial Cargo/Utility System (AACUS) program are provided.
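The dynamic-programming ingredient that RRT#-style planners add to sampling-based search is, at its core, Bellman value updates over the current random geometric graph: whenever new samples extend the graph, cost-to-goal values are propagated so the tree always reflects the best known costs. This toy sketch shows only that DP step on a fixed graph, not the interleaved sampling:

```python
def value_iteration(edges, goal, n, iters=100):
    """Bellman updates computing cost-to-goal on a fixed directed graph.

    edges[u] is a list of (v, cost) pairs for edges u -> v.
    Returns value[u] = cheapest known cost from u to the goal node.
    """
    INF = float("inf")
    value = [INF] * n
    value[goal] = 0.0
    for _ in range(iters):
        for u in range(n):
            for v, c in edges.get(u, []):
                if value[v] + c < value[u]:
                    value[u] = value[v] + c
    return value

# 0 -> 1 -> 3 (total cost 2) beats 0 -> 2 -> 3 (total cost 4):
graph = {0: [(1, 1), (2, 1)], 1: [(3, 1)], 2: [(3, 3)]}
values = value_iteration(graph, goal=3, n=4)
```

In the actual planner the graph grows with each sample and only the affected values are re-propagated, which is what keeps the per-iteration cost low.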
  • Item
    Developing an engagement and social interaction model for a robotic educational agent
    (Georgia Institute of Technology, 2015-11-16) Brown, LaVonda N.
Effective educational agents should accomplish four essential goals during a student's learning process: 1) monitor engagement, 2) re-engage when appropriate, 3) teach novel tasks, and 4) improve retention. In this dissertation, we focus on all of these objectives through the use of a teaching device (computer, tablet, or virtual reality game) and a robotic educational agent. We begin by developing and validating an engagement model based on the interactions between the student and the teaching device. This model uses time, performance, and/or eye gaze to determine the student's level of engagement. We then create a framework for implementing verbal and nonverbal (gestural) behaviors on a humanoid robot and evaluate how it is perceived and how effective it is for social interaction. These verbal and nonverbal behaviors are applied throughout the learning scenario to re-engage the students when the engagement model deems it necessary. Finally, we describe and validate the entire educational system, which uses the engagement model to activate the behavioral strategies embedded on the robot while the student learns a new task. We then follow up this study to evaluate student retention when using this system. The outcome of this research is the development of an educational system that effectively monitors student engagement, applies behavioral strategies, teaches novel tasks, and improves student retention to achieve individualized learning.
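An engagement model of the kind described — combining time, performance, and eye gaze into a score that triggers re-engagement behaviors — can be sketched as a weighted sum with a threshold. The weights, normalization, and threshold below are illustrative placeholders, not the validated model's parameters:

```python
def engagement_score(response_time, accuracy, gaze_on_task, max_time=30.0):
    """Toy engagement estimate from the model's three signals.

    response_time: seconds the student took on the last exercise
    accuracy:      fraction of recent answers that were correct (0..1)
    gaze_on_task:  fraction of time the student's gaze was on-task (0..1)
    """
    time_term = max(0.0, 1.0 - response_time / max_time)  # fast = engaged
    return 0.3 * time_term + 0.4 * accuracy + 0.3 * gaze_on_task

def should_reengage(score, threshold=0.5):
    """Trigger the robot's verbal/gestural re-engagement behaviors."""
    return score < threshold

engaged = engagement_score(response_time=5.0, accuracy=0.9, gaze_on_task=0.8)
drifting = engagement_score(response_time=28.0, accuracy=0.4, gaze_on_task=0.2)
```

In the full system, `should_reengage` is what connects the monitoring model to the robot's behavioral strategies.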
  • Item
    Robots learning actions and goals from everyday people
    (Georgia Institute of Technology, 2015-11-16) Akgun, Baris
Robots are destined to move beyond the caged factory floors towards domains where they will interact closely with humans. They will encounter highly varied environments, scenarios and user demands. As a result, programming robots after deployment will be an important requirement. To address this challenge, the field of Learning from Demonstration (LfD) emerged with the vision of programming robots through demonstrations of the desired behavior instead of explicit programming. The field of LfD within robotics has been around for more than 30 years and is still actively researched. However, very little research has been done on the implications of having a non-robotics expert as a teacher. This thesis aims to bridge this gap by developing learning from demonstration algorithms and interaction paradigms that allow non-expert people to teach robots new skills. The first step of the thesis was to evaluate how non-expert teachers provide demonstrations to robots. Keyframe demonstrations are introduced to the field of LfD to help people teach skills to robots and are compared with traditional trajectory demonstrations. The utility of keyframes is validated by a series of experiments with more than 80 participants. Based on the experiments, a hybrid of trajectory and keyframe demonstrations is proposed to take advantage of both, and a method was developed to learn from trajectories, keyframes and hybrid demonstrations in a unified way. A key insight from these user experiments was that teachers are goal oriented. They concentrated on achieving the goal of the demonstrated skills rather than providing good quality demonstrations. Based on this observation, this thesis introduces a method that can learn actions and goals from the same set of demonstrations. The action models are used to execute the skill and the goal models to monitor this execution.
A user study with eight participants and two skills showed that successful goal models can be learned from non-expert teacher data even if the resulting action models are not as successful. Following these results, this thesis further develops a self-improvement algorithm that uses the goal monitoring output to improve the action models without further user input. This approach is validated with an expert user and two skills. Finally, this thesis builds an interactive LfD system that incorporates both goal learning and self-improvement and evaluates it with 12 naive users and three skills. The results suggest that teacher feedback during experiments increases skill execution and monitoring success. Moreover, non-expert data can be used as a seed for self-improvement to fix unsuccessful action models.
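The action-plus-goal learning described above can be illustrated in miniature: aggregate aligned keyframe demonstrations into one skill model (the action), and check executed end poses against the learned goal pose (the monitor). The thesis learns sequenced probabilistic models over keyframes; the elementwise mean and tolerance check below are only a minimal sketch of the idea:

```python
def learn_keyframe_skill(demos):
    """Average aligned keyframe demonstrations into one skill model.

    Each demo is a list of keyframes (tuples of joint or pose values);
    all demos are assumed to have the same number of keyframes.
    Returns one averaged keyframe per position in the sequence.
    """
    n_frames = len(demos[0])
    skill = []
    for k in range(n_frames):
        frames = [demo[k] for demo in demos]
        dim = len(frames[0])
        skill.append(tuple(sum(f[d] for f in frames) / len(frames)
                           for d in range(dim)))
    return skill

def goal_reached(end_pose, goal_pose, tol=0.1):
    """Goal monitoring: is the executed end pose close to the model's?"""
    return all(abs(a - b) <= tol for a, b in zip(end_pose, goal_pose))

demos = [[(0.0, 0.0), (1.0, 2.0)],
         [(0.2, 0.0), (1.2, 2.2)]]
skill = learn_keyframe_skill(demos)  # averaged 2-keyframe skill
```

Self-improvement then closes the loop: when `goal_reached` fails on an execution, the action model is adjusted and retried without asking the teacher for more demonstrations.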