Series
IRIM Seminar Series

Series Type
Event Series
Publication Search Results

Now showing 1 - 10 of 106
  • Item
    Robot Learning: Quo Vadis?
    (Georgia Institute of Technology, 2020-11-18) Peters, Jan
    Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulator or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches. It involves generating a representation of motor skills by parameterized motor primitive policies acting as building blocks of movement generation, and a learned task execution module that transforms these movements into motor commands. We discuss learning on three different levels of abstraction: learning of accurate control is needed to execute movements, learning of motor primitives is needed to acquire simple movements, and learning of the task-dependent “hyperparameters” of these motor primitives allows learning complex tasks. We discuss task-appropriate learning approaches for imitation learning, model learning, and reinforcement learning for robots with many degrees of freedom. Empirical evaluations on several robot systems illustrate the effectiveness and applicability to learning control on an anthropomorphic robot arm. These robot motor skills range from toy examples (e.g., paddling a ball, ball-in-a-cup) to playing robot table tennis against a human being and manipulation of various objects.
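The "parameterized motor primitive policies" the abstract describes can be sketched in the style of a dynamic movement primitive: a stable spring-damper system shaped by a learned, weighted forcing term. This is a minimal illustration, not the speaker's exact formulation; the function name, gains, and basis-function heuristics are assumptions for the sketch.

```python
import numpy as np

def motor_primitive(weights, n_steps=200, goal=1.0, y0=0.0,
                    alpha=25.0, beta=6.25, tau=1.0, dt=0.01):
    """Roll out a dynamic-movement-primitive-style trajectory.

    `weights` parameterize a forcing term built from Gaussian basis
    functions; learning a skill amounts to adjusting these weights,
    while the spring-damper backbone guarantees convergence to `goal`.
    """
    n_basis = len(weights)
    centers = np.linspace(0, 1, n_basis)           # basis centers in phase space
    width = n_basis ** 1.5                         # basis sharpness (heuristic)
    y, dy, x = y0, 0.0, 1.0                        # position, velocity, phase
    trajectory = []
    for _ in range(n_steps):
        psi = np.exp(-width * (x - centers) ** 2)  # basis activations
        forcing = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
        ddy = alpha * (beta * (goal - y) - dy) + forcing  # spring-damper + forcing
        dy += ddy * dt / tau
        y += dy * dt / tau
        x += -2.0 * x * dt / tau                   # phase decays toward 0
        trajectory.append(y)
    return np.array(trajectory)
```

With all weights zero the primitive simply converges to the goal; nonzero weights reshape the transient, which is what imitation or reinforcement learning would tune.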
  • Item
    Reinforcement Learning: Leveraging Deep Learning for Control
    (Georgia Institute of Technology, 2020-11-04) Buhr, Craig
    Reinforcement learning is getting a lot of attention lately. People are excited about its potential to solve complex problems in areas such as robotics and automated driving, where traditional control methods can be challenging to use. In addition to using deep neural nets to represent the policy, reinforcement learning lends itself to control problems because its training incorporates repeated exploration of the environment. Because such exploration is time-consuming, costly, or dangerous when done with actual hardware, a simulation model is often used to represent the environment. In this talk, we provide an overview of reinforcement learning and its application to teaching a robot to walk. We discuss the differences between reinforcement learning and traditional control methods. Specific topics of reinforcement learning covered in this presentation include:
    • Creating environment models
    • Crafting effective reward functions
    • Deploying to embedded devices through automatic code generation for CPUs and GPUs
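The interplay the abstract describes between an environment model, a reward function, and exploration-driven training can be shown with a deliberately tiny tabular example (a toy corridor and Q-learning stand in for the robot-walking task and deep policy; every name and constant here is an assumption for illustration):

```python
import random

# Toy 1-D corridor: the agent starts at cell 0 and is rewarded for
# reaching cell 4. This stands in for the simulated environment the
# talk describes; a real walking task would use a physics model.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left / step right

def step(state, action):
    """Environment model: returns next state, reward, and done flag."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == GOAL else -0.01  # small cost shapes behavior
    return next_state, reward, next_state == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Q-learning with epsilon-greedy exploration of the environment."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: state x action
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Explore occasionally; otherwise act greedily on current values.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[s][i])
            s2, r, done = step(s, ACTIONS[a])
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])  # TD update
            s = s2
    return q
```

After training, the greedy policy walks right from every cell, which mirrors the abstract's point: the agent learned entirely from simulated interaction and a reward signal, never from hand-specified control laws.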
  • Item
    From Coexistence to Collaboration: Towards Reliable Collaborative Robots
    (Georgia Institute of Technology, 2020-10-14) Ravichandar, Harish
    The field of robotics has made incredible progress over the past several decades. Indeed, we have built impressive robots capable of performing complex and intricate tasks in a variety of domains. Most modern robots, however, passively coexist with humans while performing pre-specified tasks in predictable environments. As robots become an increasingly integral part of our everyday lives -- from factory floors to our living rooms -- it is imperative that we build robots that can reliably operate and actively collaborate in unstructured environments. This talk will present three key aspects of collaborative robotics that will help us make progress toward this goal. Specifically, we will discuss algorithmic techniques that enable robots to i) consistently and reliably perform manipulation tasks, ii) understand and predict the behavior of other agents involved, and iii) effectively collaborate with other robots and humans.
  • Item
    Star Wars: The Rise of Robots and Intelligent Machines
    (Georgia Institute of Technology, 2020-09-30) Gombolay, Matthew ; Mazumdar, Ellen Y. C. ; Yaszek, Lisa ; Young, Aaron
    A long time ago, in a galaxy far, far away, a space opera movie captured the imaginations of roboticists, researchers, and writers from around the world. Over the last 43 years, Star Wars has had an immense impact on our collective perception of robotics. It has introduced some of the most beloved droids as well as one of the most feared cyborgs in science fiction. In this panel, we will discuss how the Star Wars movies have influenced the design of robots and intelligent machines, including prosthetics, cybernetics, and artificial intelligence. We will show examples of how George Lucas portrayed good and evil in different types of technology and how he depicted human-robot teaming. These illustrations have driven how we design and interact with technology to this day. Whether you love or love-to-hate the movies, these are the droid discussions you are looking for!
  • Item
    Cost of Transport, the Correct Metric for Mobile Systems?
    (Georgia Institute of Technology, 2020-09-16) Mazumdar, Anirban ; Rouse, Elliott ; Sawicki, Gregory W. ; Young, Aaron ; Zhao, Ye
    The energetic cost of locomotion is often the gold-standard measure of efficiency, both for autonomous robotic walking and for humans augmented with lower-limb wearable robotics. The panel will discuss the relative benefits as well as the critical disadvantages of the field’s obsession with energy cost for optimizing robotic systems and controls. Applications to clinical robotics for impaired populations, autonomous biped robotics, and wearable robotics for human augmentation will be discussed. The panel will also discuss potential alternative measures beyond energy cost for assessing locomotion systems, such as those associated with stability and agility.
  • Item
    Navigation and Mapping for Robot Teams in Uncertain Environments
    (Georgia Institute of Technology, 2020-01-22) How, Jonathan P.
    Our work addresses the planning, control, and mapping issues for autonomous robot teams that operate in challenging, partially observable, dynamic environments with limited field-of-view sensors. In such scenarios, individual robots need to be able to plan and execute safe paths on short timescales to avoid imminent collisions. Performance can be improved by planning beyond the robots’ immediate sensing horizon using high-level semantic descriptions of the environment. For mapping on longer timescales, the agents must also be able to align and fuse imperfect and partial observations to construct a consistent and unified representation of the environment. Furthermore, these tasks must be done autonomously onboard, which typically adds significant complexity to the system. This talk will highlight four recently developed solutions to these challenges that have been implemented to (1) robustly plan paths and demonstrate high-speed agile flight of a quadrotor in unknown, cluttered environments; (2) certify safety in learning-based methods in the presence of perturbations in observations; (3) plan beyond the line of sight by utilizing the learned context within the local vicinity, with applications in last-mile delivery; and (4) correctly synchronize partial and noisy representations and fuse maps acquired by (single or multiple) robots using a multi-way data association algorithm.
  • Item
    Learning from the Field: Physically-based Deep Learning to Advance Robot Vision in Natural Environments
    (Georgia Institute of Technology, 2020-01-08) Skinner, Katherine
    Field robotics refers to the deployment of robots and autonomous systems in unstructured or dynamic environments across air, land, sea, and space. Robust sensing and perception can enable these systems to perform tasks such as long-term environmental monitoring, mapping of unexplored terrain, and safe operation in remote or hazardous environments. In recent years, deep learning has led to impressive advances in robotic perception. However, state-of-the-art methods still rely on gathering large datasets with hand-annotated labels for network training. For many applications across field robotics, dynamic environmental conditions or operational challenges hinder efforts to collect and manually label large training sets that are representative of all possible environmental conditions a robot might encounter. This limits the performance and generalizability of existing learning-based approaches for robot vision in field applications. In this talk, I will discuss my work to develop approaches for unsupervised learning to advance perceptual capabilities of robots in underwater environments. The underwater domain presents unique environmental conditions to robotic systems that exacerbate the challenges in perception for field robotics. To address these challenges, I leverage physics-based models and cross-disciplinary knowledge about the physical environment and the data collection process to provide constraints that relax the need for ground truth labels. This leads to a hybrid model-based, data-driven solution. I will also present work that relates this framework to challenges for autonomous vehicles in other domains.
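The hybrid model-based, data-driven idea in this abstract can be sketched as a self-supervised reconstruction loss: a simplified underwater image-formation model (attenuation plus backscatter, a common approximation rather than the speaker's exact model) maps a predicted clear scene back to what the camera observed, so no ground-truth labels are required. Function names and constants below are assumptions for illustration.

```python
import numpy as np

def underwater_formation(clear, depth, beta, backscatter):
    """Simplified underwater image-formation model:
    observed = clear * exp(-beta*depth) + backscatter * (1 - exp(-beta*depth)).
    Light from the scene attenuates with range while veiling backscatter
    accumulates."""
    transmission = np.exp(-beta * depth)
    return clear * transmission + backscatter * (1.0 - transmission)

def self_supervised_loss(predicted_clear, depth, beta, backscatter, observed):
    """Reconstruction loss needing no labels: push the network's predicted
    clear scene through the physics model and compare against the raw
    observation. Gradients of this loss would train the network."""
    reconstructed = underwater_formation(predicted_clear, depth, beta, backscatter)
    return float(np.mean((reconstructed - observed) ** 2))
```

If the prediction matches the true scene, the loss vanishes; this physics constraint plays the role that hand-annotated labels play in fully supervised training.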
  • Item
    Toward Dynamic, Tactical, Remote Robotic Ops: Active Perception and Other Key Technologies
    (Georgia Institute of Technology, 2019-11-13) Buerger, Stephen
    Dynamic, tactical, remote operations in which unmanned systems must manage multiple, changing objectives in uncertain, evolving, and potentially adversarial environments, without the benefit of prior scripting, present an extreme challenge that necessitates high levels of autonomy and physical capability. While robots’ ability to geometrically map and autonomously navigate environments is relatively mature, achieving higher-level operational goals requires the further technical leap of abstractly, or semantically, understanding surroundings. As biological systems have learned, efficient abstract perception requires not only that observations be intelligently processed over time, but also that sensors be actively controlled to acquire the knowledge that best minimizes uncertainties. The challenges and results of several ongoing projects in “active perception” will be discussed. These include work in which interior environments are rapidly mapped with both geometric and semantic information, as well as work in which threats are detected, localized, distinguished from false alarms, and identified via autonomous sensor control and real-time object classification. The talk will also describe R&D underway at Sandia in other technology areas essential to dynamic, remote operations, including novel robotic mobility systems capable of providing the obstacle traversal and energy efficiency needed for challenging real-world operations, novel robotic manipulation approaches, and real-time control applications for specific effectors including rock-drilling systems.
  • Item
    Robophysics: Physics Meets Robotics
    (Georgia Institute of Technology, 2019-10-30) Goldman, Daniel I.
    Robots will soon move from the factory floor and into our lives (e.g., autonomous cars, package delivery drones, and search-and-rescue devices). However, compared to living systems, robot capabilities in complex environments are limited. I believe the mindset and tools of physics can help facilitate the creation of robust self-propelled autonomous systems. This “robophysics” approach – the systematic search for novel dynamics and principles in robotic systems – can aid the computer science and engineering approaches that have proven successful in less complex environments. The rapidly decreasing cost of constructing sophisticated robot models with easy access to significant computational power bodes well for such interactions. Drawing from examples in the work of my group and our collaborators, I will discuss how robophysical studies have inspired new physics questions in low-dimensional dynamical systems (e.g., creation of analog quantum mechanics and gravity systems) and soft matter physics (e.g., emergent capabilities in ensembles of active “particles”). These studies have been useful for developing insight into biological locomotion in complex terrain (e.g., control targets via optimizing geometric phase) and have begun to aid engineers in creating devices that achieve life-like locomotor abilities on and within complex environments (e.g., semi-soft myriapod robots).
  • Item
    Uniting Robots and Ultrasound for Cardiac Repair
    (Georgia Institute of Technology, 2019-10-09) Howe, Robert
    Minimally invasive techniques have revolutionized many areas of surgery, but heart surgery has seen limited progress. We are working to combine ultrasound imaging and robotic manipulation to enable cardiac procedures that minimize patient impacts. One robotic system automatically points ultrasound catheters. This four-DOF robotic system enables panoramic views of internal heart structures and automatically tracks catheters working within the beating heart during minimally invasive procedures. Another robotic system uses real-time 3D ultrasound imaging for dynamic visualization of internal cardiac anatomy through the opaque blood pool. We have developed image processing algorithms that can track tissue structures and surgical instruments in real time, despite poor resolution, acoustic artifacts, and data rates of over 30 million voxels per second. For manipulation of rapidly moving cardiac tissue we have created robotic catheters that can keep pace with fast-moving tissue. This allows the surgeon to interact with the heart as if it were stationary. In vivo validation of this technology in atrial septal defect closure and mitral valve annuloplasty procedures demonstrates the potential for improved patient outcomes.