Series
IRIM Seminar Series

Series Type
Event Series

Publication Search Results

  • Item
    Robot Learning: Quo Vadis?
    (Georgia Institute of Technology, 2020-11-18) Peters, Jan
    Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and the cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches. It involves generating a representation of motor skills by parameterized motor primitive policies acting as building blocks of movement generation, and a learned task execution module that transforms these movements into motor commands. We discuss learning on three different levels of abstraction: learning of accurate control is needed to execute movements, learning of motor primitives is needed to acquire simple movements, and learning of the task-dependent “hyperparameters” of these motor primitives allows learning complex tasks. We discuss task-appropriate learning approaches for imitation learning, model learning, and reinforcement learning for robots with many degrees of freedom. Empirical evaluations on several robot systems illustrate the effectiveness and applicability of learning control on an anthropomorphic robot arm. These robot motor skills range from toy examples (e.g., paddling a ball, ball-in-a-cup) to playing robot table tennis against a human being and manipulating various objects.
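
    To make the representation concrete, here is a minimal sketch in the spirit of dynamic movement primitives, one common form of parameterized motor primitive: a stable point attractor whose trajectory is shaped by a learnable forcing term. The class name, gains, and basis functions below are illustrative assumptions, not the speaker's implementation.

    ```python
    # Minimal motor-primitive sketch (illustrative; not the talk's code).
    import numpy as np

    class MotorPrimitive:
        def __init__(self, n_basis=10, alpha=25.0, beta=6.25, tau=1.0):
            self.w = np.zeros(n_basis)                    # learnable shape parameters
            self.centers = np.linspace(0.0, 1.0, n_basis)
            self.widths = np.full(n_basis, float(n_basis) ** 2)
            self.alpha, self.beta, self.tau = alpha, beta, tau

        def forcing(self, phase):
            # Weighted sum of Gaussian basis functions, gated by the phase.
            psi = np.exp(-self.widths * (phase - self.centers) ** 2)
            return (psi @ self.w) / (psi.sum() + 1e-10) * phase

        def step(self, y, yd, goal, phase, dt):
            # Critically damped attractor toward the goal plus learned shaping.
            ydd = self.alpha * (self.beta * (goal - y) - yd) + self.forcing(phase)
            yd = yd + ydd * dt / self.tau
            y = y + yd * dt / self.tau
            return y, yd
    ```

    In this picture, imitation learning fits the weights w to demonstrated trajectories, while reinforcement learning adjusts them, along with task-level “hyperparameters” such as the goal, from reward.
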
  • Item
    Reinforcement Learning: Leveraging Deep Learning for Control
    (Georgia Institute of Technology, 2020-11-04) Buhr, Craig
    Reinforcement learning is getting a lot of attention lately. People are excited about its potential to solve complex problems in areas such as robotics and automated driving, where traditional control methods can be challenging to use. In addition to using deep neural networks to represent the policy, reinforcement learning lends itself to control problems because its training incorporates repeated exploration of the environment. Because such exploration is time-consuming, costly, or dangerous when done with actual hardware, a simulation model is often used to represent the environment. In this talk, we provide an overview of reinforcement learning and its application to teaching a robot to walk, and we discuss the differences between reinforcement learning and traditional control methods. Specific topics of reinforcement learning covered in this presentation include:
    • Creating environment models
    • Crafting effective reward functions
    • Deploying to embedded devices through automatic code generation for CPUs and GPUs
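
    As a language-agnostic illustration of those three ingredients, the sketch below pairs a toy environment model with a hand-crafted reward inside a training loop; the dynamics, reward weights, and random-action “policy” are placeholders, not the talk's actual walking model or tooling.

    ```python
    # Toy RL environment sketch (illustrative placeholders throughout).
    import numpy as np

    class WalkerEnv:
        def reset(self):
            self.state = np.zeros(4)  # e.g., [pitch, height, and their rates]
            return self.state

        def step(self, action):
            # Placeholder linear dynamics standing in for a physics simulation.
            self.state = self.state + 0.01 * np.concatenate([self.state[2:], action])
            forward_speed = self.state[2]
            upright = -abs(self.state[0])            # penalize leaning
            effort = -0.01 * float(action @ action)  # penalize large torques
            reward = forward_speed + upright + effort
            done = abs(self.state[0]) > 1.0          # episode ends on a fall
            return self.state, reward, done

    env = WalkerEnv()
    state = env.reset()
    for t in range(1000):
        action = np.random.uniform(-1.0, 1.0, size=2)  # a learned policy goes here
        state, reward, done = env.step(action)
        if done:
            state = env.reset()
    ```
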
  • Item
    From Coexistence to Collaboration: Towards Reliable Collaborative Robots
    (Georgia Institute of Technology, 2020-10-14) Ravichandar, Harish
    The field of robotics has made incredible progress over the past several decades. Indeed, we have built impressive robots capable of performing complex and intricate tasks in a variety of domains. Most modern robots, however, passively coexist with humans while performing pre-specified tasks in predictable environments. As robots become an increasingly integral part of our everyday lives -- from factory floors to our living rooms -- it is imperative that we build robots that can reliably operate and actively collaborate in unstructured environments. This talk will present three key aspects of collaborative robotics that will help us make progress toward this goal. Specifically, we will discuss algorithmic techniques that enable robots to i) consistently and reliably perform manipulation tasks, ii) understand and predict the behavior of other agents involved, and iii) effectively collaborate with other robots and humans.
  • Item
    Star Wars: The Rise of Robots and Intelligent Machines
    (Georgia Institute of Technology, 2020-09-30) Gombolay, Matthew ; Mazumdar, Ellen Y. C. ; Yaszek, Lisa ; Young, Aaron
    A long time ago, in a galaxy far, far away, a space opera movie captured the imaginations of roboticists, researchers, and writers from around the world. Over the last 43 years, Star Wars has had an immense impact on our collective perception of robotics. It has introduced some of the most beloved droids, as well as one of the most feared cyborgs, in science fiction. In this panel, we will discuss how the Star Wars movies have influenced the design of robots and intelligent machines, including prosthetics, cybernetics, and artificial intelligence. We will show examples of how George Lucas portrayed good and evil in different types of technology and how he depicted human-robot teaming. These illustrations have driven how we design and interact with technology to this day. Whether you love or love to hate the movies, these are the droid discussions you are looking for!
  • Item
    Cost of Transport, the Correct Metric for Mobile Systems?
    (Georgia Institute of Technology, 2020-09-16) Mazumdar, Anirban ; Rouse, Elliott ; Sawicki, Gregory W. ; Young, Aaron ; Zhao, Ye
    The energetic cost of locomotion is often the gold-standard measure of efficiency, both for autonomous robotic walking and for humans augmented with lower-limb wearable robotics. The panel will discuss the relative benefits, as well as the critical disadvantages, of the field’s obsession with energy cost for optimizing robotic systems and controls. Applications to clinical robotics for impaired populations, autonomous bipedal robots, and wearable robotics for human augmentation will be discussed. The panel will also discuss potential alternative measures beyond energy cost for assessing locomotion systems, such as those associated with stability and agility.
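
    For reference, the metric in question has a standard dimensionless definition: energy consumed per unit weight per unit distance traveled. A short sketch (the example numbers are made up):

    ```python
    # Dimensionless cost of transport: CoT = E / (m * g * d); lower is better.
    def cost_of_transport(energy_j, mass_kg, distance_m, g=9.81):
        return energy_j / (mass_kg * g * distance_m)

    # Illustrative example: a 75 kg system spending 300 J per meter of travel.
    print(cost_of_transport(energy_j=300.0, mass_kg=75.0, distance_m=1.0))  # ~0.41
    ```
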
  • Item
    Navigation and Mapping for Robot Teams in Uncertain Environments
    (Georgia Institute of Technology, 2020-01-22) How, Jonathan P.
    Our work addresses the planning, control, and mapping issues for autonomous robot teams that operate in challenging, partially observable, dynamic environments with limited field-of-view sensors. In such scenarios, individual robots need to be able to plan and execute safe paths on short timescales to avoid imminent collisions. Performance can be improved by planning beyond the robots’ immediate sensing horizon using high-level semantic descriptions of the environment. For mapping on longer timescales, the agents must also be able to align and fuse imperfect and partial observations to construct a consistent and unified representation of the environment. Furthermore, these tasks must be done autonomously onboard, which typically adds significant complexity to the system. This talk will highlight four recently developed solutions to these challenges that have been implemented to (1) robustly plan paths and demonstrate high-speed agile flight of a quadrotor in unknown, cluttered environments; (2) certify safety of learning-based methods in the presence of perturbations in observations; (3) plan beyond line-of-sight by utilizing the learned context of the local vicinity, with applications in last-mile delivery; and (4) correctly synchronize partial and noisy representations and fuse maps acquired by (single or multiple) robots using a multi-way data association algorithm.
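
    As one concrete piece of challenge (4): once landmark correspondences between two robots' partial maps have been hypothesized, fusing the maps requires the rigid transform relating the two frames. The sketch below shows the classic SVD-based least-squares solution for matched 2-D landmarks; it is a standard building block, not the multi-way data association algorithm the talk describes (which is what produces the correspondences).

    ```python
    # Rigid alignment of matched landmarks via SVD (Kabsch-style solution).
    import numpy as np

    def align_maps(points_a, points_b):
        """Return R, t minimizing ||R @ a_i + t - b_i||^2 over matched pairs.

        points_a, points_b: (N, 2) arrays of corresponding landmark positions.
        """
        mu_a, mu_b = points_a.mean(axis=0), points_b.mean(axis=0)
        H = (points_a - mu_a).T @ (points_b - mu_b)  # 2x2 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                     # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_b - R @ mu_a
        return R, t
    ```
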
  • Item
    Learning from the Field: Physically-based Deep Learning to Advance Robot Vision in Natural Environments
    (Georgia Institute of Technology, 2020-01-08) Skinner, Katherine
    Field robotics refers to the deployment of robots and autonomous systems in unstructured or dynamic environments across air, land, sea, and space. Robust sensing and perception can enable these systems to perform tasks such as long-term environmental monitoring, mapping of unexplored terrain, and safe operation in remote or hazardous environments. In recent years, deep learning has led to impressive advances in robotic perception. However, state-of-the-art methods still rely on gathering large datasets with hand-annotated labels for network training. For many applications across field robotics, dynamic environmental conditions or operational challenges hinder efforts to collect and manually label large training sets that are representative of all possible environmental conditions a robot might encounter. This limits the performance and generalizability of existing learning-based approaches for robot vision in field applications. In this talk, I will discuss my work to develop approaches for unsupervised learning to advance perceptual capabilities of robots in underwater environments. The underwater domain presents unique environmental conditions to robotic systems that exacerbate the challenges in perception for field robotics. To address these challenges, I leverage physics-based models and cross-disciplinary knowledge about the physical environment and the data collection process to provide constraints that relax the need for ground truth labels. This leads to a hybrid model-based, data-driven solution. I will also present work that relates this framework to challenges for autonomous vehicles in other domains.
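
    To make the hybrid model-based, data-driven idea concrete, the sketch below uses a simplified underwater image-formation model (range-dependent attenuation plus backscatter) as a reconstruction constraint, so that agreement with the observed image, rather than hand-annotated labels, supervises a network's predictions. The model form and parameter names are illustrative assumptions, not the speaker's exact formulation.

    ```python
    # Physics-based self-supervision sketch (simplified underwater model).
    import numpy as np

    def underwater_formation(true_color, range_m, beta, backscatter):
        """I = J * exp(-beta * d) + B * (1 - exp(-beta * d)), per channel.

        true_color: (H, W, 3) scene radiance J; range_m: (H, W) range d;
        beta: per-channel attenuation (3,); backscatter: veiling light B (3,).
        """
        transmission = np.exp(-beta * range_m[..., None])  # (H, W, 3)
        return true_color * transmission + backscatter * (1.0 - transmission)

    def self_supervised_loss(observed, pred_color, pred_range, beta, B):
        # Penalize disagreement between the observed image and the image the
        # physics model would synthesize from the network's predictions.
        reconstructed = underwater_formation(pred_color, pred_range, beta, B)
        return float(np.mean((observed - reconstructed) ** 2))
    ```
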