Series
IRIM Seminar Series

Series Type
Event Series
Publication Search Results

Now showing 1 - 10 of 136
  • Item
    Control Principles for Robot Learning
    (Georgia Institute of Technology, 2024-02-07) Murphey, Todd
    Embodied learning systems rely on motion synthesis to enable efficient and flexible learning during continuous online deployment. Motion motivated by learning needs can be found throughout natural systems, yet surprisingly little is known about synthesizing motion to support learning for robotic systems. Learning goals create a distinct set of control-oriented challenges, including how to choose measures as objectives, synthesize real-time control based on these objectives, impose physics-oriented constraints on learning, and produce analyses that guarantee performance and safety with limited knowledge. In this talk, I will discuss learning tasks that robots encounter, measures for information content of observations, and algorithms for generating action plans. Examples from biology and robotics will be used throughout the talk, and I will conclude with future challenges.
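The idea of choosing motion for its information content can be illustrated with a toy example (my own sketch, not taken from the talk): under a Gaussian belief over an unknown parameter, the mutual information between the parameter and one noisy linear measurement has a closed form, and a greedy controller simply picks the candidate action whose measurement is most informative.

```python
import math

def info_gain(prior_var: float, noise_var: float) -> float:
    """Mutual information (in nats) between a Gaussian parameter with the
    given prior variance and one linear measurement with the given noise."""
    return 0.5 * math.log(1.0 + prior_var / noise_var)

def choose_action(prior_var: float, noise_by_action: dict) -> str:
    """Greedily pick the action whose measurement is most informative."""
    return max(noise_by_action,
               key=lambda a: info_gain(prior_var, noise_by_action[a]))

# Hypothetical candidate actions: move closer (low measurement noise)
# versus observe from afar (high measurement noise).
actions = {"approach": 0.1, "stay": 1.0}
best = choose_action(prior_var=1.0, noise_by_action=actions)  # -> "approach"
```

The same greedy structure generalizes to richer information measures; the point is only that "motion for learning" turns measurement selection into a control objective.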
  • Item
    Robotic Locomotion and Sensing on Deformable Terrains
    (Georgia Institute of Technology, 2024-01-24) Qian, Feifei
    Achieving robust mobility on natural and deformable terrains is pivotal for robots to operate effectively in real-world scenarios. Despite remarkable progress in robotics hardware and software, today’s robots still face challenges in traversing terrains like sand dunes, soft snow, and sticky mud, significantly trailing behind the locomotion abilities of animals and humans. This gap limits robots’ capabilities to aid in critical missions such as earthquake search and rescue, supply delivery, and planetary exploration. This talk discusses our recent efforts to bridge this gap. First, we show that by understanding the force responses from deformable terrains, we could allow robots to elicit desired ground reaction forces from challenging terrains like sand and mud and produce significantly improved locomotion performance. Second, we show that by leveraging the high force transparency of direct-drive actuators, robots could use their legs as proprioceptive sensors to determine substrate strength and mechanical properties. This proprioceptive sensing capability can enable robots to gather rich information from their environment during every step, and adapt their locomotion strategies accordingly. Finally, we discuss our latest progress in applying these locomotion and sensing strategies in Earth and planetary exploration scenarios, and how the improved sensing and locomotion capabilities pave the way for new human-robot teaming workflows.
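As a toy illustration of proprioceptive substrate sensing (my own sketch, not the speaker's method), a leg that records force and penetration depth during a step can estimate an effective ground stiffness with a one-parameter least-squares fit:

```python
import numpy as np

def estimate_stiffness(depth: np.ndarray, force: np.ndarray) -> float:
    """Fit F = k * d by least squares; k characterizes substrate strength."""
    k, *_ = np.linalg.lstsq(depth.reshape(-1, 1), force, rcond=None)
    return float(k[0])

# Synthetic foot-penetration data: true stiffness 2000 N/m plus sensor noise.
rng = np.random.default_rng(0)
d = np.linspace(0.005, 0.03, 20)          # penetration depths (m)
f = 2000.0 * d + rng.normal(0.0, 1.0, d.size)  # measured forces (N)
k_hat = estimate_stiffness(d, f)
```

Real substrates are nonlinear and rate-dependent, but even this crude per-step estimate hints at how every footfall can double as a measurement.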
  • Item
    Autonomous and human-collaborative robotic manipulation
    (Georgia Institute of Technology, 2024-01-10) Lynch, Kevin
    Research at the Center for Robotics and Biosystems at Northwestern University includes bio-inspiration, neuromechanics, human-machine systems, and swarm robotics, among other topics. In this talk I will focus on our work on manipulation, including autonomous in-hand robotic manipulation and safe, intuitive human-collaborative manipulation among one or more humans and a team of mobile manipulators.
  • Item
    Do We Really Need all that Data? Learning and Control for Contact-rich Manipulation
    (Georgia Institute of Technology, 2023-11-15) Posa, Michael
    For all the promise of big-data machine learning, what will happen when robots deploy to our homes and workplaces and inevitably encounter new objects, new tasks, and new environments? If a solution to every problem cannot be pre-trained, then robots will need to adapt to this novelty. Can a robot, instead, spend a few seconds to a few minutes gathering information and then accomplish a complex task? Why does it seem that so much data is required, anyway? I will first argue that the hybrid, contact-driven aspects of manipulation clash with the inductive biases inherent in standard learning methods, driving the current need for large datasets. I will then show how contact-inspired implicit learning, embedding convex optimization, can reshape the loss landscape and enable more accurate training, better generalization, and ultimately greater data efficiency. Finally, I will present our latest results on how these learned models can be deployed via real-time multi-contact MPC for robotic manipulation.
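The complementarity structure that makes contact hybrid, and that implicit, optimization-based models can capture exactly, shows up even in one dimension (an illustrative sketch only; the talk's models are far richer):

```python
def contact_force(phi: float) -> float:
    """Closed-form solution of the 1-D contact QP
        min_{lam >= 0} 0.5 * lam**2   s.t.  phi + lam >= 0,
    which encodes the complementarity condition
        lam >= 0,  phi + lam >= 0,  lam * (phi + lam) == 0:
    zero force when the gap phi is open, and exactly enough force to
    close a negative (penetrating) gap otherwise."""
    return max(0.0, -phi)

f_open = contact_force(0.3)    # open gap -> 0.0 (no force)
f_pen = contact_force(-0.2)    # penetration -> 0.2 (force closes the gap)
```

The kink at phi = 0 is what smooth explicit regressors struggle to fit; representing the force as the solution of a small convex program makes the kink exact rather than approximated.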
  • Item
    Dynamic Legged LocoManipulation: Balancing Reinforcement Learning with Model-Based Control
    (2023-04-12) Sreenath, Koushil
    Model-based control methods such as control Lyapunov and control barrier functions can provide formal guarantees of stability and safety for dynamic legged locomotion, given a precise model of the system. In contrast, learning-based approaches such as reinforcement learning have demonstrated remarkable robustness and adaptability to model uncertainty in achieving quadrupedal locomotion. However, reinforcement-learning-based policies lack formal guarantees. In this presentation, I will demonstrate that simple techniques from nonlinear control theory can be employed to establish formal stability guarantees for reinforcement learning policies. Moreover, I will illustrate the potential of reinforcement learning for more complex bipedal and humanoid robots, as well as for loco-manipulation tasks that entail both locomotion and manipulation. This brings up an intriguing question: Is reinforcement learning alone sufficient for achieving optimal results in dynamic legged locomotion, or is there still a need for model-based control methods?
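One simple way nonlinear control theory can wrap a learned policy in a guarantee is a control-barrier-function safety filter. The sketch below (my own, for a single-integrator toy system, not the speaker's implementation) solves the filter's quadratic program in closed form:

```python
def cbf_filter(x: float, u_nom: float, alpha: float = 1.0) -> float:
    """Minimally modify a nominal (e.g., learned) command so that the
    safe set {x >= 0} stays forward-invariant for the dynamics x_dot = u.
    With barrier h(x) = x, the CBF condition h_dot + alpha*h >= 0
    reduces to u >= -alpha * x, so the QP
        min_u (u - u_nom)**2  s.t.  u >= -alpha * x
    is solved by a simple clamp."""
    return max(u_nom, -alpha * x)

# Near the boundary, an unsafe learned command is overridden...
u_safe = cbf_filter(x=0.1, u_nom=-5.0)   # -> -0.1
# ...while a command that respects the constraint passes through.
u_pass = cbf_filter(x=10.0, u_nom=-5.0)  # -> -5.0
```

The learned policy remains in charge almost everywhere; the filter intervenes only when the barrier condition is about to be violated, which is how formal invariance guarantees coexist with learned behavior.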
  • Item
    Towards Human-Friendly Robots: We need more Robots at Home
    (Georgia Institute of Technology, 2023-03-08) Kim, Joohyung
    The demand for robots that can work in close proximity and interact physically with humans has been increasing. Currently, there are robots in airports, restaurants, and amusement parks that guide, serve, and entertain people. However, despite technological advancements, there are still very few robotic applications that meet the public’s expectations. To make robots more helpful in our daily lives, we need a better understanding of human environments and tasks, better methods for robots to perform tasks, and better designs for robots to interact with humans naturally and safely. In this presentation, I will share my experience and work on designing human-friendly robots through robot design, motion control, and human-robot interaction. Additionally, I will introduce KIMLAB’s recent approach to designing and implementing robots for home use.
  • Item
    Soft robots for humanity
    (Georgia Institute of Technology, 2023-03-01) Okamura, Allison M.
    Traditional robotic manipulators are constructed from rigid links and localized joints, which enables large forces and workspaces but creates challenges for safe and comfortable interaction with the human body. In contrast, many soft robots have a volumetric form factor and continuous bending that allows them to mechanically adapt to their environment — but these same mechanical properties can hinder forceful interactions required for physical assistance and feedback to humans. This talk will examine robotic systems and haptic devices that achieve the best of both worlds by leveraging softness and rigidity to enable novel shape control, generate significant interaction forces, and provide a compliant interface to the human body.
  • Item
    Making Large Dimensional Problems Small Again
    (Georgia Institute of Technology, 2023-02-22) Choset, Howie
    Motion is all around us. Motion is particularly interesting when it has many degrees of freedom. This talk covers the design, sensing, and planning for high-DOF systems: snake robots, multi-agent teams, and modular robots. Thus far, each system has required different fundamentals – geometric mechanics for snake robot locomotion, deferred planning and ergodic search for multi-agent systems, and novel generator and discriminator networks for modular robots – which will be covered in this talk. While no grand unifying theory combines these approaches, they all share one aspect in common: reducing complex high-dimensional problems into low-dimensional ones. In pursuit of this investigation in reduction, my group has created several embedded systems - actuators and edge sensors - to build and deploy robots that stress-test the core assumptions in the theory and demonstrate efficacy for applications of national importance. These applications include minimally invasive surgery, urban search and rescue, manufacturing, assembly in low-Earth orbit, maintenance of municipal infrastructure, and agile recycling. This talk will discuss these confined-space applications and, if time permits, the five spin-off companies and one manufacturing institute that my colleagues and I co-founded to commercialize the core technologies covered in this talk.
  • Item
    The Right Stuff: Representing Safety to Get Robots Out in the Real World
    (Georgia Institute of Technology, 2023-02-08) Kousik, Shreyas
    Autonomous robots have the incredible potential to aid people by taking on difficult tasks and working alongside us. However, it will be difficult to trust robots in widespread deployment without knowing when they are safe. Safety can often be expressed theoretically yet suffer an imperfect translation into numerical representation. My research focuses on this gap: what are the right representations of robot safety to bridge theory and real-world deployment? For this talk, I focus on safety in collision avoidance for robot motion planning. In particular, I present Reachability-based Trajectory Design (RTD), a framework that unites theory and representation for real-time, safe robot motion planning. RTD’s foundation in theory makes it applicable to a wide variety of systems, including self-driving cars, quadrotor drones, and manipulator arms. In practice, over thousands of simulations and dozens of hardware trials, RTD has resulted in no collisions while outperforming other methods, establishing a new state of the art. My future work extends from this paradigm to enable robots to learn and adapt their own notions of safety in three ways: online adaptive dynamic model identification for safe motion planning, robust perception that is targeted towards safe control, and co-design of a robot’s perception, planning, and control algorithms to reduce overly cautious robot behavior without losing safety guarantees. In each of these future directions I seek to create and deploy the right representations to transfer theory onto hardware, to make robots do more amazing things safely.
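The flavor of reachability-based safety certification can be conveyed with a one-dimensional interval sketch (a hypothetical simplification of my own; RTD itself uses far richer reachable-set representations):

```python
def reachable_intervals(x0, u_max, dt, horizon):
    """Over-approximate the reachable set of x_dot = u with |u| <= u_max
    as an interval that grows by u_max*dt at each time step."""
    lo, hi = x0, x0
    out = []
    for _ in range(horizon):
        lo -= u_max * dt
        hi += u_max * dt
        out.append((lo, hi))
    return out

def plan_is_safe(x0, u_max, dt, horizon, obstacle):
    """Certify the horizon as safe only if no reachable interval overlaps
    the obstacle interval. Over-approximation makes this conservative:
    it may reject safe plans, but it never accepts unsafe ones."""
    o_lo, o_hi = obstacle
    return all(hi < o_lo or lo > o_hi
               for lo, hi in reachable_intervals(x0, u_max, dt, horizon))

ok = plan_is_safe(x0=0.0, u_max=1.0, dt=0.1, horizon=5, obstacle=(2.0, 3.0))
```

The key design choice mirrors RTD's: checking the entire over-approximated reachable set, rather than a single predicted trajectory, is what turns a numerical collision check into a safety guarantee.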
  • Item
    Combining Learning and Control in Cyber-Physical Systems
    (Georgia Institute of Technology, 2023-01-25) Malikopoulos, Andreas
    Cyber-physical systems (CPS), in most instances, represent systems of subsystems with an informationally decentralized structure. To derive optimal control strategies for such systems, we typically assume an ideal model, e.g., a controlled transition kernel. Such model-based control approaches cannot effectively facilitate optimal solutions with performance guarantees due to the discrepancy between the model and the actual CPS. On the other hand, in most CPS there is a large volume of data of a dynamic nature which is added to the system gradually in real time and not altogether in advance. Thus, traditional supervised learning approaches cannot always facilitate robust solutions using data derived offline. By contrast, applying reinforcement learning approaches directly to the actual CPS might have significant implications for the safety and robust operation of the system. In this talk, I will present a theoretical framework founded at the intersection of control theory and learning that circumvents these challenges in deriving optimal strategies for CPS. In this framework, we aim to identify a sufficient information state for the CPS that takes values in a time-invariant space, and use this information state to derive separated control strategies. Separated control strategies are related to the concept of separation between the estimation of the information state and control of the system. By establishing separated control strategies, we can derive offline the optimal control strategy of the system with respect to the information state, which might not be precisely known due to model uncertainties or complexity of the system, and then use learning methods to learn the information state online while data are added gradually to the system in real time.
This approach could effectively facilitate optimal solutions with performance guarantees in a wide range of CPS applications such as emerging mobility systems, networked control systems, smart power grids, cooperative cyber-physical networks, cooperation of robots, and internet of things.
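The separation idea can be sketched for a scalar linear system (a toy of my own, not the framework from the talk): the feedback gain is computed offline from the model, while a recursive estimator supplies the information state online, and the controller only ever sees the estimate.

```python
def lqr_gain(a, b, q, r, iters=200):
    """Offline phase: iterate the scalar discrete Riccati equation to
    convergence and return the optimal feedback gain K."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

a, b = 1.2, 1.0              # unstable open-loop plant x' = a*x + b*u
K = lqr_gain(a, b, q=1.0, r=1.0)

# Online phase: the controller acts only on an estimate (the information
# state), updated recursively as measurements arrive in real time.
x, x_hat = 5.0, 0.0          # true state vs. initial estimate
for _ in range(40):
    u = -K * x_hat           # separated control: uses the estimate only
    x = a * x + b * u        # true plant evolves
    x_hat = a * x_hat + b * u      # predict with the model
    x_hat += 0.5 * (x - x_hat)     # correct with the measurement
```

Even though the controller never reads the true state, both the estimation error and the state converge, illustrating why the control strategy can be fixed offline while the information state is learned online.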