Series
IRIM Seminar Series

Series Type
Event Series

Publication Search Results (showing 10 of 40)

  • Item
    Multirobot Coordination: From High-level Specification to Correct Execution
    (2015-12-02) Ayanian, Nora
    Using a group of robots in place of a single complex robot to accomplish a task has many benefits, including simplified system repair, less down time, and lower cost. Combining heterogeneous groups of these multirobot systems allows multiple subtasks to be addressed in parallel, reducing the time required for missions such as search and rescue, reconnaissance, and mine detection. These missions demand different roles for robots, necessitating a strategy for coordinated autonomy while respecting any constraints the environment may impose. Synthesis of control policies for heterogeneous multirobot systems is particularly challenging because of inter-robot constraints such as communication maintenance and collision avoidance, the need to coordinate robots within groups, and the dynamics of individual robots. I will present approaches to synthesizing feedback policies for navigating groups of robots in constrained environments. These approaches automatically and concurrently solve both the path planning and control synthesis problems, and tasks are specified at a high level, for example through an iPad interface used to navigate a complex environment with a team of UAVs. I will also present some preliminary work on novel approaches to developing controllers for many types of multirobot tasks by using crowdsourced multiplayer game data.
  • Item
    Words, Pictures, and Common Sense
    (2015-12-01) Parikh, Devi
    As computer vision and natural language processing techniques are maturing, there is heightened activity in exploring the connection between images and language. In this talk, I will present several recent and ongoing projects in my lab that take a new perspective on problems like automatic image captioning, which are receiving a lot of attention lately. In particular, I will start by describing a new methodology for evaluating image-captioning approaches. I will then discuss image specificity — a concept capturing the phenomenon that some images are specific and elicit consistent descriptions from people, while other images are ambiguous and elicit a wider variety of descriptions from different people. Rather than treat this variance as noise, we model it as a signal (a toy illustration of this notion appears after the list below). We demonstrate that modeling image specificity results in improved performance in applications such as text-based image retrieval. I will then talk about our work on leveraging visual common sense for seemingly non-visual tasks such as textual fill-in-the-blanks or paraphrasing. We propose imagining the scene behind the text to solve these problems. The imagination need not be photorealistic, so we imagine the scene as a visual abstraction using clipart. We show that jointly reasoning about the imagined scene and the text yields better performance on these textual tasks than reasoning about the text alone. Finally, I will introduce a new task that pushes the understanding of language and vision beyond automatic image captioning — visual question answering (VQA). Not only does it involve computer vision and natural language processing, but doing well at this task will also require the machine to reason about visual and non-visual common sense, as well as factual knowledge bases. More importantly, it will require the machine to know when to tap which source of information. I will describe our ongoing efforts at collecting a first-of-its-kind, large VQA dataset that will enable the community to explore this rich, challenging, and fascinating task, which pushes the frontier towards truly AI-complete problems.
  • Item
    Multi-Robot Systems for Monitoring and Controlling Large Scale Environments
    (Georgia Institute of Technology, 2015-04-22) Schwager, Mac
    Groups of aerial, ground, and sea robots working collaboratively have the potential to transform the way we sense and interact with our environment at large scales. They can serve as eyes-in-the-sky for environmental scientists, farmers, and law enforcement agencies, providing critical, real-time information about dynamic environments and cityscapes. They can even help us to control large-scale environmental processes, autonomously cleaning up oil spills, tending to the needs of crop lands, and fighting forest fires, while humans stay at a safe distance. This talk will present an overview of research toward the realization of this vision, giving special attention to recent work on distributed optimization-based control algorithms for groups of aerial robots to monitor large-scale environments. I will describe a general optimization-based control design methodology for synthesizing practical, distributed robot controllers with provable stability and convergence properties. I will also describe low-level control techniques based on differential flatness to coordinate the motion of teams of multirotor helicopters in an agile and computationally efficient manner. Experimental studies with groups of multirotor robots flying both outdoors and indoors using these controllers will also be discussed.
  • Item
    Risky Robotics: Developing a Practical Solution for Stochastic Optimal Control
    (Georgia Institute of Technology, 2015-04-15) Rogers, Jonathan
    Risk is a ubiquitous aspect of control and path planning for robots operating in unstructured real-world environments. Nevertheless, humans still far surpass robots in their ability to evaluate complex tradeoffs under uncertainty through risk analysis and subsequent decision-making. Many traditional approaches to the stochastic optimal control problem, such as Partially Observable Markov Decision Processes (POMDPs), suffer from the curse of dimensionality and become computationally intractable in many real-world scenarios. In this seminar, a new class of stochastic control algorithms is proposed that makes use of emerging high-performance computing devices, specifically GPUs, to perform real-time uncertainty quantification (UQ) as part of a feedback control loop. These algorithms propagate the time-varying probability density of the robot state and optimize control actions with respect to accuracy, obstacle avoidance, and other criteria (a simplified sketch of this kind of coupling appears after the list below). Key to practical implementation of these algorithms is the fact that many UQ algorithms can be parallelized; thus they can leverage emerging embedded high-throughput devices for real-time or near real-time execution. Following an overview of the general formulation of these stochastic control algorithms, examples are provided in the form of autonomous parafoil and quadrotor flight controllers that make use of real-time uncertainty analysis for obstacle avoidance in constrained environments. Recent experimental flight tests using embedded GPUs show that a strong coupling between UQ and optimal control offers a practical solution for risk mitigation by autonomous systems.
  • Item
    Humanoids of the Future
    (2015-03-25) Sentis, Luis
    As the world's population lives longer, human-centered robotics is emerging as a way to assist, augment, and represent humans, improving their comfort, productivity, and health. In this context, the Human Centered Robotics Lab studies key problems in the mobility and manipulation of humanoid robots. In the first part of the talk, I will describe our work with the Office of Naval Research on endowing bipedal robots with agile and compliant physical capabilities. In particular, I will focus on the performance analysis of the whole-body operational space control framework during locomotion. In the second part of the talk, I will describe our work with NASA on physical human-robot interaction involving collisions between humans and mobile platforms and cooperative behaviors in rough terrain. Finally, I will comment on our work on building high-performance actuators and software middleware for NASA's Valkyrie humanoid robot.
  • Item
    The Robotic Scientist: Automating Discovery, from Cognitive Robotics to Computational Biology
    (2015-03-04) Lipson, Hod
    Can robots discover scientific laws automatically? Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. This talk will outline a series of recent research projects—starting with self-reflecting robotic systems and ending with machines that can formulate hypotheses, design experiments, and interpret the results—to discover new scientific laws. We will see examples from psychology to cosmology, from classical physics to modern physics, from big science to small science.
  • Item
    Efficient Lifelong Machine Learning
    (2015-02-11) Eaton, Eric R.
    Lifelong learning is a key characteristic of human intelligence, largely responsible for the variety and complexity of our behavior. This process allows us to rapidly learn new skills by building upon and continually refining our learned knowledge over a lifetime of experience. Incorporating these abilities into machine learning algorithms remains a mostly unsolved problem, but one that is essential for the development of versatile autonomous systems. In this talk, I will present our recent progress in developing algorithms for lifelong machine learning. These algorithms acquire knowledge incrementally over consecutive learning tasks, and then transfer that knowledge to rapidly learn to solve new problems. Our approach is highly efficient, scaling to large numbers of tasks and amounts of data, and provides a variety of theoretical guarantees on performance and convergence. I will show that our lifelong learning system achieves state-of-the-art results in multi-task learning for classification and regression on a variety of domains, including facial expression recognition, landmine detection, and student examination score prediction. I will also describe how lifelong learning can be applied to sequential decision making for robotics, demonstrating accelerated learning for optimal control on several dynamical systems, including an application to quadrotor control. Finally, I will discuss our work toward autonomous cross-domain transfer, enabling knowledge to be automatically transferred between different task domains.
  • Item
    Turning Assistive Machines into Assistive Robots
    (2015-01-21) Argall, Brenna D.
    For decades, the potential for automation—in particular, in the form of smart wheelchairs—to aid those with motor or cognitive impairments has been recognized. It is a paradox that often the more severe a person's motor impairment, the more challenging it is for them to operate the very assistive machines that might enhance their quality of life. A primary aim of my lab is to address this confound by incorporating robotics autonomy and intelligence into assistive machines—turning the machine into a kind of robot and offloading some of the control burden from the user. Robots already synthetically sense, act in, and reason about the world, and these technologies can be leveraged to help bridge the gap left by sensory, motor, or cognitive impairments of the users of assistive machines. This talk will provide an overview of some of the ongoing projects in my lab, which strives to advance human ability through robotics autonomy.
  • Item
    Transformations and Frontiers in Robot Motion and Manipulation
    (2015-01-14) Berenson, Dmitry
    Robotics is undergoing three transformations, which are changing our research focus and opening doors to new applications. The need for robotic manipulation in unstructured environments, human-robot collaborative systems, and handling soft materials is transforming the fundamental assumptions underlying our methods for manipulation planning and creating new opportunities for applications in service robotics, health care, and manufacturing. Berenson presents his team's contributions to these transformations, which include new algorithms that plan motion with multiple simultaneous constraints, manage sensor uncertainty in the planning process, model human motion in collaborative settings, and control manipulation of soft objects. He discusses the theory behind these approaches and shows practical applications on real-world robots. He ends by identifying three frontiers that are emerging as a result of these transformations, as well as prospects for their exploration.
  • Item
    Dynamic Animation and Robotics Toolkit
    (2014-11-12) Liu, Karen
    Designing control algorithms for complex dynamic systems is a challenging and time-consuming process. It often requires deriving nonlinear differential equations, formulating optimization problems, and solving numerous small but tedious problems, such as inverse kinematics, forward simulation, inverse dynamics, or Jacobian matrix computation. To streamline the process of controller design, we introduced an open-source, cross-platform toolkit, called DART, for rapid development of kinematics and dynamics applications in computer animation and robotics. DART (Dynamic Animation and Robotics Toolkit), one of the default physics engines in Gazebo, provides seamless integration with robotic simulators in the ROS environment. In contrast to many popular physics engines that view the simulator as a black box, DART gives full access to internal kinematic and dynamic quantities, such as the mass matrix, Coriolis and centrifugal forces, and transformation matrices and their derivatives. DART also provides efficient computation of Jacobian matrices for arbitrary body points and coordinate frames. In this talk, I will give an introduction to DART and demonstrate how complicated problems can be implemented using only a few lines of code in DART (a brief sketch of this kind of usage appears after this list). I will also show how we use DART to make an Atlas robot walk, a Shadow Hand manipulate objects, a virtual human learn gymnastics, and a variety of aquatic creatures swim in simulated fluid.
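
The DART abstract above mentions implementing complicated problems in a few lines of code; the sketch below illustrates that style of usage. It is written against the DART 6 C++ API from memory, so header paths, the sample .skel file, and exact call names (readWorld, getCoriolisAndGravityForces, getLinearJacobian, and so on) should be checked against the installed version; it is a minimal illustration, not code from the talk.

```cpp
// Minimal sketch (DART 6 C++ API from memory; names may differ by version)
// of the direct access DART gives to internal dynamic quantities.
#include <iostream>
#include <Eigen/Dense>
#include <dart/dart.hpp>
#include <dart/utils/utils.hpp>

int main() {
    // Load a world from a sample .skel file shipped with DART's data.
    dart::simulation::WorldPtr world =
        dart::utils::SkelParser::readWorld("dart://sample/skel/fullbody1.skel");
    dart::dynamics::SkeletonPtr skel = world->getSkeleton(0);

    // Internal quantities are exposed directly, not hidden behind a black box.
    const Eigen::MatrixXd& M = skel->getMassMatrix();              // joint-space inertia
    Eigen::VectorXd Cg = skel->getCoriolisAndGravityForces();      // bias forces

    // Jacobian of an arbitrary point on an arbitrary body node.
    dart::dynamics::BodyNode* body = skel->getBodyNode(0);
    Eigen::Vector3d offset(0.0, 0.0, 0.1);   // a point 10 cm along the body z-axis
    dart::math::LinearJacobian J = body->getLinearJacobian(offset);

    // One-line inverse dynamics: torques realizing a desired joint acceleration.
    Eigen::VectorXd desiredAccel = Eigen::VectorXd::Zero(skel->getNumDofs());
    Eigen::VectorXd tau = M * desiredAccel + Cg;
    skel->setForces(tau);

    world->step();  // advance the simulation by one time step

    std::cout << "dofs: " << skel->getNumDofs()
              << ", mass matrix: " << M.rows() << "x" << M.cols()
              << ", point Jacobian: " << J.rows() << "x" << J.cols() << std::endl;
    return 0;
}
```

Because the mass matrix, bias forces, and Jacobians are available directly, a computed-torque or operational-space controller can be assembled in a handful of lines, which is the workflow the abstract describes.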
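
The image-specificity idea in the Parikh abstract above can be made concrete with a toy score: given several human-written captions for the same image, average their pairwise similarity; consistent captions indicate a specific image, varied captions an ambiguous one. The sketch below is only an illustration of that reading, using word-overlap (Jaccard) similarity as a hypothetical stand-in for whatever similarity measures the actual work employs; the example captions are invented.

```cpp
// Toy "image specificity" score: average pairwise similarity of the
// captions written for one image. Jaccard word overlap is a hypothetical
// stand-in for the similarity measures used in the actual work.
#include <iostream>
#include <set>
#include <sstream>
#include <string>
#include <vector>

std::set<std::string> tokens(const std::string& caption) {
    std::istringstream in(caption);
    std::set<std::string> words;
    for (std::string w; in >> w;) words.insert(w);
    return words;
}

double jaccard(const std::set<std::string>& a, const std::set<std::string>& b) {
    std::size_t common = 0;
    for (const auto& w : a) common += b.count(w);
    std::size_t unionSize = a.size() + b.size() - common;
    return unionSize == 0 ? 0.0 : static_cast<double>(common) / unionSize;
}

// Higher score -> captions agree -> the image is "specific".
double specificity(const std::vector<std::string>& captions) {
    double sum = 0.0;
    int pairs = 0;
    for (std::size_t i = 0; i < captions.size(); ++i)
        for (std::size_t j = i + 1; j < captions.size(); ++j) {
            sum += jaccard(tokens(captions[i]), tokens(captions[j]));
            ++pairs;
        }
    return pairs == 0 ? 0.0 : sum / pairs;
}

int main() {
    std::vector<std::string> ambiguous = {
        "a city street at dusk", "people walking near shops", "a busy downtown scene"};
    std::vector<std::string> specific = {
        "a man rides a brown horse", "a man riding a brown horse", "a man on a brown horse"};
    std::cout << "ambiguous image: " << specificity(ambiguous) << "\n";
    std::cout << "specific image:  " << specificity(specific) << "\n";
}
```

A low score flags an image whose descriptions are inherently varied, which is the signal the abstract proposes to exploit, for example in text-based image retrieval.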
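
The Rogers abstract above describes propagating the probability density of the robot state and optimizing control actions against accuracy and obstacle-avoidance criteria. The sketch below is a deliberately simplified, CPU-only illustration of that general coupling, not the GPU algorithms from the talk: it propagates Monte Carlo samples of a toy 1-D double integrator (the dynamics, noise levels, keep-out interval, and risk weight are all invented) and picks the constant acceleration command with the best risk-weighted expected cost.

```cpp
// Simplified sketch of coupling Monte Carlo uncertainty propagation with
// control selection: propagate state samples under each candidate action
// and pick the action with the best expected-cost / collision-risk tradeoff.
// Dynamics, noise model, obstacle, and weights are illustrative only.
#include <iostream>
#include <random>
#include <vector>

struct State { double position; double velocity; };

// One noisy step of a toy 1-D double integrator.
State step(State s, double accel, double dt, std::mt19937& rng) {
    std::normal_distribution<double> noise(0.0, 0.05);
    s.position += s.velocity * dt;
    s.velocity += (accel + noise(rng)) * dt;
    return s;
}

int main() {
    std::mt19937 rng(42);
    const double dt = 0.1;
    const int horizon = 20;      // steps simulated per candidate action
    const int samples = 500;     // Monte Carlo samples of the uncertain state
    const double target = 2.0;   // desired final position
    const double obstacleLo = 2.5, obstacleHi = 3.0;  // "keep out" interval

    std::vector<double> candidates = {0.5, 0.8, 1.1, 1.4};  // accel commands
    std::normal_distribution<double> initPos(0.0, 0.1), initVel(0.0, 0.1);

    double bestCost = 1e9, bestAction = 0.0;
    for (double a : candidates) {
        double sumErr = 0.0;
        int collisions = 0;
        for (int k = 0; k < samples; ++k) {
            State s{initPos(rng), initVel(rng)};
            bool hit = false;
            for (int t = 0; t < horizon; ++t) {
                s = step(s, a, dt, rng);
                if (s.position > obstacleLo && s.position < obstacleHi) hit = true;
            }
            sumErr += (s.position - target) * (s.position - target);
            if (hit) ++collisions;
        }
        double pCollision = static_cast<double>(collisions) / samples;
        double cost = sumErr / samples + 50.0 * pCollision;  // risk-weighted cost
        std::cout << "accel " << a << ": cost " << cost
                  << " (P[collision] = " << pCollision << ")\n";
        if (cost < bestCost) { bestCost = cost; bestAction = a; }
    }
    std::cout << "selected accel command: " << bestAction << "\n";
}
```

Because each sample is propagated independently, the inner loops are exactly the kind of computation that maps naturally onto a GPU, which is the point the abstract makes about parallelizable uncertainty quantification.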