IRIM Seminar Series
Publication Search Results
Data-to-Decisions for Safe Autonomous Flight
(Georgia Institute of Technology, 2018-11-07) Atkins, Ella ; Georgia Institute of Technology. Institute for Robotics and Intelligent Machines ; Georgia Institute of Technology. Machine Learning ; University of Michigan. Department of Aerospace Engineering
Traditional sensor data can be augmented with new data sources such as roadmaps, geographical information system (GIS) data, and lidar/video to offer emerging unmanned aircraft systems (UAS) and urban air mobility (UAM) a new level of situational awareness. This presentation will summarize my group's research to identify, process, and utilize GIS, map, and other real-time data sources during nominal and emergency flight planning. Specific efforts have utilized machine learning to automatically map flat rooftops as urban emergency landing sites, incorporate cell phone data into an occupancy map for risk-aware flight planning, and extend airspace geofencing into a framework capable of managing all traffic types in complex airspace and land-use environments. The presentation will end with videos illustrating recent work to experimentally validate the continuum deformation cooperative control strategy in the University of Michigan's new M-Air netted flight facility.
Deep Learning to Learn
(Georgia Institute of Technology, 2018-08-20) Abbeel, Pieter ; Georgia Institute of Technology. Institute for Robotics and Intelligent Machines ; Georgia Institute of Technology. Machine Learning ; University of California, Berkeley
Reinforcement learning and imitation learning have seen success in many domains, including autonomous helicopter flight, Atari, simulated locomotion, Go, and robotic manipulation. However, the sample complexity of these methods remains very high. In contrast, humans can pick up new skills far more quickly. To do so, humans might rely on a better learning algorithm or on a better prior (potentially learned from past experience), and likely on both. In this talk I will describe some recent work on meta-learning for action, where agents learn the imitation/reinforcement learning algorithms and learn the prior. This has enabled acquiring new skills from just a single demonstration or just a few trials. While designed for imitation and RL, our work is more generally applicable and has also advanced the state of the art in standard few-shot classification benchmarks such as Omniglot and Mini-ImageNet.