This talk: Stephen Melco. Stephen is going to talk about robots. Thanks. I'm going to give a talk that builds on a lot of the themes you've seen throughout the day. What I talked to Seth about doing here is giving an overview of some of the things we're doing in the lab, and sort of a forward-looking vision of where we're trying to go with this. If we look at traditional robotics, these systems have really been designed to work in structured environments with what I'm calling a high-precision world requirement. What I mean by this is that everything these robots work with is very well fixtured. We know precisely the process we want these robots to do. They do repeated tasks, with low variance on those tasks. A good example is the automotive industry, where we see either welding or painting. As we move forward in robotics, we're getting to what I'm calling here advanced robotics, and these are really semi-structured worlds. We can have low-precision worlds that these robots are allowed to operate in. We can have imprecise sensing, and by this I mean error greater than a millimeter or so. We can operate with limited a priori knowledge. A good example is one of the systems we just saw with the sorting and pallet building, right? We don't need millimeter precision in order to build those pallets, and we don't need a very precise notion of the exact world and tasking ahead of time; we can do various configurations on those systems. But where we have issues with today's robots is where we have a semi-structured environment with a high-precision world requirement. Right? What I'm saying here is a world requirement where our sensing errors have to be at or below a millimeter, and we also have unlimited task variety and task variance. So every run we do is potentially different on these robotic systems.
Examples of this include things like assembly, where here you're seeing an electrical project board into which we have to plug various connectors, and also agriculture, where here's an example of a chicken-deboning robot: every chicken this system sees is slightly different, right? And we need very high precision in all of these systems. Well, in these systems, if we can understand or sense to high enough precision, and if we can plan and control to high enough precision, then we can be really successful in these operations. The problem, though, is that sensing, planning, and controlling to this level is very difficult to achieve. It's hard to program these systems to have this kind of control and sensing. It can take a long time to get these robots to operate in this fashion, and it becomes very risky to put them in these environments. In fact, they may never work within the constraints of time and funding that we have for getting these processes to work. And even once we get these things working, if we have a different task, even a task in the same family of tasks we just programmed this robot to do, we have to start over: reprogram the robot, re-figure out everything we're going to do. So what I'm basically saying is that when we can't control or sense to the required resolution, there's a mismatch between a high-fidelity, high-accuracy task and our low-fidelity understanding of the world. What's interesting is that humans cope with this every day. As a human, you can look at the world with low-precision vision and fill in the missing information with a priori knowledge and with reasoning about the world. Humans use multi-sensor input to guide their actions. You see here this robot plugging in a connector; a human can do this with their eyes closed, right? You don't even need vision to do that.
Humans adapt a plan as they're executing it. As a human, if I'm trying to plug in an HDMI cable, I make a decision when it's not going in: either I push it in harder, because I think I'm doing everything right and it's just a tight fit, or I come to the understanding that I'm 180 degrees out of phase with the way it needs to be plugged in, and I rotate it around and plug it in. At GTRI, we're trying to create a pipeline that mirrors this kind of human interaction with the world in a robotic system. We have several different projects, and I'm just going to briefly step through them and show you a fun video for each, without any technical details at all. If you want technical details, I'm happy to talk afterwards or point you to the right people. So the first part of this is the vision system. We've come up with an innovative conical-mapping approach for vision processing, where we scan in an object using an Intel RealSense or another kind of depth camera, and we generalize the model that we create so that we can map visual pixels onto that model. From that, we're able to visually look at these objects and map the exact pose of the object onto the model. Because it's a model, it can have various components, so we can even have moving parts; for instance, we can look at a chicken whose wings are hinged components of the model. We can even have deformations of the model. This is a plush toy, and you can see the two robots playing with it, handing it from one to the other, no matter how you position it on the table. The next component is the controls. We heard before about behavior trees and utilizing behavior trees for controls; we use that exact framework here.
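For readers unfamiliar with the behavior-tree framework mentioned here, the core mechanics can be sketched in a few lines. This is a minimal illustration, not the speaker's actual code; the node and skill names are hypothetical. A Sequence ticks its children in order and stops at the first failure, while a Fallback tries alternatives until one succeeds:

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Sequence:
    """Ticks children in order; fails fast on the first non-success."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Fallback:
    """Tries children in order; returns the first non-failure."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

class Skill:
    """Leaf node wrapping a low-level skill as a callable."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

# Hypothetical tree: move to the connector, then insert;
# if a straight insert fails, fall back to a search-based insert.
tree = Sequence([
    Skill(lambda: Status.SUCCESS),       # move_to_connector
    Fallback([
        Skill(lambda: Status.FAILURE),   # straight insert (fails here)
        Skill(lambda: Status.SUCCESS),   # search-based insert
    ]),
])
result = tree.tick()
```

The fallback structure is what later lets anticipated failures be absorbed inside the tree without aborting the whole task.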
The unique thing we've done with our behavior trees is that, rather than using just a blackboard structure, we've coupled the low-level skills in the tree to a dynamic database. Because of that, the skills can draw on either learned parameters stored in the database or a priori information stored in the database. In the video you saw before of the robot plugging in a connector, that plug-in is a set of skills in a behavior tree that runs the insert. The behavior tree is the exact same tree no matter what connector we're plugging in; the things that change are the parameters in the database for the individual type of connector we're inserting. By learning the correct forces, torques, and search trajectories for those connectors, or by specifying them by hand and putting them into the database, we can use the same exact behavior tree for many different types of connectors. We've also developed skills such as this one, where two robots work in conjunction to do compliant control, to do things like removing a lid. What we're trying to do is add more flexibility to the system: can we have two robots that cooperate when they need to cooperate, and do their own independent tasking when they can? In this case, you're seeing them maneuver a bulky lid out of the way; then they'll do wiring operations independently once the lid is removed from the box. Now, one thing we've realized is that no matter how good we've been able to make these systems, we can't get them to 100% proficiency. I think there was a statistic of 98, 99% proficiency on one of the robots in a talk before.
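The database-coupled skill idea can be sketched as follows. This is an assumed illustration, not the actual system: the connector names, parameter fields, and the spiral-search pattern (a common strategy for finding a mating position under imprecise sensing) are all hypothetical. The point is that one generic insert skill stays fixed while a database row per connector supplies the forces and search trajectory:

```python
import math

# Hypothetical per-connector parameters, as they might be learned
# or hand-specified and stored in the dynamic database.
SKILL_DB = {
    "hdmi":     {"max_force_n": 15.0, "search_radius_mm": 2.0, "pitch_mm": 0.25},
    "usb_c":    {"max_force_n": 8.0,  "search_radius_mm": 1.0, "pitch_mm": 0.15},
    "molex_4p": {"max_force_n": 20.0, "search_radius_mm": 3.0, "pitch_mm": 0.40},
}

def spiral_search(radius_mm, pitch_mm, step_deg=20.0):
    """Archimedean-spiral (x, y) offsets in mm around the nominal
    mating position, probed in order until the connector seats."""
    points, theta = [], 0.0
    while True:
        r = pitch_mm * theta / (2 * math.pi)
        if r > radius_mm:
            break
        points.append((r * math.cos(theta), r * math.sin(theta)))
        theta += math.radians(step_deg)
    return points

def insert_skill(connector_type):
    """Generic insert skill: the behavior-tree node never changes;
    only the database row for the connector type does."""
    params = SKILL_DB[connector_type]
    waypoints = spiral_search(params["search_radius_mm"], params["pitch_mm"])
    return {"waypoints": waypoints, "force_limit": params["max_force_n"]}

plan = insert_skill("usb_c")
```

Swapping `"usb_c"` for `"hdmi"` changes the search pattern and force limits without touching the tree, which is the reuse property described above.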
That sounds really good, but if you're doing thousands of operations a day, you're still going to have lots of failures during the day. How can we cope with these failures? One way is that anticipated failures can be handled within our behavior trees: we can have fallbacks that try to correct for them. But we've found that there will always be failures that we didn't anticipate and can't cope with in the tree. For many of these, though, the robot understands that something went wrong. And if the robot understands that something went wrong, it can call back to a human and ask for help. What we've been trying to do is build a virtual-reality environment where, when the robot needs help, it calls back to the human and sends the human a snapshot of its world. The human doesn't do teleoperation; instead, the human does tasking for the robot. Here the robot ran into a situation where it saw that a wire was blocking the module it needed to operate on, so it called back to the human. The human goes into the virtual scene, draws a trajectory, puts it into the database, and then uses a behavior tree to execute that trajectory on the robot. We're not teleoperating the system; we're simply inserting a new node into the behavior tree for the extraction of this module, moving the wire out of the way using the trajectory the human put into the database. So this knowledge-driven robotics we're talking about: what is the future of it? This is what we call our overall architecture, which really tries to integrate all of this into a single system. We take symbolic planning and flow it down into robotic behaviors coupled with a knowledge base, in the form of behavior trees. The human collaborator can operate in the virtual-reality world, interfacing also with these behavior trees.
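The escalation path described here can be sketched as a small loop. This is an assumed structure, not the actual system: the failure reason, function names, and the fake "VR response" are hypothetical. The key point is that the human's help comes back as data (a trajectory in the database) plus a new node to run, never as direct teleoperation:

```python
def try_extract_module():
    # The robot detects that something went wrong and can name the
    # problem, even though the tree has no fallback for it.
    return ("failure", "wire_blocking_module")

def call_human(reason, database):
    # Stand-in for the VR round trip: the human inspects a snapshot
    # of the scene, draws a trajectory, and stores it in the database.
    # The waypoints here are made up for illustration.
    database["clear_wire_trajectory"] = [(0, 0, 50), (40, 0, 50), (40, 0, 10)]
    return "clear_wire_trajectory"     # key of the human-authored data

def execute_trajectory_node(database, key):
    # The new behavior-tree node just replays the stored trajectory.
    return ("success", database[key])

database = {}
status, reason = try_extract_module()
if status == "failure":
    key = call_human(reason, database)
    # Splice the human-authored node in ahead of the extraction and re-run.
    status, trajectory = execute_trajectory_node(database, key)
```

Because the human only populates the database and arms an existing node type, the same mechanism works over high-latency links where live teleoperation would not.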
It flows down through a ROS infrastructure, all the way to the hardware layer. We've implemented this on our robot cell doing electrical assembly, and here you can see four consecutive runs of the robot doing the same assembly procedure. It's doing various connectors and using various tooling to unfasten and refasten modules and plug in wires. What you notice is that the four videos diverge. The reason they diverge is that some of these activities are non-deterministic: because we don't have precise vision systems and precise knowledge of where everything is, we use various search patterns to find the mating positions of the connectors, and that takes different amounts of time depending on how we happen to line up. But you can see we're successful in all four videos and able to complete these modules. We're working now on taking this framework into mobile manipulation, putting it on both a wheeled platform and a legged platform, and we're hoping to take it even further. With that, I'd just like to thank a lot of the folks who have helped make this possible, and also the various funding agencies that have contributed to this work. Thank you. Time for questions.

[Audience question, partially inaudible, about where this approach is being applied.] Go ahead. Absolutely, yes, so much synergy, so much alignment. That's what I'm saying; you took half of my talk on behavior trees. We're looking at this in several different areas. We actually have a project with NASA for space habitats, looking at maintenance of the space habitat. When nobody's home, can we have robots that are able to do maintenance and repair operations? The idea is that the robots have to have the set of skills to enable them to do this, but we'll have to figure out what they need to do when something breaks.
We anticipate that they will not be able to do everything, and we're going to have to control these robots over a high-latency, low-bandwidth link, so we can't teleoperate them. But if we have a virtual representation, a digital twin of the system, we can task the robot in that representation, validate that the tasking will work, and then send it to the robot. That's why we're trying to integrate the VR systems with the behavior trees and the tasking and all of that. That's one area. We're also looking at this in agriculture, for various poultry-processing operations, where we're using the VR systems to gather training data. We do a task using the VR systems until we have enough training data that we can automate it, and then we move on to the next hardest task, and we keep iterating in that fashion. [Audience question about real-world applications.] There are absolutely real-world applications for this. Oh, absolutely; I'm just telling you what we're currently working on. Yeah, I'm happy to talk to you about it. [Partially inaudible question about operator-authored behaviors.] We don't create new node types, just instances of existing nodes. Yeah. Anybody else? [Inaudible.] Thank you. Question: the person who inserts the node into the behavior tree, what does that person need? A bachelor's, a master's, a PhD, just domain experience? So the idea is that for most things they don't even have to insert a node; they can just insert data into the database and then command the behavior tree. With our behavior trees, all of the arguments come out of the database, so they just have to set up the node to fire. The goal is to have somebody with domain expertise be able to do that without any sort of robot programming. Now, if you have to redesign the behavior tree, because something really unexpected happens, like on the space station, that's going to take more expertise. Thank you, Stephen. Mm hmm.