So you've seen a few GTRI members already, Alexis Noel among them, but my name is Nate Damen. I'm a researcher at GTRI doing a variety of different things, and in this presentation I'll be going over our dual-arm compliant controller. If you've ever heard of the peg-in-hole challenge in robotics, it's simply to pick up an object and insert it into some hole or slot. You can think of it as very similar to grabbing a USB plug and plugging it into your PC. In robotics, that is actually a pretty difficult challenge. Most of the time it comes down to not having enough sensing capability, which is why you saw a presentation earlier on surface contact, and you also see other planning agents involved, whether AI or classical controls. All of these things come into play, especially when you get into higher degrees of freedom. In this setup we have two robotic arms, both on a rail. One has a standard claw gripper and the other has a screwdriver attachment. The idea behind compliant control is not just that a human can safely interact with the robot, but that the compliance is what actually solves the peg-in-hole problem: how do you get the USB into the slot? How does the robot find the slot and know where it is? In traditional industrial automation, all of that is just hardcoded values, everything inherent to the location: where the pick-up spot is, where the entry point is, any sort of compliant motion. That's all usually baked into normal automation tasks. In our case, we try to separate those out and actually allow the robot to figure out what those values need to be and how to do it. So I have a video that I'll just start playing, and I'm also going to put this up here. The video is sped up, and it shows a single arm rather than the dual-arm setup, but this robot is actually going through a bunch of compliant operations.
So we're having it pick up different pieces and slide them over. One thing I'm actually going to highlight is a coaxial cable. If you've ever dealt with coax cables, the connector is a locking mechanism, and we didn't necessarily train the robot on where it is. I'll get into the architecture a little later, but the robot basically only knows roughly where the coax cable begins, and a rough position for where it should end up. On the bottom I have more of our actual classic controls, where we focus on force/torque. In this case the robot is really guided by force: we're giving it force commands, we're looking at force feedback, and how we decide we've finished with the object is slightly more open-loop, but it's basically based on that force. Did we attempt a particular force task, and did we actually see that sensor feedback? For the coaxial cable, the scheme of what the robot does is a set of smaller tasks we ask it to do. One is checking, when the robot goes to pick up the coax, whether it actually meets the surface. So that's the first step: did we actually find the surface plane we're going for? Then it goes into a search mode, which feels around like a human would: are we actually inserting into the hole? Once we have found it, we apply some force to get it embedded, and if there's locking we do more searching and more confirmation with that force feedback going forward. That's the general process: you have something you want to go towards, and you break it into smaller bite-sized chunks, whether those are search operations, locking operations, those sorts of things. One of the other cool things about compliance is that it does start to take uncertainty into consideration.
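The find-surface, search, insert, confirm sequence described above can be sketched as a small force-driven loop. This is only a conceptual sketch: the sensor and motion callbacks (read_wrench, move_down, spiral_step, press) and the force thresholds are hypothetical stand-ins, not the actual GTRI controller or UR driver API.

```python
# Minimal sketch of force-guided insertion: find the surface, search for
# the hole, press to seat, then confirm via force feedback.
# All callbacks and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Wrench:
    fz: float  # force along the tool axis, in newtons

CONTACT_FORCE = 5.0  # assumed threshold meaning "we met the surface"
INSERT_DROP = 0.5    # assumed drop in resistance when the plug finds the hole

def insert_connector(read_wrench, move_down, spiral_step, press, max_steps=200):
    """Find surface -> search -> insert -> confirm, driven by force feedback."""
    # 1. Find the surface plane: descend until we feel contact.
    for _ in range(max_steps):
        if read_wrench().fz > CONTACT_FORCE:
            break
        move_down()
    else:
        return "failed: no surface contact"

    # 2. Search: feel around (e.g. a spiral pattern) until resistance drops,
    #    meaning the connector slipped into the hole.
    for _ in range(max_steps):
        if read_wrench().fz < CONTACT_FORCE - INSERT_DROP:
            break
        spiral_step()
    else:
        return "failed: hole not found"

    # 3. Apply force to embed/lock, then confirm with the same feedback.
    press()
    return "inserted" if read_wrench().fz > CONTACT_FORCE else "failed: not seated"
```

The key point from the talk survives in the sketch: success is not declared from a hardcoded position, but from whether the expected force signature was actually observed at each step.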
So if a human is actually around the robot: this demo more or less just shows that with a compliant robot in compliant mode, a human can come up and push the robot out of the way, and it won't necessarily fight with you, so long as that is one of its behaviors. I'll get into behaviors and behavior trees in a little bit. The cool part is that it will still resist in certain places. You'll notice that on the rail it actually stops moving, but the arm continues on. That matters in the grand scheme of using two arms: you need to find a safety zone that each arm can exist in without a high probability of colliding with and harming the other. You can either tackle that in the planning phase and go through it somewhat blindly, or you can go in with confidence by having certain protocols in place. Joint locking and safety zones within certain ranges on each axis are just good common practices. This demo is actually pretty fun, because we've locked only one joint within certain limits, and everything else moves at the whim of the operator. Now, for the dual-arm architecture, which is the bread and butter of this talk (and probably should have come a little earlier in the slides), this is a very practical approach. You all know the Robot Operating System, ROS; I assume most everyone will know it by the end of their time here. We have the beautiful and awful parts of ROS 2, which isn't yet finished but has really great planning engines and planning protocols. Here we use PlanSys2, which works not only with PDDL files but also with behavior trees. Then we get to the difficult part of ROS 2: it doesn't yet have a UR driver that supports all the force/torque and velocity control schemes, so we actually have to flip over and operate in ROS 1.
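The safety-zone and joint-locking idea mentioned above can be sketched very simply: give each arm an axis-aligned region it may occupy, and clamp any "locked" joint to a narrow range. The zone limits and names below are illustrative assumptions, not the actual system's configuration.

```python
# Sketch of per-arm safety zones plus a "locked" joint held in a range.
# Zone bounds are made-up illustrative values (meters along the rail frame).
LEFT_ZONE  = {"x": (0.0, 0.6), "y": (-0.5, 0.5), "z": (0.0, 1.0)}
RIGHT_ZONE = {"x": (0.7, 1.3), "y": (-0.5, 0.5), "z": (0.0, 1.0)}

def in_zone(pos, zone):
    """True if an (x, y, z) tool position stays inside its arm's zone."""
    return all(zone[a][0] <= p <= zone[a][1] for a, p in zip("xyz", pos))

def clamp_locked_joint(angle, lo, hi):
    """A 'locked' joint is simply held within a narrow allowed range."""
    return min(max(angle, lo), hi)
```

Because the two zones don't overlap on the x axis, a planner that checks each goal pose against its arm's zone gets a cheap guarantee that the arms won't meet, without reasoning about full arm geometry.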
Breaking it down a little, starting from the top: many people may not know what PDDL is. It stands for Planning Domain Definition Language, and it's just a way to define your space. The cool part about PDDL is that you more or less just define the general characteristics of the robot, you define the space it's in (for us that would be the rail, any USBs you're going to be plugging in, those sorts of things), and then you actually just give it a problem. In the PDDL you would be asking a question like, "Hey, I need USB A to go into port C," a simple thing like that. On the PDDL solver side, an AI planner then works out step by step how to actually achieve that: all the low-level steps of moving the robot arm over to the USB, a grasping task, the search protocol, the meet step, all the smaller-level tasking. Sorry, I'm getting a little off topic, but in our system the PDDL problem is more or less preprogrammed in; it's just "grab the coax cable, put it in the slot." From there we flow down into different behavior trees, and the behavior tree handles all the individual steps. If you've played video games, this is a lot of the underlying machinery for how an AI makes decisions in games, but it's more or less just a fancier version of a finite state machine with some added benefits. From the behavior tree we flow down to what I would call robotic skills, and this requires a priori knowledge of the space. That a priori knowledge (I don't have it labeled here, but it lives in a database) is the actual characteristics, the general locations, of all our USBs, all the coax cables, all of those things. Once we have that knowledge, we use it.
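The "fancier finite state machine" point above can be made concrete with a toy behavior tree. Real ROS systems use libraries like BehaviorTree.CPP or py_trees; the tiny Sequence/Action classes and the skill names below are just an illustrative sketch of the core idea, not the system's actual tree.

```python
# Tiny behavior-tree sketch: a Sequence node ticks its children in order
# and fails as soon as one child fails, which is how a high-level task
# ("insert the coax") decomposes into smaller robotic skills.
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Composite node: run children in order; fail on the first failure."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Action:
    """Leaf node wrapping a robotic 'skill' (here, just a function)."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

log = []  # records which skills actually ran, for illustration

def skill(name, ok=True):
    def fn():
        log.append(name)
        return ok
    return Action(fn)

# Hypothetical insert-coax task: approach, find surface, search, lock.
insert_coax = Sequence([
    skill("approach"), skill("find_surface"),
    skill("search"), skill("lock"),
])
```

Ticking `insert_coax` runs the skills left to right and reports success or failure upward, which is exactly the role the behavior tree plays between the PDDL plan above and the lower-level skills below it.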
That flows into the different behavior tree actions, which go across the ROS bridge down to the actual lower-level hardware controllers. This has some big drawbacks, because it means that every single planning agent (say we have a meet stage, a plan stage, and a grip stage) has to exist not only in ROS 2 but also in ROS 1. Having that many nodes across not just one robot but two different robotic systems starts to become very bloated, with a lot of moving parts and a lot of failure points. So one of the things we're actually going to be doing with the system moving forward is having ROS 2 run just a manager node that spins up only the particular hardware-level controllers and hardware skills needed at the time on the ROS 1 side. (I'm getting close, yeah, two minutes.) It separates out the points of failure, in that you're guaranteed that when you need a node to be running, you're actually in control of it starting up and shutting down. The last really big thing about how you control two arms with this is: make sure your code is namespaced properly. It's kind of simple, but it goes a long way, and it means one planning agent can easily control two arms, so long as your code isn't baked in and hardcoded to one arm. Yeah, that's it. Thank you. Any questions? And if there's anything in the commentary, this will be an opportunity for that to be injected as well. Sure. So in this case, we're solving the robot collision problem in the planning phase. We know we have two arms, we know we have safety zones, and the planner is taking both of those pieces of information into the PDDL solver to ask: okay, when one arm is moving in, is it actually in a collision state with the other arm? So that's in the planning phase. No, this would be partially based on the task, because not all arms are doing the same sorts of things.
Like the other arm had a screwdriver, so it could move in and out. But yeah, sorry, we can get into that a little bit later. But yeah.
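The namespacing point from the talk can be sketched in a few lines: if every topic and node name is built from a per-arm namespace, one planning agent can drive either arm by swapping a single string. In real ROS this is normally done with node namespaces and remapping rather than string formatting; the class and topic names below are illustrative assumptions.

```python
# Sketch of per-arm namespacing so one planner can control two arms.
# Topic layout (/<arm>/<skill>/goal) is an illustrative convention.
class ArmClient:
    """One client per arm; the planner never hardcodes an arm name."""
    def __init__(self, ns: str):
        self.ns = ns

    def topic(self, skill: str) -> str:
        # e.g. /left_arm/search/goal vs /right_arm/search/goal
        return f"/{self.ns}/{skill}/goal"

left = ArmClient("left_arm")
right = ArmClient("right_arm")
```

The same planning code can then send a "search" goal to `left.topic("search")` or `right.topic("search")` without any arm-specific logic baked in, which is exactly what lets one planning agent scale to two arms.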