My name is Panagiotis Tsiotras. I'm the Director of the Dynamics and Control Systems Laboratory in aerospace engineering. I'm also Associate Director of the Robotics Institute, responsible for the research area of autonomy. So, Seth asked me to give you a brief overview of what we're doing in my lab. I thought it would be appropriate to talk a little bit about my research philosophy before I show you a couple of projects that I think reflect this philosophy. I'm a control theorist by training; over the last eight years or so I've moved more into robotics, which means that I'm looking more for applications as opposed to just proving theorems. Nevertheless, in my lab we focus on solid theory, and we apply this theory to different applications. The nice thing about control, as far as I'm concerned, is that it is agnostic to the application: you have differential equations, and you can apply them to many different areas. So we apply our work to space vehicles, aerial vehicles, ground vehicles. If I want to summarize my philosophy, it is this motto here from Georg Cantor, a very famous topologist and mathematician. It is in Latin, so if you don't read Latin, let me read it to you: "In re mathematica ars proponendi quaestionem pluris facienda est quam solvendi," which basically means that in mathematics, raising the correct question is more important than solving the problem. So asking the right questions, I think, is half the battle. My research right now can be classified into approximately three big areas: motion planning, dealing with uncertainty in decision-making problems, and stochastic games. I chose two of these areas to present here, some of the work that we're doing, because I have only 15 minutes. So I'm going to discuss a little bit about dealing with uncertainty, and then a little bit about motion planning. So, uncertainty is everywhere, and the way we deal with uncertainty is through feedback.
So, for those of you who have taken control theory, especially those of you who were born after 2000, which I think is most of you: typically you are taught in control theory that feedback is for stabilization. Well, no, feedback was invented to fight uncertainty. Most systems are actually stable, right? Feedback is needed to fight noise and uncertainty. And this is a problem that actually boils down to controlling the distribution of the system trajectories, because you have several realizations. So the point here is that we are trying to deal with these types of problems, where you have uncertainty in the system, and we formalize the problem as one of controlling distributions. In terms of the more fundamental control paradigm, we call this a control synthesis problem where, instead of controlling a system subject to uncertainties, we directly control the uncertainties, which is a little bit different from what you have seen. So let me give you a couple of examples. This is a project that we have with NASA. An air taxi wants to land at a heliport, okay? And there are winds that push you around. As you move down and you want to land, you want to guarantee that, no matter what the uncertainty is, you are going to hit a particular ellipse, which is the heliport, right? And as you are going down, you want to guarantee that you are not going to violate certain constraints; in this case these would be cone constraints, field-of-view constraints, or something of that nature. These are of course probabilistic constraints, the chance constraints we are familiar with. So anyway, you have to deal with these types of problems: you want to guarantee that no matter where you start, you are going to end up within this smaller ellipse, with guarantees. Another example, similar to the previous one: you want to stay within a cone, right?
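The idea that feedback exists to fight uncertainty, by keeping the distribution of trajectories tight, can be illustrated with a minimal sketch; the system matrices, noise level, and gain below are made-up illustrative values, not anything from the talk:

```python
import numpy as np

# Toy discrete-time double integrator with process noise (illustrative
# values): x_{k+1} = A x_k + B u_k + w_k, with w_k ~ N(0, W).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
W = 0.01 * np.eye(2)

def propagate_cov(K, steps=200):
    """Propagate the state covariance under the feedback u = K x."""
    A_cl = A + B @ K
    Sigma = np.zeros((2, 2))
    for _ in range(steps):
        Sigma = A_cl @ Sigma @ A_cl.T + W
    return Sigma

open_loop = propagate_cov(np.zeros((1, 2)))            # u = 0: no feedback
closed_loop = propagate_cov(np.array([[-2.0, -3.0]]))  # a stabilizing gain

# Feedback cannot remove the noise, but it keeps the distribution of
# trajectories from spreading: the closed-loop covariance stays bounded.
print(np.trace(open_loop), np.trace(closed_loop))
```

Without feedback, the covariance of this (marginally stable) toy system keeps growing; with a stabilizing gain it settles to a small, bounded steady state, which is the sense in which feedback "fights" uncertainty.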
So you are trying to land this aircraft, there may be disturbances, and at the end of the day you want to guarantee that you are going to remain in this corridor without violating the cone, with high probability, and then you are going to land, right? So it is a problem of trying to force the distribution of the trajectories, starting from some initial distribution, to reach a final distribution within a certain ellipse, and you want to do it in finite time. Typically you may have other things to worry about, like minimizing energy, minimizing fuel, and so on and so forth. But the point I want to mention here is that you have constraints that you want to satisfy. So this is another problem; this may be a little bit more intuitive. Let's say that you want to go to Mars or another planet. As you can see here, these are the different landing ellipses that we have achieved over the last 30 years or more of exploring Mars. In 1976, that was the uncertainty ellipse of the Viking mission, meaning that we could guarantee, with high probability, landing in this ellipse down here, which is 174 by 62 miles; anywhere in there is okay, because it counts as a success. As we make the algorithms better and better, you see these ellipses become smaller and smaller, and for future missions we want to make these ellipses very small, like 100 meters or something like that, right? The reason that, these days, when we land on Mars, we have a little robot on board, is that we cannot land, or we don't dare to land, close to large geological formations, which are the interesting things. Geologists just want to go and examine a little rock, but we don't go to the rock. So we land in the middle of the desert on Mars, and then we have this robot drive around and look at the rocks and things like that. So we would like to land next to the rocks, but not on the rocks, right?
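The landing-ellipse guarantee described above is a chance constraint: require that the probability of the touchdown point falling inside the ellipse exceeds some threshold. Here is a minimal Monte Carlo sketch of such a check; the dispersion and ellipse numbers are made up for illustration, not actual Viking or NASA figures:

```python
import numpy as np

# Illustrative chance-constraint check: given a Gaussian touchdown
# distribution N(mu, Sigma), estimate P(landing inside x' E x <= 1).
mu = np.zeros(2)                          # aim point at the ellipse center
Sigma = np.diag([30.0**2, 15.0**2])       # touchdown dispersion (m^2), made up
E = np.diag([1 / 100.0**2, 1 / 50.0**2])  # a 100 m x 50 m landing ellipse

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mu, Sigma, size=100_000)
inside = np.einsum('ni,ij,nj->n', samples, E, samples) <= 1.0
p_land = inside.mean()

# The chance constraint "land inside with probability >= 0.99" is then
# simply p_land >= 0.99; tightening Sigma (better control) raises p_land.
print(f"P(inside ellipse) ~= {p_land:.3f}")
```

Controlling the uncertainty, in the sense of the talk, means shaping `Sigma` directly so that constraints like this hold by design rather than by after-the-fact checking.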
Also, depending on whom you ask, in the future humans may go to Mars. I'm not sure if it's a good idea or not. But anyway, if we go to Mars, the current architecture is that we are not going to send everything at the same time, because it is too expensive and too complicated. So most likely we are going to send food, water, oxygen, and so on ahead of time, and then the astronauts will go there afterwards. So obviously, when you go there, you have to land close to where the supplies are, right? Because if the astronauts land a thousand miles away from the supplies, well, I guess it would be a bad day for everyone, right? So, to make a long story short, you want to guarantee certain things: that you are going to land in some ellipse. It is exactly the same problem I just mentioned, where you have some initial conditions, there is uncertainty when you enter the Martian atmosphere, you don't really know where you are, but you really want to guarantee that you are going to end up in this ellipse of 100 meters by 100 meters, or whatever it is. Here is another problem that we are looking at. This is a racing problem where you can control the uncertainty, and you can see that we can do different things. If you don't control it, the many realizations around the ideal racing line, for example, just expand. If you start controlling the uncertainty using what is called tube MPC, which is the standard way, you create a tube, and you want the tube not to violate the constraints, so you move your mean, right? You move the whole distribution. But if you can control the uncertainty, if you can control this tube itself, you can probably be more aggressive. So if you are able to control the distribution, you can do more. So let me show you some videos. As I mentioned, we do a lot of work on racing. I don't know how many of you know about AutoRally. This is a test facility that we developed here at Georgia Tech.
It is used by at least three other faculty members. We have two test tracks; this is the one up in Cobb County, at the CCRF. This is an example: the platform is about a meter and a half long and goes pretty fast, actually. Here we tested against standard linear time-varying MPC, and you can see our controller performs better than the standard MPC. Anyway, it is completely autonomous, going around, racing around the track, and you can do things like that. This is CS-SMPC, our controller, which basically stands for covariance steering stochastic MPC, and it actually uses this idea of controlling the uncertainty. Let me show you another example of this. Whoops, sorry. Here is another example, a similar idea. We applied this to the so-called MPPI. MPPI is model predictive path integral control, which essentially solves the problem by rollouts. You basically do a bunch of rollouts and you inject noise into the system, so you look everywhere, and in some way you choose the best one. Actually, you don't choose the best one; you take an average, a weighted average of some sort. Now, you can inject a lot of noise so that you explore as much as you want, but then it may be too difficult to find a solution; and if you don't inject enough, then you don't explore enough. What you would like is a scheme where you explore as much as you want at the beginning, but then you can bring the trajectories back, like in this scenario here, for example, right? So here you see it over-explores into the infeasible region, here it under-explores, and what you would like is for the controller to bring the trajectories back, right where you want them: you control the uncertainty at the final time. We have implemented this recently, so we have this CS-MPPI, a covariance steering MPPI controller, and it performs much better than the standard MPPI: a 98% success rate while going faster around the track.
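The rollout-and-weighted-average idea behind MPPI, as described above, can be sketched in a few lines; the toy dynamics, cost, and tuning parameters below are illustrative assumptions, not the actual AutoRally controller:

```python
import numpy as np

# Minimal MPPI-style sketch on a toy 1-D system: sample noisy control
# sequences, score them, and update the nominal controls with a
# softmax-weighted average instead of picking the single best rollout.
rng = np.random.default_rng(1)
horizon, n_rollouts, lam, sigma = 20, 500, 1.0, 0.5

def rollout_cost(u_seq):
    """Simulate the toy integrator x += 0.1*u and penalize distance to 1."""
    x, c = 0.0, 0.0
    for u in u_seq:
        x += 0.1 * u
        c += (x - 1.0) ** 2
    return c

u_nom = np.zeros(horizon)              # nominal control sequence
for _ in range(10):                    # a few MPPI update iterations
    noise = rng.normal(0.0, sigma, size=(n_rollouts, horizon))
    costs = np.array([rollout_cost(u_nom + eps) for eps in noise])
    w = np.exp(-(costs - costs.min()) / lam)   # exponential weighting
    w /= w.sum()
    u_nom = u_nom + w @ noise          # weighted average, not the argmin

nominal_cost = rollout_cost(u_nom)
print(nominal_cost)
```

Taking the exponentially weighted average of the sampled perturbations, rather than the single best rollout, is what makes the update robust to noise in the cost estimates; the covariance steering variant mentioned in the talk additionally shapes how the injected noise spreads over the horizon.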
This is a simulation, but we also have experimental tests. This was actually work for which one of my students won a Best Paper Award in the planning session. Okay, so let me go to motion planning, which is close to my heart, because I'm a control engineer, and that's what optimal control is, as far as I'm concerned. What we have done here is develop sampling-based methods for solving planning problems in high-dimensional spaces. This work goes back almost ten years now. You can solve puzzles, you can solve manipulator problems, and things like that. Nevertheless, I want to emphasize that this is not just simulations; we actually tested this on real systems. This was a project with ONR, with an autonomous helicopter that wants to operate over a big area. If you grid this space (it was something like 50 kilometers by 50 kilometers by 10,000 feet, at a resolution of 25 meters), you discretize the problem and create a graph with 80 million nodes and almost 11 billion edges. You can hit it with A* or anything you like, and it's going to take a few minutes. That was not enough for replanning: the Air Force, sorry, the Army, actually in this case the Navy, wanted to replan every few seconds, which means that you have to somehow reduce the problem. So we used sampling-based algorithms. It was implemented, and it logged more than 100 hours of nice execution. Now I think they are using something else, but at the time, and this was almost seven years ago, they were using it as the main low-level planner. That gives you an idea of what you can do with these sampling-based methods. One of the last things I want to mention: these days we are looking at more complicated things, namely joint semantic and geometric planning. We have an algorithm called Class-Ordered A*. It's a version of A* that takes semantic information into consideration. What is semantic information?
You want to find the shortest path, but you may also have information about the environment, such as: there is a road that is gravel, over here there is grass, or whatever, and I would prefer not to go over gravel, things like that. Or I would prefer not to go over grass, or I like this or that, right? So we have classes for the obstacles or for the environment, and you want to take that into consideration. And we can do that: we have a version of A* that takes it into account. Essentially, what it does is prioritize edges with different colors, which is why it's called Class-Ordered A*, and it takes those priorities into consideration when it's planning. Okay, I don't want to get into the details; I just want to give you an idea of what you can do with these things. We have applied this to the multi-agent path finding problem, where essentially we use this semantic information to solve problems with many agents. This is the classical multi-agent path planning problem, which is used in many warehouses and things like that: you have a lot of robots that want to move around without colliding. One way of doing it is to solve for each robot a path from its initial to its final location, assuming that nobody else is around, which is very optimistic. Then you look for collisions, you look for conflicts, and you try to resolve the conflicts. But this could be time-consuming. So essentially what we did is use this Class-Ordered A* as the low-level planner: we use the information about good or bad edges in the plan, based on a coloring of the edges, depending on whether you have a conflict or not. To make a long story short, we are able to solve instances with more than 100 agents within a second. Here's an example: we have a bunch of agents that move from different locations to wherever they want to be. I don't know if you can see it, but compare with the standard, state-of-the-art method.
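The optimistic plan-then-check-for-conflicts step described above can be sketched as follows; the grid, the agents, and the plain BFS low-level planner are toy stand-ins (the actual work uses Class-Ordered A* as the low-level planner and also resolves the conflicts it finds):

```python
from collections import deque

# Toy multi-agent pathfinding sketch: plan each agent independently,
# ignoring the others (optimistic), then look for the first conflict.
GRID = ["....",
        ".#..",
        "...."]

def shortest_path(start, goal):
    """BFS shortest path on the grid, ignoring all other agents."""
    q, seen = deque([(start, [start])]), {start}
    while q:
        (r, c), path = q.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                q.append(((nr, nc), path + [(nr, nc)]))

def first_conflict(paths):
    """Return (time, cell, agents) of the first vertex conflict, if any."""
    horizon = max(len(p) for p in paths)
    for t in range(horizon):
        at = {}
        for i, p in enumerate(paths):
            cell = p[min(t, len(p) - 1)]   # agents wait at their goals
            if cell in at:
                return t, cell, (at[cell], i)
            at[cell] = i
    return None

paths = [shortest_path((0, 0), (2, 3)), shortest_path((2, 0), (0, 3))]
print(first_conflict(paths))
```

A real solver would also check edge (swap) conflicts and then replan the conflicting agents; here we only detect the first vertex conflict, which is the step that the edge coloring in the talk helps to avoid in the first place.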
It couldn't find a solution even after one minute. Okay, so I think that's my last slide, and I think I'm on time, right? So if you have any questions, feel free.

[Audience question: Is your environment in this case just the other agents?]

If you know that the other agents are moving obstacles in some sense, right? I mean, if you have other moving obstacles, you can add them. But here the environment is static apart from the agents, and each agent wants to go from an initial location to a final location. The agents are colored here; they go from one particular location to another without colliding, right? Yes, sir. Okay. Other questions? Thank you.