It's a distinct honor for me to introduce Professor Michael Raper. He did his PhD at MIT, then went on to do a postdoc, and after that he joined Boston University as an assistant professor, where he's been now a little over three years. He works in a rather interesting area, multi-agent coordination, things that fly, a lot of different exciting things coming together, and in my mind he's one of the most successful young researchers, one of the few who can truly combine very pretty theory with things that actually work, beyond just "here's the one time it did work" off of a lot of tests. That's something I've admired quite a bit, and his research has been well received and recognized: he has a number of awards under his belt already, including the CAREER award from the US National Science Foundation. The one thing that I think sets him apart, though, is that he is probably the best-dressed roboticist that I know. We've actually talked about this; he's characterized by his love of pocket squares, which I also admire, and sweater vests. I was hoping to don a sweater vest myself, but I couldn't find one; they're hard to find, I guess.

Thank you, Magnus, for that really very kind introduction, and thank you all for having me here and for being here for lunch; I'll do my best to provide some entertainment over lunch. I've never been to Georgia Tech before, but it's the kind of place I think all engineers have to visit as a pilgrimage, so I'm really happy to be here. I'm going to tell you today about my work with multi-robot systems and my vision for using groups of robots for controlling and monitoring large-scale environmental phenomena.
So what do I mean by multi-robot systems, and how are we going to use them to somehow control or monitor environments? Let's do a little thought experiment about how multi-robot systems can be used in large-scale environments. Take some example environment, like a forest, and a number of robots physically situated in that forest. I'm going to assume these robots have some means of extracting information from their environment, some means of sensing. This could be GPS sensing, as simple as just knowing where they are, or it could be chemical sensors, or vision sensing, cameras, whatever. They also have computation, some means of reaching decisions based on what they sense, and they also have actuation, some means of either moving themselves around in their environment or manipulating the environment. Maybe they're physically moving things in the environment, or maybe they're deploying a chemical spray or something like this, some way of creating an effect in the environment. In this way I like to think of multi-robot systems coupled with large-scale environmental phenomena as a feedback loop, and we can approach this feedback loop from a control-theoretic perspective, to think about how to extract information from the system in an active way, or how to control the dynamics of the system. And what makes this hard? Well, what makes this hard is that there's no leader robot telling everybody else what to do. Instead there's some communication network, usually a wireless network, maybe WiFi, whatever, and this gives us the requirement that everybody just has a little piece of the pie. All the robots have a little piece of the global information, and using local communication with their neighbors, their local information, and local interactions, they have to produce a coordinated global effect. So somehow we need to design distributed algorithms that produce
some predictable, desirable effects globally, and this is what makes the problem difficult.

You might first say, well, you're all in robotics, so I probably don't have to sell too hard the idea of autonomy here, but maybe I have to sell the idea of the distributed nature of the problem a little bit, because not a lot of people work in multi-robot systems or distributed robotics. So why distributed? Well, there are really a couple of good reasons. One is a strictly engineering reason, which is that if you have a distributed system you don't have a single point of failure. Ideally, in a distributed system, one node goes down and the whole system keeps on performing; maybe there's a slight degradation in performance, but nothing too major. Whereas if you have a centralized system, that creates one point of failure: you knock out your server or whatever, and the whole system is doomed. But there's another reason, one that's really my chief motivation, which is the scientific reason: distributed things are just intellectually and scientifically interesting. To give you some examples of why, think, from a very engineering kind of perspective, of the notion of an organism. What is an organism other than a set of cells? These cells are independent individual agents that take biochemical signals from their neighbors, do some kind of biochemical computation, and produce some biochemical effect. So it's a multi-agent system, and the effect of all of these local interactions is people, or other kinds of organisms. That's kind of miraculous; it's a miraculous proof of concept.
Not only that, but large organisms tend to group together into their own groups, herds and flocks and swarms, swarms of insects. This is an example of a termite mound, which is a particularly impressive example of a distributed algorithm: a group of fairly simple termites builds this highly sophisticated, coordinated structure. And also people, the way that we move around with each other in groups, the way that our traffic flows on the roads, this is an example of a distributed system. Even at a higher level of abstraction, the institutions that we build, our countries, our governments, our cultures, our economies, these are all distributed systems. So this is, to me, one of the most fascinating areas to work in, distributed algorithms and distributed systems.

OK, so let's get back to the problem of using groups of robots to monitor and control large-scale environments. Let's do a little thought experiment again about what sorts of capabilities we would want our group of robots to have, to be able to deploy them in an environment, to take data, and eventually to control their environment. Primarily, at the beginning, we're going to want to deal with deployment and coverage. Imagining we've got hundreds or thousands of robots, we want them to autonomously situate themselves in the environment in a good way, in a somehow equitable or fair way, for doing sensing. Then, once they've been collecting data, we're going to want them to sample adaptively, to collect where the data is more interesting, to collect at higher density in those areas. Then self-maintenance: they're going to be out there for a while, batteries are going to run down, robots are going to fail, and we're going to have to have robots come back to a base station for repairs, and the whole system is going to have to adapt seamlessly to the absence or presence of robots.
Once the robots are out there collecting data for a while, we're going to want them to compute some things. We're going to want them to maybe estimate the value of some environmental field; maybe we're interested in the density of some particular species of tree, and we want them to estimate that density in their network. We're going to want to maybe have them collect data over long periods of time and learn a dynamical model of how this density of trees changes with the seasons, for example. And eventually we're going to want to use the robots to control this quantity of interest. So, for example, maybe these trees are under attack by a certain kind of fungus, and we want to deploy an antifungal spray to encourage the growth of trees in a particular area. This is the kind of scenario, the set of problems, I'm interested in, and of course it's not just limited to forests. There's a whole world of application domains where this could be a really powerful technology. Just a few of these: oil-spill and chemical-spill cleanup with a group of autonomous boats; automated agriculture, which is potentially a huge area where we can have groups of robots taking care of diagnostics, figuring out which plants need water, which plants need pesticide, and then deploying robots to actually deliver the water and the pesticide, to really maximize crop yields without doing too much environmental damage. I'm going to skip over a lot of these, but there's one other potentially huge application, which is transportation systems. Autonomous cars are a reality; the Google car is out there driving around on the streets of California, and it's not going to be too long before we have freeways that are populated with half robots and half people. It's going to be a critical question how these robotic systems interact with each other and how they interact with the people, and how we can design the interactions
to maximize throughput, safety, and robustness, eliminate traffic jams, and things like this.

OK, so this is kind of my thirty-thousand-foot view of why I'm interested in multi-robot systems and the kinds of questions I ask. Let me tell you a little bit about my lab now at BU. I run what's called the Multi-Robot Systems Lab, and currently I have seven grad students. In fact, Austin Jones was one of my students, at least half advised by me and half advised by Calin Belta; he's just graduated, and he's going to be a postdoc here at Georgia Tech. I've got several other students, and also a postdoc who has moved on to Penn State University. Let me tell you about the work of some of these folks as we go on. The lab that I've set up at BU mostly consists of a large empty space, which we call the flying arena, and most of what we do in there is flying aerial vehicles and also deploying ground robots. This thing is equipped with a motion capture system, and I think I'm preaching to the choir here; I think everybody here knows what a motion capture system is: a set of cameras that gives you pose information about whatever you put in the environment. We use this motion capture system for high-precision feedback to study control loops. We can use this thing as basically an ideal sensor, and then we can rapidly change our control strategies and study empirically how they're working, without worrying about uncertainty in position or whatever. This isn't to say I'm not interested in perception or estimation, I definitely am, but we see this tool as basically a way of separating estimation from control and being able to study the two independently; eventually we're going to put the two together when we go to deploy in the wild.
We also have a floor projection system in our lab, which is a bit different from many existing projection systems. It's large, about the size of a movie screen, and it's built from six independent projectors that tile together like a multi-panel display. The projectors are mounted close to the ground, they have a wide cone, and they tile up to make the image. The reason we do this is so that when we fly quadrotors, they don't cast a shadow: we fly the quadrotors above the projection cone. This is nice because then we can use cameras on board the quadrotors to do hardware-in-the-loop sorts of experiments, where the quadrotors behave as if they were flying over a forest, for example. We can play the scene of a forest, maybe there's a forest fire, the quadrotors have a camera so they can take pictures of what they think is a real forest, and then we can simulate everything in the loop without actually having quadrotors out there in the environment, outdoors. OK, so let me show you some examples of the experiments we run in the lab. As I said, a lot of our work is with quadrotors; here's just an example of a quadrotor, I'm sure all of you have seen
something like this. Part of what we've developed in the lab is a whole set of control-theoretic and software tools for flying quadrotors through agile trajectories like this, and I'm going to tell you later what's running under the hood for executing and planning these agile trajectories. We also fly with groups of quadrotors, so here are two quadrotors, here are four quadrotors. We're also interested in questions of human-swarm interaction; this is actually an example of one pilot flying a whole group of quadrotors, and I'll talk more about how we do that as well. So we're interested in scalability and in interactions between the agents. We're also interested in flying fixed-wing vehicles; here's a tiny fixed-wing airplane, it's about three grams, a really delicate, very simple little airplane, and it's flying "autonomously." I put that in scare quotes because of course we have the OptiTrack system and we're computing the control laws off-board, but it is flying autonomously, tracking that wand. And we also do ground-robot experiments; here's an example of what's called Voronoi coverage control, which is a fairly common way of doing deployment and coverage for groups of robots.

OK, I told you about my vision for multi-robot systems and a bit about my lab; let me get into some details on a couple of the research projects that we work on. I'm going to have time to talk about two things: the first is going to be deployment and coverage, and then I'm going to talk about our work on coordinated agile maneuvers with groups of quadrotors. So, first, deployment and coverage. What is deployment and coverage? Remember, going back to this example with robots monitoring a forest, deployment and coverage is just
driving a group of robots from some base station, or from some arbitrary configuration, to a somewhat equitably distributed configuration in the environment. This has been studied quite a bit in the literature, from lots of different perspectives. For example, there's the Voronoi, or geometric, perspective, in which the robots divide up the environment into cells, and essentially each robot is in charge of its own cell. There's a really beautiful paper that sort of spurred on a whole subfield of research, this one by Cortés, Martínez, Karataş, and Bullo back in 2004. I've done a bunch of work in this, I know Magnus and his students have done work in this, and many other people have as well. Then there's another, completely different perspective on coverage control, which is to position your robots to optimize some kind of probabilistic metric: for example, to position the robots such that they maximize the probability of detecting an event when an event occurs. So maybe you have some probability distribution over events, and you want to maximize the likelihood of seeing something when it happens. There are other flavors of this, but the idea is that there's some underlying probabilistic objective that you're trying to optimize. And then there's a third school, at least one other school, maybe there are other ways to do this as well, which is based on artificial potential fields. This perspective says: just put artificial springs between the robots, and these springs are going to cause the robots to push off each other and disperse into the environment; you also design artificial springs to push off against the walls, and eventually the robots spread out over the environment. So it's kind of a mechanical analogy, it works very well in practice, and it's a nice heuristic.
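To make the Voronoi, or geometric, school concrete, here is a minimal sketch of Lloyd's algorithm, the classic recipe behind Voronoi coverage control: each robot repeatedly moves to the centroid of its Voronoi cell, which I approximate here by nearest-robot assignment over a grid of sample points. The environment, robot count, and constants are mine for illustration, not from the talk.

```python
import numpy as np

def lloyd_step(robots, pts):
    """One Lloyd iteration: move each robot to the centroid of its
    Voronoi cell, approximated by nearest-robot assignment over pts."""
    # distance from every sample point to every robot
    d = np.linalg.norm(pts[:, None, :] - robots[None, :, :], axis=2)
    owner = d.argmin(axis=1)              # discrete Voronoi assignment
    new = robots.copy()
    for i in range(len(robots)):
        cell = pts[owner == i]
        if len(cell):                     # a robot with an empty cell stays put
            new[i] = cell.mean(axis=0)
    return new

# sample the unit-square environment on a grid
g = np.linspace(0, 1, 50)
pts = np.array([(x, y) for x in g for y in g])

rng = np.random.default_rng(0)
robots = rng.random((5, 2)) * 0.1         # start clustered near one corner
for _ in range(30):
    robots = lloyd_step(robots, pts)
# the robots have now spread toward a centroidal Voronoi configuration
```

Each iteration only needs a robot to know which sample points it is currently closest to, which is why this style of algorithm distributes naturally over neighbor-to-neighbor communication.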
Part of the work that we've done is to show that all three of these perspectives, all three of these approaches to coverage control, boil down to different examples of the same kind of optimization problem. So let me describe what I mean by an optimization problem, and then I'll show you a different kind of coverage control problem and take it all the way to experiments and show you results. OK, so deployment as an optimization problem: how does this work? Imagine we have a group of n robots, and we're going to say these guys are just points, they just live in the plane, R². Then we can think of the concatenated state of all of these robots as some point in R^2n, and every point in this R^2n corresponds to a unique configuration of this group of robots in their environment. Then, if we can somehow come up with a cost function over this R^2n that encodes how well we're doing with sensing, or with coverage, we can turn the deployment of these robots into an optimization over this space. And now the trick becomes designing this cost function: where do we get it from, what's the right way to build it? Once we have this cost function, we have a cost associated with every different configuration, and then a fairly simple design methodology presents itself, which is gradient-based control. We take gradients of this thing, and then, as you can imagine, we can drive each robot in the negative direction of its component of the gradient, and the whole group of robots is going to settle into some local minimum of this cost function, which is going to correspond to a locally good
distribution of robots in the environment, one that's good for sensing. Now, you might ask a couple of things at this point. You might say: why do you show a non-convex cost function, why do you have all these local optima? And the answer is that for problems like this, for multi-robot coverage control problems, the cost function has to be non-convex; you can actually prove, under some fairly generous assumptions, that it has to be non-convex. In fact, if the cost function is convex, again under some fairly mild conditions, you can show that the globally optimal configuration is consensus, all the robots sitting on top of each other, which is obviously not a good thing to do if you're trying to deploy the robots over the environment. So we're working in an inherently non-convex domain, and because of that we have to be satisfied with locally optimal solutions. This is commonly true in multi-robot control, that you can converge to locally optimal solutions; very often, in practice, these solutions are good enough, one is as good as another, and maybe the global optima are not that much better than the local optima.

OK, so this is kind of a general design approach: we build a cost function, we take gradients, and we use those to develop feedback controllers. Let's instantiate this for a particular problem, and that particular problem is the deployment of a network of cameras. The idea here is that we have a group of quadrotors with downward-facing cameras, and we want them to move upward and outward over their environment such that the fields of view of their cameras tile up over the whole environment, so the whole environment is covered by their cameras. We're going to first build this coverage cost function, and then we're going to take gradients to get the controllers.
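Before getting to the camera-specific cost, here is the generic recipe in miniature: a coverage cost over the stacked robot positions, and gradient descent driving each robot along the negative of its own component of the gradient until the group settles into a local minimum. As a stand-in cost I use the classic locational cost, where each sample point is charged its squared distance to the nearest robot; the step size, grid, and robot count are made up for illustration and are not the talk's camera cost.

```python
import numpy as np

def coverage_cost(P, pts):
    """Locational cost: each sample point q is charged its squared
    distance to the nearest robot (a discrete coverage functional)."""
    d2 = ((pts[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def gradient(P, pts):
    """Analytic gradient: robot i feels 2*(p_i - q) for every point q
    it is currently nearest to (its Voronoi cell)."""
    d2 = ((pts[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)
    g = np.zeros_like(P)
    for i in range(len(P)):
        cell = pts[owner == i]
        g[i] = 2 * (len(cell) * P[i] - cell.sum(axis=0))
    return g

g_ax = np.linspace(0, 1, 40)
pts = np.array([(x, y) for x in g_ax for y in g_ax])
rng = np.random.default_rng(1)
P = rng.random((4, 2)) * 0.2              # all robots start near one corner
costs = [coverage_cost(P, pts)]
for _ in range(200):
    P -= 1e-4 * gradient(P, pts)          # descend the stacked gradient
    costs.append(coverage_cost(P, pts))
# the cost decreases toward a local minimum; the robots spread out
```

Note that the gradient for robot i depends only on the sample points in its own cell, so each robot can compute its component locally, which is what makes the gradient recipe amenable to distributed implementation.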
With that in mind, let's start with a simple model of the sensing that's going on, and think about what a reasonable metric for cost would be. So let's think of our camera: we have one pixel in our CCD or CMOS array. Now imagine the patch on the ground that that pixel sees; that's going to correspond to this orange thing here. You can imagine that the pixel is essentially an information-compression device: whatever complex imagery is in that patch is going to project onto a single pixel and be represented as just a few numbers, just RGB and brightness, four scalars. So whatever complexity we have in there is going to be lost; this is the information compression, and we want to minimize this loss. This suggests the metric "area per pixel." This seems a little strange, but the way to interpret it is that we want to minimize the area on the ground seen by a single pixel; this is the cost that we're interested in minimizing. Now, this is going to depend on the position of the camera, which I'm calling p, and the specific point on the ground that we're talking about, which I'm
calling q. It's clearly going to be related to the height of the camera, because if you go up, the field of view grows and you see more, but at a coarser resolution, and as you go down, you see less, but at a better resolution. It's also going to depend critically on the field of view of the camera, because not every point is going to be inside the field of view; whether or not you're seeing a point depends on where the field of view is. We can go to our high-school optics textbook and find out exactly how these things are related: it turns out the cost is quadratic in the height, and it's infinite for points that are outside the field of view. This is true because there are no pixels looking at points outside the field of view; we're going to come back to this problem of infinite cost in a second. OK, so that was one camera. What about multiple cameras, and specifically, what if we have multiple cameras looking at the same set of points in some area of the environment? I'm talking here about the intersection of their fields of view. How do we tally the cost for points in this intersection? Do we add the costs from the separate cameras? Well, that doesn't make sense, because points that are viewed by multiple cameras ought to have a lower cost; you just don't want to add them. It turns out the right thing to do is to invert the cost from each individual camera, sum over the inverses, and then invert that sum. This gives a kind of diminishing returns, a sort of submodular characteristic, where adding cameras to a region does improve things, but less and less as you add more and more cameras.
Then we also have to add this little term that I'm calling the prior area per pixel, which is basically for regularization. I said that points not seen by any camera have an infinite cost, and I want to take gradients of this thing, so I don't want infinite costs; this term makes the cost bounded for all points. It also has a simple engineering motivation, which is that if I'm doing coverage over some area, I probably have some very vague prior information about that area, maybe a satellite photo, and this is incorporated in this term w. Then we can take this thing, which is the cost of all cameras sensing at one point q, integrate over all points q in the environment, and here's my cost function. This thing basically encodes how well I'm sensing, with all my cameras, over all the points in the environment. From here on out it's fairly formulaic: you take the gradient of this cost function, and this gives you feedback control laws for your robots. These turn out to be quite non-trivial, fairly mathematically complex expressions that depend on your own position and your neighbors' positions in
fairly complex ways. But what I want to demonstrate is that each of the terms that results from this gradient-based control has a nice interpretation. One term tells a robot to move toward the interior of the environment, because if there's any piece of its field of view hanging outside the environment, that's just being wasted; you want your whole field of view to be used. There's a term that says move away from your neighbor, because my field of view is better used on an unsensed part of the environment than on a part that's already sensed by my neighbor, so I want to separate away from my neighbor. I also want to go up, because when I go up I bring more points into my field of view, so I get the benefit of covering a larger area. But I also want to go down, because when I go down I pull my field of view away from my neighbor's, and that's good, there's less overlap, and I also get the benefit of magnification, a better view of what I'm already looking at. So essentially you can think of all of these force components dragging each of these quadrotors around, and eventually all the force components are going to cancel out to zero, and when they do, that exactly corresponds to a local minimum of the sensing cost function, and intuitively it corresponds to a good configuration for sensing for these quadrotors.
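As a concrete toy version of the cost just described: area per pixel grows quadratically with height inside the field of view and is infinite outside it, multiple cameras are fused by inverting, summing the inverses, and inverting back, and a prior term bounds the cost for unseen points. The lens constant, cone half-angle, and prior value below are invented for illustration; this is a sketch of the idea, not the actual controller from the talk.

```python
import numpy as np

ALPHA = 1.0   # lens constant relating height^2 to area-per-pixel (made up)
W = 10.0      # prior area-per-pixel for unseen points (the regularizer w)

def per_camera_cost(p, q):
    """Area-per-pixel for a downward-facing camera at p = (x, y, z),
    evaluated at ground point q: quadratic in height inside a circular
    field of view (half-angle 30 degrees here), infinite outside."""
    x, y, z = p
    radius = z * np.tan(np.radians(30))
    if np.hypot(q[0] - x, q[1] - y) > radius:
        return np.inf               # no pixel sees this point
    return ALPHA * z**2

def fused_cost(cameras, q):
    """Inverse-of-sum-of-inverses fusion: extra cameras on the same
    point help, but with diminishing returns; the prior W bounds the
    cost for points no camera sees (1/inf is 0, so they drop out)."""
    inv = sum(1.0 / per_camera_cost(p, q) for p in cameras)
    return 1.0 / (inv + 1.0 / W)

cams = [(0.0, 0.0, 1.0), (0.2, 0.0, 1.0)]
seen_twice = fused_cost(cams, (0.1, 0.0))   # inside both fields of view
seen_once  = fused_cost(cams, (-0.4, 0.0))  # inside the first one only
unseen     = fused_cost(cams, (5.0, 5.0))   # outside both: cost is W
```

Integrating `fused_cost` over all ground points gives the scalar coverage cost whose gradient yields the move-in, spread-out, go-up, go-down force terms described above.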
OK, so I've basically motivated this in a fairly intuitive way; what kind of analytical teeth can we put on it? It turns out we can use fairly standard nonlinear control-theoretic tools, like LaSalle's invariance principle. The Lyapunov function for the system comes directly from the cost function that we used to get the controllers: because it's a gradient-based controller, the time derivative of the cost function along the trajectories of the system is, sort of by design, non-increasing, so it's going to be less than or equal to zero. We apply the standard chain of reasoning, and we find that the robots converge to a local minimum of the cost function.

OK, so let's see how this thing works in practice. Here's a MATLAB simulation with five robots; there's the cost decreasing, as you can see, and you can see what's going on: the robots naturally spread out and tile up over the environment. Here I'm using rectangular fields of view, which is true to reality, cameras' fields of view are rectangular; what I showed before was circular, but it's easy to add in this degree of freedom in the angle, take gradients with respect to that, and get a control actuation with respect to yaw as well.
We then deployed these things in the lab. Here's an example of experiments with three quadrotors; you can see again the cost going down. The point of showing you this is to show you the inherent robustness of this gradient-based control design methodology. My colleague Brian Julian is going to enter in just a second, and he's going to simulate a failure. So one of these quadrotors, suppose they're out in the forest, just got attacked by a hawk or something, it got taken down, or its batteries fell out, or it got retasked to a different task, whatever. Now there are just two quadrotors left, and these things naturally reconfigure to account for the loss of this one sensor, and they reach a new locally optimal sensing configuration. This happens naturally; there's no decision-making here, there's no machine learning, there's no estimation. It just falls out of the feedback controller: when you lose a neighbor, your network changes, you recompute your gradients, and you're good to go. This kind of robustness is a really desirable property when you're working in harsh outdoor environments.
So we tried this thing in the lab, but we wanted to really go out into the wild and see if it's going to be useful for monitoring over real large-scale environments. I teamed up with some people at Boston University in the Earth and Environmental Sciences department. These guys study forests, basically the changing foliage of forests and how it changes with the seasons, and they correlate that with climate change; they basically use forest foliage as a proxy for climate change. It turns out they're critically limited by the quality of their data. Currently they use remote-sensing, satellite-based data, and for them a single pixel corresponds to fifty meters on a side of forest, which means the forest is a big blur: you can't resolve individual tree crowns or individual features of specific areas in the forest. So when they learned that we were putting sensors on board quadrotors, they were really excited about the fact that they could quickly obtain high-resolution data that's really targeted to the areas they're trying to measure. We teamed up with these guys and went out to the Harvard Forest, which is a research forest. Boston is actually hidden under this label, right about there, and here's the characteristic arm of Cape Cod, so this is Massachusetts; the forest is out there in the middle of nowhere. We picked this plot of land over which to do our experiments, about one kilometer by one kilometer, roughly, and we took five 3D Robotics quadrotors out and programmed them with our control algorithm. Here's a simulation with the dimensions of the real environment set in, and you can see that these things go up to a height of about seventy to eighty meters, so this is really how high they were flying over the forest.
I'll show you a movie of our field trip out to the forest. We took a group of guys from my lab, all trained quadrotor pilots, and the way we conducted these experiments is that every quadrotor had its own dedicated pilot. On my mark, the pilots fly their quadrotors up to some height, probably ten feet or so, and then, again on my mark, they flip the quadrotors into autonomous mode, and from then on, off they go, executing their controllers. You'll see in a second these things flipping into autonomous mode. Now, this is required by the FAA: every quadrotor must have its own dedicated pilot, and they also all have to have line-of-sight contact with their aircraft. The dimensions involved here, and you'll see in a second the view from the cameras on one of the quadrotors, are such that you barely have line of sight. I hope there's nobody from the FAA here, as I don't want to have to back that up: we barely had line of sight. Actually, it turns out the line-of-sight requirement is silly; you can't fly an aircraft just by seeing it as a dot in the sky. The critically important information is the orientation, and once it's reasonably far away, you can't see the orientation, so you can't fly it. What does that say about the FAA rules? I don't know, but we were obeying them. In fact, the more valuable information is the buzz of the motors: you can hear the buzz and know that things are still alive, and you can hear them kind of struggling if they're in a breeze, or if they're flipping out of control you can hear that as well. So here you can see the dimensions involved: these things are really far off over the forest, and the people and the equipment are somewhere on the ground over here; you can't even see them.
After taking their images, these things came back. Miraculously, they did all come back, at least in this deployment, and then we had them land, actually all in one column over our base station, which is pretty impressive; it's kind of a vision of the future when you see these things autonomously landing over your head.

So we went back to the lab, took all the images that these guys collected in their deployment, fed them into a structure-from-motion, sort of 3D reconstruction, algorithm, and popped out this model of the forest. This is really a proof of concept, and the environmental scientists are really excited about it. It's a functional model: you can fly into it, you can resolve individual features on trees. Now we're looking at actually developing active sensing strategies using the vision, so that once you start with a baseline model like this, the scientists can say, OK, I really need more resolution here, I want to really focus in on the yellow tones or something, and then we can autonomously deploy a quadrotor to go fill in this area of the map and have those images integrated into the 3D model in real time. That's the next step for this work.

OK, so I talked a little bit about deployment and coverage. From a coordination perspective this is interesting, but from a control perspective there's not a whole lot of dynamics going on here, and in fact what I'm really interested in is making my quadrotors fly in complex, intricate, coordinated maneuvers. I call this coordinated agile maneuvers; this is actually mostly the work of my grad student, Dingjiang Zhou. Think of a jet-fighter demonstration team: I want my quadrotors to fly like the Blue Angels. What is hard about this? Well, it's that quadrotors don't have such simple dynamics; quadrotors are not the silly little point-integrator cartoons that I've been talking about.
To drive this home: quadrotors have complex nonlinear dynamics; they are not point integrators. To be more specific, the dynamics of quadrotors depend on what level of detail you want to model them with, but at our level of detail these are essentially rigid bodies with force sources, so each of the propellers just acts as a force source. We have a twelve-dimensional state space: three positional coordinates, then the three translational rates, then three angles for orientation, and then the three angular rates; that's twelve. It turns out, of course, that the dynamics are nonlinear, but what's even more confounding is that the dynamics evolve on manifolds. They're non-Euclidean because of the orientation, the attitude matrix that lives in SO(3), so this could potentially be really difficult to deal with from a control-theoretic perspective. To drive this home with an illustration, imagine we just want to do trajectory planning. I'm not talking about feedback control; I'm talking about planning a feasible trajectory for this quadrotor. Maybe we want it to do a flip like this. I can't show you twelve dimensions on the screen, but this is a twelve-dimensional trajectory in the twelve-dimensional state space on the manifold, and the question is how we design this trajectory such that it's feasible, meaning it can be executed by our quadrotor, it obeys the dynamics of the quadrotor, and from what control input does this trajectory result? How do we obey this dynamical constraint, and what is the control input that makes the quadrotor do what we want? This could be a very difficult problem in general. It turns out, miraculously, that quadrotor dynamics are what's called differentially flat. Differential flatness is a property of some special nonlinear dynamical systems, and there's a lot of rich control theory that goes with differential flatness. I'm just going to
go straight to the useful part, the punch line of differential flatness, which is this: for differentially flat systems, you can represent the behavior of the system with a few so-called flat outputs. I'll denote these with the symbol sigma, and for a quadrotor the flat outputs are just the positions x, y, z and the yaw angle, psi, which you see here. So you plan me a trajectory in these flat outputs, a four-dimensional space, and it's sort of the intuitive space that you would want to plan in anyway; it's basically the position space. As long as it's sufficiently smooth, I can use two maps, these maps beta and gamma, together called the endogenous transformation, to get you x(t), which is the state trajectory, the twelve-dimensional trajectory in the state space, guaranteed to be feasible by the dynamics of the vehicle, and u(t), the control input required to make the quadrotor fly this desired trajectory. So this, in one fell swoop, solves our trajectory planning problem. This is really nice. It sounds too good to be true, and it is; there are definitely problems here, and I'm going to come back to this in a second. But some background on differential flatness: this was developed among the French school of mathematical control theorists in the mid-nineties, and it was taken up by some really influential control theorists like Richard Murray and some others.
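To make that punch line concrete, here is a minimal sketch, not the speaker's code, of one piece of such a map: recovering the total thrust and the body z-axis from a smooth position trajectory. The mass, the gravity constant, and the numerical second derivative are my assumptions; the yaw flat output and the full attitude recovery are omitted for brevity.

```python
import numpy as np

m, g = 1.0, 9.81  # assumed mass [kg] and gravity [m/s^2]

def flat_to_thrust(pos_fn, t, h=1e-4):
    # One piece of the endogenous transform: from a smooth position
    # trajectory, recover the total thrust and the body z-axis.
    # Derivatives are numerical here for illustration; in practice
    # they would be analytic.
    p = pos_fn(t)
    a = (pos_fn(t + h) - 2.0 * p + pos_fn(t - h)) / h**2  # acceleration
    thrust_vec = m * (a + np.array([0.0, 0.0, g]))        # must oppose gravity
    f = np.linalg.norm(thrust_vec)                        # total thrust
    z_body = thrust_vec / f                               # body z-axis, world frame
    return f, z_body

# hover at a fixed point: thrust equals weight, body z-axis points straight up
f_hover, z_hover = flat_to_thrust(lambda t: np.array([0.0, 0.0, 1.0]), t=0.0)
```

The point of the sketch is only that the thrust and attitude fall out of derivatives of the flat outputs, which is what makes the planning space four-dimensional.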
It was just recently applied to quadrotors. I believe it was known that quadrotors were differentially flat, but the power of that fact was only recently demonstrated by Kumar and his student Mellinger, and by Nathan Michael as well, in a couple of really influential papers, and then my student and I have taken up this torch and are moving forward with this differential flatness idea for coordinating the behavior of groups of quadrotors. So I motivated this by saying I wanted my quadrotors to fly like the Blue Angels. That's true, but there's an ulterior motive. In the first half of the talk I assumed that my vehicles had point-integrator dynamics; they were just particles moving through a vector field, and this is a very common assumption in multi-robot control. Here's a list of some of the most heavily cited multi-agent and multi-robot control algorithms out there: there's consensus, there's formation control, there's all this stuff, and most of it is built on point-integrator models for the robots, as is the coverage control work that I've done. So step one: we want to use differential flatness to make robots behave like point integrators. Step two: we want to go beyond that and make them behave like the Blue Angels. OK, so what's wrong with this? Why doesn't differential flatness completely solve our problem?
Because this is really an open-loop notion; this is where it falls short. The interpretation here is that we prerecord these control inputs and play them forward on our actuators, and if we have a perfect model and no external disturbances, then the quadrotor is going to exactly execute the trajectory that we want. But of course we all know this is ridiculous: there are always modeling errors, and we need feedback; we need the robustness that comes with feedback. So how do we turn this differential flatness idea into a feedback controller? As follows. Here's our flat-output trajectory again; this is just the position trajectory that we've planned. We put this into what I'm calling the differential flatness pipeline, which is essentially these two maps, beta and gamma. By the way, these are analytical maps, closed-form things you can write with pencil on paper. These maps pop out a feedforward control signal, which is the control signal that would produce the ideal trajectory if we had a perfect model and no disturbances, and they also pop out a reference trajectory, the state trajectory that we would expect to see in a perfect world, and we can use this for feedback. We measure our true state with onboard sensors, or estimate it with an estimator, a Kalman filter, an observer, or maybe OptiTrack, and then we get an error signal and pump it into a nonlinear control block, which I'm just calling the SE(3) controller. I'm not going to go into details, but it's a controller that operates on the matrix group, and this gives us a feedback signal that corrects for errors: when the quadrotor gets off the ideal trajectory, it pushes it back on. So we have this feedforward-feedback architecture, and we find this works very well in practice. In fact, this is exactly what's running under the hood here.
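As a toy illustration of this feedforward-plus-feedback structure, here is my own sketch, with a double integrator standing in for the quadrotor and made-up gains in place of the SE(3) controller; it only shows the architecture, not the real vehicle dynamics.

```python
import numpy as np

def simulate(x0, x_ref, u_ff, K, dt=0.01, steps=500):
    # Feedforward-feedback architecture in miniature, on a double
    # integrator stand-in for the quadrotor (state = [position, velocity]).
    # u = u_ff + K (x_ref - x): feedforward flies the ideal trajectory,
    # feedback pushes the state back toward it when it drifts off.
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([0.0, 1.0])
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        u = u_ff + K @ (x_ref - x)    # feedforward + feedback correction
        x = x + dt * (A @ x + B * u)  # Euler step of the dynamics
    return x

ref = np.array([0.0, 0.0])
# start one meter off the reference: feedback drives the error to zero
x_fb = simulate(x0=[1.0, 0.0], x_ref=ref, u_ff=0.0, K=np.array([4.0, 4.0]))
# with no feedback (K = 0) the initial error is never corrected
x_ol = simulate(x0=[1.0, 0.0], x_ref=ref, u_ff=0.0, K=np.array([0.0, 0.0]))
```

The contrast between the two runs is exactly the open-loop objection above: without the feedback term, any initial error or disturbance persists forever.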
So we planned this trajectory in position only, and then we used this differential flatness pipeline to find the reference trajectory in the state space and also the feedforward control input, and then we used OptiTrack to find the true state and get the feedback error to correct with. OK, so this is just a feedback control architecture. Question one: how do we use this thing to make our quadrotor behave like a particle moving through a vector field? Remember, this opens up all of the existing multi-agent, multi-robot control literature for quadrotors. How do we do it? We have nasty nonlinear dynamics with these two differential flatness maps, and then we've got this particle; how do we turn this thing into that thing? It turns out there's a simple trick. Take the time derivative of this vector field, and we end up with another vector field which describes the acceleration. We had one vector field, the velocity as a function of position; using the chain rule, we get another one, the acceleration as a function of position. We can keep applying the chain rule, all analytic, closed-form, pencil-and-paper stuff, and write down any desired time derivative as a function of position. This is interesting because our differential flatness maps, the endogenous transformation, require our planned trajectory and various time derivatives of that trajectory, so we can get these things analytically, plug them into the endogenous transformation, and this essentially closes an outer feedback loop: our current position is used in each of these vector fields, and these maps become just functions of our current position. What this looks like in the block diagram is as follows: here is the feedforward-feedback controller that I showed you earlier; I used to have a planned trajectory going into this, but now I've closed an outer feedback loop using these
time derivatives of the vector field, and this whole thing in closed loop makes my differentially flat quadrotor look like a particle moving through a desired vector field. This is true for a single quadrotor, and it's also true for a group of quadrotors: the whole group is differentially flat, so by the same tricks I end up being able to prescribe the velocity of each quadrotor as a function of the positions of all its neighboring quadrotors. So now we've opened up all this literature to be used with groups of quadrotors: we can do flocking, we can do collision avoidance, we can do consensus, all this stuff. Let me drive home why this is important. If we were operating in a kind of pseudo-static regime, which is where most of the quadrotor work operates, this wouldn't be a problem, but this allows us to use all of these formation control algorithms and so on in a dynamic regime, so we can really aggressively fly quadrotors into a formation, and that's exciting. OK, but I told you I wanted my quadrotors to fly like the Blue Angels. How is that going to work? We're going to use a conceptual tool which we call a virtual rigid body. It's a really simple idea; here's the idea in motion. You can see there's something like a rigid body flying around, and these points on the rigid body, one, two, three, are the positions of quadrotors, and the question is how we get our quadrotors to fly trajectories that maintain this virtual rigid body configuration. To formalize this a little bit, we're going to call a virtual rigid body just a coordinate frame, that's all. We have a world frame, and with respect to that we've defined this VRB frame, the virtual rigid body frame. Now we're going to plan a trajectory for this VRB frame to do whatever we want it to do, maybe going from point A to point B, avoiding obstacles, who knows.
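The virtual rigid body idea fits in a few lines. This is my own sketch, with a planar yaw rotation standing in for the full SE(3) pose trajectory: each robot stores a fixed offset in the VRB frame, and its desired world position at any instant is the VRB pose applied to that offset.

```python
import numpy as np

def vrb_positions(p_vrb, yaw, offsets):
    # Virtual rigid body: each robot holds a fixed offset r_i in the VRB
    # frame, and its desired world position is p_vrb + R(yaw) @ r_i.
    # (A planar yaw rotation stands in for the full SE(3) pose here.)
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return [p_vrb + R @ r for r in offsets]

# a triangle formation hung in the VRB frame
triangle = [np.array([ 1.0,  0.0, 0.0]),
            np.array([-0.5,  0.5, 0.0]),
            np.array([-0.5, -0.5, 0.0])]
# VRB at (2, 0, 1), rotated 90 degrees: the whole formation moves rigidly
pts = vrb_positions(np.array([2.0, 0.0, 1.0]), np.pi / 2, triangle)
```

Sampling this map along a planned VRB trajectory gives each robot its own position trajectory, which is exactly the flat-output input the flatness-based controller needs.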
So we have a trajectory for the VRB: a position of the VRB and an orientation of the VRB. Then we can hang an arbitrary number of quadrotors in this reference frame, and the position of each quadrotor is determined by some vector, r1, r2, r3; we'll call this set of position vectors a formation. Then we can quite simply extract the desired position trajectory for each robot that's required for that robot to hold its own position in this formation: if everybody flies their trajectory correctly, they follow the imaginary formation. And now this gives us exactly the required inputs to our differential-flatness-based controller, so we can make this trajectory for the virtual rigid body very aggressive, figure out what the flat-output trajectories are for the quadrotors, use our flatness-based controller, and follow those trajectories. And we're not stuck with one formation; we can have a whole sequence of formations, maybe a triangle and then a line, and then we can plan trajectories in the V
RB frame to reconfigure from the triangle to the line, and this reconfiguration happens while the whole reference frame is executing its own trajectory, flipping around and translating. Here's what this looks like in practice with three of our quadrotor robots in the lab. They move into a triangle formation; now their virtual rigid body is executing basically a circular trajectory like this, and it's also rolling at the same time, so the overall structure is doing this, if you can envision that, and the formation that the quadrotors maintain within that structure is changing. First they start in the triangle formation, then they reconfigure into a line formation, then they reconfigure again into a differently oriented triangle formation, all the while their VRB is executing the same trajectory. And you can see that with these fairly simple tools we can layer up some fairly complex interactions among the robots. This architecture is partially decentralized. What I mean by that is that the trajectory for the virtual rigid body has to be the same for everybody, so we compute it off-board the quadrotors and send it to all of them, or maybe we precompute it and load it onto the quadrotors, but from that point forward the quadrotors can compute everything they need on their own, without any interaction between them, and then we can also design local interactions to keep the quadrotors from colliding with each other. OK, so we can preplan trajectories for quadrotors to behave like the Blue Angels. Now we're interested in using this abstraction of the VR
B for human-swarm interaction. The problem here, obviously, is that if you have one pilot and a hundred or a thousand robots, how can you simplify things, how can you create an abstraction for that pilot to interact with this very high-degree-of-freedom system, and then also cleverly engineer autonomy within the system so that all the extra degrees of freedom are taken care of? The way we do this is we have the pilot, this highly skilled... this is not Dingjiang, my grad student, by the way; this is some kid that I pulled off the Internet. He is going to fly the VRB as if the VRB itself were an aircraft. The VRB has no dynamics; it's just a virtual concept, so we ascribe dynamics to it, we impose some dynamics. In our case we're going to just give it the dynamics of a quadrotor, since we already know those dynamics. We don't have to; we could give it the dynamics of a fixed-wing aircraft or whatever, but we're going to choose the dynamics of a quadrotor. Then the real quadrotors that are inside the VRB take care of their own control to make their formation behave like a super-quadrotor. And then there's an extra degree of freedom here, which is that
our pilot can then select between different formations fairly naturally. Imagine a farmer who wants to apply pesticide to his field: he prefers some even grid formation to fly over the fields, then he needs to move them into a line formation to go through, say, a grove of trees, and then he needs to go back into a triangle formation to apply more pesticide, and so on. It's a fairly natural way, I think, for one individual to interact with a large number of robots, and this is what's going on in this video. Here Dingjiang is controlling, with one control stick, the motion of all four of these quadrotors, and you can sort of see how the group is itself acting like a quadrotor: as one side moves upward, the whole formation moves to one side; the other side moves upward, the whole formation moves to the other side; and he's yawing, and so on. He's making this trajectory complex on purpose, but it's very easy to control these guys with the one control input through this abstraction. OK, so I talked about the coverage work and I talked about coordinated maneuvers. I'm running short on time, so instead of giving you a super-high-level description of three other projects going on in my lab, I'm going to skip those and go right to my summary. I talked about multi-robot systems in general, I showed you some work on coverage control with groups of quadrotors with downward-facing cameras, and then I talked about controlling groups of quadrotors in agile coordinated formations using this idea of virtual rigid bodies and differential flatness for control. I want to leave you with one thought, which is that I think multi-robot systems are going to change the way we interact with large-scale environments in a fundamental way: they're going to give us a new instrument to extract information from large-scale environments, and also a way of eventually controlling the
dynamics of large-scale environmental phenomena, and I think it's a really exciting place to be doing research. So I'll say thanks to all my collaborators and my funding sources, and thank you all for your attention. I'm happy to take any questions.

[Audience question] That's excellent; I'm really glad you asked. That's literally the next thing we're working on right now. This is a huge practical issue: if you have some formation and you fly it around, and you want points in the formation to track it, then those points may require very high control inputs from the motors on the quadrotors, inputs that may not be realizable; you may actually require reverse thrust, which is not realizable. So there has to be a way of working the control constraints into the planning from the beginning. The way we do it now is just iterative: we design a formation, we design a trajectory for it, we simulate, we see if control constraints are violated, and then we rinse and repeat. There should be a more elegant way to do this.

[Audience question] That's a good point, because you can naturally put that into an optimal control formulation. I have actually moved away from this approach to coverage control and moved more toward an optimal control formulation of it, basically planning trajectories to minimize the error covariance matrix of a Kalman filter, that sort of thing. Since there's optimal control under the hood, the constraints on control effort fall in naturally; that's a great point.

[Audience question] That's an excellent point, yes. And the gradients are distributed, though not all the time; it depends on the problem and the domain.
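The iterative rinse-and-repeat described in that first answer can be sketched in a few lines. This is entirely my own toy version: a single point on the virtual rigid body tracing a circle, with an assumed thrust ceiling, slowing the maneuver down until the peak thrust is feasible.

```python
import numpy as np

F_MAX = 20.0   # assumed per-vehicle thrust limit [N]; real limits vary
m, g = 1.0, 9.81

def max_thrust_needed(radius, omega):
    # Peak thrust for a robot tracing a circle of the given radius at
    # angular rate omega: centripetal acceleration omega^2 * r, plus
    # the thrust needed to hold altitude against gravity.
    a_cent = omega**2 * radius
    return m * np.sqrt(a_cent**2 + g**2)

def tune_rate(radius, omega, shrink=0.9):
    # The rinse-and-repeat loop in miniature: evaluate the worst-case
    # thrust (in closed form here, by simulation in practice) and slow
    # the maneuver down until the actuator limit is satisfied.
    while max_thrust_needed(radius, omega) > F_MAX:
        omega *= shrink  # make the trajectory less aggressive
    return omega

omega_ok = tune_rate(radius=2.0, omega=5.0)  # 5 rad/s is infeasible here
```

A more elegant approach, as the answer notes, would fold the thrust bounds into the trajectory optimization itself rather than checking after the fact.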
[Audience question about removing feedback] There's a tradeoff here. If we remove all feedback, then it's not really a multi-robot system, because there are no direct interactions, but it's very robust: I can remove one guy from the formation and the other guys keep going. We're actually striking a balance: we have a formation controller running online at the same time as the robots are executing their trajectories, so they will slow down; it's a blending of formation control, which tries to maintain a relative position with respect to your neighbors, and trajectory following.

[Audience question about scalability] Computationally it scales with no problem. The coverage control work literally runs the same controller on every quadrotor, so you can throw in as many as you want; the only requirement is that each quadrotor has to talk to any other quadrotor whose field of view overlaps with its own, which is naturally a distance-based metric, so it ties into the topology of the network in a fairly natural way. You do need to know the positions of your neighbors, and yes, you do need to know the local topology. But it's fully scalable: conceptually you can put in a thousand or a hundred thousand robots and it's not going to increase the computational complexity. The problem is the complexity of the machinery; you can only fly so many quadrotors before one of them breaks.
Or before you run out of batteries. With the differential flatness stuff there's no computational difficulty either, and the way we've designed this virtual rigid bodies idea, it's also fully distributed once you have the trajectory for the virtual rigid body. You need to be able to broadcast that trajectory, which is just a six-degree-of-freedom object, to everybody in the network; as long as you can broadcast it to a thousand robots, and as long as all the robots know their offset vector within the virtual rigid body, then it's fully scalable.

[Audience question about common reference frames] Yes, that's extremely important; that's a really good point. I say that because dealing with different reference frames for different robots is one of the other things I'm working on. This is something that's hidden, I think, beneath a lot of the multi-robot literature: you need common reference frames. When I'm measuring my relative position with respect to you, it has to register with your measurement of your relative position with respect to me. These things all have to line up, otherwise you're going to get totally inconsistent formations and you're going to have collisions. Here we assume everybody has a common reference frame, but we're working on techniques that don't require that. In fact, at the risk of blathering on too long, we just wrote a paper that I'm really excited about that basically characterizes all of the multi-agent controllers with only pairwise interactions that are SE(3) invariant, or actually SE(N) invariant, so nobody needs any common reference frame; this is work with one of Calin Belta's students. So there are ways to get around this requirement of a common reference frame. Thanks.