It's a great pleasure for me to introduce Nate Michael. He earned his PhD at the University of Pennsylvania in 2008 and worked there as a research faculty member, and he has now joined the Robotics Institute as a research professor. He works on a number of things, including mapping for ground robots and flying robots, and today he's going to talk about a few of them.

Thank you, thank you very much, and thank you everyone for coming. As was just stated, I am now at CMU — I joined the CMU research faculty at the beginning of this month, so I was at Penn until August 31st. Today I'm going to talk about 3D exploration and indoor-outdoor autonomous navigation with a quadrotor. The order of the talk is switched around slightly from what was announced; that's just the order in which I want to cover things.

The two main things I'm going to discuss are, first, how to create a micro aerial vehicle system that is able to explore an environment autonomously in 3D. The approach I'm going to propose is quite a bit different from existing strategies for exploration, and the primary reason is that micro aerial vehicles are fundamentally limited in how much payload they can carry due to power constraints. Because of that, they have limited onboard sensing and limited onboard processing, and consequently you're not able to do as much in real time. Additionally, the sensors tend to be 2D or to have a limited field of view, so in order to do rich 3D exploration we have to think about a different strategy. I also want to note that the papers that are primarily the basis for this talk are appearing right now in a special issue, so I'll plug that here: there's a special issue coming out in IJRR, and this 3D exploration work will be in it.

The other topic I'm going to discuss is state estimation for indoor and outdoor operation. If we're interested in applying aerial robots such as these to search and rescue scenarios, we ideally want to be able to walk up to a building, ask the robot to fly into that building, and then have it explore the building. So the first part of the talk is about how we tell the robot to explore the building, and the second part is about how we formulate the state estimation problem so that the robot can operate in indoor and outdoor environments and transition between them. The main differences, in the context of state estimation, have to do with different types of sensors, relative versus global frame observations, and sensor failures, and I'll talk about how we deal with those challenges.

The talk is divided into three parts. The first part focuses on the foundation for how we go about solving the micro aerial vehicle navigation problem, and I'll explain what I mean by that. I'll go over our approach briefly; the intent is not to make it the focus of the talk, but purely to serve as the foundation for the rest of the talk, so that we can discuss those research problems. I'll then go on to discuss the 3D
exploration approach that we're proposing and what the experimental results demonstrate, and then I'll continue by talking about the state estimation problem. So let's get started by first talking about what I mean by autonomous micro aerial vehicle navigation.

When I talk about navigation versus exploration, this is what I mean. In the context of autonomous navigation, the operator provides high-level goals — that's the key point. The operator asks the robot to go to a specific location, perhaps a waypoint or some goal destination in the current map. Autonomous exploration is one step beyond that, where the vehicle determines the goals itself: the vehicle looks at the map and makes the decisions. So the difference is between turning the robot on and engaging it with goals to push it further into the environment, versus turning the robot on, walking away, and letting it explore the environment on its own.

The approach to autonomous navigation is illustrated on the left here, and the systems paper that describes it is from 2011. The approach is subdivided into three parts, and these are standard components in mobile robotics, so if you're familiar with the basics, the system diagram is only of interest if you really want to dig into how this is done. The three substantial parts are, first, localization and mapping, which includes pose estimation of the vehicle based on IMU and laser information, as well as SLAM — simultaneous localization and mapping — with a loop closure component based on visual imagery. There is also a planning component: the user hands the vehicle some high-level goal, some waypoint; that waypoint is translated into a trajectory or path that the vehicle must follow through the current map; and then the vehicle controls along that path, which is the final component.

One key point about the control is that the vehicle is doing two things: it's controlling along the planned path, and it's also adapting its underlying model to identify external effects such as wind. Here in the video you can see the robot being blown by a wind that's pretty strong. What it's doing is looking at the difference between how it should be flying and how it is flying; from that it identifies the influence of external forces, and then it compensates for that external force in a feedforward way. If you think about attitude stabilization, or feedback control of a general robot, you might think of a PID controller — proportional, integral, and derivative. Here we're doing proportional and derivative control with respect to our desired state or desired trajectory, and the integral component is essentially replaced by this feedforward term. They're not exactly the same, but they're very similar, and that feedforward component is what identifies the external forces and allows you to compensate for them.
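To make the feedforward idea concrete, here is a minimal single-axis sketch of a PD controller with an adapted disturbance estimate fed forward. This is my own illustration, not the controller from the paper; the gains, the adaptation rate, and the names are illustrative assumptions.

```python
class PDWithDisturbanceFeedforward:
    """Minimal single-axis sketch: PD tracking plus an adapted feedforward
    term that absorbs steady external forces such as wind."""

    def __init__(self, kp=6.0, kd=3.0, k_adapt=0.5, mass=1.0):
        self.kp, self.kd = kp, kd      # PD gains (illustrative values)
        self.k_adapt = k_adapt         # adaptation rate for the force estimate
        self.mass = mass
        self.f_ext_hat = 0.0           # running estimate of the external force

    def update(self, pos, vel, pos_des, vel_des, acc_des, dt):
        e_pos = pos_des - pos
        e_vel = vel_des - vel
        # Persistent tracking error is attributed to an external force and
        # folded into the estimate -- the role an integral term plays in PID.
        self.f_ext_hat -= self.k_adapt * e_pos * dt
        # PD feedback around the desired trajectory, plus feedforward
        # cancellation of the estimated disturbance.
        return self.mass * (acc_des + self.kp * e_pos + self.kd * e_vel) \
               - self.f_ext_hat
```

In a steady wind the force estimate converges to the wind force and the tracking error returns to zero, which is the role the integral term would otherwise play, without letting transient errors wind up.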
So there are some limitations to this approach, and I do want to highlight those right now. First off, we assume a rectilinear environment, so everything you're going to see in the first part of this talk — in fact throughout, really — is going to be rectilinear. What do I mean by rectilinear? If I have an aerial vehicle and that vehicle moves away from some identity orientation, a zero attitude, I'm going to assume that the observation I make here and the observation I make here are equivalent after reprojection. What this essentially means is that the environment needs to consist of some type of structure like vertical walls, and this is often called the 2.5D assumption; you might hear people call it that as well. Additionally, as it flies, the robot is creating a 3D map based on laser observations. In some of the work you'll see a Kinect sensor on board the vehicle, and in some of the work just a laser. Here the vehicle has just a laser and a camera on board: the camera, as I said before, is doing loop closure, and the laser is what generates the 3D map, as a consequence of reprojecting the laser scans. If the robot is tilted — which is necessary for the robot to move in x and y — then that observation is reprojected, and that's how the 3D map is created.

Just to give you an idea of some of the work we've been doing with this, we tested it in a lot of different field experiments, and I very quickly want to highlight one. This was in collaboration with quite a few people at Penn, including Vijay Kumar, who is also my PhD advisor, and with a number of people from Japan, from Tokyo University. What you see here is an autonomous MAV mapping an earthquake-damaged building, with the difference that ground robots are integrated as well. After the earthquake in Japan there was a significant amount of damage; we went to Japan and worked with our collaborators to map one of these earthquake-damaged buildings. We went to the seventh, eighth, and ninth stories of the building and mapped them using two ground robots and one aerial robot. One ground robot went through via teleoperation and mapped the building using 3D sensors — it's able to carry quite a bit of payload — but there were specific spots in the environment that were inaccessible to the ground robots. So another ground robot carried the aerial robot to those discrete locations; the aerial robot autonomously took off, went in, mapped those locations, came back, and landed. In the end we used the data from both systems to build this 3D map of the environment. This is the seventh, eighth, and ninth stories of the building; in a second you'll see that there's a non-trivial amount of information from the ground robots, which is right there, and then a small amount of information from those inaccessible regions contributed by the aerial robot, which is right there. Given our knowledge of how these vehicles move, and how they move with respect to each other, we're able to fuse this information into a consistent map.

I don't want to make this any more the focus of the talk, so I'll leave it as the foundation: we know how to take an aerial robot, put it into a 3D
environment — maybe it's multi-story — we know how to build a 3D map of it, and we know how to plan and control with respect to that map. So moving forward, let's assume that capability is there, and now let's talk about 3D exploration with a micro aerial vehicle. I'll start with traditional ways to do exploration, then move on to our approach — why it's motivated, why we chose it, and its limitations — and finally show some experiments and simulations to evaluate how well it works.

Let's assume everyone here is familiar with the most basic concepts of mobile robotics for ground robots: when a ground robot moves through the environment, it has something on board like a laser or a camera that allows it to observe the environment and build a map of it, quite often called an evidence grid. This is a very standard approach. One question of great interest — one of the, let's say, lowest-hanging fruits in terms of autonomy for ground robots and aerial robots — is spatial exploration: enabling the vehicle to explore the environment, make decisions itself, and in doing so expand its understanding of that environment. That's what I mean by spatial exploration. The ground robot in this cartoon is observing the environment over time; you can see the laser scanner here, it's seeing the world and creating this representation — I'll explain what it means in a second — and ultimately the goal is to build the whole map, meaning that when it reaches its terminal state it will have mapped everything possible. There are many approaches to this, and typically one talks about frontier-based exploration, information-gain-based exploration, entropy-based exploration; there's really a wealth of literature here, so I'd encourage you to look at our paper as well as the references therein for these traditional approaches. Given these different approaches, I want to talk about just one today, which gives us a stepping-off point for the idea I'm about to present — but actually, let me lay down some notation first. I showed this map a second ago. The gray area is what we're going to call the unexplored space. This is the observed unoccupied space, where the robot has validated that the space is unoccupied, and this is the occupied space. These are very common terms, and there's a lot of other nomenclature: you could call it the uncertain space or the unknown space, you could call these obstacles, you could call this free space, but in the end this is what I mean.
I want to make a note that will be important later: this assumes a dense representation on the order of n squared. If I have an environment that's n by n and I grid it, it's going to take on the order of n times n amount of information to represent it. The approach I'll start with is the traditional one, frontier-based exploration. This is actually quite an old idea, or relatively old — it's from fifteen years ago and was proposed by Brian Yamauchi — and the basic idea is that a frontier represents the boundary between unoccupied and uncertain space. A moment ago I talked about this representation — this figure is actually pulled from the paper that proposed the idea in 1997. You have your uncertain space, where you haven't gone and know nothing; you have your occupied space, which you've observed as such; and you have your unoccupied space, which you've also observed as such. The goal of frontier-based exploration is to identify the frontiers in that space that, if visited, would expand your knowledge of the environment. Here you see that these frontiers correspond to the boundary between the unoccupied space and the unexplored space, so by going to those locations you learn more about your environment and consequently expand your map. As I said, there are many variations on this, many improvements, many ways to make it more efficient; there are even survey papers that compare the efficacy of different frontier-based methods, so I'd encourage you to look at those, and there are multi-robot coordination strategies as well — a lot of different variations.
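As a concrete reference point for what the classical 2D method computes, here is a minimal sketch (my own illustration, not code from any of the papers mentioned): a free cell is a frontier cell if at least one of its neighbors is still unknown. The grid encoding is an assumption for illustration.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def frontier_cells(grid):
    """Return (row, col) indices of free cells bordering unknown space
    in a 2D occupancy grid (0=free, 1=occupied, -1=unknown)."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # 4-connected neighbors; a free cell next to unknown space
            # lies on the boundary between explored and unexplored.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers
```

Clustering the flagged cells and choosing which cluster to visit is then where the many frontier-based variants differ.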
But one of the big issues is that this approach, and the other approaches that require a dense, order n-squared representation like this, face a lot of challenges when you move to 3D, and that's what I want to move to now. First, there is the question of how we represent the unobserved space. Further, is the unobserved space equivalent to the unexplored space? In the past, when we looked out with our laser scanner and saw the world around us, we registered occupied or unoccupied space because the laser scanner could see everything. But now we have a 2D laser scanner, so there is some natural discretization in time: here, if the length L is non-trivial, then the change in orientation — the roll or the pitch — is non-trivial, and you actually have unobserved space between different layers of observed occupied space. This is simply an issue of having a 2D sensor in a 3D environment, and it's a fundamental limitation that lasers face unless they're natively 3D like a Velodyne — but even the smallest Velodynes are pretty heavy and difficult to put on these small micro aerial vehicles. You'll also face this challenge if you're working with a camera or a Kinect. Then there's the question of how we represent the free space in this environment: do we use a dense representation, and if so, how do we make that computationally tractable? Information-based exploration and frontier-based exploration — these dense approaches rely on analyzing each grid cell independently, and because of that it's difficult to make them run in real time.

This is an illustration of what I mean by these challenges: the complexity scales roughly with order n cubed, and realistically that's not viable for real-time operation on a micro aerial vehicle given current computational capabilities. That's what this part of the talk is really about: given current computational capabilities, how can we come up with an alternative approach that allows us to do autonomous exploration? The dense approach also results in poor performance: because of the discretization that occurs — observations arriving at different angles and densities, plus the limited field of view of the sensors — you get frontier goals that don't really correspond to frontiers at all, but are more a function of the boundary between observed and unobserved space. So we're going to propose an approach that uses a sparse free-space representation and generates exploration goals from it. These goals, we found, at least qualitatively tend to be more consistent with sensible targets for further exploration, and make more sense than trying to apply these dense methods.

So now I'm going to talk briefly about a stochastic differential equation based exploration algorithm. The goal of this algorithm is to create a lower-computational-cost approach that deals with these sensor challenges and lets us reason about autonomous exploration online. First off, we're going to think about free space not as a voxel grid but as discrete particles. These particles are sampled uniformly at random across the observed free space, and the reason for that will become evident in a moment. As we fly through the environment, we observe the occupied space — that will be a traditional dense voxel grid representation — and we observe the free space, which will now be just particles. These particles do not represent a probabilistic distribution from a particle filter, if that's what you're thinking; they're just particles, and you'll see in a moment that we treat them like molecules in a gas. So we go through the environment and populate the free space with these particles: we generate a sparse representation of the free space and a dense representation of the occupied space. We then resample the free-space representation and apply particle expansion — you can think of this as molecular dynamics. The intuition is this: imagine you have a tube, and that tube has a hole in it. If you have gas in that tube and you watch where the gas expands and escapes through that hole, you can pretty quickly identify where the hole is. What that really corresponds to is a change in pressure, and consequently a change in density of the molecules inside the tube. Those are the areas where, if we go to them, we find places that are perhaps unexplored; those are the areas where we can find out more about our environment. That's the intuition. So, to summarize: we populate the observed free space with these particles.
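Here is a minimal sketch of that sparse free-space representation as I've paraphrased it: sample points uniformly at random inside the observed free volume instead of storing every free voxel. The sampling density and the axis-aligned free regions are illustrative assumptions, not the parameterization from the paper.

```python
import numpy as np

def sample_free_space_particles(free_boxes, particles_per_m3=5.0, rng=None):
    """Sparsely represent observed free space as random particles.

    free_boxes: list of axis-aligned free regions, each given as
    ((xmin, ymin, zmin), (xmax, ymax, zmax)) in meters -- an illustrative
    stand-in for whatever free-space geometry the mapper reports.
    """
    rng = rng or np.random.default_rng()
    particles = []
    for lo, hi in free_boxes:
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        volume = np.prod(hi - lo)
        n = max(1, int(particles_per_m3 * volume))
        # Uniform samples inside the free region: sparse compared with an
        # n^3 dense grid, but still covering the observed free volume.
        particles.append(lo + rng.random((n, 3)) * (hi - lo))
    return np.vstack(particles) if particles else np.empty((0, 3))
```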
We maintain this particle set so that it represents the observed free space across the environment, and then we apply molecular dynamics — a stochastic differential equation — to those particles: we expand them, identify the areas of highest expansion, and from those identify our exploration frontiers.

Here's the algorithm. I'm going to talk through it, but I'm not going to give the individual equations for everything; that level of detail is probably unnecessary here. The basic idea is that I have laser data and Kinect data coming in, I have an occupied-space representation, and I have a sparse free-space representation. I resample the free-space representation, apply the stochastic differential equation update, and then extract the frontiers — this is what I said a moment ago. After that, I have goals that I want to visit to extend the exploration, which I send to the navigation subsystem — and that's what we talked about a moment ago; we agreed that we have an approach for that problem which we can use. The key point is that, given observations of the environment, we now have a strategy for enabling the robot to explore that environment without any human or operator input. So let's talk very briefly about the individual components of the algorithm.

First, the resampling. If I'm spreading these free-space particles, I need to decide how fast to emit them and at what density, with respect to my sensor characteristics. We discuss this in the paper, but the basic idea is that the particles are spread through the environment at some emission rate, and they are then resampled and culled based on our sensor characteristics. The minimum number of particles required to represent the environment does increase with time, but it scales much more gracefully than order n cubed. Really, the big question is: if I am exploring the environment, there are two aspects to consider — what are my sensor capabilities and characteristics, and what is the description of my environment — and given that information, I know how densely I need to observe the environment. To illustrate what I mean, consider a world in which there are many walls at a great distance, you're located right in the center, and you have a very short sensing range; then you could drive almost anywhere and it would count as exploration. So there is a relationship between your exploration strategy, your sensor capabilities, and how far away the walls are, and we want to tie those together in the resampling, which takes the capabilities of our sensors into account.
Next, once we have this sampled set of particles, we apply the stochastic differential equation. Think of the particles as molecules that are suddenly given some amount of energy — an initial velocity with random orientations — so they essentially explode through the environment, undergoing elastic collisions with it, and we run that simulation for some amount of time. One has to think about the time and length scales associated with the differential equation, and we go into detail about how to set those values given the resolution of your map and the capabilities of your system, so these aren't really free parameters; they're simply a consequence of your environment description. So the particles undergo this diffusion, you let it run forward for some simulated amount of time, and then you extract your frontiers. What you're looking for is the spatial separation between the particles before they diffuse and after. If you think back to the tube example, the particles that stay inside the tube are not going to expand considerably; however, the particles that started inside the tube and exit through the hole are going to expand considerably. We identify those, and from them we can extract the hole, and that becomes an exploration frontier which, when visited, extends our map. So hopefully the intuition has come across.

We also have some refinements. In optimization theory one often talks about the active set: the set of components that actively influence the optimization updates. If you think about a convex hull, there's just a small set of points that really defines it, and quite often that is termed the active set. Here we have something very similar: the set of particles most likely to influence the update of the exploration algorithm. If we work with that active set directly, we can reduce the number of particles we have to consider in each update, and you'll see the influence of that later. We also have the notion of goal queueing. Here it's very simple: we identify all these exploration frontiers, put them in a queue, and go to the ones closest to us. As we visit these goals, we observe them and they disappear — they're no longer goals because they've been explored.
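Before moving on to the results, here is a small self-contained sketch of the expansion step just described: give each free-space particle a random heading, step it forward while crudely rejecting moves into occupied space, and flag the particles that travelled farthest as frontier candidates. The step count, speed, displacement threshold, and collision handling are illustrative assumptions, not the values or dynamics from the paper.

```python
import numpy as np

def expansion_frontiers(particles, is_occupied, steps=50, dt=0.05,
                        speed=1.0, top_fraction=0.05, rng=None):
    """Diffuse free-space particles and return those with the largest net
    displacement -- a rough proxy for gas escaping through a 'hole'.

    particles:   (N, 3) array of free-space samples.
    is_occupied: callable taking an (N, 3) array of positions and returning
                 a boolean mask of positions lying in occupied space.
    """
    rng = rng or np.random.default_rng()
    start = particles.copy()
    pos = particles.copy()
    # Random initial headings, like molecules given a burst of energy.
    vel = rng.normal(size=pos.shape)
    vel *= speed / np.linalg.norm(vel, axis=1, keepdims=True)

    for _ in range(steps):
        candidate = pos + vel * dt
        blocked = is_occupied(candidate)
        # Crude collision handling: blocked particles reverse heading
        # instead of penetrating occupied space.
        vel[blocked] *= -1.0
        pos[~blocked] = candidate[~blocked]

    displacement = np.linalg.norm(pos - start, axis=1)
    cutoff = np.quantile(displacement, 1.0 - top_fraction)
    return start[displacement >= cutoff]   # frontier candidate locations
```

The returned locations are then candidate goals for the queue, to be dropped once the surrounding region has been observed.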
OK, enough of the algorithm discussion; now on to simulation and experimental results. First, let me illustrate very quickly how the SDE algorithm works — you'll see this more in the other videos. As the robot flies through the environment, you can see the particles (you'll see them better later), and these red dots — red arrows, actually — correspond to exploration frontiers. When the robot flies to those locations and observes those frontiers, they no longer exist in the goal queue; but it has now mapped more of the environment, so it identifies more frontiers, and it continues like this. When it can identify no more, it terminates and declares that it has completed the exploration of the environment.

Now, how does it perform against existing frontier-based approaches? We chose what is probably the best frontier-based exploration approach — based on the observation that since 2005 nearly everyone has adopted this methodology, and from our perspective it's also a very good one — the method cited at the bottom, from Transactions on Robotics in 2005. If we say that is the baseline, the gold standard, how does our approach compare in terms of exploring the environment? We can't make the comparison in 3D, because these approaches don't really extend to 3D, so it wouldn't be fair; but we can compare in 2D, and it turns out that our approach, although approximate, works well — close to as well as the best frontier-based approach for 2D exploration. The key here is that frontier-based exploration in this context is quite brute force, so I would expect it to do much better than our approximate method; you can see that it does do better, but the performance is not considerably different — you're looking at a mean, across the ten trials this represents, of about ten seconds longer for our approach. The real difference is the computational cost.

Going forward, let's talk about simulation performance. If we look at 3D exploration, how does it perform? We just saw a multi-floor example; this is in simulation, so we know exactly what the environment looks like and we can say exactly how it performs. On the left you can see the number of free-space particles required, and the active-set representation — remember, this is the subset of particles that provide the most informative updates of the exploration algorithm. We can run on just that subset, and if it does well enough we need not run through the whole set, so we can use a heavily subsampled set of particles. The key point is that, because we're in simulation, we can show that we tend towards really good environmental coverage — in fact we reach something like 99.8 percent of the environment. So we can say, via simulation, that we have explored absolutely everything except for about 0.2 percent, and that remainder is a function of some of the heuristics we apply. That means that if we ask the vehicle to map an environment, it will map everything, or nearly everything, that it can.

Here is an example of the algorithm applied to a larger single-floor environment, again in simulation, which gives us the ability to benchmark it in terms of actual completeness — and completeness is what we're concerned with here. An environment of this size would totally tank any naive extension of the existing dense approaches to 3D exploration: it's a very large environment filled with lots of free space and lots of occupied space, and to compute on that at an online update rate you would need a non-trivial amount of computation, possibly exceeding what's available on desktops — we weren't able to get real time on pretty nice desktops, and it certainly isn't going to work on a micro aerial vehicle.
The robot continues to explore as it moves forward, and what we see is that in the end it explores the whole environment — again something like 99.8 percent coverage or completeness, and again the missing 0.2 percent is due to heuristics which basically say that such a small area isn't worth visiting. And again, the number of particles required to represent the space does grow — it doesn't grow as n cubed, but it does grow — and most of the time we can just use the active set, which remains very small.

So let's talk about experimental results, starting with a single-floor experiment. This time we don't have the ground truth baseline we had before, because we don't know the occupied-space representation exactly, but what we look at here is: can it run in real time, does it work, does it cover the environment, and how many particles are required? As the robot flies through the environment, you can see the same thing you saw in simulation: the robot builds the 3D map, generates these frontiers, and visits them; it does this via this algorithm, sending the goals to the navigation subsystem, and this continues. At the very end it identifies that it has explored the entire environment and lands. The way this works is that it's asked to explore the environment, or to explore within some bounding box, and when it has done so it should go back and land — land where it is, or back where it started if the battery allows. So you can see that it built the map of the environment; again we have a certain number of free-space particles and a certain number of active free-space samples used, and you can see that the number of occupied boxes grows, so we are handling fairly large environments.

Now a multi-floor experiment. This will look very similar to the simulation, because the simulation environment was constructed from the data collected in these experiments. Here you can actually see the free-space particles — those are in green — well, a moment ago you could; let me back this up. There we go. You can see that as the robot takes off, it populates the environment with these particles. Now it's resampling — that's where the reduction occurs; this is what gives us uniformity across the environment without needing extra samples — and then the display of the particles disappears there; that's simply because it was turned off in the visualization, because it starts to really clutter the display. But that's the basic concept. Some other things you'll see here are heuristics that we discuss in the paper. First, we should fly where we can see and not fly blindly, so we limit our planned trajectories to lie within the field of view of the vehicle. Second, as I said a moment ago, we only go to explore regions of a certain size, namely ones we can pass through; there's no sense in exploring a region that's only this big, because we can't go through it anyway and it's not going to add much. Because of those heuristics, what you actually see is the vehicle rotating as it flies through the environment, so it almost appears as though it's looking where it's going.
And in fact it is — that's what those heuristics imply. So this is Shaojie, and he's switching out the battery; it's continuous, just a quick swap — you plug one in, you take the other out — and then the robot says, OK, I have more battery, takes off, and continues exploring; when it's done it will go back and land. So it is autonomous to the extent that you can turn it on and, other than switching out the batteries, it does its own thing. It cannot see that glass right there, and that's why those blue foam pieces are there — so that the vehicle can see them. For those of you interested in whether we can see the glass: it cannot, which is why that's there; we modified the environment slightly. In a moment it will be satisfied, and it will go back and land. There you go. The same comments apply here: the number of boxes increases in terms of the occupied space, we are indeed mapping the environment, and if we look at the map in post-processing we can say that we've explored it. We don't have the ground truth information we had in simulation, so we can't talk about completeness to the same degree of accuracy.

So I'm going to change things up slightly now. We now know how to release the robot to explore an environment — in a search and rescue scenario, let's say — so it has just entered through a window and now it needs to fly through the environment and explore. But how do we get it to that window in the first place, have it fly continuously inside, maybe fly out the front door, fly around somewhere else, and then fly back inside? How do we do that robustly, handling the fact that our sensors come in and out? I'll explain what's happening in this video, but what you're seeing is the aerial robot taking off, and as in all of these videos, everything is done on board: there's no external infrastructure, all the sensor processing is on the robot, and the robot has no prior knowledge of the environment. It starts off with just GPS information, then it starts to get laser information and to build the map, and then — this is autonomous navigation — it's asked to go inside the building. So it plans a path into the building and flies inside. Now, what's happening right here? First, the GPS information is available while it's outside, and then it's not: once it goes inside, it's GPS-denied. Second, the laser information is being affected by the sun — for those of you who have used a laser scanner outdoors, you know that laser scanners can be sensitive to interference from sunlight — so you have these different sensors coming in and out. That's what I mean by robustly: we want to handle multiple sensors, and handle them robustly.

This basically breaks down that statement: I have GPS information in a global reference frame, I have laser information in a local or relative reference frame, and both GPS and laser can fail. This is just a pragmatic concern: the vehicle will be flying through all of these different zones — GPS only, GPS plus laser, laser only, GPS plus laser again — and you don't really know in what order that will happen, but we want to be able to handle it. Typically, when ground robots drive through such a scenario they use odometry, except that we don't have odometry here.
So we have to pose the question: how do we generate a continuous — by which I mean smooth — and consistent state estimate for autonomous navigation and exploration? Hopefully it's also accurate, but at the very least it should be continuous and consistent, because the vehicle must always be able to fly, which means the estimate can't jump all over the place just because our sensors are coming in and out or moving between different frames. We need a consistent source of information so that we can do autonomous navigation and exploration; beyond that, we can start to think about how we merge maps across these different frames, and so on. So let's talk about that.

If the MAV controls with respect to some kind of drifting navigation frame — and this is what we're going to do: create a navigation frame that drifts in time but is consistent and smooth — then that is our answer to the question. We can figure out how to compute the transform from that frame to the global frame at any time, we can figure out when our sensors are coming in and out and handle that intelligently, and we can merge the maps across these different regions where the sensors come and go; and then we have the ability to do indoor-outdoor autonomous navigation. To go over that one more time: we need a frame, available as the robot flies through the environment, in which it can control, that is always consistent and smooth — yes, that is what I want — and from which we can always transform into the world frame, so we always know where the robot is. We also want to handle the fact that our laser can come in and out: we may be generating a map, but then our laser data may become complete garbage, or it may not see anything, and we may have only GPS information before the laser comes back in — and we want to deal with that. So here's the approach.

As before, with the occupancy grid, the aerial robot flies through and creates a local map. That local map may be tied to another local map based on GPS information. What causes one local map to end and another to begin is when the laser information becomes corrupted or noisy due to interference, or no longer sees anything that it can observe to add to the map. This is essentially when we switch from laser plus GPS, or laser only, to GPS only. So now we have a GPS-only region between these areas, and then we see something again, and we build another map in another local frame and control with respect to that; and this continues on and on. As we go along, we may notice similarities between the maps, and we do loop closure through map matching; by closing the loop we can take these multiple separate local frames and turn them into one globally consistent map, or a cluster of consistent maps.
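As a toy illustration of the frame bookkeeping just described — my own sketch, not the system's actual logic — here is how one might split an incoming scan stream into local-map segments, with GPS-only flight in the gaps where the laser is unusable.

```python
from dataclasses import dataclass, field

@dataclass
class LocalMapSegment:
    frame_id: int
    scans: list = field(default_factory=list)

def segment_local_maps(scan_stream, laser_usable):
    """Split a timestamped scan stream into local-map segments: a segment
    ends when the laser becomes unusable (sun interference, nothing in
    range) and a new frame starts when it recovers; the gap in between
    is flown on GPS alone."""
    segments, current = [], None
    for stamp, scan in scan_stream:
        if laser_usable(scan):
            if current is None:                      # laser just recovered:
                current = LocalMapSegment(frame_id=len(segments))
            current.scans.append((stamp, scan))
        else:
            if current is not None:                  # laser just degraded:
                segments.append(current)             # close this local map,
                current = None                       # continue on GPS only
    if current is not None:
        segments.append(current)
    return segments
```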
The way we do this is by estimating transforms — my laser pointer has died — and in particular those transforms are represented by these arrows on the bottom. The arrows represent the transforms between the local frames, so if we can estimate these transforms online, then we can keep track of the error that has accrued between where the robot is and all of these local frames. In our approach we do this by incorporating the transform into the state of an unscented Kalman filter. We estimate all the typical quantities one would estimate when doing this kind of pose and attitude estimation with a quadrotor, but we append this local-to-global transform to the state. We estimate it continuously — there are a few extra things we do that are in the paper but that I won't get into here — and because we estimate it continuously, we can transform from this consistent frame into the global frame, which also means I can go from the global frame back into the consistent frame. What that means is that we can plan a path in the global frame, and that path will actually be changing in the navigation frame, but even as it changes we handle that through this recursive formulation. Let me use a video to illustrate what I mean.

This is what you saw a moment ago: as the robot takes off it has laser only and it's building a local map. You can see the quality of the estimate — the covariance — pictured by this magenta ellipsoid. As it rotates, it now sees these trees; there's not enough structure there for it to localize, so it's GPS only. Now it rotates further and sees the building — this is another local frame. So we're talking about two local frames now. Once we enter the building, we keep that same local frame, because the laser works continuously during that period. Let me fast forward. What you're seeing here — let me explain — is the transform of the local frame into the global frame via that estimate, and that is what's in purple, pinkish; the green is the GPS in the global frame. It would not be terribly meaningful — you'll see in a second — to watch the drifting frame itself over time. But what you're seeing is that if I take the current estimate and push it through this transform, I can always go from the global frame into the local frame and from the local into the global. So I have this bijective mapping, and what I can do is take my plans in the global frame — say, I want to go here — and transform them into my local frame. Over time that plan will change as the state estimate changes, so even though I've asked the vehicle to fly a straight line, that straight line will change with the estimate. The key insight is that we defer any jumps — any lack of smoothness or consistency — away from the control. We don't want to push that into the control; we want to handle it in the estimator. If we control with respect to this consistent frame, then we implicitly handle all of those issues, so the vehicle is able to fly consistently without faltering, and that's the goal. In the end, if we take all of this and transform it into the 3D map, this is the consistent global 3D map.
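To make that bijective mapping concrete, here is a small 2D sketch of the two operations the estimator enables: composing the pose in the drifting navigation frame with the estimated navigation-to-global transform, and pulling a goal given in the global frame back into the navigation frame for control. The SE(2) simplification, the numbers, and the names are illustrative assumptions.

```python
import numpy as np

def se2(x, y, yaw):
    """Homogeneous transform for a 2D pose (a real system would use SE(3))."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Assumed quantities, normally maintained by the filter:
T_global_nav = se2(12.0, -3.0, 0.15)   # estimated nav-frame -> global transform
pose_nav     = se2(2.0, 1.0, 0.7)      # current pose in the drifting nav frame

# 1. Where is the robot in the global frame? Compose the two transforms.
pose_global = T_global_nav @ pose_nav

# 2. A goal specified in the global frame is re-expressed in the nav frame,
#    so the controller only ever tracks targets in the smooth local frame.
goal_global = np.array([20.0, 5.0, 1.0])             # homogeneous point
goal_nav = np.linalg.solve(T_global_nav, goal_global)

# As the estimate of T_global_nav is refined, goal_nav shifts; any "jump"
# shows up in the plan, not in the frame the vehicle is controlling in.
```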
Just to illustrate what I mean: up here on the left we have the navigation frame — that consistent frame that is always there, but drifting. We have the global frame, which is consistent with GPS. This is the navigation frame transformed, via this time-varying mapping, into the global frame — we would expect them to be consistent — and then we have a comparison. The navigation frame is, in a lot of ways, just like dead-reckoned odometry: if I always control with respect to my dead-reckoned odometry, but I know the transform from the global frame into that dead-reckoned frame, then I can always transform my paths or plans into it. Additionally, we can look at the position, and at the quality of the position and velocity estimates over time, and we take all of this into account in our controllers: we schedule our gains based on the quality of the state estimate, so when the state estimate is quite poor, the gains are less stiff.
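A minimal sketch of that kind of gain scheduling, assuming the estimator exposes a position covariance; the scaling law, thresholds, and floor value here are illustrative, not the ones used on the vehicle.

```python
import numpy as np

def scheduled_gains(kp_nominal, kd_nominal, pos_cov,
                    sigma_good=0.05, sigma_poor=1.0, floor=0.2):
    """Soften PD gains as the position estimate degrades.

    pos_cov is the 3x3 position covariance from the state estimator. Gains
    scale from full stiffness (per-axis std-dev <= sigma_good metres) down
    to `floor` of nominal (>= sigma_poor, e.g. GPS-only flight)."""
    sigma = np.sqrt(np.trace(pos_cov) / 3.0)        # mean per-axis std-dev
    quality = np.clip((sigma_poor - sigma) / (sigma_poor - sigma_good), 0.0, 1.0)
    scale = floor + (1.0 - floor) * quality         # never drop gains to zero
    return kp_nominal * scale, kd_nominal * scale
```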
Here is a slightly different environment — this is at a fire training center. The vehicle takes off, flies inside, goes across multiple stories, and then lands. Here it's sped up a good bit. You can see that it is handling the fact that it's jumping between these multiple frames. Everything, as I said before, has been transformed into a globally consistent frame so that you can observe it and make sense of it. The map matching and merging become apparent once we get outside: you can see that there are separate maps being tied together loosely, and as we come around, GPS is all we have, so our control becomes more loose; then we have mapped the environment, and it lands.

So I am out of time, but I have one last slide. Today, very quickly, I talked about these three things, and really what I talked about were topics that extend or build upon autonomous navigation and try to deal with the challenges of operating in 3D space with 2D or limited field-of-view sensors and limited computation — all due to the fact that micro aerial vehicles have this difficult balance between available power and what you can put on board. There are some papers associated with the work I discussed: the first is the work associated with the trip to Japan, the second is the indoor-outdoor work I just talked about, and the third is the exploration approach. Again I want to note that Shaojie is co-advised by Vijay Kumar at Penn, and I want to thank my funders. At this point, if you have any questions — thank you, thank you.

[Audience question.] So the question is: how does the vehicle operate when you have possible occlusions or obscurants in the environment that are kicked up by your propellers? The simple answer is that the videos you saw were in fact such environments. The video of the earthquake-damaged building was certainly very dusty — in fact things were falling off the walls as the robot was flying around — so yes, that was dusty; not so dusty that you couldn't see through it, but dusty. The fire training center was certainly dusty, with a lot of stuff blowing around, and the other places were nominally dusty, the way an indoor environment is dusty. The issue really comes down to two things. First, can your laser read through it? We've tried to fly through smoke, and unless you have a multi-return laser you're going to have a lot of trouble there. Second, are these things significant enough to show up in your map? In the context of the dust I talked about, it's dynamic — it's moving — which means it doesn't build up enough evidence of consistent observation in the same location to appear in the map.

[Audience question.] Yes, understood. I'll jump back very quickly. So this is the building in Japan that was damaged by the earthquake — this is the exterior, the interior; these are the collaborators that I didn't mention, this is from another presentation. What you're asking is whether we operated in environments like the one on the lower left — yeah, that's where some of the videos were shot. Though I may be using the term incorrectly, if that's what you're getting at. No, it's simply a reprojection statement — let me bring up the cartoon. The requirement is that the vehicle sees the same thing after I apply the transform: if I'm at this attitude and I scan a wall and extract it, and then I'm at this other attitude and I see the same wall, the assumption is that under that transformation the observations agree. Fundamentally that means we are operating in a structured 2.5D environment. Some of our work right now is on completely removing that assumption, but that requires going to vision-based methods rather than laser-based methods. That is the fundamental requirement here, and it is a limitation.

[Audience question.] I'm not sure exactly what you mean by that. We have considered longer trajectories — I'm just going to slowly pan through, across multiple stories: this is going down a long hallway, going back up, and in particular this area right here as I come around — this is loop closure here, you see. So the question is: what is your source of orientation? There is of course going to be some drift without loop closure, and if your source of orientation error is your magnetometer, then you'll see drift consistent with that signal — but the magnetometer is not used indoors, so here the orientation is a function of the laser and the IMU. [Audience question.] Yes, so the secret is in this — this is the ICRA paper that I pointed to, and we've talked about it elsewhere; we're really building on a lot of existing literature on this topic. You're talking about attitude estimation for these types of systems, and the main things you want to do are: we're using laser and IMU, fusing the IMU through the process model and the laser through the measurement model, and you want to estimate your accelerometer biases and your gyro or attitude biases online — but this is a classic problem. What you're seeing is the accumulation of yaw error through just that filter, and that filter will accumulate error over time without loop closure.
Yes, absolutely — and if you're using a camera then that would be good too, but you need something that externally observes the world; an IMU alone will drift, unless you have something like a ring laser gyro or something very, very nice. Thank you, thank you.