Host: Our speaker today is Dr. Dylan Shell from Texas A&M. [Several of the opening remarks are inaudible.] I was initially skeptical, but his research area is hard to pin down with a single label; I think the interesting thread is the focus on inspiration from nature, whether that comes from physics or biology. He has also done public work, like a show in Texas, a big outdoor Shakespeare production, with robots flying over the audience while everyone was terrified for their lives. His research has been highly regarded and rewarded: he has a number of teaching and service awards, and research awards as well, which is a rare balance, including a CAREER award from the National Science Foundation. What I really like about Dylan's work, though, is that he makes remarkably hard things look easy. Some people just pick a massive problem and grind away at it; he is extremely good at picking the right problem and doing elegant work on it, and that's why it is a joy. So we are very happy to have you here. We should also note that he is probably one of the contenders for best-dressed among the robotics researchers in the audience.

Dylan Shell: Thank you, Magnus, for setting the bar very high; in the other case, being the best-dressed roboticist is a pretty low bar, actually. So, as it turns out, privacy is going to be a crucial portion of what I'm going to be talking about today. It's a problem we started thinking about in order to think about multiple robots, and we haven't even gotten to the multiple-robot part yet; so I'm going to talk about things that involve a single robot, or a single robot and an adversary or an observer. There are multiple agents involved, as you'll see, but so far only one of them is a robot, not because we're not interested in multi-robot cases, but just because we haven't gotten there yet. And I should say one of the dilemmas is that I say "privacy" one way whereas other people say it the other, so you'll have to interpret: potato, potahto; tomato, tomahto. I'm sorry, I haven't been able to change my speech patterns.

Like most of you, I suspect, I follow the media, and there's so much about robots these days. I don't know how many of you were following the announcement in the middle of last year, when it turned out that iRobot had this great idea: their robot vacuums don't just do random walks anymore, they now build maps, and the company realized it could monetize that. We have these maps; we could sell that data. Here's a picture of Colin Angle looking particularly pleased with his vacuum cleaner; there was a real privacy dust-up over the idea of selling Roomba-made maps. As you might imagine, although I don't know why iRobot didn't imagine it,
this caused a storm online. Here are some of the comments people posted. It turns out "data breach is the ultimate burglar tool", and the great thing is, "if you own a robot you're probably fairly wealthy, so that's what makes yours a nice home to pick"; "you just convinced me to never buy one of these robots"; "if the information is there, it can be hacked by home invaders". My personal favorite is this one, which says "this type of data will end up in the hands of others: publicly exposed, crooks, deviants and governments". So that tells you something about the sort of person who wrote that.

I'm interested in the question of how we think about the trade-off between utility and privacy. The robot vacuum cleaners are more efficient because they run SLAM algorithms, so they know where they are; you gain some utility from that, while the robot is collecting information which is profitable, or perhaps information you don't want to share. How could we think about robots where we want to limit what they can know? What if, as roboticists, instead of treating uncertainty as the thing which is always our enemy to overcome, the thing we always try to minimize or eliminate, we could instead cultivate ignorance? That's the message I want to send: let's try to cultivate ignorance. And that puts me in a proud tradition; there's a whole market of people saying that ignorance is valuable.

Let me give a precise example where having a robot that is ignorant is exactly the right thing; I want a nice crisp example, and the best one I know of is the panda tracking problem, so let me explain it. You've got a panda, and, as a reminder, the artwork comes from the original paper, published by Jason O'Kane in 2008, before we started collaborating on this problem, so you'll see some of his great artistic renderings here. You have a panda; the panda is a point, and it lives in a plane, so it can move around the plane; and there's a robot who is caring for the panda, who's tracking the panda.
It follows the panda using its sensor, and it communicates information about the status of the panda back to some base station. And there's an adversary. This adversary can eavesdrop on the communication channel, or may directly attack the robot and essentially compromise its sensors, or any of these things; in short, the adversary will stop at nothing. We're interested in monitoring the state of the panda: we want to know something about whether our panda is okay, whether it's alive. But we need to be careful, because we don't want to know the precise location of the panda, to track the panda too well, because if we do, a poacher can either grab that information from the communication channel, or just go directly to where the robot is, search near it, figure out where the panda is, and do his worst.

So the model goes something like this. At each time step we've got a panda, and we represent the robot's knowledge of the panda as some region in space. As the panda moves, we don't know much about the panda's motion, but we know a bound on its velocity; so if we had some region which was a disk, the disk grows in size. And the robot has a peculiar sensor, a quadrant sensor: it has these quadrants associated with it, and it does only one thing, which is to tell you which quadrant the panda is in. So in this case you get a sensor reading that tells you the panda is in the northeast quadrant, and you can eliminate the other possibilities: you intersect, and you know the panda is in this region.

OK, so let's make this task well specified. Suppose you've got some region that describes the panda; we're going to call that an I-state, an information state (or a belief state, or a knowledge state). We have a tracking disk, a disk with a large radius, and we want to ensure that our knowledge of the panda fits within this disk: give me some notion of where the panda is, and when I say "where", I mean precise to within the tracking disk. In addition, I'm going to impose a lower bound, a privacy disk: whatever I-state you have (this green region) should be large enough to contain a privacy disk. So you can see this I-state satisfies both of those constraints. And then this third radius tells us the speed of the panda: that's how much the region grows at each point in time. That's the panda tracking problem.
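To make the update concrete, here is a minimal sketch in Python (my reconstruction, not code from the paper): the I-state is approximated by a finite set of sample points, each step dilates it by the panda's speed bound and intersects it with the quadrant the sensor reported, and the two stipulations are checked with diameter bounds, which are only crude proxies for the actual disk-containment conditions.

```python
import math

SPEED = 1.0    # bound on how far the panda can move per time step
R_TRACK = 6.0  # tracking radius: the I-state must fit inside a disk this size
R_PRIV = 1.5   # privacy radius: the I-state must contain a disk this size

def dilate(points, r=SPEED, res=4):
    """Grow the sampled region by r: a discretized Minkowski sum with a disk."""
    offs = [(i * r / res, j * r / res)
            for i in range(-res, res + 1)
            for j in range(-res, res + 1)
            if i * i + j * j <= res * res]
    return {(round(x + dx, 3), round(y + dy, 3))
            for (x, y) in points for (dx, dy) in offs}

def quadrant(robot, p):
    """The sensor's only output: which quadrant, relative to the robot, holds p?"""
    return (p[0] >= robot[0], p[1] >= robot[1])   # (True, True) means northeast

def update(istate, robot, reading):
    """One time step: dilate by the speed bound, keep only the sensed quadrant."""
    return {p for p in dilate(istate) if quadrant(robot, p) == reading}

def diameter(istate):
    return max(math.dist(p, q) for p in istate for q in istate)

def fits_tracking(istate):   # necessary (not sufficient) for the upper bound
    return diameter(istate) <= 2 * R_TRACK

def meets_privacy(istate):   # rough stand-in for containing a privacy disk
    return diameter(istate) >= 2 * R_PRIV
```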
Let's do a few examples. Here's a panda; I put the I-state here, and I put the two radii in the bottom right-hand corner; and here's the robot. If the robot decides to be entirely aloof, this is what happens: the robot stays away from the panda, we don't know exactly where the panda is, and our belief about the panda grows; the robot again stays away, the belief grows again, and you can see that eventually we violate the tracking disk. So you can't just adopt a hands-off approach.

Let's go back and try again. Now you say: OK, here's the panda, and as its region starts to grow I'm going to come in close and make sure I have some sense of where the panda is. Well, in this case I can be unlucky, because it turns out the panda was in the sliver I shaved off the disk, and that sliver is much too small to satisfy my privacy stipulation. So that didn't work either.

What I need is a genuine panda-tracking strategy: as the region grows, I need to be able to put the robot at a location where the smallest preimage of the sensor (the smallest of the regions it can induce) is large enough to contain the privacy disk, and the largest preimage is no larger than the tracking disk. That's the problem in two dimensions, and you can see the whole idea: we're constraining what the robot can know, to satisfy this privacy disk, and the robot is actively positioning itself to cultivate this type of uncertainty, this type of ignorance.

The question you might ask is: OK, I've seen tracking problems; I've built estimators and controllers that try to minimize uncertainty; what happens if I have both of these constraints? Are they actually always satisfiable? In the original paper O'Kane proposed a strategy, and a question you might ask is whether that strategy always works, for all tracking bounds and privacy bounds. The answer is obviously no, because one is a lower bound and the other an upper bound: if I make the upper bound smaller than the lower bound, it's game over before we even begin. But in the non-trivial cases, can we satisfy these bounds indefinitely?
The other question we're interested in: this quadrant sensor is a bizarre thing. I've never seen it in any other paper; it's not something you can buy off the shelf from SICK. So how do the sensing capabilities affect the robot's ability to achieve this task? I've already hinted at that by talking about preimages, and we'll talk a bit more about it.

I'm going to report some results from a paper we published a couple of years ago on possibility results for privacy-preserving tracking, and to do that I'm going to make the problem a little simpler. It's the same setup, we've got a panda, but the panda now lives on a line, so we have a one-dimensional variation; everything else carries through exactly the same. I-states are now intervals; the interval grows as the panda moves; I put the robot somewhere; and I sense using the one-dimensional version of the sensor, a side sensor: is the panda to the left or to the right of the robot? I learn that, and so on. The generalization is exactly the same, except that instead of disks I have intervals: a tracking interval with a tracking radius, and one for the privacy radius. We call a problem where the robot can always satisfy the tracking constraint "trackable", and one where it can always satisfy the privacy constraint "privacy-preserving".
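Here's a toy sketch of the one-dimensional step (again my own reconstruction, with invented names): the interval grows by the speed bound, the robot's sensing position splits it into a left and a right preimage, and a cut is safe only if both possible outcomes keep both stipulations.

```python
def grow(istate, speed=1.0):
    """The panda may have moved: the interval I-state dilates by the speed bound."""
    lo, hi = istate
    return (lo - speed, hi + speed)

def preimages_1d(istate, x):
    """The left/right sensor at position x splits the I-state into two preimages."""
    lo, hi = istate
    return (lo, x), (x, hi)

def safe_cut(istate, rho, tau, speed=1.0, samples=200):
    """Search for a sensing position where BOTH outcomes keep the stipulations:
    each piece contains a privacy interval (length >= 2*rho) and fits inside a
    tracking interval (length <= 2*tau). Returns None if no cut works."""
    lo, hi = grow(istate, speed)
    for i in range(1, samples):
        x = lo + (hi - lo) * i / samples
        left, right = x - lo, hi - x
        if all(2 * rho <= piece <= 2 * tau for piece in (left, right)):
            return x
    return None
```

For instance, safe_cut((0.0, 6.0), rho=1.0, tau=2.0) grows the interval to (-1, 7), of length 8, and must split it into two pieces whose lengths both lie in [2, 4]; the only such cut is at 3.0, which is what it returns, and if the grown interval were any longer than 8 there would be no safe cut at all.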
OK, so now we can ask the question I posed before, but more precisely: is this task always achievable? I'm not just going to give away the answer; let's see if we can think it through. I want to think about the space of all panda tracking problems, and to do that I lay out the privacy radius along the horizontal axis (small on the left, larger to the right) and the tracking radius on the vertical axis. I already said the privacy radius obviously has to be smaller than the tracking radius, so we're not interested in problems inside the white region, only the non-trivial ones. The question is what this blue region looks like: is it all grey, is it all red, is it all green? Any thoughts?

It's a mixture. It turns out there are regions where, if you start with those particular bounds, there's a strategy I can give you that always tracks the panda: you satisfy the privacy stipulation and the tracking stipulation just fine. There are also two disconnected grey regions, one here and one here, containing instances where you cannot guarantee the panda will remain safe. Why is that? Down in the little triangle, the panda moves too quickly: imagine a panda whose region is growing and growing while you try to split that region, but it grows faster than you can split it; the panda essentially escapes, and you can't rein the belief state in fast enough. The region up here is where you hit exactly the limit I was showing before: you can't split too small, because you might get an undersized preimage, so you have to wait until just the right moment; and there is a set of cases where no strategy works at all. And then there's this red band, consisting of instances that depend specifically on the initial I-state: those are sensitive to the initial belief, whereas the others, as long as you start with a feasible initial belief, you can always solve.

[Audience questions; partly inaudible.] Yes, so if I wake up and see the panda over there, I move this way, so I sit on the other side; and then I have to move far enough over, because sensing right at the edge would give away too small a sliver, so I have to move deep enough in. Exactly: you have to overshoot enough, and the discrete time steps matter, because you need to move enough within one time step.

[Another question.] The yellowish-brown region is just the fixed upper bound, so that's fixed; but you're exactly right, there is another parameter, which I was kind of hoping to sneak under the rug: all of this depends on the speed of the panda, which we've normalized to unit speed. If you double the speed of the panda, the game is exactly the same provided you double everything; the whole thing scales, so we've just represented it with unit panda speed. And in our setting the robot can move anywhere; we don't bound how fast it can move.

Another question we were interested in, at least: I should say we're not really interested in tracking one-dimensional pandas, of course (to the shock of the audience and viewers). We were interested in understanding, for example, whether the previous protocol would always work, whether such a protocol even exists, and so on; and we're interested in generalizing this to higher dimensions.
One of the ways we want to make progress there is to think about how the sensor influences what the robot can do. It's not exactly trivial how the sensor and the dimensionality come together, but I'll describe that in a little bit; just hang on for a second. Let's look at this plot. It's the plot I just showed, but now with a little legend showing the sensor we had for this robot: the robot is the dot, and it only learns left or right. If I want a more powerful sensor, I could do something like this. Previously, this was my I-state, I put the robot there, the panda is on the left-hand side, so I get the reading that it's on that side. A more powerful sensor might let me put down two dots, and whichever region the panda is in, that's the reading I get. This second sensor is strictly more powerful than the first, because I can take one of the dots and put it at infinity and learn nothing from it; so anything the first sensor can do, the second can do as well.

If you analyze the game with two dots, you get something not entirely unexpected: the same layout for all the panda problems, except that a region which was previously grey has now become feasible. That makes sense, right? If you make a sensor that is more powerful, something that was previously impossible should become possible, and that's what happens; it turns out it's just in the little lower triangle. I'll blow that triangle up and consider a more powerful sensor still, three dots, and you get another little step there. I can repeat this process, giving you a sensor with any number of dots, and eventually you get a self-similar, fractal structure in this corner. Now that surprised me, because I thought if you make the sensor really powerful, surely you win the game; it's surprising that there's any grey left over at all, or at least that's what I thought initially. So we summed up the theorem with a pithy title, something along the lines of "omniscience grants you a mere pittance": you can sense everything, and it barely helps you in this game. On reflection that's not so surprising, because the task has been constructed so that knowing too much is not a good thing.
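The multi-dot sensor drops straight into the one-dimensional sketch from before; here's a hypothetical generalization (my names again, not the paper's):

```python
def preimages_k(istate, cuts, speed=1.0):
    """A k-dot sensor on the line: the cut points partition the grown interval
    into up to k+1 cells, and the reading names the cell holding the panda."""
    lo, hi = istate[0] - speed, istate[1] + speed
    pts = [lo] + sorted(c for c in cuts if lo < c < hi) + [hi]
    return [(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]

def cuts_are_safe(istate, cuts, rho, tau, speed=1.0):
    """Safe only if EVERY possible outcome keeps both stipulations: extra dots
    give finer control, but each resulting cell must still be privacy-sized."""
    return all(2 * rho <= (b - a) <= 2 * tau
               for (a, b) in preimages_k(istate, cuts, speed))
```

The theorem's catch is visible in the second function: more dots never hurt, but every cell you might land in still has to stay above the privacy size, and in the stubborn grey corners of the feasibility map no placement ever manages that.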
What is additionally surprising, to me at least, is that those red regions, the ones that depend on the initial-state information, don't go away: the whole red region is still there, and we even see some additional red dots appear inside. So the capability to achieve privacy-preserving tracking in one dimension is bounded; you can't do better than that. And it turns out that if you're interested in a higher-dimensional problem, there's a reduction from the higher-dimensional problem to the low-dimensional one, in the following form. If I'm tracking a panda in two dimensions, I have a quadrant sensor, which tells me something about a quadrant. Now think about a panda living on a one-dimensional line inside that plane: what matters is the number of places the sensor cuts the line. So the line-cutting sensor, although it seems like a very contrived generalization, is exactly the sensor that lets you argue as follows. Suppose there were a strategy making the high-dimensional problem trackable for every instance; say O'Kane had given me a strategy that worked in two dimensions in all cases. Then the one-dimensional sub-problem inside it would also have to be trackable in all cases, but I've just shown that can't be true, so I get a contradiction. Hence this relationship from the n-dimensional problem down to the one-dimensional problem, and the thing that is most surprising, to me at least, is that the coupling runs in this bizarre way through the strange sensing capability.

I'm not going to spend the whole talk on pandas, of course; this is just an example of a problem where we want a robot that cultivates uncertainty. So let me sum up the panda part; that's the bad news done. We have problems where it's interesting to bound how little the robot needs to know in order to achieve a task (it needs to at least know the panda is in some region), but we also want to stipulate how much the robot can know, so that it can't know too much. And there's something interesting here: the achievability of these tasks depends on the controllability of the robot's knowledge, or belief, or information states. As a computer scientist I'm probably using the word "controllability" incorrectly, but you know what I mean: we have an information state and we're trying to keep it bounded in a particular way. And specifically, if you look at the theorems we have for talking about sensor dominance, there are a couple of them.
Mason and Erdmann have one; O'Kane did one with LaValle in his thesis work. They only talk about dominance as one sensor being strictly more powerful than another because it can simulate it. It turns out you need to modify that, because there are instances where the simulation learns too much information, and that's a bad thing.

So let's move away from pandas and come back to those comments from earlier. Notice that the red underlines in the text are not spelling mistakes; they're things I want to point out: "data breach", "will end up", "exposed", "if the information is there". We've reached a point where there's a level of skepticism about the claim "give me your private data, don't worry, I'll encrypt it and I won't leak it"; people no longer believe that, it seems. That's why I think it's meaningful to ask how we can constrain what robots can know, and what I'm really interested in is how we can limit, by design, what the robot can know: I can give you a robot and say that by this design, no matter what you do with the robot, I can ensure it won't acquire some piece of information. That's the overarching goal, to make a strong claim on the robot's box that it is not going to divulge your secrets to Amazon or whoever pays the highest price.

Let's look at another example. This is my attempt at a foray into HRI; it's not very convincing, I'm sorry, but it's an example. We've got a little environment, and there's a person who is mobility-impaired, and he has a robot that helps him move around the house. The robot doesn't dictate how the person moves, but it's there along the way, and as a consequence, if this robot keeps track of information, it knows a lot about the activities of this particular person. So the question I'd like to ask is this: suppose you give me a filter, some representation that describes all the things the robot could know (you look at the sensors and tell me what they could tell you), and suppose I describe it as a discrete transition system, something like this. Then I give you a privacy stipulation: I tell you that I want knowledge states of types S1 and S2 to be indistinguishable; I don't want the robot to know the difference between those two things. Can you then take this filter and perform some coarsification, some reduction, so that when I give the robot the new filter, by design it cannot tell the difference between S1 and S2? That's the idea: produce a coarsification that satisfies these constraints, if one exists; one may not exist, and it may depend. You can also think of it as a coloring: I declare states to be the same color, and merge same-colored states. So let's go back to our example, where I've now added a rose bush, a puppy dog, and the person living in the house.
He's a horticulturalist, and he's interested in his rose garden, but being mobility-impaired he doesn't get to tend his roses all the time. On good days he goes out and waters the roses; and because he doesn't have only good days, he also has a garden service that comes around and waters them. But we mustn't water the roses twice; over-watering would kill the roses, and that would be bad. So I've constructed, or maybe constructed is too strong a word, contrived, an example where there's a piece of information the robot must know, information vital for the task: I need to know whether the robot or the person has gone into the front yard to water the flowers. In addition, I'm interested in preserving the privacy of this person. Shall we say it's Valentine's Day? Let's say that whatever happens in the master bedroom and the guest bedroom should not be information available to the robot. Specifically, I'm going to stipulate that I don't want the robot to know whether I was last in the master bedroom or the guest bedroom; I want those to be indistinguishable.

I write these as constraints. The first constraints are distinctions, "not equals": being in the living room should be distinguishable from being in the front yard; every time I'm in the front yard, that must be distinguished from all the other states. And whenever you think I could plausibly be in the master bedroom, you should also believe I could plausibly be in the guest bedroom: those two should always be indistinguishable. If you come to me and say, "What were you doing on the night of the fourteenth of February at eleven p.m.? I believe you were in the master bedroom", there should be an equally compelling story that says I was in the guest bedroom, and vice versa. That's the stipulation; so write both of those stipulations down. Now pay attention: I've locked the north patio door and opened the east patio door, and that's going to have ramifications for what the robot can know. One of the transition systems we get looks like this, where I've collapsed topologically equivalent regions: the whole yard is one region, these are the separate rooms, and these arrows are transitions; the robot observes transitions, so it knows where we are. The question is whether I can satisfy the stipulation: is there some reduction, some coloring, that satisfies it?

You might start by giving everything a different color. That certainly satisfies the front-yard stipulation, because if I see a transition from a blue state to a green state, I know someone went to water the roses; but of course it violates the privacy stipulation. Your first response is probably: fine, paint the two bedrooms the same color. Does that satisfy the stipulation? Not quite, because if I see a sequence of observations like purple, orange, then I can prove you were in the master bedroom (in the master bedroom with the candlestick, Colonel Mustard), so that's not strong enough; that won't work. But if I also color these others the same, now we're good:
every time I transition around here I see green, blue; green, blue, orange; orange, blue; and for any such subsequence I have plausible deniability, because I can't distinguish the two bedrooms. That's great: I found a satisfying reduction of the knowledge representation. Now, of course, I'm going to adjust the patio doors: I'll open the north one and close the east one, and now you get a structure that looks like that. Again we ask: does a coloring exist? And you say, well, I saw what you did, you started with everything different and then colored some things the same; let's just make a really coarse one. So I might start, whoops, with this one here, and you might think it actually does the job. In fact, in the paper it was given as an example of one that does the job, and it was only when we ran the code on it that we found the counterexample. The counterexample matters: if you see green, blue, you could have transitioned from the front yard to the living room, from the front yard to the master bedroom, or from the front yard to the back yard, but there is no way you transitioned to the guest bedroom. So this coloring mixes the master bedroom up with something, but it does not satisfy the specific stipulation I gave, that the two bedrooms be indistinguishable. And in fact, for this problem, with this slight change in the topological relationships, there is no coloring at all.
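The check the code was doing can be sketched like this (a toy re-implementation on a made-up house, not the paper's algorithm or layout): track the set of rooms consistent with the color sequence an observer has seen, and demand that the bedrooms always travel together while the front yard always stands alone.

```python
from collections import deque

ADJ = {   # hypothetical house topology: every room opens onto the living room
    "front":  ["living"],
    "living": ["front", "master", "guest", "yard"],
    "master": ["living"],
    "guest":  ["living"],
    "yard":   ["living"],
}
COLOR = {"front": "green", "living": "blue", "yard": "blue",
         "master": "orange", "guest": "orange"}

def stipulations_hold(adj, color, start="living"):
    """BFS over 'consistent sets': all rooms compatible with the colors seen so
    far (assuming, for simplicity, a known starting room). We require that
    (1) the front yard is never confused with anything else, and
    (2) the two bedrooms are always confusable with each other."""
    init = frozenset({start})
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if ("master" in s) != ("guest" in s):
            return False              # some color history pins down one bedroom
        if "front" in s and len(s) > 1:
            return False              # watering the roses must be unambiguous
        for c in set(color.values()): # one successor set per visible color
            t = frozenset(n for r in s for n in adj[r] if color[n] == c)
            if t and t not in seen:
                seen.add(t)
                queue.append(t)
    return True

print(stipulations_hold(ADJ, COLOR))  # True for this symmetric toy layout
```

In this symmetric toy house the bedrooms-merged coloring passes; the talk's house, where the patio doors give the two bedrooms asymmetric routes in and out, is precisely the kind of layout where this same check turns up a violating color history.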
So a natural question you could ask is: how hard is it to do this? It turns out that it's NP-hard, and that may not be surprising, because it looks like there's some graph coloring in it; we have this itch that when you see coloring you think NP-hard. I'm not actually saying it's NP-complete; I suspect it's much harder than just NP-hard. Why? Because of the reasoning I was using here: I had to look over sequences of colors, not just a pairwise coloring problem. So we know it's hard, and that's bad news. We do have an algorithm, but I won't bore you with it; it's a sub-optimal or exponential-time algorithm.

Let's summarize what we learned from this little example before moving on to the more interesting case. What I was effectively asking is: can we take the robot's filter and compress it so that it doesn't hold any interesting secrets? So that if all you have is the compressed version, you can divulge that information all day long and still satisfy the stipulations. It turns out that finding such a compression isn't easy, but the good news is that if you can find a filter like this, you can use a controller that uses just that filter, and that controller can divulge its information freely: you can write those colors to a log file and hand them to tech support, and they can stare at them all day long and never know which bedroom you were in; you can send them to an untrusted cloud for computation. And this is just one example of a case where we thought about reducing these representations.

But I feel like this misses something important, and perhaps you feel that too. These were what I'd call discrete problems, and, hashtag, useful ignorance isn't easy. Maybe we've asked for too much: maybe requiring that you never need to know any secrets is too hard a requirement. We've already seen this: the panda tracker does not always satisfy the constraints; the robot has to move so as to choose circumstances in which the constraints are met. It has to cultivate that ignorance; it isn't just sufficiently ignorant by itself. And we saw it here as well: I had one case with a solution and one without, and the only difference was that I opened one door and closed the other. If the robot were not being guided by the hand but could actually control the circumstances, it could simply choose never to go through the north door and only occasionally through the east one. So control is really important in the cultivation of ignorance; I hinted at this when I said that the achievability of bounds on a robot's knowledge depends on controllability. Let's give an example of that.

Here we have a robot (please forgive the grid, it's just a simple example), and I want it to get to the goal location; the bricks are obstacles. And I pose constraints like this: as you move from the bottom left-hand corner to the upper right-hand corner, I don't want the robot to be able to distinguish which of these states it has been at; they must be indistinguishable, like our bedrooms. And more than that, I have a non-equality constraint, like the one for our front yard: never be confused about those two blocks, you have to know which one you're in; but you have to be confused about these three blocks.
So that problem is not solvable as it stands. But let me change the scenario a little: I put a banana peel right there, and I assume you understand how a banana peel works, you've seen cartoons. You step over a banana peel: maybe you clear it, maybe you slip back; there's some non-determinism, and you don't know exactly how it will go. The second thing I tell you is that if you try to walk into a wall, you simply stay where you are. Given that, now you can solve the problem. You end up with a strategy that looks like this: move the robot, step over the banana peel (maybe you went forward, maybe you slipped back), now issue the motor command to go down, and now you're in one of three possible places. Great: now move to the right, and you have just satisfied the indistinguishability constraint. Wander over here; now step up against the wall, step up against it again, step over, and you're done.

So we do see problems like this, but we don't often see them posed this way. They tell you in planning, in motion planning, that narrow corridors are hard. Here I gave you a really wide corridor, and you had to go all the way along the wide corridor, and yet there was a narrow corridor right there, because what we're dealing with are constraints on the belief, not just on the underlying states of the system.
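The belief bookkeeping in that grid story is easy to write down; here's a toy version (my own encoding, nothing from the talk's implementation), where the belief is a set of possible cells and the two quirks, slipping on the peel and walls that block, grow and squash that set respectively:

```python
def move(belief, d, free, banana=frozenset()):
    """Nondeterministic belief update on a grid: stepping from a banana cell
    may succeed or slip back (both outcomes stay possible); walking into a
    wall leaves you where you are (which collapses possibilities)."""
    out = set()
    for c in belief:
        nxt = (c[0] + d[0], c[1] + d[1])
        if nxt not in free:
            out.add(c)               # blocked by the wall: stay put
        elif c in banana:
            out.update({c, nxt})     # slipped back or stepped over
        else:
            out.add(nxt)
    return out

free = {(x, 0) for x in range(5)}            # a 1x5 corridor, wall beyond x=4
b = {(0, 0)}
b = move(b, (1, 0), free, banana={(1, 0)})   # {(1,0)}: walked up to the peel
b = move(b, (1, 0), free, banana={(1, 0)})   # {(1,0),(2,0)}: belief grows
for _ in range(3):
    b = move(b, (1, 0), free)                # keep pushing toward the far wall
print(b)                                     # {(4, 0)}: the wall squashed belief
```

The point of the example survives the simplification: the banana peel is the tool for deliberately injecting uncertainty into the belief, and the wall is the tool for removing it, so a plan can steer the belief set itself through the stipulated corridor.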
So how do we do this in a more general way, rather than me just giving you cartoons? How do we talk about it? The first thing is to model the observer, and I've written "observer slash adversary" because they are both: there's an observer, who may be someone who genuinely needs to observe something, like our garden service, for the robot to be useful; or an adversary, who is trying to figure out what happened on the evening of the fourteenth of February. They're both of those things, and we're going to build a filter to represent what they know. I've got my adversary here; does anyone know who this person is? That's Hans Blix. Does the name mean anything to anyone? The weapons inspector, good; so you see what the next example will be. The robot is going to interact with the world by sending actions and getting observations, and there's going to be an inspector, or an adversary, who looks at the stream of interactions between the robot and the world. We'll construct cases where the robot has to know something to achieve the goal, and where what the robot divulges is a little less than everything it knows; otherwise we can't talk about useful secrets. We want the robot to keep a secret for itself while having a reduced channel here. That channel is a filter; these are rose-colored glasses, a projection down in color space.

All right. This work is all done in the framework of p-graphs, which I'm not assuming any of you are aware of; it's work that Jason and I have been doing to think about these design problems, and the great thing is that you can express worlds, plans, and observers all in the same language, and filters too, like the ones over here: you can model them as label maps. I won't bore you with the details; it's just the vocabulary that lets us capture all of these in a single computational framework.

Let's look at this through an example. Suppose I'm a state, I have an atomic processing facility, a nuclear facility, and I no longer want to be considered a pariah on the world stage, so I say I will permit a weapons inspector to come inspect my facility. The trouble is that my facility contains proprietary information. Are there any nuclear engineers in the audience? Good, then I can tell you that there are two types of reactors, the pebble-bed reactor and the breeder reactor, and that's about all I know about nuclear anything. They have different internal arrangements: from outside the building you can't tell which is which, but inside, the details differ, and pebble-bed reactors are distinctive because they give off an eerie blue glow; there's an observation that tells you you're in a pebble-bed facility. What we're going to do is deploy our robot to carry out the inspection, and we want the inspection to divulge whether the facility is producing weapons-grade uranium or not: down here, at these question marks, the robot makes an observation and determines whether everything checks out or something suspicious is going on. We want the robot to tell us whether the facility is green or orange, but crucially not to divulge whether it's a pebble-bed facility or a breeder reactor; it has to hide that from the observer, because otherwise the country simply won't let the inspector, Hans Blix, inspect anything. We protect the proprietary information, but the inspection must still have utility: it has to reveal the uranium's grade.

If I pose this as a plain planning problem (I put the robot in this world; we don't know which world, pebble-bed or breeder) and ask the robot to generate a normal plan, it says: I'm here, there's the goal, done. That's no good, because there was no specification on knowledge states at all. What I really need to say is that there are four possible worlds, pebble-bed and breeder, dirty and clean, and I'm interested in distinguishing the class of green worlds from orange ones, and no more: I want the other equivalence class to be obscured. You can write a specification for that: with symbols for the top and bottom rows, you can say that if I'm in the top and it's green, or in the top and it's orange, then I know the grade, but it must remain plausible that I believe the opposite type of facility; that whole clause must hold.
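That stipulation has a compact rendering; here's my own toy encoding (not the paper's notation), where a belief is a set of (reactor type, grade) pairs:

```python
# four possible worlds: reactor type x uranium grade
WORLDS = {("pebble", "clean"), ("pebble", "dirty"),
          ("breeder", "clean"), ("breeder", "dirty")}

def verdict_ok(belief):
    """The inspection must pin down the grade (green vs orange) while leaving
    BOTH reactor types plausible in the final belief."""
    grades = {g for (_, g) in belief}
    types = {t for (t, _) in belief}
    return len(grades) == 1 and types == {"pebble", "breeder"}

print(verdict_ok({("pebble", "dirty"), ("breeder", "dirty")}))  # True
print(verdict_ok({("pebble", "dirty")}))                        # False: type leaked
```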
Now, if you ask the robot to build a plan for that, it builds one that looks like this. The plan is conditional on the sensor reading it receives up there: if it sees blue, it takes the blue edge; if it sees red, it takes the red edge. Going back, you can see what happens along the blue edge: you have to come down, because you can't pass the no-entry sign; you come down here and take a sensor reading; if you get here and there's no blue, you're free to go across, and you come down like this. Blix is really happy with this plan, because if you run it, great news: you learn whether the facility is orange or green, and no matter which type of facility it is, the observation gets made either here or here. It turns out the adversary is also really happy, because the adversary can count how many steps it took you to exit, and thereby knows whether you went through the red edge or the blue edge, and knows, as a consequence, which type of reactor you were in. So that's no good either. I need to strengthen my specification: some of these things need to be indistinguishable; if I believe "pebble-bed and green", I must also consider "breeder and green" plausible, and vice versa. So then you say: here's what I'll do, I'll sneakily go over there, double back, and come back here. Blix is happy with that, the adversary can no longer count, and we all seem good.

Let me talk a little about what this problem looks like formally, since I've shown you a lot of cartoons. Basically, there's the question of whether a plan satisfies the specification. You have to describe the world and the plan; you have to give a label map, that's the sunglasses; and you have to describe the set of plans disclosed to the adversary; then we write down the specification, which is what I've shown. So: here's a planning problem; that's the plan I showed you; and a label map tells me how the actions and observations map to what the observer receives. Here, all actions map to the same thing, except perhaps the exit, and so on and so forth; all the other observations map to green, but orange maps to orange, so on the far side of his sunglasses Blix can still see "I saw orange; we're in trouble here".
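A label map is the simplest piece of the machinery, just a projection applied to the robot's interaction stream; here's a minimal sketch with invented event names:

```python
# hypothetical label map: what the observer's 'sunglasses' let through
LABEL = {
    "go-north": "move", "go-south": "move", "go-east": "move",
    "exit": "exit",                        # the exit action stays visible
    "glow-blue": "ok", "no-glow": "ok",    # the reactor-type cue is masked
    "clean": "green", "dirty": "orange",   # the inspection verdict passes through
}

def observer_view(stream):
    """Project the robot's action/observation stream through the label map."""
    return [LABEL[e] for e in stream]

print(observer_view(["go-north", "glow-blue", "go-south", "clean", "exit"]))
# ['move', 'ok', 'move', 'green', 'exit']
```

Notice that even with the reactor cue masked, the length and shape of the projected stream can still leak information; that is exactly the step-counting attack, and why the disclosed-plans set matters next.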
That's an example of a specification. Now, what is this set D? This turned out to be the trickiest part of the whole theory for us. In the particular case I've given, we took the disclosed information to be the plan itself. When I said the adversary could count, that's because the adversary knew that red meant doing one thing and blue another; it was using the label map together with this disclosed plan to infer that it must be the top branch being taken. And in this case everything looks good, but it turns out it actually doesn't work if you disclose this particular plan to the adversary. I didn't believe it; I ran the code, the code found a counterexample, and then I spent two days debugging before concluding that the code was correct and the example was wrong. Let me show you why it's tricky. What happens is that if the facility is orange, you learn that one type of facility is orange before you learn it for the other type: looking at the sensing stream the robot emits, in the red case the vital information arrives two steps earlier, and the fact that the information becomes available earlier lets the adversary figure out exactly which environment you're in. It turns out you need a plan that looks like this: you have to wiggle down here, so that the decisive observation arrives at the same time in both cases. That illustrates how tricky it can be to get this right.

I've got a few minutes left, so I don't want to go too deep into the details, other than to say there's a correspondence we build between what the robot knows, the robot's filter, and the world. These things are finite, so you build the correspondence as a graph, and on that correspondence you check whether the stipulations hold. It tells you what the robot knows about the world, what the adversary or observer knows about the world, and, crucially, what the adversary or observer knows about what the robot knows about the world; that second-order information is encoded in this relationship. I gave an example with stipulations only on P and on W; let's not worry too much about the details.

It turns out you can also ask: what if I give the adversary different pieces of information? In the previous case I had a plan and I disclosed that plan, and the adversary looked at it and said "you must be on the blue edge or the red edge", precisely because it had the plan. I can construct a weaker adversary by disclosing more possible plans: maybe they're executing plan P or plan Q; or I know nothing, so it could be any execution in the world; or they're executing some plan, but I don't know which. You can imagine taking the set of all possible plans and disclosing that. Now, even though the worlds are finite, plans can loop, so this is an infinite set, and obviously I can't call a solver with an infinite set. But there's a finite representation, something called a plan closure, which handles it, and it's a very simple idea: you take two different plans, paste them together, and join them where they meet in the world. That's the plan closure; it generates all possible executions of the plans, and interestingly it's not a plan itself, because it has loops. This whole definition depends on a language interpretation of what's going on in the world: I haven't said much about p-graphs, but they're built on a formal-language view of planning, where filters, planning problems, and plans are all represented the same way. How do we solve the problem? Everything gets coupled together into a single discrete transition system; that gives us a Kripke structure, which we can check using a computational tree logic (CTL) model checker. There's some work in making that actually go.
But if I stopped here, having told you only about satisfaction, this would be the first talk ever given on planning problems where all I told you was how to recognize a plan rather than how to find one. We usually care about searching for plans, and it turns out you can encode that as well, though there's some trickiness: you don't want to search over the set of all possible plans; you can restrict attention to a kernel of plans, which we call the set of homomorphic plans. Let me say one or two things about this, because it's kind of interesting. You have a world, represented as a p-graph, and you usually think: I give the robot a world, it should get from some start state to some goal state; you come in at this arrow and choose actions and observations. You might think all plans look like that; those are the plans roboticists and planners tend to find. But it turns out that this other thing is a plan as well: if you execute it on this world, it solves the planning problem, but in a way that is kind of bizarre. If you've ever seen the film "The Man Who Knew Too Little", it's about a guy who is woefully ignorant, yet everything he does just works out well in the end. That's what the plan on the right-hand side represents: it has fewer states than the world does, so it has to go around the world multiple times to finally solve the problem. It's an example of what we call a non-homomorphic plan. You've never seen these before, and why don't you have to care about them? Because if you want to find a plan, you never need to search over the bizarre non-homomorphic ones; searching over the homomorphic plans suffices. That's a result we proved in our WAFR '16 paper, but that result says nothing about constraints on the plan, and it turns out it remains true with constraints like these information-divulgence constraints. So you can search over the set of homomorphic plans, and you can put specifications on plan states if that's what you're searching for.
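The homomorphism condition itself is easy to test on small examples; here's a simplified sketch (my own formalization, and it assumes the world is deterministic per action label, which the general theory does not):

```python
def is_homomorphic(plan, world, p0, w0):
    """Walk joint executions, trying to assign each plan state a single world
    state so labeled transitions commute. If some plan state is forced onto
    two different world states, the plan 'laps' the world: non-homomorphic."""
    assign = {p0: w0}
    stack, done = [(p0, w0)], set()
    while stack:
        p, w = stack.pop()
        if (p, w) in done:
            continue
        done.add((p, w))
        for lbl, p2 in plan.get(p, ()):
            for lbl2, w2 in world.get(w, ()):
                if lbl != lbl2:
                    continue
                if assign.setdefault(p2, w2) != w2:
                    return False     # one plan state over two world states
                stack.append((p2, w2))
    return True

world = {"A": [("go", "B")], "B": [("go", "A")]}   # a two-state loop
unrolled = {0: [("go", 1)], 1: [("go", 2)], 2: []}
looping = {0: [("go", 0)]}                          # fewer states than the world
print(is_homomorphic(unrolled, world, 0, "A"))      # True: 0->A, 1->B, 2->A
print(is_homomorphic(looping, world, 0, "A"))       # False: 0 must be A and B
```

The second example is the man-who-knew-too-little shape in miniature: a plan smaller than the world whose states cannot be mapped one-to-one onto world states.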
But you can go further: you can ask me to find a plan such that, even if that plan itself is divulged, the stipulations still hold; that's a nastier problem, where I'm effectively searching over possible plans and disclosures together, and so on. So one can actually find plans using this same framework, even though I haven't detailed how; it's essentially the same sort of search process, with the computational-tree-logic checking inside it. And the last thing I want to say, which was quite surprising, at least to me: in this framework you can search for the plan and for the information-disclosure constraints together; you can search for the plan and the label map as well.

So, to wrap up: we built the p-graph formalism to think about design problems; we somehow got distracted, as researchers do, and started thinking about planning problems; and then we realized there's a natural circumstance where you want to think about design problems because they have implications for which planning problems, and which plans, can exist at all, since plans can divulge particular information. For us, at least, this framework has been genuinely helpful, because it lets us think about things we couldn't articulate before; we have a new language that gives us more precision about the achievability of goals, about how we impose limits on the robot, and about adversarial settings. There's this whole history where people thought about perception, and then "active perception" became a buzzword; well, I'm trying to talk about active imperception. That's the field I'm trying to create; if you're interested in working on active imperception, come join the field, and let's cultivate ignorance together.

I should say that other people have looked at related things, of course: there's the idea of combinatorial filters; there's the idea that actions can communicate, which is a bit of what we've been seeing here, and there's work on that; and there's a whole host of further work, of which these are just two examples: differential privacy, and work on the synthesis of obfuscators. All of it is related. I will admit that everything I've spoken about has only gotten us about this far: I've given you tiny toy problems with small worlds; it's not yet practical. Part of the reason we're using model checkers is that they're very good at solving reasonably sized problems; when we did SAT we rolled our own, and it turned out we were nowhere near as effective as the people who have been beating SAT problems to death for twenty-five years, which is why we're now in the model-checking space. But I do think the problems in this space, problems about what the robot knows, are interesting. And that's my final slide; let me spend the remaining minutes taking questions.

Oh, and I have an interest in robot design in general: if you haven't seen the robot design game, go check it out; if you get the latest version, there's a robot in there now. [Audience remark.] Yep, that was checked in about a month ago; the latest version has it. Any questions?

[Audience question, inaudible.]
That's exactly the correct interpretation. In the panda tracking problem, the adversary can do everything except manipulate the actuators; I mean, if you can pin the robot at a particular location while the panda crosses over, then there's nothing anyone can do, but short of that, infiltrating anything wouldn't violate the guarantee. In the other case, we had to deliberately insert this label-map layer in between, because the example is constructed so that the robot must determine which type of reactor it's in, otherwise it would step into the no-entry regions. So we had to tease those two pieces apart, because it seems we couldn't find many interesting cases where the robot knows essentially nothing and still achieves the task.

[Audience question, inaudible.]

So, these example worlds are represented as discrete transition systems, as p-graphs. What you have is a system with actions and observations: actions are labeled with u's, like the motor controls we'd usually write, observations are labeled with y's, and the edges carry sets of these labels; we really make no further assumptions about the structure of that system. And to go back to the earlier part, where we took a filter and built a coarse version of it: I keep talking about robot design problems because we're trying to stop thinking "here's a problem, build me a filter for it" and instead build algorithms that take filters as input and produce new filters as output. The idea is that you could start from a very naive sensor, or the most general sensor you could have. We have a paper on that: you assume an ideal sensor, build a plan for the ideal sensor, and then ask what corruptions of the ideal sensor still leave the task achievable. These questions are fundamentally about real sensors, about whether we can build a language rich enough to determine such things. The practical limit is that these label sets can get big: if you try to pour in a bucketload of laser range readings, that becomes a really big set. And that's not just a thought experiment; we're actually doing it. We have a p-graph obtained from driving a robot around, and there are tricks you need: you have to build equivalence classes, because if you sweep a laser around the room, every scan you receive will be unique in the history of all sensor readings ever received, so you have to ask the appropriately coarse question. Which comes back to your question about the details you don't care about: if you're teaching undergrad robotics 101 and the exercise is following a wall on the left-hand side, the robot cares about "far away", "close", and "too close"; those three categories are discretization enough.
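As a sketch of that kind of coarsening (thresholds invented for illustration):

```python
def bucket(range_m, near=0.3, far=1.0):
    """Collapse a raw range reading (meters) into the three classes a simple
    left-wall-follower needs; every distinct scan maps into a tiny label set."""
    if range_m < near:
        return "too close"
    return "close" if range_m <= far else "far"

print([bucket(r) for r in (0.1, 0.5, 2.4)])   # ['too close', 'close', 'far']
```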
Now, I haven't quite answered your question, which was how I came up with the banana peel. I wanted an example where, in some portion of the space, I was forced to know less than I otherwise would. And that's a crazy question for a roboticist to ask: how do I get more uncertainty? Usually that's the easy part! But in this little world, that's how I chose to do it; it may turn out that bumping into a wall, making a collision, is a better way to introduce entropy into a belief. In fact, the first time I drew the figure, I placed the banana peel in the wrong spot, one slot up, and then it turns out you can't solve the problem: once you go up to the top, some belief always remains at the top; you can't squeeze it out, because there's always the possibility you slipped back. So it depends sensitively on that. In some sense that shouldn't be a surprise: we know satisfaction problems can hinge on a single term, so at their core these problems will have that sensitivity to peculiarities, especially when you pose lower-bound constraints like these, where you're really probing the boundary of feasibility.

[Audience question, inaudible: about disclosing as little as possible.]

I'll start by saying I don't have a satisfactory answer to your question. One thing we should ask is what the observer knows, and I didn't say much about different choices of observer, other than at the end, where the observer might know the plan is P or Q. When we're thinking about the adversary, we construct the best observer there can be, and that may have a thousand states even for this little problem. Hans Blix is not going to run a thousand-state filter in his head; he has a tiny little folder which says "did you ever see orange, or not?". So you can constrain what the observer knows in two places: one is how much side-channel information has been divulged to them, that is, which plans they know you might be executing; the other is to intrinsically limit the observer itself. They're not the same, they don't live in the same place, and you can construct something very strong in one and very weak in the other. It feels to me that the work you mention is closer to the deduction side: you do a lot of deduction from background information to figure out what is likely going on, versus something which is very transparent and continually feeds you information. I'll say what's different in that work, and in Ross's follow-up paper, which has a very nice analysis of it: I think what those pieces of work are missing is that they're about communicating something like which task you've committed to; the classic example was "which cup am I reaching for?".
You can think of that as a belief state: I have an intention to reach for this particular cup. They think about it within this communicative joint-action framework, which I think is a little limiting, because it means they're privileging certain types of knowledge, whereas by looking at the finest observer you can treat any knowledge. But I'll say up front that I don't have a very deep understanding of exactly how the approaches fit together. I do think the label maps in their case are very interesting: they come from anthropology, from what we believe when we see something. There's an assumption in that work that behavior is low-cost by default, and that if you're doing something high-cost, it's because you're encoding something about another choice; so the label map is assumed to be related to cost, which is a natural thing for actions, whereas we make no such assumption, which is probably not to our benefit; I think it makes our problem harder to solve.

Thank you.