[00:00:06] >> OK, so let me continue. Last week I talked about two continuous problems. This talk is still going to be something of that flavor, except that I'm going to discuss one discrete problem. And similar things happen: maybe you don't have to do a very smart lifting to convex problems; what you can do instead is just solve the nonconvex problem directly, and under a nice statistical model very simple algorithms actually work in a very impressive way. OK, so I'm going to focus on one discrete problem, but actually it can be significantly generalized to a lot of different [00:00:51] settings of practical interest. OK, so the problem is the following. Suppose I have n unknown variables x_1, ..., x_n, and suppose that each of the variables is discrete and can take at most M possible values, from 1 to M. Let me write it here: I have x_1, ..., x_n, and each of them [00:01:25] takes a value from 1 to M, so it lives on some finite alphabet. OK, so if you look at this picture, this is an example where x_i can take values from 1 to 12, and each clock represents one value. So the problem that I'm looking at is the case when you have a lot of such variables: imagine that you have a lot of different clocks, but you don't know any of them, and you want to simultaneously recover all of these values. [00:02:02] Of course, to do this job I need to have some measurements, and the kind of measurements that I'm going to focus on are the so-called pairwise differences. It doesn't have to be this way, but this is probably the simplest setting I can use to motivate my problem. OK, so imagine that every time I take a measurement, say between nodes i and j, what I observe is some modular difference, [00:02:33] modulo the alphabet size M.
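To make the measurement model concrete, here is a tiny sketch in Python; the clock values and the shift below are made up for illustration. A pairwise observation is a difference modulo M, so shifting every variable by the same constant leaves every measurement unchanged.

```python
import numpy as np

M = 12                                    # alphabet size, as in the clock picture
x = np.array([3, 7, 0, 11])               # hypothetical unknown "clock" values

def diff(vals, i, j):
    """Noiseless pairwise measurement between nodes i and j."""
    return (vals[i] - vals[j]) % M

# Adding the same offset to every variable changes none of the measurements,
# which is why recovery is only ever possible up to a global offset.
x_shifted = (x + 5) % M
pairs = [(i, j) for i in range(len(x)) for j in range(len(x)) if i != j]
unchanged = all(diff(x, i, j) == diff(x_shifted, i, j) for i, j in pairs)
```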
And of course the measurement is corrupted by some noise. OK, so this is a very simple model: suppose I get a collection of such pairwise measurements. As a concrete example — probably one of the simplest examples, which I'd like to use as the guiding thread through all of this talk — maybe every time you try to measure the pairwise difference, with some probability π₀ [00:03:05] you get to see the truth; otherwise you flip a coin and see some random measurement. I call this π₀ the non-corruption rate, because this is the probability that you get something useful; otherwise you get something completely useless. OK, so this is an example of the model, and my goal, [00:03:31] of course, is to recover all of the x_i's based on this set of measurements. One small thing that I'd like to mention: because I'm only measuring the pairwise differences, you won't be able to recover everything, because you don't really know the global offset — for example, if you add 1 to each of the x_i, it will give you exactly the same measurements. So the goal is to recover everything up to some global offset. [00:04:02] So this is the model. So why is this something of interest to us? Probably the simplest example comes from community detection. OK, so in a lot of biological and social networks, the users can naturally be categorized into multiple clusters. [00:04:25] For example, in the stochastic block model, which is very commonly used to model social networks, you might have two different clusters, and then, depending on whether two users are in the same cluster or different clusters, they might have different, you know, friendship
[00:04:44] probabilities or something like that. And the community recovery problem is basically: if you only observe some kind of friendship or similarity between pairs of users, can you partition them into multiple clusters? So if you really have only two clusters, and the similarity or friendship depends only on the memberships of the two users, then this can basically be categorized as one special example of the pairwise-differences model that I mentioned. [00:05:17] OK, so if you're a biologist, you have probably come across this very important application in genomics, something called haplotype phasing in genome sequencing. Each human being has 23 pairs of chromosomes — this is something that we already learned from Biology 101 — [00:05:40] one paternal, one maternal. OK, so these two sequences are usually almost identical, except for a few documented positions where you see some variants, and these variants determine the characteristics of each human being. OK, so the positions where the two sequences differ are called SNPs. [00:06:06] So the phasing problem is the following: I would like to know whether a given variant comes from your dad or from your mom — you inherited them from your parents, and we want to know which one each comes from. For example, if a variant comes from your dad, you can encode x_i as, you know, 1; otherwise you encode it as 0, so you have one bit of information there. Now, in sequencing technology, sometimes they have a kind of very special [00:06:43] technology that gives us some kind of pairwise reads: sometimes a read simultaneously touches two of the SNPs, and then what it can tell you is only whether the two are the same or not. OK, so this is some kind of
mechanism that the sequencing technology gives us. Now, based on this kind of pairwise information, the goal is to try to figure out all of the information, which they call the phase information. So this is a very fundamental problem, haplotype phasing, which is very important in developing personalized medicine. [00:07:27] So, moving away from genome sequencing, another very fundamental example comes from computer vision and graphics, and actually sometimes structural biology. This is a problem about how to jointly align a collection of images which reflect roughly the same physical object. For example, here I have a few car models — so these are some shapes and models that you have on your computer — and each of the cars is, you know, rotated somewhat arbitrarily. Now, one very important task is how to jointly align all of these car models based on some features you have; this is one of the most important [00:08:17] tasks in computer vision and graphics. So what do the computer vision people do? A popular paradigm is the following. It's very difficult to jointly align all of them at once, so what they are going to do is first try to align pairs of cars: every time they take a pair of models, they try to find some way, using raw features, to align them, which basically gives the relative angle between those two cars. [00:08:52] After that, they have a collection of possibly noisy pairwise measurements, and then they are going to use another algorithm to jointly refine all of these pairwise estimates. OK, so this is exactly one of the problems I described: joint alignment from pairwise differences. [00:09:17] The other examples I'm going to skip; if you're interested, feel free to talk to me offline about any of these. So how are we going to solve this fundamental problem?
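Before turning to algorithms, the random corruption model described earlier — see the truth with probability π₀, otherwise a uniformly random symbol — is easy to simulate. All parameter values below are illustrative only.

```python
import numpy as np

# Simulating the running example: with probability pi0 a measurement reveals
# the true difference (x_i - x_j) mod M, otherwise it is pure uniform noise.
rng = np.random.default_rng(0)
n, M, pi0 = 50, 12, 0.6                   # hypothetical sizes and corruption level
x = rng.integers(0, M, size=n)            # ground-truth assignments

y = {}
for i in range(n):
    for j in range(i + 1, n):             # one measurement per pair, for simplicity
        if rng.random() < pi0:
            y[i, j] = (x[i] - x[j]) % M   # informative observation
        else:
            y[i, j] = int(rng.integers(0, M))  # completely useless observation
```

A fraction of roughly π₀ + (1 − π₀)/M of the observations end up agreeing with the truth, since the uniform noise occasionally matches it by chance.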
Very naturally, we're going to start with maximum likelihood estimation. [00:09:40] So basically, let's assume that we have a collection of pairwise measurements, and suppose that they are independently corrupted. If we make these assumptions, we can write down some kind of likelihood function for each of the pairwise measurements, so basically you're maximizing some log-likelihood function which is a superposition of all of these pairwise log-likelihoods. [00:10:11] The important constraint is that each x_i takes a discrete value, from 1 to M. OK. But this is a very generic maximum likelihood estimation problem, and in general it is going to be very, very difficult. As you can imagine, first of all, the log-likelihood function can be arbitrary, depending on what noise model you have, so it could be very, very complicated — so complicated that maybe it's not that friendly for optimization purposes. Second, [00:10:45] the input space is discrete, so this gives a discrete optimization problem which, in general, is very, very difficult to solve in a reasonable amount of time. Because of all of this, the problem looks scary at first sight. We try to make some progress by saying that actually not all problems like this are difficult: maybe, if you have a nice statistical model, this is actually something that can be solved using a very, very simple paradigm. OK, so [00:11:24] this is the procedure. What we are going to do in the first step is to change the parameter space a little bit to make it slightly easier for us to do optimization, and we propose something extremely simple: since I have x_i running from 1 to M,
[00:11:47] let's use a collection of standard basis vectors to represent them. So if x_i equals 1, let's use the first standard basis vector to represent it; if it equals 2, use the second basis vector, and so on. So I'm going to use a collection of standard basis vectors to represent each of the values. [00:12:12] I'm doing a little bit of lifting here — trying to do things in a higher-dimensional world. OK, why do I do this? Actually, the main reason is for me to encode an arbitrary log-likelihood function in a slightly more intelligent way. OK, so I have a complicated log-likelihood function ℓ for each pair, but since x_i can take M values and x_j can take another M values, I can actually enumerate all possible values of the log-likelihood [00:12:50] function in an M-by-M table — basically, I can use an M-by-M matrix to encode all possibilities of this log-likelihood function. OK, just as an example: if you have the random corruption model, the log-likelihood function reads like this, something very simple. [00:13:15] Obviously the log-likelihood function can really be arbitrary, but we have a way to encode it using an M-by-M matrix. So why do I consider this encoded representation of the likelihood function? The reason is that this, together with our representation of x_i, allows us to replace the log-likelihood function [00:13:43] by some quadratic form. This is an extremely simple observation, but it turns out to be useful: we map the likelihood function to a quadratic form, and quadratics are something that we're more familiar with, so that's what we'd like to start with. OK, and of course this is only for one pair x_i and x_j; I have a lot of different pairs, so I can stack all of them together to build a huge matrix, capital L,
[00:14:18] to incorporate all the information into this large matrix. And then this basically says that my original maximum likelihood estimation problem [00:14:45] can be equivalently converted to this quadratically constrained quadratic program, except that the only thing we also want is to impose some discrete constraints. OK. So far this looks like a simple reformulation of the problem; actually I have not really done anything, so this problem is just as difficult as the original — I'm not really making any progress yet. Fortunately, [00:15:12] because we're more familiar with this form of the problem, maybe there are some lessons that we have already learned from simpler problems that can give us some hints about how to solve it. So, for problems like this, perhaps the easiest related problem is principal component analysis, or eigendecomposition, whatever. If you want to solve that problem, a very natural way to do it is the power method: basically, [00:15:43] if A is my data matrix, every single time I multiply my current iterate by this data matrix, normalize it, and continue — I repeat this procedure. This is sometimes called power iteration; you continue doing this until it converges. And in the case of that particular problem, this is tractable: even though it looks like a nonconvex problem, it is tractable, and we can actually use this extremely simple paradigm to solve it. [00:16:15] So now I have a constrained version of the problem, with some additional constraints added here, so maybe what we can do is just try something like the power method, except adding some extra projection operations
to make sure that we are always staying within the space that we're comfortable with. [00:16:47] OK, I'm going to discuss this a little bit, but basically this is the paradigm that we're going to use. OK, so now the only thing I need to specify is the projection operator. So there are many, many different ways to do the projection; I'm going to show maybe one or two versions. One way is that, since I know that [00:17:12] each x_i falls within this set of standard basis vectors, sometimes it's easier to say, OK, let's try to convexify this set and then project onto the convexified set. And if you convexify this set of standard basis vectors, it gives you a probability simplex — so it looks like this — and then you can do some projection onto the simplex and then move forward. Sometimes, if you are more aggressive, you can maybe just project onto the discrete set itself. For either strategy, you can actually perform everything in [00:17:47] almost O(M log M) time, so it's supposed to be something fast. Practically, it really depends on your problem: sometimes this softer projection works better, sometimes the hard projection does. OK, so this is my algorithm. Now, the next thing I'd like to address: this is my nonconvex strategy, but there's one more thing — how are you going to initialize? [00:18:15] I do not think a careful initialization is really needed — I'm pretty sure random initialization also works for this problem — but the spectral one is the only thing I can prove, so let me start with the simpler one and show that this strategy actually gives you a lot of information. [00:18:35] So let's look at this matrix L that I have built. Under the statistical model that I have, if the noise is i.i.d., then after some proper permutation you can show that
if you look at the expectation of this matrix, it looks like this: after permutation it looks like a low-rank matrix, except that some diagonal components are missing. OK, so this is possibly a low-rank matrix, and the rest looks like some random perturbation. And because of this spectral structure, we can use a spectral method to get a good rough initial guess. The way that we do it is the following: I have this data matrix, and I'm going to just compute a low-rank approximation of this matrix. [00:19:30] And then, after we do that, let's just pick a random column — so basically you pick a random column — and you project this column onto the space you're comfortable with. OK, [00:19:53] so this is an extremely simple initialization strategy, which we can guarantee gives you some approximately correct initial guess. OK, so in summary, this is what I propose. The first step is the spectral initialization — again, this is probably not necessary — and the most important step is that I'm going to run the projected power method until convergence. This is extremely easy to do, but it turns out that this strategy works extremely well in practice. [00:20:22] So, to illustrate the power of this strategy, let me start by considering the extremely simple model that I mentioned, and then I'm going to generalize it to a more complicated model. So again, just to remind you, this is the random corruption model that I have mentioned: every time you measure a pairwise difference, with some probability π₀ you observe the ground truth; otherwise you observe some trash — basically uniform random noise. [00:20:54] This is the result that we have: as long as the parameter that you can choose in the projected power iteration part — as long as you choose it to be not too small — then with high probability the projected power method that we propose recovers the ground truth
[00:21:16] within log n iterations, where n is the total number of variables you have — as long as, let's say, the signal-to-noise ratio is not too small. In this model I only have one parameter, π₀: π₀ = 0 means that you don't observe anything useful, so we suppose π₀ is sufficiently large — suppose π₀ is larger than this quantity, which I'm going to explain — [00:21:47] and then the algorithm works. OK, so let's try to see what this really means. [00:22:17] So one thing is that I was assuming I know the log-likelihood function for this particular model. It turns out I don't even need to know it: if you write down L, there's a very simple transformation that can completely get rid of this parameter, so you don't need to know π₀ — you don't need to know anything. [00:22:51] For this part, as long as M is larger than 1 it is not going to be 0 — if M equals 1 there is nothing to do — so you can guarantee that, under these conditions, the second-largest singular value is quite large. [00:23:19] OK, so this is the result, and we need π₀ to be larger than this quantity. What does it really mean? OK, so recall that π₀ is the fraction of the measurements that are not trash. So what this means is that even though a fraction 1 − π₀ of the measurements — which is like this quantity, 1 minus that — are completely random, which means completely useless, the algorithm still works. This somehow says that if n is very large, even if 99 percent of your measurements are corrupted, you can still recover everything. [00:24:02]
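Putting the pieces together for the random corruption model, a toy end-to-end sketch of the procedure — one-hot lifting, spectral initialization from a random column of a low-rank approximation, then projected power iterations with a hard per-block projection — might look as follows. This is my own illustrative implementation, not the speaker's code, and all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n, M, pi0 = 80, 5, 0.8                    # illustrative sizes, not from the talk
x = rng.integers(0, M, size=n)

# Block (i, j) of the nM-by-nM matrix L scores every hypothesis (x_i, x_j):
# entry (a, b) gets 1 when the observed y_ij is consistent with x_i=a, x_j=b.
L = np.zeros((n * M, n * M))
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        y = (x[i] - x[j]) % M if rng.random() < pi0 else rng.integers(0, M)
        for a in range(M):
            L[i * M + a, j * M + (a - y) % M] = 1.0
L = (L + L.T) / 2                         # symmetrize the two directions

def project_blocks(z):
    """Hard projection: snap each length-M block to its closest basis vector."""
    Z = z.reshape(n, M)
    out = np.zeros_like(Z)
    out[np.arange(n), Z.argmax(axis=1)] = 1.0
    return out.ravel()

# Spectral initialization: project a random column of a rank-M approximation.
w, V = np.linalg.eigh(L)
L_r = (V[:, -M:] * w[-M:]) @ V[:, -M:].T
z = project_blocks(L_r[:, rng.integers(0, n * M)])

# Projected power iterations: multiply by the data matrix, then project back.
for _ in range(30):
    z = project_blocks(L @ z)

est = z.reshape(n, M).argmax(axis=1)
# Accuracy is only ever defined modulo the unavoidable global offset.
err = min(np.mean((est + s) % M != x) for s in range(M))
```

Here a 0/1 "consistency" table stands in for the pairwise log-likelihood; for this particular model the two differ only by a scaling and an additive constant per block, which the argmax-based projection ignores.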
This is not really that surprising, because you take a lot of measurements; even though most of your measurements are incorrect, you still have enough information to do the recovery part. So this is probably not surprising, but that the algorithm can actually achieve it does seem to be something very positive. [00:24:24] The next thing is that it only takes log n iterations to converge, and in each iteration you basically read the data once, so this is like a near-linear-time algorithm. And all of this works even if we don't use the initialization strategy that I mentioned: as long as you can find some rough initialization that is, like, 50 percent correct — [00:24:52] a 0.5-accurate estimate; you could even make it 0.499 and it should be fine. OK, empirically, this is the phase transition you see: basically we are plotting the success rate [00:25:27] while I change the corruption rate π₀, and this is the theoretical prediction while the colored part is the practical algorithm, so basically you see that they match in a very good way. OK, so this is the random corruption model, and it can be significantly generalized in the following way. So now I'm looking at general noise: suppose that all the noise terms z_ij are i.i.d. according to some noise distribution P₀. [00:25:56] Except for the i.i.d. part, this is about the most general thing that we can think about. For example, when x_i − x_j equals 0, let's say this is exactly the noise distribution that you see — it looks like P₀. And when [00:26:18] x_i − x_j, the ground truth, is 1, I need to shift a little bit, so I'm going to get a shifted version of this distribution. If the shift is sufficiently large, maybe they look very different.
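The shifted-distribution picture can be made concrete: if P₀ is the noise distribution over {0, …, M−1}, then a true difference l shifts it cyclically, and one natural way to quantify the separation between shifted copies is the KL divergence between distinct shifts. A small sketch with a made-up P₀:

```python
import numpy as np

# If the true difference is l, the observation distribution is P0 shifted by l:
# P_l(y) = P0((y - l) mod M). How hard the problem is depends on how close
# these shifted copies are; KL divergence is one natural separation measure.
def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

M = 5
P0 = np.array([0.5, 0.2, 0.1, 0.1, 0.1])  # an illustrative noise distribution
shifted = [np.roll(P0, l) for l in range(M)]
min_kl = min(kl(shifted[a], shifted[b])
             for a in range(M) for b in range(M) if a != b)
```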
[00:26:39] So, as you can imagine from these plots, it seems that the most difficult part is to distinguish this one and this one, because these parts of the distributions really look very close to each other — it's the hardest part for you to distinguish. And because of this, maybe we can try to introduce some divergence metric to quantify the difference in terms of the probability distributions. So what we can do [00:27:13] is maybe introduce the KL divergence between P₀ and P₁ — P₁ meaning the distribution when the ground truth is 1 — so I can use the KL divergence to measure the difference between the ground-truth case and the case when the ground truth is, say, l. OK, so I have a set of divergences measuring, you know, the different pairs of probability distributions. Now, what we can say is the following: roughly under the same conditions, you can say that the proposed algorithm works within log n iterations as long as the minimum KL divergence [00:27:44] between two distinct shifted distributions exceeds this threshold. This sort of makes sense, because to achieve exact recovery we really want to make sure there is a little bit of separation between the noise distributions in order to make this work. Now, the question is whether this condition is a reasonable condition. [00:28:01] OK, so let me start by saying that, empirically, this seems to match practice very well: if I have, let's say, some kind of i.i.d. Gaussian-like noise model, then the red line is the theory and the rest is the empirical phase transition, and it seems that the phase transition matches the theory very well. [00:28:24] And we can actually show that this is information-theoretically optimal in the following sense: the theoretical curve really is information-theoretically optimal, so below this curve
the algorithm works; above this curve there is no algorithm at all — even if you were able to run the full maximum likelihood estimator, it would not give you the right solution. And the threshold, this curve, really is captured by something like √(log n / n), so this is a very sharp result, including the preconstant; everything is tight. [00:29:05] OK, so so far I've focused on the case when the alphabet size is fixed. When the alphabet size is larger, things actually become more tricky: suppose my alphabet size is allowed to grow with n. Now, what we can say for the random corruption model is that the algorithm works if π₀ is larger than this quantity, but this is not information-theoretically optimal. [00:29:34] The information-theoretically optimal threshold should be smaller — roughly a factor of one over the square root of M times this, or something like that — but I don't think there's any algorithm known so far that allows you to do this job, and I conjecture that this is probably computationally the optimal thing that you can do, so there seems to be [00:29:52] a very real computational barrier between the statistical limit and computation. All right, OK, so let me skip this part. Coming back to my motivation, I'd like to say that this is actually a useful algorithm for the problems I mentioned. So, coming back to the alignment example, let's take a quick look using some [00:30:18] data from ShapeNet, which is something similar to ImageNet but focusing on shapes. You can use some off-the-shelf algorithm to compute some kind of pairwise cost — not necessarily a likelihood function, but some pairwise cost — which actually can be very noisy. [00:30:38] And after that, you run the algorithm — so it's not difficult to do. Practically, it seems to be even working slightly better than the baseline.
[00:31:03] Actually, this was somewhat to my surprise. But the thing that's not surprising is that the algorithm runs much, much faster: even though here there are only about 25 instances, it's already significantly faster, and if you have more data instances, the difference is going to be even more striking. [00:31:32] OK, so I'd also like to say that this can be applied to many, many different kinds of applications. There is one problem called joint graph matching in computer vision: basically, I have a set of images of the same thing — for example, let me take three of them — I can take several feature points on each of the images, and I would like to find the corresponding features across different images. OK, so if you just run some off-the-shelf pairwise matching algorithm, it's going to give you some incorrect matches — here one color of line represents the correct matches and the other the incorrect ones. Then you can use our algorithm to refine all of these pairwise matching results, and you can see that it significantly reduces the number of incorrect matches for this problem. So this is supposed to be very useful for some computer vision tasks. [00:32:25] I also have something happening in MRI, but in the interest of time I'm going to skip this; basically, the algorithm seems to be working extremely well for some fundamental applications there. All right, OK, so let me talk about one more thing and then I'll conclude. [00:32:51] There's one issue — either in this problem or in community detection and things like that — that people often ignore, which actually turns out to be very important in practice: something called sample locality. So what is this? OK, so most of the prior works on this joint alignment problem, or, you know, on community
[00:33:14] detection, assume that it is equally likely (or almost equally likely) to take measurements between any pair of nodes. So usually, if you go to a community detection talk, you will see some picture that looks like this: basically, you randomly take a pair of nodes and you observe some measurement. [00:33:37] But this is a huge issue in practice. More realistically, you might only be able to take samples between nodes that are not too far away from each other. So basically, there seems to be some geometric constraint that precludes you from getting, you know, the ideal kind of random measurements. [00:34:04] For example, in the genome sequencing application, you might need to deal with 10^5 SNPs, while each read can probably only cover a range of no more than about 100. So what does this mean mathematically? This means the following: when I'm taking measurements, there seem to be some constraints, and these constraints may be better modeled by some graph. OK, so maybe this is the graph, meaning that I might only be able to take measurements when two nodes are not too far apart — if this node is too far away from that one, then there is no edge, [00:34:50] meaning that I might not be able to take any measurements between faraway nodes. So a more natural sampling model should be the following: I have some constraint graph, and then over this constraint graph I take some random measurements. So this is the more realistic kind of measurement model. [00:35:17] Basically, it means that for some pairs of nodes you will never be able to take any measurement. To describe this, a model with some kind of locality seems to be very important, particularly in biological applications.
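The locality constraint can be sketched as nodes on a line where a pair may only be measured when the indices lie within some radius r — loosely mimicking reads that span a bounded window of SNPs. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, M, r, pi0 = 1000, 2, 100, 0.8          # e.g. binary phasing with short reads

x = rng.integers(0, M, size=n)
samples = []
for _ in range(5000):
    i = int(rng.integers(0, n))
    # Only nearby nodes can be measured together: |i - j| <= r.
    j = int(rng.integers(max(0, i - r), min(n, i + r + 1)))
    if i == j:
        continue
    y = (x[i] - x[j]) % M if rng.random() < pi0 else int(rng.integers(0, M))
    samples.append((i, j, y))
```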
So this is what we studied. In prior works, most people assume that there is absolutely no constraint, so you are free to take any measurement between any pair of nodes, but I would like to argue that we should actually consider a graph like this, which has a huge [00:35:46] number of constraints, and which requires us to change our algorithm accordingly. There are several questions one might naturally ask: for example, how many samples are really needed in order to recover all the nodes if we have such kinds of locality constraints? [00:36:20] And can we hope to do this at all — how are we going to change our algorithm to deal with this locality efficiently? If you just use what prior work tells you, you can get some results, but usually the result depends on something called the spectral gap — you know, like the difference between the largest and second-largest eigenvalues of some kind of matrix — which is actually a sufficient condition, but I usually think that this is not the best way for us to do things. OK, so I would like to argue that if you look at some of the specific models that are commonly used in practice, maybe there is a nicer way to do this: maybe we change the algorithm a bit, and then maybe we can get something better. [00:37:08] And it turns out that for a lot of commonly encountered graphical constraints, we can actually still achieve efficient recovery in a reasonable time, in an information-theoretically optimal manner. OK, so let me just start by looking at this generalized line graph and tell you how we're going to do this. [00:37:33] OK, so simultaneously recovering everything seems to be difficult, so let's try to cluster the nodes in this way: maybe let's look at this small cluster, and then try to first see whether we can do something within this cluster.
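The divide-and-conquer idea here — recover each overlapping cluster up to its own unknown offset, then chain the offsets together through the overlaps — can be sketched as below. The within-cluster stage is replaced by an oracle (truth plus a random per-cluster offset) standing in for the local spectral + projected-power step, and all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
n, M, size, overlap = 110, 6, 30, 10      # illustrative line-graph clustering
x = rng.integers(0, M, size=n)

# Stage 1 (stand-in): each overlapping cluster is known only up to an
# unknown offset, which is what a local estimation stage would give us.
clusters = []
for s in range(0, n - size + 1, size - overlap):
    idx = np.arange(s, s + size)
    clusters.append((idx, (x[idx] + rng.integers(0, M)) % M))

# Stage 2: chain the clusters, picking for each one the majority offset
# that best matches the nodes already placed on the overlap.
est = np.full(n, -1)
idx0, vals0 = clusters[0]
est[idx0] = vals0
for idx, vals in clusters[1:]:
    placed = est[idx] >= 0                # overlap with already-aligned nodes
    shifts = (est[idx[placed]] - vals[placed]) % M
    shift = np.bincount(shifts, minlength=M).argmax()  # majority vote
    est[idx] = (vals + shift) % M

# Success is only ever defined modulo a single global offset.
err = min(np.mean((est + s) % M != x) for s in range(M))
```

With exact per-cluster estimates the chaining is exact; with noisy ones, a final round of projected power iterations would clean up the residual errors, as described next in the talk.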
And if you look at this small subgraph, this is basically the same thing as what we did for the [00:38:01] complete-graph case, and because of that, maybe we can just repeat what I have mentioned before: try to use the spectral method, and maybe some kind of projected power method, to do this job. OK, so let's run some of them to get some approximately correct estimates. [00:38:20] OK. And we don't have to do this for only one small subgraph; we can do it for a lot of different subgraphs — for example, this procedure can be run for each of them. [00:38:41] Now, the only issue is that so far we are doing things in isolation. There is at least one bit of information that we haven't figured out, which is some kind of, you know, global phase — a global offset or something like that. So even though within each cluster the variables can be estimated in a roughly accurate manner, across different subgraphs there is some global information that needs to be retrieved. And because of this, maybe we can include a second step, by saying, OK, maybe we can just look at our estimate here and our estimate there, and try to use the overlap to figure out this global information. Usually the global information is only one bit or two bits, [00:39:24] so it's fairly easy to calibrate this global information. So basically, I'm going to gradually correct all of this global information. [00:39:49] And then, putting everything together, I don't have a very clean solution, but I have a roughly correct solution — let's say 50 percent of the entries are correct. After that, I can just run projected power iterations, or maybe something else, to continue to refine all the estimates — some successive refinement until it converges. And the main result is the following: even though you have
this kind of locality constraint, we can still make sure that this algorithm allows us to achieve the information-theoretic optimality precisely, including the pre-constant and so on: below this curve nothing is possible, even if you don't have computational constraints; above this curve [00:40:33] the algorithm achieves exact recovery. OK, and this curve is captured by this quantity; if you know channel coding, this is some kind of channel exponent like the ones you have in those channel coding settings. [00:41:00] What this is saying is that, for this graph, the information-theoretic limits and the computational limits actually meet, and you can pin them down precisely. And there is some kind of insensitivity phenomenon, which says that this quantity seems to be the same for a lot of different spatially invariant graphs. What do I mean by that? For example, if you have a complete graph you need a certain number of samples; to do this for a ring you need the same sample size; you can also go to a small-world graph, something like that, and roughly you also need the same number of samples in order to achieve exact recovery. [00:41:32] The information and computational limits seem to be identical for many such spatially invariant graphs, which is probably particularly useful for social networks, and particularly useful for biological graphs. OK, empirically: this is the information limit, and these are the empirical phase transitions of the practical algorithm, which seem to match the theoretical prediction very well; moreover, it is very, very fast. [00:42:08] As an extension beyond spatially invariant graphs, you can have some variations, some kind of graphs that look like this; you roughly have the same message, except that now your sample complexity might differ a bit. OK, so this is not very surprising.
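The final successive-refinement stage mentioned in the talk can be sketched as a simple local vote. An assumption of this sketch: I use a coordinate-wise majority vote in place of the projected power / projected gradient update, which has the same flavor; starting from a roughly correct estimate, each node repeatedly re-picks the value most consistent with its neighbors:

```python
import numpy as np

def refine(est, meas, m, iters=5):
    """Successive refinement: each node re-picks the symbol that agrees with
    the largest number of its incident measurements, given the neighbors'
    current estimates. A roughly correct initialization gets cleaned up."""
    est = dict(est)
    for _ in range(iters):
        new = {}
        for i in est:
            votes = np.zeros(m)
            for (a, b), y in meas.items():
                if a == i and b in est:
                    votes[(y + est[b]) % m] += 1   # y = x_i - x_b  =>  x_i = y + x_b
                elif b == i and a in est:
                    votes[(est[a] - y) % m] += 1   # y = x_a - x_i  =>  x_i = x_a - y
            new[i] = int(np.argmax(votes))
        est = new                                  # synchronous update
    return est
```

This is why the first stage only needs to be roughly correct: as long as a majority of the neighbors' estimates are right, the vote pushes each node toward the truth.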
[00:42:26] You can also go beyond pairwise measurements: sometimes a measurement might simultaneously cover three different positions, something like that, and if you roughly use the same design strategy you can get everything precisely. OK, so I'm going to skip the real-data part. Let me just summarize: for some statistical models, very simple non-convex strategies seem to be extremely efficient, even though this is a discrete optimization problem, and sometimes the information-theoretic limits can be precisely achieved in linear time for a broad family of models. OK, thank you very much. [00:43:34] [Audience question, partly inaudible, about how the subgraphs are chosen.] No, it doesn't have to be done in a very smart way. You just pick, you know, maybe this one is a subgraph, this is a subgraph; as long as there is some overlap, it's fine. It doesn't have to be done in a very smart way, just roughly correct. So you do choose the subgraphs to be roughly of the same size, and make sure there is some overlap happening, and that's fine. So for the first round... [00:44:25] Yeah, yeah, because this is the first stage, and in the first stage I'm happy with getting 50 percent of the entries correct. I don't really care about whether I get 60 or 70, because I can use a final round to refine everything. So the first part is basically just to get something roughly correct, and it doesn't really matter how accurate it is. [00:45:01] [Audience question, partly inaudible.] The last one is paper-specific, so it has nothing to do with the theory; I have it in my application, so it is not used for this one. You mean the information-theoretic part? So if you only care about the information-theoretic part, surely, both of them have proofs that are actually fairly easy.
So I have a lower bound and an upper bound. The lower bound is based on an inequality; you basically just analyze it, like, suppose you assume you know the rest and you can do it from there. That probably takes about one page for each of them, I think. [00:45:52] This one is probably easier, just because the model is simple.