Let me show you a few examples of hypothesis testing problems in which there is some kind of combinatorial structure that makes them interesting. We already saw one, the hidden clique problem, and these are related problems in some sense. So here is the first one. It is a hypothesis testing problem in which one observes an n-dimensional vector X, and this is a Gaussian vector. It is a very simple problem: under the null hypothesis this is just noise — the components are independent standard normal random variables. Under the alternative hypothesis there exists a small set S of size K of the components over which the means are shifted, nonzero; let's say this μ is positive and known, and we want to detect it. What makes the problem a little bit interesting is that we are given a class C of possible subsets of size K that can be contaminated — not necessarily all subsets of size K — so there is some combinatorial structure. This is the hypothesis testing problem: we want to know whether this is pure noise or whether there is a sparse signal somewhere, where we know what the signal could be. The question is, given such a class of sets, for what values of μ is detection possible? Is the problem clear? The distribution when the set S is contaminated is denoted by P_S, and P_0 is the null distribution. Here are some examples. When there is no structure and all subsets of size K are possible, this is a classical multiple testing problem; it has been studied very thoroughly in the statistics literature and is very well understood.
A richer and more interesting example with structure is when we have a grid: on every edge of the grid there is a normal random variable, and the alternative hypothesis is that there is a path from one corner of the grid to the opposite corner over which the means are shifted. This was studied by Arias-Castro, Candès and their co-authors. There are all kinds of other examples of this type, and we will see a few. For instance, an interesting example is a complete graph with an independent normal random variable on every edge, where there is one spanning tree over which the means are different, and we want to detect whether there is such a spanning tree. For each example I could invent a story, but some of them are actually meaningful: for example, with bicliques you can have patients and genes, and if there is a matching or a biclique then some genes are expressed in some group of people; there are papers that come from actual motivations of this kind. OK, so a test is just a function of such a vector that outputs zero or one: zero if we think there is no signal, one otherwise. Exactly as in the first lecture, we define the risk as the sum of the two types of errors: if the true distribution is P_0 but the test thinks there is signal, that is the type I error; for the type II error we take, as before, the average of the error probabilities over all possible subsets that may be contaminated — a kind of Bayesian setup. Writing it this way is nice because then we know
exactly what the optimal test is: the likelihood ratio test. For each possible observation vector x we compute the density under the mixture distribution and divide by the density under the null, which is just the standard normal density. The likelihood ratio is written as the average over all subsets S in the class of exp(μ X_S − Kμ²/2), where μ is the strength of the signal and X_S is the sum of the components of x over the subset S. So the likelihood ratio has this nice form; it looks like a partition function. The optimal test compares it to one: if it is bigger than one we accept the alternative hypothesis, and if the likelihood ratio is smaller than one we accept the null hypothesis. The risk of the optimal test is, by a standard computation exactly as in the first lecture, R* = 1 − (1/2)·E₀|L(X) − 1|: one minus one half times the expected value, under the null (the standard normal), of the absolute difference between the likelihood ratio and one. We are interested in studying this quantity. In particular, if μ is very small then of course no test can tell the two distributions apart; when μ is zero the risk equals one, and as μ increases the problem becomes easier and easier, so this is a decreasing function of μ. We want to understand for what values of μ the risk becomes small, and this will depend on the structure of the class. To fix notation: K is the size of the sets, so we have n components, and K, which is typically much smaller, is the size of each contaminated set. I assume that in every element of the class C the sets have the same size K; this is not necessary, but it makes things a little bit simpler.
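As a concrete illustration, here is a minimal sketch of this likelihood ratio for the unstructured case where every size-K subset is a candidate. The function name and the brute-force enumeration are my own illustrative choices, not from the lecture, and this is only feasible for tiny n and K.

```python
import itertools
import math

def likelihood_ratio(x, k, mu):
    """L(x) = average over all size-k subsets S of exp(mu*X_S - k*mu^2/2),
    where X_S is the sum of the components of x over S.  Unstructured class:
    every size-k subset of the n coordinates is a candidate."""
    n = len(x)
    subsets = list(itertools.combinations(range(n), k))
    return sum(math.exp(mu * sum(x[i] for i in S) - k * mu * mu / 2)
               for S in subsets) / len(subsets)

# The optimal test accepts the alternative iff likelihood_ratio(x, k, mu) > 1.
```

For x identically zero the ratio is exp(−Kμ²/2) < 1, as it should be under pure "noise", while a strongly shifted x pushes it above one.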
An obvious test is to simply sum all the components — this is really easy — and if μ is large then this should already detect the signal: under the null hypothesis the sum is normal with mean zero, and under the alternative hypothesis it is normal with mean Kμ; the variances are the same. So this test will be able to detect the signal once μ is bigger than about √n/K — that's trivial. And it's interesting that in many cases this is already pretty close to the best one. Another very simple test is the so-called scan statistic. In the scan statistic we scan through all possible candidate sets and take the sum of the components over each candidate; so for example, in the case of spanning trees, for each possible spanning tree — there are many of them — we compute the sum over its edges, and we maximize (for spanning trees this maximum can actually be computed quickly), and we threshold it. Where do we threshold? Under the null hypothesis the value we expect is the expected maximum E₀[max_{S∈C} X_S] — this is just the expected maximum of a Gaussian process, which has some value — and under the alternative hypothesis there exists one of these sets carrying signal, so the maximum should be at least about μK. So we threshold halfway between these two. If μK is much bigger than the expected maximum under the null, this test will work, and it works because this random variable is very concentrated, by the Gaussian concentration inequality. So one can show that if μ is bigger than the ratio of the expected maximum of the Gaussian process to K, then this test works. That is the scan statistic.
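The two simple tests just described can be sketched as follows. The threshold choices — Kμ/2 for the averaging test, and the crude √(2K log|C|) proxy for the null expected maximum in the scan test — are illustrative assumptions of mine, not the exact constants from the lecture.

```python
import numpy as np

def averaging_test(x, k, mu):
    """Global averaging test: reject the null iff sum(x) > k*mu/2.
    Under H0 the sum is N(0, n); under H1 it is N(k*mu, n),
    so this works once mu is much larger than sqrt(n)/k."""
    return float(np.sum(x)) > k * mu / 2

def scan_test(x, candidate_sets, k, mu):
    """Scan statistic: maximize X_S over the class of candidate sets and
    threshold halfway between a crude proxy for the null expected maximum,
    sqrt(2*k*log|C|), and the alternative mean k*mu."""
    best = max(float(np.sum(x[list(S)])) for S in candidate_sets)
    null_max = np.sqrt(2 * k * np.log(len(candidate_sets)))
    return best > (null_max + k * mu) / 2
```

With pure noise both tests should (typically) return False, and planting a strong signal on one candidate set flips the scan test to True.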
Now, interestingly, in all the examples I know, one of these two simple tests is near optimal, often up to constant factors. If you really want the optimal test then of course you have to be a little bit more sophisticated, but one of these two — just taking the average, or scanning over all possible subsets — is usually good. And how do we know that? Well, we can look at lower bounds, and we do it exactly the same way as for the hidden clique problem. The true risk, the risk of the best test, has this expression in terms of the likelihood ratio. The likelihood ratio is a complicated random variable, but the Cauchy–Schwarz inequality will help, because its second moment is computable very nicely: E₀|L − 1| ≤ √(E₀L² − 1), and we can calculate E₀L² and get something nice which only depends on the combinatorics of the class. So what do we get? The risk is greater than 1 − (1/2)√(E[exp(μ²Z)] − 1), where the expectation is the moment generating function of the random variable Z obtained by drawing two sets S and S′ independently and uniformly at random from our class C of possible sets and looking at the overlap, Z = |S ∩ S′|. So we choose two sets independently at random, look at the overlap, and we should understand the moment generating function of that. This is really nice because it is a purely combinatorial quantity — all the Gaussians are gone — and it is really easy: you just write down what E₀L² is and that's what pops out. [Question.] Yes: C is our class of all possible sets, and Z is the overlap — from the class of all possible sets I choose two independently at random and count the cardinality of the intersection. That's the random variable. So now we can play: we can try to understand the behavior of this overlap.
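Here is a brute-force version of this combinatorial lower bound for a small, explicitly given class. The function name and the direct enumeration over all pairs of sets are my own illustrative choices.

```python
import math

def second_moment_lower_bound(candidate_sets, mu):
    """R* >= 1 - 0.5*sqrt(E[exp(mu^2 * Z)] - 1), where Z = |S ∩ S'| is the
    overlap of two sets drawn independently, uniformly from the class."""
    sets = [frozenset(S) for S in candidate_sets]
    mgf = sum(math.exp(mu * mu * len(a & b))
              for a in sets for b in sets) / len(sets) ** 2
    return 1 - 0.5 * math.sqrt(mgf - 1)
```

At μ = 0 the bound is exactly 1 (the two hypotheses coincide), and it decreases as μ grows, reflecting that the problem gets easier.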
So here is a little curiosity — somehow we've come back to a theme from the first half of my talk: sometimes, when we study hypothesis testing problems, we can say something about complicated random quantities that may be difficult to analyze directly. Here's the observation. Take μ to be the critical value where the moment generating function condition E[exp(μ²Z)] = 2 holds; then the square root above is one, so the lower bound equals one half — at that value the risk is at least one half. But we also know that the scan statistic test performs better than that whenever μ is greater than the expected maximum divided by K. So we have a lower bound and an upper bound, and since these cannot be in conflict, we get a lower bound for the expected maximum of a Gaussian process in terms of this strange combinatorial quantity. I don't know — there are people here who know about Gaussian processes; I haven't seen this argument before. In some cases of course the expected maximum of a Gaussian process is very well understood — Talagrand wrote thick books about this — but computing it is not always easy, and with this little trick we sometimes get things; I will show you how one can get interesting bounds this way. [Question.] Right, there is always some chance that you pick the same set. [Question.] Right, so the bound is clearly not always sharp, but in some non-trivial cases it seems to be sharp. [Question.] Maybe vice versa it would also work, but it's not so clear. The usual way of lower bounding such expected maxima is via metric packings of the index class, Sudakov-type bounds; this is in a very different spirit: we take random subsets, look at the overlap, and compute a moment generating function.
One way to get very nice lower bounds is when we have negative association. Suppose, just for the sake of discussion, that the class is symmetric — that means that when I pick two sets, I can fix the first one and the distribution of the overlap is not going to depend on which one it is. So fix the first set to be, say, the first K components, and take S chosen at random from the class. Then the size of the overlap is just a sum of K indicator variables. Now, in some cases one can prove that these indicators are negatively associated, and if they are, it is very easy to get a lower bound: the risk is at least one half whenever μ is smaller than roughly the square root of log(1 + n/K²). I'm not going to show you the proof, but it's really simple; it just comes out of the computation. Can one characterize negative association of these indicators in terms of the class? I don't know — I haven't thought about it — but here is a nice example: spanning trees. In the case of spanning trees this negative association is true; this goes back to Feder and Mihail, who proved that if we take a random spanning tree and look at two edges, the indicators that the two edges are both in the tree are negatively associated. In this case, for the complete graph on m vertices, we have n = m(m−1)/2 coordinates, K = m − 1 because that is how many edges each spanning tree has, and there are lots of spanning trees — m^(m−2) of them, by Cayley's formula. And from what I showed you, we get the corresponding lower bound.
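A quick sketch of the resulting bound under negative association, using the product bound E[exp(μ²Z)] ≤ (1 + (K/n)(e^{μ²} − 1))^K. This particular closed form is my reconstruction of the kind of estimate meant here, not a formula quoted verbatim from the lecture.

```python
import math

def na_risk_bound(n, k, mu):
    """Under negative association of the overlap indicators,
    E[exp(mu^2 * Z)] <= (1 + (k/n)*(exp(mu^2) - 1))**k, so the second-moment
    bound gives R* >= 1 - 0.5*sqrt((1 + (k/n)*(exp(mu^2)-1))**k - 1)."""
    mgf = (1 + (k / n) * (math.exp(mu * mu) - 1)) ** k
    return 1 - 0.5 * math.sqrt(mgf - 1)
```

The bound equals 1 at μ = 0 and stays close to one half as long as μ² is below about log(1 + n/K²), which is how the threshold in the text arises.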
In particular, we get the lower bound: detection is impossible when μ is below a constant. So this is one example in which it seems difficult to prove a lower bound for the expected maximum of the process directly, but with this argument we get that the expected maximum divided by K is lower bounded by a constant. OK, so it's also interesting that in this particular case the likelihood ratio — the optimal test — can be computed efficiently, because there is this nice algorithm of Propp and Wilson, a really simple algorithm to generate a random spanning tree such that the probability of a tree is proportional to the product of the weights of its edges; here the weights are just e^(μ X_e), where X_e is the normal variable on edge e. So in this case there is a fast — well, fast randomized — algorithm to compute the likelihood ratio. Here's another example: perfect matchings. Imagine the complete bipartite graph on m + m vertices — maybe there's a slide about this — with a Gaussian sitting on every edge, and we have to decide whether there is somewhere a perfect matching on which the means are greater. So here n = m² is the number of variables, K = m is the size of each perfect matching, and there are m! of them. In this case the size of the overlap of two random matchings is just the number of fixed points of a uniform random permutation, and that is very well understood; one gets a lower bound: R* is at least one half whenever μ is smaller than a small numerical constant. And the optimal test is also of constant order — the simple averaging test gives a matching constant upper bound. These methods lose constants — because of Cauchy–Schwarz we never get the right constant — but we usually get the right order.
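For the matching example, the overlap distribution can be explored by simulation. This Monte Carlo estimate of E[exp(t·Z)], with Z the number of fixed points of a uniform random permutation (the overlap of two independent uniform perfect matchings of the complete bipartite graph), is an illustrative sketch; the parameters and trial counts are arbitrary.

```python
import math
import random

def overlap_mgf_matchings(m, t, trials=20000, seed=0):
    """Monte Carlo estimate of E[exp(t * Z)], where Z is the number of fixed
    points of a uniform random permutation of {0,...,m-1}.  For large m, Z is
    approximately Poisson(1), so the estimate approaches exp(exp(t) - 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        p = list(range(m))
        rng.shuffle(p)
        z = sum(1 for i, v in enumerate(p) if i == v)
        total += math.exp(t * z)
    return total / trials
```

Because Z is approximately Poisson(1), the moment generating function stays bounded for fixed t, which is why the second-moment bound yields a constant-order threshold for μ here.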
In this case, again, the optimal test amounts to computing the permanent of a matrix with non-negative entries, and there we have the famous result of Jerrum, Sinclair and Vigoda: it can be computed in polynomial time with a randomized algorithm. So that's another case where this complicated likelihood ratio can actually be computed efficiently. There are some other examples, but I will skip them. So here's a problem for all of you who look for hard and maybe hopeless problems. The optimal risk is a decreasing function, right? As μ becomes larger the testing problem becomes easier; the risk starts at one and goes down to zero. Now, is it true that — always, or at least under mild conditions on the class — it goes down quickly; is there a sharp threshold? I mean the following: let μ_ε be the value where the risk equals ε. We look at the location where the risk is already down to ε, and at the location where it is still 1 − ε. Is it true that these two are very close to each other — much closer than, say, the size of the midpoint value? [Question.] No, I don't know, but when K is large I think this should be true. It should be the case that if the class is sufficiently symmetric, then the risk drops sharply from one to zero. We spent some time on it but didn't get anywhere. We looked at sharp-threshold theorems of that kind, but this is a Gaussian problem, so it's not really in that framework; there are notions of geometric influences and such — maybe there's a way, but we couldn't make it work. For the unstructured case I think this is known, because there you can really study what the optimal tests look like, but in general it could be interesting.
So in the second half of the talk we look at a different but closely related detection problem. Again we will observe vectors, but here the alternative hypothesis is not going to be that the mean is shifted — the mean will always be zero — but that some components are correlated. Here is a motivating story: we have n sensors out there in space, and all of these sensors collect signals; the i-th sensor receives some signal Y_{i,t} during the time periods t = 1, …, d. Under the null hypothesis this is all pure noise, everything is standard normal. Under the alternative hypothesis there is a small subset of the sensors that pick up a common signal, embedded — hidden — in additive noise: Y_{i,t} is a common signal plus independent noise, normalized so that the variance is one. [Question about the normalization.] We just normalize so that everything has the same variance; it doesn't really matter, it just makes the math a little simpler. Of course it can matter, because if the variances differed you could detect by simply looking at them, and then the problem would be easy anyway. So if it matters, then it doesn't matter. [Laughter.] You're right. OK, forget the story, look at the model. These Y's are normal with mean zero and variance one, and we want to understand how large d can be — how large it needs to be. Again, the class of possible contaminated sets may have some structure; I'm just going to look at the case where there is no structure — you can tell the same story as before, but let's just suppose all sets of size K are possible.
So here's the same thing written in a slightly different way: we observe d independent n-dimensional vectors. Under the null hypothesis all the components of these vectors are independent, but under the alternative there is a small subset of components that are correlated, with pairwise correlations equal to ρ. So the covariance matrix looks like the identity, except that over some K coordinates the off-diagonal pairwise correlations are ρ. Again, a test takes this whole matrix — the d observed vectors — and, same story as before, we look at the sum of the two types of errors; now we have (n choose K) possible sets. There is a likelihood ratio test that we can write down; it looks ugly, it is a big sum. We can try to obtain lower bounds, but the usual second-moment technique we have been using doesn't work directly. Remember that R* is one minus one half times the L¹ distance between the two densities, and we bounded that via the second moment of the likelihood ratio — but here that doesn't work, because this expected value is infinite. So we have to do something else, and the way to do it is to condition on the value of the common signal — on those hidden Y's. If we condition on that, it becomes something like the shifted-mean problem from before, except that now the means are these Y's; of course these are random, but one can work it out, they behave nicely, and that's how we get the lower bound. Don't look at the exact form, it's not too interesting — just know that there is an inequality of that sort. And if we have a structured class of sets, then again the bound can be expressed in terms of the moment generating function of the exact same overlap random variable as before, and we can get lower bounds
for the optimal risk in terms of the same combinatorial quantity as before. One can read off some immediate inequalities from this, and there are some nice transitions. If d, the number of observations, is much smaller than (log n)/ρ, then the risk cannot go to zero unless K is very large, almost n. But if d is a little bit larger than this — just a constant factor larger — then the risk will go to zero, even for constant K. So below that threshold there is essentially nothing you can do, but a little above it, all of a sudden, detection becomes possible, and that constant-factor upper bound is achieved by a scan statistic. What is the scan statistic here? We go through all possible sets, and within each set we compute the empirical correlations: over all pairs in the set, the sum of the products ⟨X_i, X_j⟩, because under the alternative the expected value of each such product is ρd for pairs within the contaminated set, and zero otherwise. So under the alternative hypothesis, when such a set exists, the statistic should be at least about ρ·d·K(K−1)/2; if we threshold at half of that, we get the upper bound. This of course has to be worked out — we have to understand the concentration behavior of such quadratic forms — but it is not too difficult. [Question.] That's a good question, we haven't looked at that; it's of course a natural question. OK, so here is one way of thinking about this. We have these n vectors — one d-dimensional vector per sensor — and under the null hypothesis their pairwise correlations are zero, while under the alternative hypothesis there exists a small subset of them whose pairwise correlations are larger.
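A toy version of this correlation scan statistic might look like the following. The brute-force enumeration over all size-k coordinate sets and the halfway threshold ρ·d·k(k−1)/4 are illustrative choices of mine.

```python
import itertools
import numpy as np

def correlation_scan(X, k, rho):
    """X is an (n, m) array: n observations of an m-dimensional vector.
    For each size-k set S of coordinates, sum the empirical pairwise inner
    products <X_i, X_j> over i < j in S; reject the null when the maximum
    exceeds half of its mean under the alternative, n*rho*k*(k-1)/2."""
    n, m = X.shape
    G = X.T @ X  # Gram matrix of the m coordinate columns
    best = max(sum(G[i, j] for i, j in itertools.combinations(S, 2))
               for S in itertools.combinations(range(m), k))
    return best > n * rho * k * (k - 1) / 4
```

On an all-zero sample it accepts the null; on strongly positively correlated columns it rejects.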
So we might set up a graph in which these vectors are the vertices, and we draw an edge between two vertices if their empirical correlation is larger than some threshold. Now we look at this graph. Under the null hypothesis these points are random; if I threshold at zero, then the probability of each edge is one half. Under the alternative hypothesis there will be a small subgraph such that the pairwise correlations are larger, so those vertices will be more likely to be connected. This sounds a little like the hidden clique problem — we are trying to find a denser subgraph — but it is not an Erdős–Rényi graph, because now there is geometry behind these points: the normalized vectors X_i/‖X_i‖ are uniformly distributed on the unit sphere in d dimensions. So we have these points on the sphere; under the null hypothesis they are random, and this graph is just a so-called random geometric graph on the sphere, while under the alternative hypothesis there is a small subset of points that somehow tend to be closer to each other. A natural test, then, is to look at the size of the largest clique in this graph. OK, good — so this is a nice object, a random geometric graph, and one can try to understand what happens when the connection threshold is not zero but something else; but let's just concentrate on the case p = 1/2. So again we have these n points on the sphere, and we connect two of them if their angle is less than ninety degrees.
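A minimal construction of this random geometric graph, sampling uniform points on the sphere as normalized Gaussians; the parameter names are assumptions of mine.

```python
import numpy as np

def random_geometric_graph(n, d, seed=0):
    """Sample n i.i.d. uniform points on the unit sphere S^{d-1} (normalized
    standard Gaussians) and connect two vertices when their inner product is
    positive, i.e. their angle is below 90 degrees; each edge then appears
    with probability 1/2 (but edges are not independent for small d)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    A = (X @ X.T > 0)          # boolean adjacency matrix
    np.fill_diagonal(A, False)  # no self-loops
    return A

A = random_geometric_graph(20, 5)
```

The adjacency matrix is symmetric with zero diagonal, and the overall edge density sits near one half.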
Right — that means the inner product is positive: there is an edge between two points whenever their inner product is greater than zero. Now what happens to this graph? Well, if d, the dimension, is really small, then it behaves like a geometric graph; but as d increases, it becomes more and more like an Erdős–Rényi graph, and that's not difficult to see. As the dimension increases — as we add more and more components to these Gaussians — what happens? We can denote this random graph by G(n, d, p), where p = 1/2 is just the probability that two points are connected, n is the number of points, and d is the dimension. So again: we are on the unit sphere in d dimensions, we throw down n points at random, and we connect two points if their angle is less than ninety degrees; this is how we represent the case where the threshold is zero. Now, the inner product of two such points is just a (normalized) sum of independent random variables, and if n, the number of points, is fixed while the dimension increases, one can use the multivariate central limit theorem to show that these inner products converge to normals and become independent. So the distribution of this graph converges to that of an Erdős–Rényi graph — that's what's written here: the total variation distance between the two goes to zero. Now of course you can ask how large the dimension has to be for this distance to be small, and there is recent work of Bubeck, Ding, Eldan and Rácz where they prove that there is a nice threshold: if d is much larger than n cubed then the distance is very small, but if d is smaller than n cubed it is still large. [Question.] This is for p one half, or p a constant; for small p we don't understand anything about these graphs.
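One can watch this central limit behavior numerically: rescaling the pairwise inner products by √d, they should look like independent standard normals once d is large relative to n. A small sketch; the sample sizes here are arbitrary.

```python
import numpy as np

def scaled_inner_products(n, d, seed=0):
    """sqrt(d) times the pairwise inner products of n random unit vectors in
    R^d.  By the multivariate CLT these become approximately i.i.d. standard
    normal as d grows, which is why G(n, d, 1/2) converges to the
    Erdos-Renyi graph G(n, 1/2)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    G = np.sqrt(d) * (X @ X.T)
    return G[np.triu_indices(n, k=1)]  # the n*(n-1)/2 off-diagonal entries

vals = scaled_inner_products(40, 20000)
```

For n = 40 and d = 20000 the 780 rescaled inner products have sample mean near zero and sample standard deviation near one, as the normal limit predicts.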
But for those graphs with p = 1/2, or p a constant, the threshold is at dimension about n cubed. [Question: how do they prove it?] That's not how they do it — they analyze Wishart matrices and do some hard work: for the lower bounds they construct clever tests, and for the upper bounds they analyze Wishart matrices instead. I don't know what can be obtained by the methods here; it would be interesting. OK. But what we are interested in, as I said, for our testing problem, is understanding the clique number, because we construct this graph and we want to decide whether there is signal or not. If there is signal, then we expect a large clique in this graph, because there is a small subset of points that are more correlated and hence clustered together. So in order to do that, we want to understand the clique number of this random geometric graph; in particular, it would be nice to understand how it behaves in terms of the dimension: as the dimension grows, what happens? If the dimension is fixed — two or three or something like that — then the clique number will be large, linear in n: on the three-dimensional sphere, if I throw down n points, there will be many of them with pairwise angles less than ninety degrees — simply, everything that falls in a spherical cap of angular radius forty-five degrees will form a clique. So the clique number is linear when d is fixed — there are cliques of linear size — but when d goes to infinity, it converges to the clique number of the Erdős–Rényi graph, and we know that is only logarithmic in n.
So we wanted to understand what happens as we increase d from constant to infinity: how does the clique number drop from linear to logarithmic? This is important for understanding the testing problem, because it tells us how long we have to wait until we can confidently say there is a signal. And so here is a list of bounds; let me go through what happens. As I said, when d is a constant, and even as d goes to infinity slowly, we have cliques of linear size; it is easy to see that as soon as d goes to infinity the cliques are no longer linear, but if d is much smaller than log n, the same easy argument shows there are still cliques — not quite linear, but almost linear. Then something happens around d of order log² n. We can only prove this for the case p less than one half, but it should also be true for p = 1/2: if d is of order log² n, then the clique number is already polylogarithmic, so there is a big jump between log n and log² n, and it goes down from there. Now, if the dimension is of order log³ n, then it really starts looking like the clique number of the Erdős–Rényi graph, logarithmic in n; and if we go a little bit further, then we really nail down what happens, up to constant factors. And here is a lower bound, for example: it shows that if d is of order log^(2−ε) n, then the clique number is much bigger than anything polylogarithmic. So there is a big jump between log^(2−ε) n and log² n, where the clique number drops from something like n to a power involving a logarithm
down to polylogarithmic around log² n — so there's a big jump there. It also shows — well, here d has to be at least a constant times log n, but if it is a big constant times log n, then we see that the clique number is still polynomial, n to some power. So there seem to be two different kinds of jumps: one around log n, where it goes from nearly linear to a smaller polynomial, and one around log² n, where it goes from polynomial to polylogarithmic; and then it smooths out. So in the remaining six minutes I will prove all of these things. Well — maybe I can say a few words about how we prove this, because it's kind of interesting. We found this theorem of Jung from 1901: every set of diameter at most one can be included in a closed ball whose radius is just a little bit better than the completely trivial bound. And that is enough for us, because we can use it as follows. I'm not going to go through the details, but basically what happens is that a clique is exactly a set of points of a certain small diameter — that's what it means — and so we can include it in a ball, hence in a spherical cap, which is just a little bit better than the trivial one. And now the size of the clique is just n times the empirical measure of such a spherical cap; so the size of the clique is upper bounded by the empirical measure of the spherical cap that contains our clique. But we know what the true measure of such a spherical cap is, because we can just compute it; and then we can use the Vapnik–Chervonenkis theorem, which says that the difference between the empirical measure and the true measure over caps depends on the ratio of the dimension and n. Putting these things together is how we get the bound.
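The true measure of a spherical cap mentioned here can be computed by one-dimensional numerical integration. This helper is a sketch I am adding for illustration: a midpoint rule applied to the sin^(d−2) density, normalized so the whole sphere has measure one.

```python
import math

def cap_measure(theta, d, steps=10000):
    """Normalized surface measure of a spherical cap of angular radius theta
    on the sphere S^{d-1}: proportional to the integral of sin(t)^(d-2)
    for t in [0, theta], normalized by the same integral over [0, pi]."""
    def integral(a, b):
        h = (b - a) / steps
        return h * sum(math.sin(a + (i + 0.5) * h) ** (d - 2)
                       for i in range(steps))
    return integral(0.0, theta) / integral(0.0, math.pi)
```

Sanity checks: a half-sphere (theta = π/2) always has measure one half, and on the circle S¹ the cap measure is just theta/π.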
this slide. OK. Now, these bounds — the ones that apply when d is larger than log³ n, or log⁵ n for the sharper statement — go by the first moment method. So consider the expected number of cliques of size k: in the Erdős–Rényi graph this is n choose k times 2 to the minus (k choose 2). What we can prove is that the expected number of cliques of size k is bounded by n choose k times q to the (k choose 2), where q is some number which is not quite one half: essentially it is the standard normal distribution function evaluated at δ. So if δ is small, then q is close to one half, and δ is small when d is large — that is where the condition on d comes in. And how do you prove such a thing? Well, this is concentration. If you have independent points on the sphere — and these are basically normalized independent Gaussians — we want the probability that, when we throw down k such Gaussians, they are pairwise better than orthogonal. You can do this by induction, doing a kind of Gram–Schmidt orthogonalization at each step and then taking care of the errors; it is a little bit technical, but that is the principle. So one can get an upper bound for the expected number of cliques of size k, and then the first moment method that we saw last time gives an upper bound for the clique number. [Question from the audience.] Right — for smaller d it just falls short: the spherical cap that we get is just everything, so the bound gives nothing. Morally there should be no difference, but this proof breaks. And finally I want to show how we prove a lower bound of this sort.
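As a numeric sketch of the first moment step: by Markov's inequality, the probability that the clique number reaches k is at most the expected number of k-cliques, which is bounded by (n choose k) q^(k(k−1)/2), so the smallest k making this bound drop below 1 is a high-probability upper bound on the clique number. The choice q = Φ(δ) follows the talk's description, but the specific n and δ values are my illustration:

```python
import math

def phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def first_moment_clique_bound(n, q):
    """Smallest k with (n choose k) * q^{k(k-1)/2} < 1.
    By Markov's inequality, P(omega >= k) is at most this expected
    number of k-cliques, so the clique number stays below the
    returned k with high probability."""
    k = 1
    while math.comb(n, k) * q ** (k * (k - 1) / 2) >= 1.0:
        k += 1
    return k

n = 10**6
# q = Phi(0) = 1/2 is the Erdos-Renyi case (delta = 0, i.e. d large)
bound_er = first_moment_clique_bound(n, phi(0.0))
```

With q = 1/2 the threshold comes out on the order of 2 log₂ n, matching the Erdős–Rényi clique number; a larger δ pushes q above one half and the resulting bound degrades accordingly.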
We have this random geometric graph, and we want to show that the largest clique has to be large if the dimension is small. How do we prove such a thing? If I hadn't told you the story of hypothesis testing, this would be hard, but we have all of that machinery. So this is another example where we can draw on our hypothesis testing formulation to say something about a complicated probabilistic quantity. Why? Because if the largest clique were small — suppose that the largest clique under the null model were small — that would mean that the detection problem becomes easy. Because now, when we put in the signal, we induce a large clique, and then these two distributions can be pulled apart very easily. So if the largest clique were small, then there would be a good test; but we have these universal lower bounds for the risk of the best test, which say that this is impossible. So we just put these things together: purely from the impossibility of finding good hypothesis tests, one can prove lower bounds for the clique number of such a graph. The only little ingredient one needs is that, under the alternative hypothesis, the graph does contain a large clique — which holds if μ is sufficiently large, and it does not need to be much larger. So that's it. Everything is written up: you can go to my homepage and open it; all the proofs are there, and these are some references to the results that I mentioned. Thank you. [Question from the audience.] Well, why? Because we are not good enough, not skilled enough. It might work, but we do it by induction, and somehow it might work if you are more clever than us; that would give a bound on the clique number. Apart from that, morally, this should hold.
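The reduction from testing to the clique lower bound can be written in a few lines; this is a paraphrase of the contrapositive argument just described, with k₀ standing for the hypothetical small clique size:

```latex
% Suppose, towards a contradiction, that under the null model the
% clique number were small:
%     P_0\{\omega(G) \le k_0\} \ge 1 - o(1).
% Under the alternative, the planted signal induces a clique of size
% larger than k_0 (the "little ingredient": this holds once \mu is
% sufficiently large).  Then the test
%     T(G) = \mathbf{1}\{\omega(G) > k_0\}
% would have risk
%     R(T) = P_0\{\omega(G) > k_0\} + P_1\{\omega(G) \le k_0\} = o(1),
% contradicting the universal lower bound on the risk of any test.
% Hence \omega(G) must exceed k_0 with constant probability under P_0.
```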
So the conjecture is something like this: up to d of order log n, we have these almost-linear cliques, and then shortly after log n the clique number drops down to n to the ε. So if d is a little bit bigger than log n, the clique number should already be n to the ε — well, it has to be at least that; that comes from the lower bound here. Then between log n and log² n it goes down; up to log² n it is still large, not quite polynomial but much bigger than logarithmic, and then it drops down to polylogarithmic. And yeah, there are holes here, so I don't quite know all of it.