So, what I'll talk about is the following. What we've seen a lot of nowadays are algorithms for high-dimensional problems, and then one tries to prove theoretical properties of these. This talk is about inference, more or less classical inference in the sense of trying to find confidence intervals and tests, and I'll just consider a very simple case. The aim in the end will be to look at the asymptotic normality of estimators in high dimensions; once you have asymptotic normality you can build your confidence intervals and tests.

OK, so let's start on the whiteboard. We have, like on Tuesday (but don't worry, I'm not going to use those results), a response variable Y of dimension n and a design matrix X of dimension n times p, and the linear model Y = X β⁰ + ε. This is the linear model, right, and the vector β⁰ is unknown. I'm going to assume for simplicity that the noise ε is standard Gaussian.

OK, now I'm going to construct a confidence interval for a parameter of interest, a one-dimensional parameter. You can extend this to higher dimensions, and if you have very many such parameters you might use a multiple testing technique, but let's consider a one-dimensional parameter of interest. Say the parameter of interest is the first component of the vector β⁰. Without loss of generality it could be any other component, or it could be several components, but for today we assume that what we're interested in is estimating the first component, β₁⁰.

Now, to place this in a broader context, let's say that the design is random and that the rows of the matrix X are Gaussian. Let the rows of X be i.i.d. copies of some vector X₀ (maybe I should use slightly different notation for it), where X₀ has mean zero and some covariance matrix Σ.

OK, now let's just go back to basics: you can simply do least squares. Then we know what to do, right? The number of parameters is p, with p at most n, let's even say p is fixed, and the rank of X is p. You form the least squares estimator (XᵀX)⁻¹XᵀY, and you know its distribution: β̂ - β⁰, given X, is normal with mean zero and covariance (XᵀX)⁻¹. We're interested just in the first component, which corresponds to the entry of this matrix in the top left corner. So √n times (the least squares estimator of the first component minus the true one) converges in distribution to N(0, Θ₁₁), where Θ is the inverse of Σ and Θ₁₁ its top left entry. This is a limit, of course, assuming the inverse exists; well, the rank of X is p, so it should exist.

[Question from the audience, partly inaudible.] Yes, this is already normalized; the empirical covariance XᵀX/n converges to Σ, so its inverse should converge as well, I think, since p is fixed.

So anyway, now we want to mimic this in high dimensions, p much larger than n. And the main theme of the talk will be: what is the right asymptotic variance to aim for? This Θ₁₁ is the asymptotic variance in the classical case; let's call it the asymptotic variance.
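Written out, the low-dimensional least-squares facts just described are (my reconstruction of the formulas on the whiteboard):
\[
Y = X\beta^0 + \varepsilon,\quad \varepsilon \sim N(0, I_n),\qquad
\hat\beta = (X^\top X)^{-1}X^\top Y,\qquad
\hat\beta - \beta^0 \mid X \sim N\bigl(0,\,(X^\top X)^{-1}\bigr),
\]
\[
\sqrt{n}\,\bigl(\hat\beta_1 - \beta^0_1\bigr) \;\longrightarrow\; N\bigl(0,\,\Theta_{11}\bigr)\ \text{in distribution},
\qquad \Theta := \Sigma^{-1},
\]
since \(X^\top X/n \to \Sigma\) when \(p\) is fixed.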
What, then, is the efficient asymptotic variance for estimators of the first component in the linear model? Well, we have the Cramér-Rao lower bound, and maybe you know it, maybe you don't: in low dimensions, Θ₁₁ = (Σ⁻¹)₁₁ is the Cramér-Rao lower bound, provided you have no assumptions on the parameter, of course. So, as I was saying, you assume the parameter lies in some space; if you make no assumptions, the parameter space is the whole space, and then this is the Cramér-Rao lower bound. Or you just invoke Gauss-Markov: among linear unbiased estimators you cannot do better than the Gauss-Markov estimator.

So you might want to mimic this and show that also in high dimensions you can construct an estimator with this asymptotic variance. What I want to show in this talk, and maybe also start a discussion about, is that in high dimensions we need sparsity, so we do have to assume something about the parameter. In high dimensions it's not the whole parameter space; we assume something like sparsity, meaning that some of the coefficients of the parameter β⁰ are zero. And I'm going to show you that in high dimensions Θ₁₁ is then indeed not the Cramér-Rao lower bound; it's actually a little bit larger than the bound. I'll show this just by examples. In a sense that's good news, you don't have to be so pessimistic, but you also have to be careful: if you construct an estimator which has this asymptotic variance Θ₁₁, you're not supposed to cry out and say it's efficient. You have to be careful there; in the literature you often see people claim efficiency when the estimator is actually not efficient.

OK, just to get an idea of what we can do in high dimensions: we first estimate the parameter, the p-dimensional vector, and I take for instance the lasso; call that estimate β̂. Then I take a one-step Newton-Raphson estimator, which looks like this. Ah, I forgot to write something, sure. So, just to get the intuition, assume for the moment that the matrix Σ is known, and take the estimator b̂₁ = β̂₁ + Θ₁ᵀXᵀ(Y - Xβ̂)/n. And what is this Θ₁? It is the first column (or row, whichever) of the precision matrix Θ = Σ⁻¹: the Fisher information here is essentially Σ, and in a Newton step you take the inverse of the Hessian, so that's really all it is. This estimator is asymptotically normal under some conditions which I'll now put down. It's really easy, you'll see in a moment.

Let me put the lemma up. Lemma: suppose you have a rate of convergence for your initial estimator in ℓ₁ norm, namely ‖β̂ - β⁰‖₁ of small order 1/√(log p). Then √n (b̂₁ - β₁⁰) converges in distribution to N(0, Θ₁₁). And I really should normalize here; I keep forgetting it myself, but everything depends on the sample size, everything changes with the sample size, and if I didn't normalize I'd have to say that explicitly, because the limit wouldn't make sense otherwise; so I do normalize.
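In formulas, the one-step construction and the lemma just stated read as follows (my reconstruction of what was on the board):
\[
\hat b_1 \;=\; \hat\beta_1 \;+\; \Theta_1^\top X^\top\bigl(Y - X\hat\beta\bigr)/n,
\qquad \Theta_1 := \Sigma^{-1}e_1 \ \ \text{(first column of the precision matrix)},
\]
and the lemma: if \(\|\hat\beta - \beta^0\|_1 = o_P\bigl(1/\sqrt{\log p}\bigr)\), then \(\sqrt{n}\,(\hat b_1 - \beta^0_1) \longrightarrow N(0,\Theta_{11})\) in distribution.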
So let's prove this. Proof. By definition, b̂₁ - β₁⁰ is β̂₁ - β₁⁰ plus Θ₁ᵀXᵀ(Y - Xβ̂)/n, and Y is the signal plus the noise, so Y - Xβ̂ equals ε - X(β̂ - β⁰). Then, putting the two β̂ terms together and writing β̂₁ - β₁⁰ as the first unit vector transposed times the whole vector, you can write it as e₁ᵀ(β̂ - β⁰) minus Θ₁ᵀΣ̂(β̂ - β⁰), where Σ̂ = XᵀX/n, plus the term Θ₁ᵀXᵀε/n, which, conditionally on X, is a linear term in the noise.

Now look at the first part; this part is asymptotically negligible, you can forget about it. You can write it as Θ₁ᵀ(Σ - Σ̂)(β̂ - β⁰), because Θ₁ᵀΣ is the first unit vector: ΣΘ₁ = e₁, since Θ is the inverse of Σ. Then you bound it in absolute value by the dual norm inequality: the sup-norm of Θ₁ᵀ(Σ - Σ̂) times the ℓ₁ norm of β̂ - β⁰. Maybe I should write it out, but there's not much space: the second factor is, by assumption, of small order 1/√(log p), and for the first factor you have the maximum of p (well, of some collection of) mean-zero terms, and you know that such a maximum is always of order √(log p / n). So the whole thing is of small order 1/√n, and after multiplying by √n you can forget about it; this term is asymptotically negligible.

Then there is the remaining term; just one final argument, going back up here. Conditionally on X, the distribution of √n Θ₁ᵀXᵀε/n is normal with mean zero and variance Θ₁ᵀΣ̂Θ₁ (I think I have to divide by n somewhere; right, I'm not so sure). Then Σ̂ converges to Σ, and Θ₁ᵀΣΘ₁ is Θ₁ᵀ times the first unit vector, which is Θ₁₁; so this variance goes to Θ₁₁. That's the idea.

OK. Now, this is a very simple result. If you look in the literature, for instance the papers on the de-biased lasso, results like this were proved, not with this condition but with the usual condition, the usual sparsity condition that the sparsity s is of small order √n over log p, as written in those papers. Maybe you still remember, or maybe you were there on Tuesday. [Question about the sparsity condition.] Yes, yes, of small order √n over log p, where s is the sparsity index; s is sometimes written as ‖β⁰‖₀, just the number of non-zero elements of the vector β⁰. So in the literature you find results which say: if I have only this condition, plus some technical conditions which I don't need here, then I can prove this asymptotic normality. The point now is: it's nice that you can prove this (you don't need any conditions on the sparsity of the precision matrix, which is all very nice), but the estimator can be inefficient. So there is some discussion to be had about that; this possible inefficiency is what I want to show.

Just a final remark before the slides. Remark: what is this first column of the precision matrix? You can write it as the vector (1, -γ⁰ᵀ)ᵀ divided by the residual variance you get when you project the first variable on all the others. Here γ⁰ is the vector of projection coefficients; it is a theoretical projection, you project the first variable on all the others, these γ⁰ are its coefficients, and τ², the expected squared residual, is the residual variance. And the top entry of this column, after the normalization, is Θ₁₁ = 1/τ².
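Explicitly, the remark about the first column of the precision matrix is the standard partial-regression identity:
\[
\gamma^0 := \arg\min_{\gamma\in\mathbb{R}^{p-1}} \mathbb{E}\bigl(X_1 - X_{-1}\gamma\bigr)^2,\qquad
\tau^2 := \mathbb{E}\bigl(X_1 - X_{-1}\gamma^0\bigr)^2,\qquad
\Theta_1 = \frac{1}{\tau^2}\begin{pmatrix}1\\[2pt]-\gamma^0\end{pmatrix},\qquad
\Theta_{11} = \frac{1}{\tau^2}.
\]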
Yes, and here is the interpretation. If the projection explains very little, you're left with a large residual variance, and then you're happy, because the asymptotic variance Θ₁₁ = 1/τ² is small: this variable has nothing to do with all the others, τ² is large (close to one after normalization) and Θ₁₁ is small. If instead the first variable is strongly related to the others, τ² will be a small number and Θ₁₁ large.

Also, before going to the slides, let me normalize: assume from now on that the diagonal of Σ consists of ones. It could be less than or equal to one, but let's just say equal to one; it's only a normalization.

OK, now we can go to the slides. So: the linear model, Gaussian noise, and the high-dimensional case, p at least of the order of n, in fact p much larger than n. Right. As you see here, the noise is Gaussian and the covariables X are Gaussian, so everything is Gaussian: we have a (p+1)-dimensional Gaussian vector, and the conditional mean of Y given X is this linear function, in the Gaussian case of course. The covariance matrix of the X's is assumed to have an inverse. We normalize so that the elements of this matrix are bounded (I'm doing some normalization, say the diagonal is all ones), and an important assumption, which I impose throughout, is that the smallest eigenvalue of this matrix stays away from zero.

Here's the lasso; that could be a good candidate for the initial estimator here. Right. And this is the de-biased lasso, which I had written on the board (it's gone now): I just do a one-step Newton-Raphson, only one step. This is the derivative of the loss function, the function you're trying to minimize, and this plays the role of the inverse Fisher information: you take the inverse of the Hessian, so it's really just that.

Now, in the high-dimensional case: on the board this Σ matrix was known; in high dimensions, if it's not known, you have to estimate it, and then it's not so clear what to do, because you cannot just use the empirical covariance matrix: it would have to be inverted. So you have to do something, and that's why I put a tilde here, Θ̃. I'm going to use two choices, or several choices, for this estimator. You can think of it as an estimate of the first column of the precision matrix, but actually what I want to show is that you should not always do that; you should maybe do something else to obtain efficiency. So far in the literature, though, this Θ̃₁ was always considered as some estimator of the first column of the precision matrix.

OK, and this is what we just showed: under this condition you have asymptotic normality with this asymptotic variance Θ₁₁. That's the proof we did.
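As a rough illustration of the plug-in construction on this slide, here is a minimal sketch (not the speaker's code; it assumes scikit-learn's Lasso, a user-supplied tuning parameter, and a known or separately estimated direction vector in place of Θ̃₁):

```python
import numpy as np
from sklearn.linear_model import Lasso

def debiased_first_coef(X, y, theta1, lam):
    """One-step (de-biased) estimate of the first coefficient.

    theta1: direction vector, e.g. the first column of the precision
            matrix if it is known; otherwise an estimate of it.
    lam:    lasso tuning parameter for the initial estimator.
    """
    n = X.shape[0]
    beta_init = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    # one Newton-Raphson step: beta_hat_1 + theta1' X'(y - X beta_hat)/n
    correction = theta1 @ X.T @ (y - X @ beta_init) / n
    return beta_init[0] + correction
```

Under the conditions of the lemma, an approximate two-sided confidence interval would then be the returned value plus or minus 1.96 times √(Θ₁₁/n).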
Now, the question is: what is the best possible asymptotic variance, what is the Cramér-Rao lower bound? You go back to your lecture notes, or you look at what is done in the finite-dimensional case and try to extend it to the high-dimensional case. That involves a little bit of work, because we are talking about triangular arrays (everything depends on the sample size), so you have to be a little careful there. But this is how it goes. You have a model for your parameter; in our case it will be some kind of sparsity assumption.

Then we look at perturbations in a certain direction, normalized by √n; that √n comes from the asymptotics, the usual normalization. We require this direction to stay inside the model class; we're not allowed to go outside the model class. And then you call an estimator T (it's actually a sequence of estimators) regular if the following holds: if the true parameter is not β⁰, some fixed parameter, but something close to it, β⁰ plus a small perturbation, then the estimator is still asymptotically normal with the same variance. So that means: you have some fixed β⁰; if this is not exactly the truth, but you're in a small neighborhood of it, then you're still fine. This goes beyond pointwise asymptotics; it's something like asymptotics holding uniformly over a small neighborhood. You want some uniformity in a neighborhood, and if you don't have that, you're also in trouble in practice, because then you will see in simulations that your asymptotic distributions are not valid at finite sample sizes. So this is something to realize: it is really a very natural, necessary requirement that the estimator be regular; it's something like asymptotic unbiasedness. I'm going to restrict myself now to regular estimators, because these are the ones which also work well in practice. You have to assume something, because otherwise you can always get super-efficiency and things like that.

And then comes the Cramér-Rao bound. Suppose the estimator is asymptotically linear; that means it is approximately an average, with a remainder term of smaller order than 1/√n, so that the central limit theorem applies. The summand is called the influence function; I assume it has mean zero and finite variance, I assume the variance stays bounded (of order one) as n goes to infinity, and I assume a Lindeberg condition so that the central limit theorem holds. Given asymptotic linearity and regularity of the estimator, you then have the Cramér-Rao bound: the asymptotic variance is at least this maximum, and it is important to note that it is the maximum over directions that stay within the model, of the expression there, squared. If you look more carefully, it is something like this, what's written there: you're trying to do the regression of the first variable on all the others, but the coefficients have to be such that you stay within the model class. So if you have an estimator whose asymptotic variance is exactly of this form, for some direction within the model class, then you know it's efficient. That's a tool to prove efficiency.

Now let's look at the next example. Suppose the model is that the underlying parameter is sparse, meaning that the number of non-zero coefficients is at most some value s. Let S₀ be the active set of the coefficients, of size s₀, so s₀ is smaller than s. Now, if you additionally have sparsity of the precision matrix (that's what this says), then look here: here are these projection coefficients γ⁰, and suppose they are sparse, in fact most of them are zero; then this direction stays within the model class, so you have efficiency, and the asymptotically efficient variance is Θ₁₁.
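In symbols, the lower-bound framework described above is roughly the following (my notation, simplified):
\[
\sqrt n\,\bigl(T_n - \beta^0_1\bigr) \;=\; \frac{1}{\sqrt n}\sum_{i=1}^n l(X_i,\varepsilon_i) \;+\; o_P(1),
\qquad \mathbb{E}\,l = 0,\quad \mathbb{E}\,l^2 = V < \infty,
\]
regularity meaning that the same \(N(0,V)\) limit holds under local alternatives \(\beta^0 + h/\sqrt n\) whenever \(\beta^0 + h/\sqrt n\) stays in the model, and the resulting lower bound being, roughly,
\[
V \;\ge\; \Bigl(\,\min_{\gamma \in \Gamma_n}\, \mathbb{E}\bigl(X_1 - X_{-1}\gamma\bigr)^2\Bigr)^{-1},
\]
where \(\Gamma_n\) collects the regression coefficients corresponding to directions that stay within the model class. With no restriction the minimum is \(\tau^2\) and the bound is \(\Theta_{11}\).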
So under sparsity assumptions on the precision matrix, this procedure indeed gives an asymptotically efficient estimator. If, on the other hand, the first variable, the one whose coefficient we're trying to estimate, is active, and under some growth condition (you have to think about this a little bit), and if you are, so to speak, at the boundary of the parameter space, then the asymptotically efficient variance is the Θ₁₁ you would get if you knew which variables are active.

That is, in a sense, bad news, because it means you will not be able to reach that Cramér-Rao lower bound: you don't know which variables are active, and you cannot mimic that knowledge; at least, that's my conjecture. Of course you can mimic it under additional assumptions, but then you change your parameter space, the Cramér-Rao bound changes as well, and you end up going around in a loop; you're cheating. So this does not appear to be something that can be achieved. Although people do it: what people do is run the lasso, throw away the variables whose coefficients are estimated as zero, refit, and then behave as if they knew the exact set of active variables, and say: OK, I have this asymptotic variance. From this efficiency point of view, if you're honest, that is not a good idea; it's not a good procedure. And more generally, this is just to point out that with this kind of sparsity assumption, this Cramér-Rao lower bound may not be achievable as soon as the sparsity pattern is unknown. Here it may be achievable, if γ⁰ is sparse; if it is not sparse, it's probably not achievable.

Now let's assume a weaker form of sparsity: the ℓ₁ norm of the coefficient vector is bounded by something like √s, and the true parameter stays away from the boundary, meaning its ℓ₁ norm is not exactly √s but a little bit smaller. Then the Cramér-Rao lower bound comes from minimizing this residual variance: you try to project X₁ on all the others, but the coefficients are constrained in ℓ₁ norm, by something of order √s, and there is also a √n because of the normalization: the directions are normalized by √n, and that √n appears here as well. So you minimize this under an ℓ₁ restriction; it's more of a lasso-type quantity. There you are really projecting, and here you are not doing the exact projection, because you have an ℓ₁ norm restriction.

And so, if you want to improve over Θ₁₁, the true projection, this γ⁰ vector (that's the one that matters), must be very non-sparse: its ℓ₁ norm should be of larger order (of order √n or more, after the normalization), because if it were of smaller order, the constrained projection is just the standard projection and you gain nothing. So that's my plan: I want to show that if this γ⁰ is not sparse, Θ₁₁ is not the Cramér-Rao lower bound. Yes, exactly.
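For this ℓ₁-sparsity model, the constrained projection just described looks roughly as follows (schematic; the exact constants and the normalization are on the slide):
\[
V \;\ge\; \Bigl(\,\min_{\|\gamma\|_1 \le c_n}\, \mathbb{E}\bigl(X_1 - X_{-1}\gamma\bigr)^2\Bigr)^{-1},
\]
where the constraint level \(c_n\) involves the slack between \(\sqrt s\) and \(\|\beta^0\|_1\) and a factor \(\sqrt n\) coming from the normalization of the directions. If \(\|\gamma^0\|_1 \le c_n\), the constrained and unconstrained projections coincide and the bound is again \(\Theta_{11}\); only when \(\gamma^0\) is very non-sparse can the bound be strictly smaller than \(\Theta_{11}\).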
And then there is the case in between; it gets a little bit technical, but I think it's interesting: a weak form of sparsity, ℓ_r sparsity, where the exponent r is something fixed between zero and one, say one half. This generalizes the two cases: there r equal to one, there r equal to zero, and here r between zero and one. But if r is not zero, you get the same phenomenon again: when you try to project, you have to restrict yourself to vectors whose ℓ_r norm satisfies this condition (I think there's actually a small mistake on the slide, it should be the r-th power there).

OK. Finally, one more piece of notation. I have a sequence of random variables whose distribution depends on the parameter, and I want to state that something holds uniformly in the parameter, because if you want to construct confidence intervals you need that, right? It's not just for one parameter but for all of them, so there's a supremum over the parameters outside the probability.

OK, so γ⁰ is the projection of X₁ on all the others, and this is just the standard formula for a projection, (XᵀX)⁻¹XᵀY so to speak, but in disguised form because of the population notation: the covariance matrix of all the other variables, inverted, times the cross-covariance with the first; the minus-one index means you leave out the first variable.

Now let's start with the case where Σ is known. I'm going to assume that we have what I'll call an eligible pair. That is, so to speak: γ⁰, the true projection, is the object which may not be sparse, and I'm going to approximate it by something which is sparse in this ℓ₁ sense. Here λ♯ is something small, depending on the situation, and γ♯ is something sparse; if this inequality holds, the pair (λ♯, γ♯) will be called an eligible pair. For such a pair we can sort of mimic the first column of the precision matrix, like here: you take one and minus the sparse approximation γ♯, and divide by a residual variance, but now a sparse version of it. So this denominator is the analogue of the residual variance: essentially the expected squared residual E(X₁ - X₋₁γ♯)², with another quadratic term subtracted, as written there; it's similar to the expression up there.

Now I'm going to do sample splitting. That's just to avoid an unnecessary condition and to keep the proof simple; you can do it without sample splitting, but then you need additional technical assumptions. So, as an existence proof, let's do it with sample splitting: split the sample in two. On the first half you compute your lasso initial estimator, for instance, and on the second half you use the X and Y variables for the de-biasing step; then you reverse the roles, lasso on the second half and the X and Y variables from the first half. So you have two de-biased estimators, each of the form that was written here, and I average them. Then you have the following result: suppose these lasso estimators converge fast enough, in prediction error and in ℓ₁ norm, so to speak. Then you have asymptotic linearity, uniformly in the underlying parameter, and asymptotic normality (I forgot the √n in front here), and the asymptotic variance is now this Θ♯₁₁, not Θ₁₁; and this asymptotic variance is, well, not always, but typically smaller than Θ₁₁.

If you look at the eligibility condition, there is nothing that prevents you from taking λ♯ equal to zero: in that case you just take γ♯ equal to γ⁰, the condition is automatically satisfied (it's just zero), and Θ♯₁₁ is simply Θ₁₁.
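A minimal sketch of the sample-splitting scheme described above, illustrative only, with the direction vector (for instance (1, -γ♯)/τ♯² in the known-Σ case, or an estimate of it) supplied by the user:

```python
import numpy as np
from sklearn.linear_model import Lasso

def debias_half(X_fit, y_fit, X_eval, y_eval, direction, lam):
    """Lasso on one half of the sample, one-step correction on the other half."""
    n_eval = X_eval.shape[0]
    beta_init = Lasso(alpha=lam, fit_intercept=False).fit(X_fit, y_fit).coef_
    return beta_init[0] + direction @ X_eval.T @ (y_eval - X_eval @ beta_init) / n_eval

def crossfit_debiased(X, y, direction, lam):
    """Average the two half-sample de-biased estimates (roles reversed)."""
    n = X.shape[0]
    half = n // 2
    b_first = debias_half(X[half:], y[half:], X[:half], y[:half], direction, lam)
    b_second = debias_half(X[:half], y[:half], X[half:], y[half:], direction, lam)
    return 0.5 * (b_first + b_second)
```

The point of the splitting is simply that the initial estimator is independent of the data used in the correction step, so one can condition on it in the proof.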
So λ♯ = 0 is a special case: there are then no conditions any more on that ℓ₁ norm, you don't need this condition, and the result is just the usual one. But of course I'm interested in the case where λ♯ is not zero, maybe very small, because then you may improve on the usual result by getting a better asymptotic variance.

Now look at this error term in the expansion: it is linear in these variables, so the estimator is asymptotically linear, and it is regular. That means the Cramér-Rao lower bound applies: this estimator falls within the class for which the lower bound holds. So if I can show for this estimator that its asymptotic variance matches the bound, then this Θ♯₁₁ is the asymptotically efficient variance.

Let's look at the next example. Suppose you have ℓ₀ sparsity, so the number of non-zero coefficients is bounded by s, and s satisfies the usual condition that it is of small order n over log p, the usual sparsity condition. Then you get the standard rate of convergence for the prediction error (so that goes to zero by assumption) and the standard rate for the ℓ₁ norm, from standard lasso theory. And then you want this rate times √n λ♯ to go to zero; the √n cancels against the rate, and this gives the condition on λ♯. If instead you only have ℓ₁ sparsity, then again the ℓ₁ norm should not be too large; with only ℓ₁ sparsity you get consistency but not the fast rates, only the so-called slow rates, and then you need really strong assumptions on λ♯.

And if γ♯, the sparse approximation of the true projection γ⁰, is sufficiently sparse, we see that the Cramér-Rao lower bound is achieved; and it is better than Θ₁₁ if the true projection coefficients are very non-sparse, that is, if their ℓ₁ norm is of large order. You can extend this to ℓ_r sparsity with r between zero and one; here is a little table.

So, for known Σ: this column is ℓ₀ sparsity; the conditions on s are the usual conditions; the conditions on λ♯ are as shown; and this condition holds by assumption for an eligible pair. In each case: if this holds you have asymptotic normality, and if this holds you have asymptotic efficiency. Asymptotic efficiency here is in quotes, well, not really, because the bound depends on the constants defining the model class and also on the true β⁰; here I only need the true β⁰ to stay away from the boundary. So this is a very nice situation; that other situation is not so nice. If you look more carefully, you see that things sort of fit together: the condition here, the sparsity of the sparse approximation of the projection, is of order s, and this s is that s; there is only a log term here and no log term there; here it is √n and there it is √n, with no log gap, so they match up.
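For reference, the standard lasso rates invoked in the ℓ₀ example above are, under the usual conditions (a restricted-eigenvalue-type condition and a tuning parameter of order √(log p / n)):
\[
\frac1n\bigl\|X(\hat\beta-\beta^0)\bigr\|_2^2 \;=\; O_P\!\Bigl(\frac{s\log p}{n}\Bigr),
\qquad
\bigl\|\hat\beta-\beta^0\bigr\|_1 \;=\; O_P\!\Bigl(s\sqrt{\tfrac{\log p}{n}}\Bigr),
\]
so requiring \(\sqrt n\,\lambda_\sharp\) times such an error to vanish leads to a condition of the type \(\lambda_\sharp = o\bigl(1/(s\sqrt{\log p})\bigr)\) (up to constants; the precise form is in the table).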
For r between zero and one the picture is the same: what you see are conditions on λ♯, and they are the same here as in those conditions, except that between asymptotic normality and asymptotic efficiency there is a logarithmic gap. So for asymptotic efficiency you need a little bit more, but the extra you need is only logarithmic; it's only this log term which is not present there. OK.

Now, I don't think I have very much time, so just quickly: what is this sparse approximation of the projection? The projection itself is not sparse; we make a sparse approximation, and the case we consider is where you have a sparse vector and the true projection is like a least-squares version of that sparse one. If you have a sparse regression model and you do least squares without any sparsity assumption, the estimator will not be sparse, and that is more or less the structure here: γ♯ is sparse, I do the projection, more or less (I skip the details), and I get a restricted least-squares type of projection; then this approximation error is almost zero, and the resulting γ⁰ will not be sparse. So if this holds, and this extra variance term does not go to zero, then Θ₁₁ will be strictly larger, or at least the difference between the two stays away from zero; asymptotically there is a non-vanishing improvement over Θ₁₁. So this is saying: I have examples, namely these types of pairs which I call eligible, for which there is an improvement over Θ₁₁. No, I don't have a general result telling you exactly what can be achieved. Here is some of the computation, too technical for this late hour, but if you do the calculations you get an improvement over Θ₁₁; the price is that the true projection has to be non-sparse.

OK, now the case where the matrix Σ is unknown. We do something which everybody already does: the nodewise lasso. So γ⁰ is the vector of projection coefficients of X₁ on all the others in the theoretical sense; you cannot compute it, because you don't know Σ, so you replace Σ by Σ̂: you project the first column of X on all the other columns, and you use the lasso for this, because a straightforward projection is useless here. Due to the high dimensionality, one variable is always an exact linear combination of all the others, so the plain projection brings nothing; there is no residual left. So you have to regularize, for instance with the lasso, as listed here. That gives you a γ̂, and then the de-biased estimator; let me go back to where it is. Here, so here you have your freedom: this is the general notation, with known Σ we replaced it by the pair (λ♯, γ♯), and with unknown Σ we replace it by Σ̂ and the nodewise lasso γ̂. That's what people do in practice, because in practice the covariance matrix is of course not known.

OK, so that's what we do here: you take the vector with a one and minus the lasso coefficients γ̂, divided by the residual sum of squares plus a term; this is quite natural if you look at the formulas. I thought I would explain it here, but it is more or less the residual sum of squares as before, now for the lasso. So this is the de-biased estimator with the nodewise lasso.
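A minimal sketch of this nodewise-lasso step, again illustrative only; the tuning parameter is user-supplied, and the denominator here is a plain residual variance, whereas the slide uses a slightly different, lasso-corrected version of it:

```python
import numpy as np
from sklearn.linear_model import Lasso

def nodewise_direction(X, lam):
    """Estimate the direction (1, -gamma_hat) / tau_hat^2 by lasso-regressing
    the first column of X on all the other columns."""
    n = X.shape[0]
    x1, x_rest = X[:, 0], X[:, 1:]
    gamma_hat = Lasso(alpha=lam, fit_intercept=False).fit(x_rest, x1).coef_
    resid = x1 - x_rest @ gamma_hat
    tau2_hat = resid @ resid / n  # simple residual variance
    return np.concatenate(([1.0], -gamma_hat)) / tau2_hat
```

The resulting vector can then be used as the direction in the one-step correction sketched earlier.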
Now assume this condition, the same type of condition as before, consider an eligible pair, and add some technical conditions. Then you get, well, not quite asymptotic linearity, there is a small point here: you get this expansion with a remainder term of small order, but the expansion does not give asymptotic linearity, because the leading term is a random object; it is not a fixed linear function, not an average, it has a random part. Still, by Slutsky's theorem, you get asymptotic normality, and the asymptotic variance is this estimated Θ♯₁₁.

OK, so this is something, depending on what you want. Our Cramér-Rao lower bound assumed that the estimator under consideration is asymptotically linear; so if we want to say something about asymptotic efficiency here, we have to strengthen the assumptions so that we do get asymptotic linearity. That's what this says: if you assume that your lasso estimator of γ converges fast enough, small order one over √(log p), then you have asymptotic linearity, and then you can say: OK, the Cramér-Rao lower bound applies. Now, I wrote this kind of assumption in the paper, and the referee said: you don't need that assumption, because you have a Slutsky argument. Yes, but I'm trying to prove asymptotic linearity, and for that I do need the assumption. So that's a little discussion of why you would want asymptotic linearity at all: it's for exactly this situation, where I have my Cramér-Rao lower bound and otherwise I don't know what to do with it.

OK, I'm running out of time, so let me just give you the table. This is for unknown Σ, and for unknown Σ you need a sparsity assumption; for instance, with ℓ₀ sparsity, s should be of small order √n over log p. So again, this is an assumption. My referee said you don't need that assumption, look at the other papers; but this assumption is (almost; there is a big-O versus small-o issue, I think) a necessary assumption if you want to prove asymptotic unbiasedness of your estimator. There is no way around it: it is genuinely a necessary assumption for asymptotic unbiasedness, as soon as you start thinking statistically, not pointwise but uniformly. For the case of known Σ, no; but if Σ is unknown, yes, you do need this condition.

Then the other rates, and you see again: one set of conditions for asymptotic normality, via the asymptotic linearity, and then for asymptotic efficiency you need to assume additional things; you need to stay within the parameter space, and that's this kind of assumption. With Σ unknown I have to assume that my weak-sparsity description really has r less than one; for r equal to one you don't get anything any more. You can see it happening here: if r is equal to one, you can read off that s should be of order one over log p, and that makes no sense; it's just not good enough. So ℓ₁ sparsity, which is of course a very weak form of sparsity and is all you would like to assume, does not work in this case. For r less than one you get asymptotic normality, and efficiency under these conditions; if you compare the two sets of conditions, neither one is a subset of the other, so it gets a little bit messy there. OK, let me see.
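Spelled out, the extra requirements for the unknown-Σ case mentioned here are of the following type (schematic; the precise statements are in the table on the slide):
\[
\|\hat\gamma - \gamma^0\|_1 = o_P\bigl(1/\sqrt{\log p}\bigr)\ \ \text{for asymptotic linearity},
\qquad
s = o\bigl(\sqrt n/\log p\bigr)\ \ \text{in the }\ell_0\text{-sparse case},
\]
and for the weak-sparsity version the exponent \(r\) must be strictly less than one; at \(r = 1\) the requirement degenerates to \(s\) of order \(1/\log p\), which is useless.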
Yes, just very quickly some simulations, done by Francesco Ortelli, so that you can see this really shows up in simulations. In this setting Θ₁₁ is five and Θ♯₁₁ is one. The first parameter is equal to one, and the number of unknown parameters and the number of active parameters are varied. You see that, at least when there are more active variables, the difference doesn't show at small sample sizes, but for larger sample sizes you indeed see that the variance is smaller than the five you would get with the other procedure. And if the true first parameter is not active, well, you use the lasso as the initial estimator, so you're sort of profiting from that, and then already from the start the variance is quite good: according to the theory it should be one, and you see the empirical variance is already quite close to that. OK, thank you very much.

[Question from the audience, largely inaudible, about the sample splitting.] It is a standard thing in this kind of theory. You see, you have some, maybe it's written here somewhere, if you look at this expression here: if this coefficient vector were fixed rather than random, you would just have a linear combination, an average, and you know how an average behaves, like one over √n. But here you have random coefficients, and the trick is of course: I take them from the other half of the sample, so they are random but independent of what is going on here, and then I can condition on them and treat them as if they were not random, and then you just have an average again. Because of the sample splitting you split the sample in half, so you might say: OK, I lose half of the sample; that's why I do it in both directions and average, so as not to lose efficiency. Keeping only one half would certainly not be a good idea for the variance.

[Another question, largely inaudible.] It depends on the coefficients of the linear form: if that vector is sparse you're fine; otherwise you're in trouble. OK, thank you, thank you very much.