[00:00:10] >> OK, thanks a lot for the introduction. So this is my fourth talk; there is no optimization in today's talk. It is mostly about a spectral method, and in particular about something that most people don't do in practice, but let's see whether there is anything interesting there. As I have mentioned multiple times during my last three talks on nonconvex optimization, spectral methods are heavily used as an approach to get a rough estimate when you know there is low-rank structure. For example, suppose you have a data matrix [00:00:59] that in expectation has some kind of low-rank structure and is then perturbed by some kind of random noise. Oftentimes we would like to use a so-called spectral method to get a reasonable estimate: one way is to compute the eigendecomposition of the data matrix, which reveals some of the [00:01:22] underlying structure, and based on that we can hopefully denoise and recover the key information. Today I'm only going to focus on one special thing, namely methods based on the eigendecomposition. Of course you can also do an SVD there, but I have something more interesting to say if I focus on the eigendecomposition. [00:01:55] Usually, if you want to perform an eigendecomposition, you assume that the data matrix is symmetric. What I'm going to talk about today is what happens if the data matrix we have is actually non-symmetric, and this is for two different reasons. One reason is that, somewhat surprisingly, there are gains compared to the case when you just run an SVD; [00:02:27] the other reason is that for some applications you really have to perform an eigendecomposition with respect to a non-symmetric matrix. I'm going to use two recent stories to illustrate these two points, and they will be the main content of this talk. OK, so the first part is joint work with my student, who graduated from Peking University and is now a PhD student in Stanford statistics, and with my colleague. This part is about why we would even think about performing an eigendecomposition if we have an asymmetric matrix. [00:03:07] So here is the model, an extremely simple one. I have a rank-one matrix M*, equal to λ* times u* u*ᵀ, where λ* is the eigenvalue and u* is the eigenvector; for concreteness take it to be n by n. It is corrupted by some noise: H is my noise matrix, and what I observe is the superposition M = M* + H of the ground truth and the perturbing noise matrix. [00:03:42] My goal is twofold: I would like to estimate λ*, and I also want to estimate the eigenvector u*. This is a very classical problem. Now, what is the special thing here? [00:04:02] I'm going to assume that the entries of H are generated independently, so H_ij and H_ji are not the same; they correspond to completely different measurements. Since H is then an asymmetric matrix, M itself is asymmetric.
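In symbols, the model is the following (a schematic reconstruction; the display is mine, pieced together from the talk):

```latex
M \;=\; M^\star + H,
\qquad
M^\star \;=\; \lambda^\star\, u^\star (u^\star)^\top \;\in\; \mathbb{R}^{n\times n},
```

where the entries of H are independent and zero-mean, so that in general H_ij and H_ji differ and M is asymmetric even though M* is symmetric. The goal is to estimate λ* and u* from the single observation M.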
In general, I want to say something for this particular setup. You might wonder why I would even care about an asymmetric noise matrix: sometimes you happen to observe two samples for each of the entries, and one thing you can do is arrange them into an asymmetric matrix so that it captures all of the observations you have. So this is the model. [00:04:52] Now, given this very simple model, what are you going to do in terms of eigenvalue and eigenvector estimation? The first natural thing that comes to mind is: since this is an asymmetric matrix, let's run an SVD. So, a singular value decomposition: let's use the leading singular value of M to estimate λ*, and use, for example, the leading left singular vector [00:05:24] to estimate u*. You can also use the leading right singular vector; it doesn't matter that much. This is the very natural thing that comes to most people's minds. But I'm going to focus on a completely different thing: I'm not going to compute singular values, I'm going to perform an eigendecomposition and then try to see whether there is anything interesting there, even though this is a much less popular approach. Why is it less popular? Because people usually think the eigendecomposition becomes numerically less stable if you are operating upon asymmetric matrices. [00:06:06] [Audience question: why not symmetrize the matrix first?] That's a great point, and it turns out that that approach performs roughly similarly to the singular value decomposition; I can explain why later. But the key thing is that I do not want to symmetrize: if I don't symmetrize, you can actually gain something, which I'm going to illustrate. [00:06:33] For asymmetric matrices, the SVD is numerically much more stable compared to the eigendecomposition, and people usually prefer it in part because of this. Also, there is a folklore theorem, not really a theorem but a common belief, that the SVD and the eigendecomposition do not differ too much, so maybe, [00:07:01] even though the SVD is numerically better behaved, the statistical performance is roughly the same either way. If that were true, why would you ever want to think about the eigendecomposition? So: shall we always prefer the SVD over the eigendecomposition? That is the question. [00:07:25] In order to answer it, let me start with some experiments, beginning with an extremely simple one: H is generated as an iid Gaussian matrix, that is, each entry of H is iid Gaussian with the same variance. Let me then compare the estimation accuracy. If you look at the accuracy of the leading singular value of M, this is my red curve as I change the dimension of the problem. [00:08:03] If you instead compute the leading eigenvalue and compare it with the ground truth, the performance is the other curve plotted here. You can see there seems to be a large gap, and in particular, as the dimension of the problem increases, the gap in estimation accuracy seems to grow. [00:08:28] This is somewhat counterintuitive, so let me put another curve here: if I rescale the singular-value curve by 2.5 over the square root of n, you can roughly see that it matches the eigenvalue curve.
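Here is a minimal numpy sketch of this experiment; the signal strength lam, noise level sigma, dimensions, and trial count are illustrative choices of mine, not the exact setup on the slides, but they show the same qualitative gap:

```python
import numpy as np

rng = np.random.default_rng(0)

def eig_vs_svd_error(n, lam=1.0, sigma=0.01, trials=10):
    """Compare |leading eigenvalue - lam| with |leading singular value - lam|
    for M = lam * u u^T + H, where H has iid Gaussian entries."""
    err_eig, err_svd = [], []
    for _ in range(trials):
        u = rng.standard_normal(n)
        u /= np.linalg.norm(u)                           # unit-norm ground-truth eigenvector
        M = lam * np.outer(u, u) + sigma * rng.standard_normal((n, n))
        eigvals = np.linalg.eigvals(M)
        lam_eig = eigvals[np.argmax(np.abs(eigvals))]    # dominant eigenvalue of the asymmetric M
        lam_svd = np.linalg.svd(M, compute_uv=False)[0]  # leading singular value
        err_eig.append(abs(lam_eig.real - lam))          # dominant eigenvalue is essentially real here
        err_svd.append(abs(lam_svd - lam))
    return np.mean(err_eig), np.mean(err_svd)

# The gap between the two estimates grows with the dimension n.
for n in (250, 500, 1000):
    e_eig, e_svd = eig_vs_svd_error(n)
    print(f"n={n:5d}  eig error={e_eig:.2e}  svd error={e_svd:.2e}  ratio={e_svd / e_eig:5.1f}")
```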
So while this is certainly not a precise statement, the empirical observation is that the leading eigenvalue seems to perform much better than the leading singular value for estimating λ*. [00:08:58] Now you might say: maybe this is because of the Gaussianity, or because the entries have identical variance. So let's look at another experiment in which the noise matrix is no longer iid; in fact its variance is location-dependent. I'm looking at the matrix completion problem, which I can [00:09:33] also view as a special case of the model I have: what I observe is a subset of the entries of M*, and after some proper rescaling I get my data matrix M. Now do the same thing and run the experiment: if you compute the leading singular value, you get this red line; if you compute the leading eigenvalue, you get the other curve, which again does much better. [00:10:03] [Audience question: the eigenvalues of an asymmetric matrix can be complex, so which one do you take?] For this particular problem I can show that the leading eigenvalue is real. In this rank-one case the argument is very simple: complex eigenvalues of a real matrix arise in conjugate pairs, so if the leading eigenvalue were complex, [00:10:35] its conjugate would have the same magnitude, both would dominate all the other eigenvalues, and so they would have to coincide; hence the leading eigenvalue is real. That is special to this case; in general you may [00:11:00] work with the magnitude, and then you can project back to the real axis. It's a great point. OK, so now the question is why the eigendecomposition works so much better. Let me be careful: I'm not claiming it is always better than an SVD-based approach; at least for iid noise there is a way to correct the bias of the SVD to make it almost optimal in some settings. But in general I would like to argue that the vanilla eigendecomposition, without any correction, can be very useful for statistical purposes. So here is the problem setup. [00:11:37] M is my data matrix; this is just to remind you of my model, and let's assume the true eigenvalue λ* is 1 for simplicity. For the noise matrix I make the following assumptions: each of the entries is generated independently with zero mean, and the variances are upper bounded by some σ². They might not have identical variance across all entries, but they share a common upper bound. [00:12:07] There is also a magnitude bound: the entries are, either deterministically or with high probability, bounded in size by some B, which can be very large. Now I put in another assumption, which is common in the matrix completion setup: incoherence. Basically it says that no single entry of u* carries too much of the energy; [00:12:39] the energy of u* is spread out across all the entries. So this is the model. Now let's look at what classical linear algebra tells us. If you estimate λ* by the leading singular value, Weyl's inequality tells you that the deviation is upper bounded by the size, meaning the spectral norm, of the noise matrix; that is the classical result. For the non-symmetric eigenvalue case there is a very similar perturbation bound that roughly gives you the same thing for eigenvalues. [00:13:31] So both deviations are bounded by the spectral norm of H, and since my noise matrix has independent entries,
I can use a matrix Bernstein inequality to get an upper bound on that spectral norm. [00:14:01] All of these classical results are roughly correct up to log factors; in fact the Bernstein bound is a very good estimate of the norm of H up to a log factor, so this is not bad. These are the bounds we start from. Now it turns out that the first bound, the one for the leading singular value, is reasonably tight up to log factors provided the size of the noise matrix H is sufficiently large; [00:14:29] if it is very small, then you can get somewhat better bounds, and in that regime this is almost optimal. But the second one, the eigenvalue bound, can be significantly improved even when the size of H is fairly large, and this is the main thing I would like to illustrate. [00:14:55] So here is our result. It says that, with high probability, the leading eigenvalue of the data matrix obeys a bound that carries an extra factor of the square root of μ over n compared to the classical linear algebra result, where μ is the incoherence parameter and n is the dimension. [00:15:20] So when μ is very small, this says you get a huge gain: in comparison to the SVD result, the eigenvalue approach can have a benefit of up to a square-root-of-n factor. [00:15:47] This is something that, when I first saw it, was quite unexpected to me. And we can do many more things. In addition to the eigenvalue perturbation, we can get, for example, entrywise eigenvector perturbation bounds: every single entry of the computed eigenvector is close to the corresponding entry of the ground truth, at a rate that is almost like one over the square root of n times the norm-level error, which is essentially the best you can hope for; this is almost tight. [00:16:11] If you compare with the classical bounds, this is a significant improvement: we obtain an ℓ∞-type control rather than just an ℓ2-type control. What this is saying is that the entrywise eigenvector perturbation can be very well controlled: the error is going to be spread across all of the entries; it is not going to be localized, but rather fairly spread out. [00:16:54] For this particular entrywise result, the eigendecomposition and the SVD do not really show a difference. I should also say that this kind of result turns out to be useful for inferential purposes, and we can do much more than that. We are not just able to control the entrywise perturbation; [00:17:23] for any fixed direction a, if you project the eigenvector perturbation onto that fixed direction, you are able to control the size of the perturbation along it, with a bound that looks like the one on the slide. For most directions a this is also optimal up to log factors. [00:17:51] In other words, the perturbation of an arbitrary fixed linear form of the leading eigenvector can be well controlled. Our recent work goes one step further and allows us to make inference, basically to build confidence intervals for aᵀu*. That is ongoing work, so I'm not going to say too much about it, but it is a starting point that allows us to say a lot more than classical matrix perturbation theory does.
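Schematically, the eigenvalue contrast above looks as follows; this is my reconstruction from the talk, stated only up to constants and log factors, not the theorem's exact form:

```latex
\underbrace{\bigl|\lambda(M) - \lambda^\star\bigr| \;\le\; \|H\| \;\lesssim\; \sigma\sqrt{n}}_{\text{classical Weyl/Bernstein-type bound}}
\qquad\text{vs.}\qquad
\underbrace{\bigl|\lambda(M) - \lambda^\star\bigr| \;\lesssim\; \sigma\sqrt{n}\cdot\sqrt{\mu/n} \;=\; \sigma\sqrt{\mu}}_{\text{this talk, asymmetric } H}
```

so for an incoherent u* (μ polylogarithmic in n) the leading eigenvalue beats the generic bound by a factor on the order of the square root of n.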
[00:18:25] OK, so let me try to explain a little bit of the intuition about why the eigendecomposition is expected to perform better compared to the SVD. First of all, for either the leading singular value or the leading eigenvalue, there is some kind of Taylor-type series expansion of the estimate around the truth. What I write here is not fully accurate, because I'm leaving out some of the coefficients, but the coefficients are not going to affect the result by much, so it is safe to drop them; let's just pretend this is the right expansion. [00:19:07] So the estimate, whether eigenvalue or singular value, admits an expansion of this form, and the second-order term is what characterizes the error in my case. [00:19:42] Now let's try to develop some intuition, and to do this let me start by looking at that second-order term, because it is the most important term for seeing the difference. Consider the quantity u*ᵀH²u*. Coming back to the earlier question: suppose I first symmetrize the matrix, in which case H becomes a symmetric matrix. Let's see what happens. If H is symmetric, very simple algebra shows that the expectation of this quantity becomes the expectation of a sum of squares, and a short calculation shows it equals n times σ². [00:20:15] The interesting thing happens when H is asymmetric. If H is asymmetric, this quantity can no longer be written as a sum of squares. It is better to write it as the inner product of two different vectors, Hᵀu* and Hu*, which under my independence assumptions are nearly uncorrelated. [00:20:51] And if that is the case, when you compute the expected value of this quantity, it equals just σ², since only the diagonal terms survive, which is much, much smaller than in the symmetric case. This is my understanding of the whole phenomenon: at least in the second-order term, there is much less bias in the asymmetric case, and with much less bias you expect to get something much stronger. [00:21:25] All right, so this is my intuition. Now you might ask: what happens if your matrix is not asymmetric to begin with; can you still say anything interesting? In some sense, yes: even if your data is symmetric, if you have two samples per entry, [00:21:51] you can do the following. Suppose you observe two independent copies of M, and I ask you to estimate λ* and u*. You can rearrange the two samples into a single asymmetric matrix, say taking one copy above the diagonal and the other below, so that the effective noise matrix has independent entries. [00:22:13] Then you can invoke our results to say a lot of things: basically, everything I said before continues to hold for this case, so the theory generalizes.
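Before moving on, the second-order computation above is easy to sanity-check numerically. A small sketch, with an illustrative unit vector u and noise level sigma (both my choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, trials = 300, 0.1, 1000
u = rng.standard_normal(n)
u /= np.linalg.norm(u)                       # unit-norm direction

sym_vals, asym_vals = [], []
for _ in range(trials):
    A = sigma * rng.standard_normal((n, n))  # asymmetric noise, independent entries
    S = (A + A.T) / np.sqrt(2)               # symmetrized noise with the same entrywise variance
    asym_vals.append(u @ A @ (A @ u))        # u^T H^2 u for asymmetric H
    sym_vals.append(u @ S @ (S @ u))         # u^T H^2 u for symmetric H

# Theory: about sigma^2 in the asymmetric case (only diagonal terms survive),
# versus about n * sigma^2 in the symmetric case.
print(f"asymmetric: mean {np.mean(asym_vals):+.4f}   (sigma^2   = {sigma**2:.4f})")
print(f"symmetric : mean {np.mean(sym_vals):+.4f}   (n*sigma^2 = {n * sigma**2:.1f})")
```

The symmetric average concentrates around n·σ², while the asymmetric one is noisy but clearly orders of magnitude smaller.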
Of course, this was the rank-one case. You can also move on to a rank-r case, where you again want to ask how to estimate the eigenvalues. [00:22:48] For this case we have some partial results saying that, for each of the first r leading eigenvalues of the data matrix, we can find a ground-truth eigenvalue that is sufficiently close. One step further, if you also know that the ground-truth eigenvalues are separated from each other by a little bit, then we can actually get a one-to-one mapping between the empirical and the true eigenvalues. Those are also ongoing results, so I'm not going to talk too much about them, but there is a generalization there [00:23:19] which allows us to perform inference, that is, to compute confidence intervals for the eigenvalues. So in general, in the rank-r case, the eigendecomposition is expected to be better than the SVD approach by the factor shown on the slide, which depends on the rank; as long as the rank is not too large, you still get a meaningful gain. [00:23:48] This might be improvable, I don't know; maybe there is a chance to remove one square-root factor. So, to summarize the first part of my talk: the eigendecomposition can be very, very useful when dealing with non-symmetric data matrices. Again, I would like to highlight that in some special cases, like iid Gaussian noise, you can also run the SVD with a proper correction and get roughly the same result. Probably the most interesting aspect of the eigendecomposition is that you do not really need to know any of the noise statistics: it is fine for the noise to be heteroscedastic, [00:24:37] meaning the noise has location-varying variance, and the entries need not have the same distribution; you do not need any sophisticated correction to make it work. A future direction is to get better bounds for the eigenvector perturbation, which is ongoing work, and to go beyond independent noise. So this is the first part of the talk. [00:25:09] The first part was basically to illustrate that there are some benefits to using the eigendecomposition even when the SVD is also a possible approach. The second part of the talk is about a case where we are only able to use an eigendecomposition of [00:25:32] an asymmetric matrix, and as it turns out there are still some interesting things happening there. This is specific to a very concrete application called top-K ranking; it is joint work with my students and my colleagues. All right, so: ranking. [00:26:08] Everyone can imagine the applications. In web search you want to identify the most relevant entries among all the links you have; in recommendation systems you might want to recommend to the user the most relevant movies or books; in admissions you want to find the best candidates. Basically, ranking is a task that arises in a wide range of different contexts, in the social sciences, in machine learning and data science, what have you. [00:26:33] Sometimes the ranking has to be performed based on some kind of pairwise comparisons. For example, this is a plot I got online: they are trying to rank the top tennis players, and what they do is have pairs of tennis players play against each other, record the outcomes, and continue [00:26:59] these pairwise comparisons, then try to do some prediction based on the partial pairwise comparisons to figure out who the top tennis players are. So this is a classical problem: ranking from pairwise comparisons. There is a pair of very classical parametric models people use for this problem; I think they date back five decades or so in the statistics literature. [00:27:36] What they assume is the following model: suppose I have n items to rank.
Imagine I have n movies that I would like to order by preference. To each item I'm going to assign a preference score: w*_1, w*_2, and so on, so that each item is associated with one score. [00:28:02] Your rank is then determined by your score: a higher score means you rank higher, which is a very natural thing. Now, the model is the Bradley-Terry-Luce model, which is basically a logistic-regression kind of [00:28:21] model. It says that every time two items i and j are compared, the probability that item j beats item i is determined by the parameters associated with these two items; in particular, it equals w*_j over (w*_i + w*_j). So this is the classical model. There are other possible models, such as the Thurstone model, which are similar to this one but with [00:28:59] a different link function. A typical ranking procedure then proceeds as follows. Since I'm assuming this parametric model, a very natural thing is to first estimate the scores, and once we have figured out the score estimates, we rank in accordance with them: if an item gets a higher estimated score, we say it ranks higher. This is a very natural approach which is used in many, many different contexts. [00:29:41] My goal now is to identify the top-K items with the minimal sample size; that is my question: how can I use the minimal number of comparisons to get the best possible ranking results? [00:30:07] Now let me say a little bit about my comparison model, that is, which pairs get compared. In this talk I'm focusing on a random comparison graph: I assume an Erdős-Rényi graph G(n, p), in which each pair of items is connected independently with probability p, and whenever there is an edge connecting a pair, I obtain pairwise comparisons for it, each comparison generated independently according to the BTL model. So this is my model: I have comparisons for each pair showing up in the graph, and I try to rank based on them. [00:30:56] And of course, since I'm getting independent comparisons over each edge, their average is a sufficient statistic, meaning we lose no information by averaging, so I can summarize the data by these averages: [00:31:22] for each pair (i, j) that has an edge, I keep the empirical fraction y_ij of comparisons that j wins.
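Here is a minimal sketch of this data-generating model; the score distribution, graph density p, and the number L of comparisons per edge are illustrative choices of mine:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_btl(n=200, p=0.1, L=20):
    """Erdos-Renyi comparison graph G(n, p); for each edge (i, j), L independent
    BTL comparisons with P(j beats i) = w[j] / (w[i] + w[j]).
    Returns the sufficient statistics y[i, j] = fraction of wins of j over i."""
    w = rng.uniform(0.5, 1.0, size=n)              # latent preference scores
    upper = np.triu(rng.random((n, n)) < p, k=1)   # edges, sampled on the upper triangle
    y = np.zeros((n, n))
    for i, j in zip(*np.nonzero(upper)):
        frac_j = rng.binomial(L, w[j] / (w[i] + w[j])) / L
        y[i, j], y[j, i] = frac_j, 1.0 - frac_j    # average of the L comparisons on this edge
    return y, upper | upper.T, w
```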
[00:31:50] So what is known about this problem, if your goal is really to identify the top-K items? A few years ago my collaborators and I figured out a two-stage method for doing this. It is a bit complicated, so I don't want to highlight that algorithm here, but it does the right thing theoretically. Setting that algorithm aside, the two common approaches are: one, the spectral method, and two, maximum likelihood estimation. Both have some theoretical guarantees: for example, for these methods people were able to characterize the mean squared error in estimating the score vector. [00:32:28] But nothing was known about the ranking accuracy, which is what a top-K ranking task should eventually care about; you probably don't care about the score estimates that much. So I would like to argue that the mean squared error, which is basically an ℓ2 kind of loss, should really be viewed as a surrogate metric for this ranking task: it is not my final goal, just an intermediate step to help me figure out the ranking [00:32:49] results. And you can imagine that the ℓ2 error by itself is not going to tell us much about the ranking. For example, suppose I plot the ground-truth scores and my estimates against the item index. Take one configuration of errors, plotted in orange here, with the top three items marked. [00:33:14] Now look at another configuration, plotted in green, which has exactly the same ℓ2 error as the first one: its top three items can come out almost completely different. So even if two estimates have identical ℓ2 loss, and even if both losses are small, [00:33:40] they do not guarantee good ranking accuracy, and the ranking is what I really want. That is the message: the same ℓ2 loss can produce completely different ranking results. The key point is that what we really want to control is not the ℓ2 loss but some kind of entrywise, ℓ∞ kind of loss. [00:34:12] Now, coming back to the spectral method, the question is: is the spectral method alone actually optimal for top-K ranking? I mentioned that we had an algorithm combining the spectral method and the MLE in a complicated way in order to make it work. The question is whether the spectral method, which is sort of [00:34:32] a PageRank-like method that is really popular in practice, is already optimal on its own, or whether we really need refinement stages to make it work. There were some partial results before us: if the comparison graph is extremely dense, meaning you observe comparisons between essentially every pair of items, then one can show that the spectral method is optimal for top-K ranking. What we would like to argue is that you do not need that density; this holds for most regimes of the random graph. [00:35:12] So we give an affirmative answer to this question, showing that the spectral method alone can really be optimal for this problem. Now, what is the spectral method here? This is something I have not yet explained for this case. I have the pairwise comparison results, that is, the statistics y_ij for the compared pairs, and I'm going to construct a probability transition matrix [00:35:45] that looks like this: each off-diagonal entry P_ij is proportional to y_ij if (i, j) belongs to my comparison graph, and zero otherwise. I say proportional because, since this is supposed to be a probability transition matrix, I also need to make sure it is row-stochastic: [00:36:11] the entries need to satisfy the probability constraints, and because of this I choose the diagonal entries of the matrix to satisfy those constraints. Up to a suitable scaling constant, we can easily find a way to make P_ij proportional to y_ij while keeping every row summing to one. [00:36:36] Since this is a probability transition matrix built from asymmetric data, it is of course not going to be symmetric; in general this is a highly asymmetric matrix, but we still want to say something about it. Now here is the strategy: we return the score estimate as the leading left eigenvector of this matrix P.
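Continuing the sketch above (reusing simulate_btl from the previous snippet), this is one standard way to realize the construction; the scaling constant d is a common choice from Rank Centrality-style methods, not necessarily the talk's exact normalization:

```python
import numpy as np

def spectral_scores(y, adj):
    """Score estimate = leading left eigenvector of the transition matrix P,
    where P[i, j] is proportional to y[i, j] on the comparison graph."""
    d = 2 * max(int(adj.sum(axis=1).max()), 1)  # scaling keeps each off-diagonal row sum below 1
    P = np.where(adj, y, 0.0) / d
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))    # diagonal chosen to make P row-stochastic
    evals, evecs = np.linalg.eig(P.T)           # left eigenvectors of P = eigenvectors of P^T
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return np.abs(pi) / np.abs(pi).sum()        # stationary distribution; rank items by its entries

# Usage: top-K ranking on simulated data.
y, adj, w = simulate_btl()
pi = spectral_scores(y, adj)
K = 10
overlap = set(np.argsort(-pi)[:K]) & set(np.argsort(-w)[:K])
print(f"estimated top-{K} overlaps the true top-{K} on {len(overlap)} items")
```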
[00:37:08] So why is this supposed to be an interesting thing to do? Let's look at the population-level case: suppose I had infinitely many samples on each edge. Then each off-diagonal entry of P would be proportional to w*_j over (w*_i + w*_j), following the BTL model. Now this may remind you of something: it is exactly a probability transition matrix for a reversible Markov chain. [00:37:38] And if that is the case, a little bit of calculation immediately shows that the stationary distribution associated with this reversible Markov chain is proportional to the score vector w*. This can be easily checked because the entries satisfy a detailed balance condition: [00:38:06] π*_i P*_ij equals π*_j P*_ji for every pair, which guarantees that this is indeed the stationary distribution. And since, at the population level, π* proportional to w* is the stationary distribution, it is the leading left eigenvector of this matrix P. [00:38:33] So our main result is stated as follows: as long as the sampling rate p exceeds a certain threshold, essentially the connectivity threshold of the random graph, the spectral method achieves the optimal sample complexity for the top-K ranking task; with high probability it returns exactly the top-K items. What does the sample complexity look like? It is on the order of n times log n divided by the square of a separation metric, where the separation metric is the properly normalized difference between the K-th and the (K+1)-th largest scores. [00:39:19] In comparison, prior results required the sampling rate to be very large; what we can say is that we do not need that. We are able to let the required sample size improve with the score separation all the way down to the information-theoretic limit. [00:39:55] And here are some numerical results showing this. The other algorithm shown, which I didn't talk about, is a regularized MLE applied to the same data; both methods are able to achieve essentially the best possible top-K ranking performance.
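To spell out the detailed balance check and the rate above (my reconstruction; I am only paraphrasing the paper's exact normalization):

```latex
\pi^\star_i P^\star_{ij} \;\propto\; w^\star_i \cdot \frac{w^\star_j}{w^\star_i + w^\star_j}
\;=\; w^\star_j \cdot \frac{w^\star_i}{w^\star_i + w^\star_j} \;\propto\; \pi^\star_j P^\star_{ji},
```

so π* proportional to w* satisfies detailed balance and is stationary; and the sample complexity for exact top-K recovery scales as

```latex
\frac{n \log n}{\Delta_K^2},
\qquad
\Delta_K \;=\; \frac{w^\star_{(K)} - w^\star_{(K+1)}}{w^\star_{\max}},
```

where w*_(K) denotes the K-th largest score and Δ_K is the normalized score separation.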
[00:40:27] Now let me discuss the entrywise analysis, which is a little bit of a technicality. To understand ranking accuracy, we want to understand the entrywise estimation accuracy of the score vector. Ideally, we want to show that the estimate does not deviate from the truth in any single entry by more than the separation allows. [00:40:55] Specifically, if my estimate of the K-th largest score differs from the truth by at most half the separation, and the same is true for the (K+1)-th, then, put together, the two cannot cross, and we can perfectly separate the top-K items from the rest. That is the idea. Now, how are we going to control this entrywise error? We are going to use a leave-one-out analysis, as I mentioned before. [00:41:29] If we really want to control the perturbation of a single entry, the technique is called leave-one-out analysis; something similar is called the cavity method in physics. What we are going to do is the following: this is my data matrix, and I want to control the m-th entry of the eigenvector. I'm basically going to throw away all of the observations that have anything to do with item m: if I want to analyze the m-th entry, I remove everything involving item m. [00:42:02] This gives me a new, fictitious data matrix, and I perform the same spectral procedure on it. Why do I want to do this? Because once I throw that data away, the new leave-one-out estimate becomes statistically independent of everything involving item m. [00:42:41] So this is my leave-one-out estimate, completely independent of every observation that has something to do with item m, and because of this independence I am able to control its m-th entry almost as if I had not seen that data at all. Then there is a second step, which says that the true estimate is not going to be too different from the leave-one-out estimate; this is a proximity, or continuity, argument, since the two matrices differ in only one row and column. [00:43:13] So you have an independence step and a continuity step; each gap is very small, and putting them together, the overall entrywise error is also very small.
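As a toy illustration of the leave-one-out construction, here in the rank-one model from the first part, replacing the left-out row and column by their noiseless means, which is one standard variant; in the actual proof this is a pencil-and-paper device, not an algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma, m = 300, 0.01, 0                   # m is the entry we want to control
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
M_star = np.outer(u, u)                      # rank-one ground truth (lambda* = 1)
M = M_star + sigma * rng.standard_normal((n, n))

def leading_eigvec(A):
    evals, evecs = np.linalg.eig(A)
    v = np.real(evecs[:, np.argmax(np.abs(evals))])
    return v * np.sign(v @ u)                # fix the global sign for comparison

M_loo = M.copy()
M_loo[m, :] = M_star[m, :]                   # replace the m-th row by its mean ...
M_loo[:, m] = M_star[:, m]                   # ... and likewise the m-th column
# The leave-one-out estimate is independent of the noise in row/column m.

v, v_loo = leading_eigvec(M), leading_eigvec(M_loo)
print("proximity   ||v - v_loo||_inf =", np.max(np.abs(v - v_loo)))  # continuity step
print("entry error     |v[m] - u[m]| =", abs(v[m] - u[m]))           # the entry we cared about
```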
Right, I'm going to skip the rest of the technical part and just mention that there is some prior work here: parametric models for ranking dating back [00:43:53] fifty or sixty years; nonparametric models generalizing this setup without assuming any parametric form; but the spectral method remains one of the most popular approaches in practice, regardless of the model, and it is very similar to the PageRank idea that is used every day. On the analysis side, leave-one-out turns out to be very powerful, in particular for analyzing [00:44:20] optimization algorithms; I'm listing some of the references here. So, to summarize: the spectral method is optimal in terms of sample complexity for the top-K ranking problem. One thing that I didn't mention is that it also turns out to be computationally optimal: it can be computed in near-linear time. There is another algorithm I didn't cover, the regularized MLE, which achieves roughly the same guarantees using roughly the same analysis techniques. So this is the paper. Thank you very much. [00:45:01] [Audience question; largely inaudible.] >> It might be possible; I'm not entirely sure, maybe so. There is another line of work doing something similar to sparse PCA, where you still have a low-rank matrix to estimate, but from some kind of linear measurements, not this kind of measurement. [00:46:31] What they have told me is that they separate it into two stages: first estimate the support in a roughly correct way, then focus on that part of the support and redo the estimation. I don't know, maybe we can do something similar here with knowledge of the structure. [00:46:53] [Further audience discussion; largely inaudible.] [00:47:57] [Audience question about incorporating side information; partially inaudible.] >> I see, so, yeah. I think so: if you run an MLE kind of thing you can; for the spectral method I'm not entirely sure. You're basically saying that you have higher-resolution information, and I think you can still run some kind of MLE to incorporate that. [00:48:34] For spectral methods it is a bit more difficult to incorporate this kind of information. I think that's it. Thanks.