Go ahead and get started. Thank you. Everyone showed up today, so this is pretty much going to be a continuation of some of the work we've been doing in the group, specifically on data-driven estimation of inertial manifold dimension. In this case it will be for chaotic Kolmogorov flow, and I'll also be showing some time evolution on the manifold. So how does this work? I think I showed this last time, but I'm going to recap in case some of us forgot. We know that the problem that arises is that the dimensionality of the system grows with Reynolds number: as we increase the Reynolds number, we need more dimensions, a more finely discretized system, to correctly evolve the state. At low Reynolds numbers, where solutions such as equilibria, periodic orbits, and relative periodic orbits arise, you can usually find some global low-dimensional representation: a periodic orbit could be evolved on a 2D manifold, a relative periodic orbit on a 3D manifold, and so on. Here I'm showing what the vorticity evolution looks like for the chaotic case we're going to consider. What you're seeing is a heatmap of the vorticity, with positive and negative fluctuations. You see that it goes into a hibernating state and then it bursts. So the question we ask is: can we use neural networks to find lower-dimensional representations and dynamical models for this?

The system we're considering is the Kolmogorov flow. We have the Navier-Stokes equations with an extra forcing term, and the wavenumber n in the forcing controls how many forcing wavelengths you have in the domain. In our case n = 2, the minimal case in which you start seeing chaotic behavior. We solve this equation on a periodic domain [0, 2π] × [0, 2π] with a 32 × 32 grid, so it's a 1024-dimensional system, and we evolve the vorticity.

Just to show the different symmetries present in the system: there's a continuous translation in x, so we can take a vorticity snapshot ω, shift it in x, and it is still a solution. We can also shift it in y and reflect it in x (the shift-reflect), and we have the rotation by π. Here I'm showing what these different symmetries look like.

At low Reynolds number we see various solutions. For example, at Re = 4.5 we see an equilibrium, so the state pretty much just sits there. At 12.5 we see a periodic orbit. At 13.5 we see a relative periodic orbit: it's translating in the negative-x direction while also oscillating. And it's not until we reach, in our case, a Reynolds number of 14.4 that we start seeing chaotic dynamics with hibernating and bursting events.
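[Editor's note: since these symmetry operations come up throughout the talk, here is a minimal sketch of how they act on a discretized vorticity snapshot. It assumes an array layout w[i, j] ≈ ω(x_i, y_j) on the 32 × 32 periodic grid and glosses over the exact reflection convention at the i = 0 gridline; it is illustrative, not the code used in the talk.]

```python
import numpy as np

nx = ny = 32   # grid from the talk
n = 2          # forcing wavenumber

def translate_x(w, shift):
    """Continuous translation symmetry: w(x, y) -> w(x + shift*dx, y)."""
    return np.roll(w, shift, axis=0)

def shift_reflect(w):
    """Shift-reflect: w(x, y) -> -w(-x, y + pi/n).
    The vorticity changes sign because the field reflects in x."""
    return np.roll(-w[::-1, :], ny // (2 * n), axis=1)

def rotate_pi(w):
    """Rotation by pi: w(x, y) -> w(-x, -y)."""
    return w[::-1, ::-1]

# The shift-reflect generates a cyclic group of order 2n = 4,
# a fact that comes up again in the Q&A:
w = np.random.randn(nx, ny)
w4 = w
for _ in range(2 * n):
    w4 = shift_reflect(w4)
assert np.allclose(w4, w)
```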
The motivation behind what I'm going to show arises from previous research in our group on the Kuramoto-Sivashinsky equation (KSE), shown here. It was shown that if you train what we call the hybrid neural network, which I'll describe soon, and track the mean-squared error as you decrease the dimension of this undercomplete autoencoder, then when you reach a dimension of seven (in this case accounting for the phase) the error drops by orders of magnitude. And if you then take this lower-dimensional state and train another neural network to map it forward in time, you can even capture statistics. What's shown on the right is a joint PDF of u_x and u_xx; this is the true data. At a dimension of seven, if we take the lower-dimensional state, evolve it in time, and map it back to the full space, it looks really similar to the true data. The same comparison with PCA at a dimension of seven shows that PCA is not able to capture this joint-PDF behavior. So the method worked really well for the Kuramoto-Sivashinsky equation, and we thought: what if we take this methodology and apply it to the Kolmogorov flow? We did this for a 1D system; now let's go to 2D. What's going to happen? Are we going to see a drop? Is it going to be good at predicting? That's what I'll show in the next slides.

To build on this, let me talk about the method I'm going to use. The neural networks I'll show are based on the hybrid neural network. What the hybrid neural network does is combine PCA with a nonlinear neural network to find a lower-dimensional representation. This model will always perform at least as well as PCA, because on top of PCA it can add nonlinear corrections with a nonlinear autoencoder. The idea is that we have our state, in this case the vorticity field; we flatten it, perform PCA on the data, and feed the result into a nonlinear neural network that encodes it to a lower-dimensional state h. From this we can map back to the full space. We can then take this lower-dimensional state h and build a discrete-time map with another neural network, evolve it to some time t + τ, and from there map back to the full space using the decoder. Some comments here: for the cases I'm going to show, we remove the phase before feeding the data into the neural network, so we're accounting for the translation symmetry; the data are separated by τ = 5 time units, which was shown to work well for our system; and I subtract the mean from the vorticity snapshots.
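[Editor's note: a minimal PyTorch sketch of this architecture, under stated assumptions. The layer widths, activations, the residual "PCA part plus learned correction" structure, and the residual form of the time map are placeholders chosen for illustration; they are one reading of "always performs at least as well as PCA," not the group's actual design.]

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridAutoencoder(nn.Module):
    """PCA projection plus a learned nonlinear correction. With the
    correction networks at zero, this reduces to plain PCA truncation."""

    def __init__(self, U, d_latent, width=512):
        super().__init__()
        # U: (d_full, d_pca) leading PCA modes, orthonormal columns;
        # assumes d_latent <= d_pca.
        self.register_buffer("U", U)
        d_pca = U.shape[1]
        self.d_latent = d_latent
        self.enc = nn.Sequential(nn.Linear(d_pca, width), nn.Sigmoid(),
                                 nn.Linear(width, d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, width), nn.Sigmoid(),
                                 nn.Linear(width, d_pca))

    def encode(self, w):
        # w: (batch, d_full) flattened, phase-aligned, mean-subtracted vorticity
        a = w @ self.U                          # PCA coefficients
        return a[:, :self.d_latent] + self.enc(a)   # linear part + correction

    def decode(self, h):
        # nonlinear reconstruction of the PCA coefficients, then back to full space
        a_hat = self.dec(h) + F.pad(h, (0, self.U.shape[1] - self.d_latent))
        return a_hat @ self.U.T

class LatentTimeMap(nn.Module):
    """Discrete-time map h(t) -> h(t + tau); tau = 5 time units in the talk."""
    def __init__(self, d_latent, width=200):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_latent, width), nn.Sigmoid(),
                                 nn.Linear(width, d_latent))
    def forward(self, h):
        return h + self.net(h)                  # residual form; a design choice
```

Training would then minimize the autoencoder reconstruction MSE and, separately, the latent one-step error between the mapped and true h(t + τ); the MSE-versus-dimension curves in the talk come from retraining at each latent dimension.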
So I'll be applying this methodology to two cases. The first case is the relative periodic orbit, for which we know how many dimensions we should get. The other case is in the turbulent, or chaotic, regime at Reynolds number 14.4, where we see bursting chaos. So let's start with the first case. The dimension here should be three, or two after removing the phase. As I was showing, you see the RPO translating in the negative-x direction with these oscillations, and here I'm showing how the kinetic energy evolves with time.

So what happens if we train this hybrid neural network and predict in time? Are we going to see a drop similar to the KSE case from before? That's essentially what happens for this case: when we use the hybrid neural network, the mean-squared error of the autoencoder drops by about five orders of magnitude at a dimension of two. It happens at two because we're accounting for the phase. This is in comparison with PCA, where the error only decreases slightly. So this is pretty good, and it tells us the method is working.

Something else we do, and this will be more interesting when we move to the 14.4 case, is to see whether we can take an initial condition h(t) and evolve it in time. We train the time-stepper that maps from time t to t + τ, take an initial condition, and evolve it forward by re-feeding the output into the neural network. Here I took an initial condition, evolved it in time, and generated a joint PDF of the power input and dissipation; this axis is power input, this one is dissipation. What happens is that at a dimension of two you correctly capture the statistics, which makes sense, because we know the dimension of this system is two. I'm not showing the case of dimension one, but there the model fails: the mean-squared error is high, and it is not able to correctly evolve or map the state in time.

The other thing we look at is the joint PDF of the real and imaginary components of the (kx, ky) = (0, 1) Fourier mode. What this shows is that the model pretty much captures the trajectory of the RPO. This will also be really interesting in the 14.4 case, because this trajectory depends on which shift-reflect symmetry copy you're on; we'll see that soon. The last thing I wanted to show here was a movie of the true data next to the model data at a dimension of two, and the model's kinetic energy falls right on top of the true kinetic energy. So visually it looks fine, it's capturing the statistics correctly, and the kinetic energy evolution agrees well with the true data.
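[Editor's note: the long-time test described above is just repeated composition of the learned τ-map, followed by histogramming a pair of observables. A minimal sketch; the power input and dissipation routines are placeholders the reader must supply, and `time_map` is any callable taking a latent vector to the next one.]

```python
import numpy as np

def rollout(h0, time_map, n_steps):
    """Evolve a latent initial condition by re-feeding the learned
    t -> t + tau map into itself (tau = 5 time units in the talk)."""
    hs = [h0]
    for _ in range(n_steps):
        hs.append(time_map(hs[-1]))
    return np.stack(hs)              # (n_steps + 1, d_latent)

def joint_pdf(x, y, bins=100):
    """Joint PDF of two scalar time series, e.g. power input vs dissipation."""
    pdf, x_edges, y_edges = np.histogram2d(x, y, bins=bins, density=True)
    return pdf, x_edges, y_edges

# Sketch of the comparison in the talk: decode each latent state back to a
# vorticity field, compute power input I(t) and dissipation D(t) with your
# own routines, then compare predicted and true joint PDFs, e.g. via
#   err = np.mean((pdf_model - pdf_true) ** 2)
```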
Moving on to case two, we see bursting chaos, which is what's shown here. As I commented before, the trajectory travels in the vicinity of some relative periodic orbits, and then you see intermittent bursting, marked by the high kinetic energy events. So here you see the trajectory traveling in a fairly organized way near some relative periodic orbit, and then it bursts into a disorganized state, characterized by these increases in the kinetic energy.

So then we thought: this case is interesting, let's move to this system and see what happens, because this is where we start seeing the transition to turbulence, this chaotic state. The story at Reynolds number 14.4 becomes pretty interesting, because in comparison with the KSE and with Reynolds number 13.5, you don't see a sharp drop at some dimension. What you do see is that the trend of mean-squared error versus dimension goes down, and then it seems to start plateauing after approximately a dimension of 15. You can pretty much see three regions: from one to five there's a sharp drop, then a more moderate drop, followed by what looks like a plateau. Our initial thoughts were: is the dimension five, because that's pretty much where the sharp drop ends, or could it be around 15, where you start seeing the plateau? In this case it wasn't clear. So we said: we have these models, let's now predict in time and see what we get. Will it agree with the true data?

What we did is the same thing I showed for Reynolds number 13.5: take an initial condition in the lower-dimensional state and evolve it in time with the neural network trained for this case. Here I'm showing the joint PDF of power input and dissipation for the true data, and then for dimensions three, four, and five. Starting at a dimension of four it pretty much starts agreeing with the true data; even though you see some scattered points around here, it's taking the shape of the true data, and as the dimension increases it looks even better. If we now take the difference between the predicted and true PDFs, you see that after dimensions four, five, six the error doesn't improve much. So we thought: maybe the dimension lies around five.

The other thing we did was compute the joint PDF of the real and imaginary components of the (0, 1) mode. As I hinted a couple of slides back, what you're seeing in this case is the trajectory traveling in the vicinity of the different RPOs that appear due to the symmetry of the system. Here is the true data, and here are dimensions three, four, and five. At a dimension of five we see a smoother reconstruction near the vicinity of the periodic orbits: you can see the data are pretty scattered here, and starting at a dimension of five the reconstruction is smoother. That is also shown by the error, the predicted PDF minus the true PDF: at a dimension of five it decreases, and then you see no further improvement.

Audience: Can you go back one slide? Where do the four lobes in the real and imaginary parts come from?

Carlos: So this comes from the following: if you take the relative periodic orbit and apply the shift-reflect, you fall onto a different lobe in the plane of the real and imaginary parts. Essentially you pass from one lobe to another through a group element.
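[Editor's note: for reference, the observable in these lobe plots is a single coefficient of the 2D FFT of the vorticity snapshot; a minimal sketch under the same w[i, j] ≈ ω(x_i, y_j) layout assumed earlier. With the conventions of the earlier sketch, each application of shift_reflect rotates this coefficient by 90° in the complex plane, which is exactly why four symmetry-related lobes appear.]

```python
import numpy as np

def mode_01(w):
    """Real and imaginary parts of the (kx, ky) = (0, 1) Fourier
    coefficient of a vorticity snapshot w[i, j] ~ w(x_i, y_j)."""
    w_hat = np.fft.fft2(w) / w.size      # normalized FFT, indexed [kx, ky]
    c = w_hat[0, 1]
    return c.real, c.imag
```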
Carlos (cont.): Yeah, so the shift-reflect generates a cyclic group of order 2n, and n equals two here. So we can take a snapshot and shift-reflect it once, twice, three times, and the fourth time it falls back onto itself.

Audience: Could the group element taking you between lobes be a product of two symmetries?

Carlos: Yes, if you think about it, it could be a product of two of the symmetries; it comes from the geometry. So that's essentially what's happening: the trajectory is jumping around, falling in the vicinity of the different RPOs, the unstable RPOs in this case. And what I'm showing here is dimensions three, four, and five.

Audience: What do the faint regions mean?

Carlos: It just means lower probability of falling in those regions, because the trajectory spends most of its time traveling near the RPOs, so the PDF is higher there.

Audience: And from one lobe, can you jump to any neighboring one?

Carlos: That's correct, yes. You can essentially jump to any neighboring lobe; traveling here, it can go here or here.

Audience: Are there unstable manifolds that go from one lobe to the two others, or can it jump across diagonally? From what I understand, you have a symmetric unstable manifold, so the symmetry copies can connect you to any of the other three.

Carlos: Yes, but it can also just loop around and fall back into the same lobe; it doesn't necessarily have to connect with this one or this one.

Okay, so the other thing I wanted to show is what the movies look like. This is the true data, and at a dimension of two, which I didn't show before, the model performs pretty poorly: it is not able to correctly capture these hibernating and bursting events. Starting at three you essentially start seeing that behavior, and at four and five as well. Here I'm showing the kinetic energy evolution versus time: blue is the prediction and green is the truth, and you see that you can observe those intermittent bursting events. And here I'm showing just a small window so you can see the short-time prediction: the model tracks the truth for around 90 time units or so, which is about the Lyapunov time of the system. So even at short times we get pretty good predictions.

Audience: Should we be worried that the trajectories only agree up to a point?

Carlos: What we call agreement here is that it predicts well for about a Lyapunov time. After a while the trajectories run off, but the predicted one still has those bursting and hibernating events; it just won't agree pointwise after that. So for short-time tracking, that's pretty much where we call it a win, I guess. That's pretty much what I have for today.
I had planned approximately a 30-minute talk, but I guess there were questions. So, conclusions and future work. What I want you to take from this is that we showed we can represent the chaotic dynamics of the Kolmogorov flow with a low number of dimensions — and when I say low, in this case it's five, versus the full DNS, which was 1024-dimensional from the 32 × 32 grid. The neural networks in combination with PCA show good performance for our system, reconstructing the data and capturing the statistics, as we saw in the different joint PDFs. And the error can guide us in estimating the dimension: as we saw in the hybrid neural network case, this error can pretty much tell us the ballpark in which the dimension of the system lies, and then modeling the dynamics wraps up the story. Some things we've been thinking about, but haven't thought about that much: can we, for example, use data from experiments, or combine a model like this with control? Knowing that the dimension of the system is low, can we use this to control the system? And can we then apply this to more complex flows, higher-dimensional and with different dynamics? That's pretty much all I have. Any questions? Thank you so much.

[Inaudible exchange about the two-dimensional Kolmogorov flow.]

Audience: I have a question on your methodology of scanning Reynolds numbers. You looked at 13.5, where you have the RPO, versus 14.4, the bursting behavior, and there's a transition in your dimensionality graph: the sharp decrease and then plateau for the RPO, versus the three different stages of decrease for the bursting case. I was wondering if you tested Reynolds numbers in between, to see what that transition looks like, and whether you could use bifurcation theory to say if the dimensionality changes smoothly or sharply between those two.

Carlos: Yes. The case we consider, 14.4, is essentially the first case where you go from RPOs to chaotic dynamics. We varied the Reynolds number pretty systematically to see whether we could get a simpler case. What we saw is a pretty sharp jump from having only relative periodic orbits to then having the 14.4 case, where you see the weakly turbulent dynamics; in between you just see RPOs and then weakly turbulent flow. That's pretty much what we did.

Audience: Did you go any higher in Reynolds number, to see if the decay of the mean-squared error gets flatter? Could you characterize that?

Carlos: Yes. I'm not showing it, but we did consider some higher Reynolds numbers, and the drop becomes less apparent.
And I guess the sharp drop here comes from the level of organization: the trajectory spends a lot of time traveling in these highly organized states, in the vicinity of the different RPOs. At higher Reynolds numbers you don't see such sharp drops; the dynamics just become more chaotic.

Audience: Thanks, Carlos, really another great presentation. I have a question; this is a little bit speculative, but thinking about that four-fold symmetry structure: do you think there is a connection between the dimensionality and that symmetry? This is some numerology I just came up with in my head. You've got these four regions, and that kind of gives you dimension four for that first elbow before the plateau region. And then you have the second drop-off at about 15; if we call that 14, and you count up all the connections between those four lobes, including self-interactions, it almost seems like the dimension goes with the total number of possible unstable periodic orbits, if you follow that logic. Is that clear?

Carlos: Yeah. I hadn't thought about that, and I don't have a clear answer; I'm actually trying to think about it. It might be the case. From the different symmetries you can get an idea of the different unstable structures you're going to have in the system, and it might connect with that. I don't have an answer, but that's a great question. I don't know if Predrag wants to weigh in.

Predrag: Whenever you have a symmetry, you're supposed to use it. And whenever you have a discrete symmetry, you use it before you process the data, not after. Here the group could have been D4, which would mean you also reflect around the diagonals, but that's not the case. It means the fundamental domain is one quarter of what you're showing, and what you do is: whenever you see a dot in the wrong quarter, you apply a symmetry operation to put it into the fundamental domain. Naively, that means you get four times as many points for your data analysis in the fundamental domain; it turns out you gain much more than that. That's described at great length in ChaosBook and in the online course. So the reduction should be done beforehand, and it has been done both on numerical data and on experimental data; it doesn't matter where the data come from. If you have equations of motion, you might be able to rewrite them to act only on the fundamental domain, and that's a great saving, etc., but that's not the crucial thing. Basically, whatever data you are generating, numerically or experimentally, is one tile repeated four times by the symmetries. And with the data reduced, you get a much smoother PDF, a much better fit, a much better estimate. Now, the dimension of the problem doesn't change.
If the problem lived on a five-dimensional manifold in the full state space, it also lives in five dimensions in the fundamental domain; there is no reduction there. The dimension gets reduced only when you quotient continuous symmetries, because each continuous symmetry carries one dimension in state space, a continuous coordinate along which nothing happens; you're just moving your solution around. And then relative solutions become periodic solutions after the reduction. So it's a win-win-win in your case. Now, I expect you not to do it, because I've never met a graduate student who actually does what I say — great if you do. Anyhow, it's like the way people learn quantum mechanics: you learn it with the symmetries built in, and afterwards you don't know how to do it any other way; it never occurs to you to look at para- and ortho-helium without going through the symmetry machinery. There are lots of examples of this; in some, you get four digits of accuracy without the symmetry and eleven digits with it. You won't get that kind of accuracy gain in this problem, but still. I wrote the book, I've taught the course, I've been a proponent of this for a long time, and it still hasn't happened. Really, the only cost is the thinking effort; after that you're just using your data more efficiently. I might be wrong.

Alex: Since you brought this up, I have a question. One of the reasons we often don't use the discrete symmetries — in our Kuramoto-Sivashinsky work, for example, I've omitted them, and Carlos has too — is that while for the autoencoder portion it's fine, there's no issue at all in adding them, and it makes things better, we always struggle with the time-stepping.

Predrag: The time step has no relation to the symmetry hypersurface. You would be right to worry if you were integrating in the fundamental domain and had to find where the reflection line is, then change your time step to land on it and come back. You don't have to do that at all. The only thing you do is: once you find yourself on the other side of a reflection boundary — in the case of reflection symmetries — you put that point back into the fundamental domain by the reflection, and then you continue integrating. If your scheme needs the two or three previous points, you can reflect all of them. So you can run the thing in the full space and collect the data in one quadrant, and it doesn't cost you anything: you don't have to adjust the time step, you don't have to check where the boundary is. You can have huge time steps; sometimes, when a very regular trajectory is moving basically on a circle, you might take enormous steps, and that's perfectly fine.
All that happens is that a data point on the trajectory falls outside, and that's very easy to check; in your case, some coefficient changes sign. In general, it's very easy to check on which side of the hyperplane you are, and you just map the point back across the hyperplane when the coefficient changes sign. So that's not a problem, no matter what your integrator is. I'm saying: you have an integrator that you inherited, or that somebody spent ten years developing, and you don't want to mess with it; that happens a lot in fluid dynamics, where people have invested a lot. So you post-process the data instead. You can restart the integration once you're in the fundamental domain, or you can just keep running all over the place and reflect all the points into the fundamental domain afterwards. And the reflection is very easy: you change the sign of, say, all the sine terms in your Fourier representation. Reflections have very simple representations both in configuration space and in spectral space.
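[Editor's note: a minimal sketch of the post-processing recipe being described, under the same grid assumptions as the earlier sketches. The choice of indicator coefficient (the (0, 1) mode from before, which the shift-reflect rotates by 90°) and of reference quadrant are illustrative conventions, not a prescription from the discussion.]

```python
import numpy as np

N, n = 32, 2   # grid size and forcing wavenumber, as in the talk

def shift_reflect(w):
    """w(x, y) -> -w(-x, y + pi/n); generates a cyclic group of order 2n."""
    return np.roll(-w[::-1, :], N // (2 * n), axis=1)

def to_fundamental_domain(w):
    """Map a snapshot into one fundamental domain of the shift-reflect group
    purely by post-processing: apply the group element until an indicator
    Fourier coefficient lands in a reference quadrant of the complex plane.
    The integrator and its time step are never touched."""
    for k in range(2 * n):
        c = np.fft.fft2(w)[0, 1]            # indicator: the (0, 1) mode
        if c.real >= 0 and c.imag >= 0:     # reference quadrant (a convention)
            return w, k                     # k records the element applied
        w = shift_reflect(w)
    return w, 0                             # only reached if c == 0 exactly

# Usage: symmetry-reduce a stored trajectory after the fact
# reduced = [to_fundamental_domain(w)[0] for w in snapshots]
```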
Predrag (cont.): And collecting the data in the fundamental domain doesn't just multiply your data by four; it has all kinds of other unexpected effects. It makes the distributions smoother, more analytic, and when you take a Fourier transform you get better convergence in the spectral representations. And if you're really smart, you can go to irreducible representations. Here is the thing: even though you're integrating nonlinear equations, the statistics you collect from the trajectory enter linearly. If you take a probability density function and let it evolve — call that the Perron-Frobenius operator, or the Koopman operator, whatever — that evolution is linear. You saw it in the pictures: if you run for a short time, the density looks very ugly, because the dynamics is nonlinear. But if you accumulate the data, the PDF you collect is the time-averaged density, and that is an eigenfunction of the evolution operator, and also an eigenfunction of the reflection operator. That's why quantum mechanics and condensed matter books have character tables that tell you how to reduce, say, the dihedral group of order four, element by element; that's what we would have if we also had the diagonal reflections in this particular example, and it's what happens when you look at hypercubic lattices, problems like that. So the Fourier transform is the beginning of the diagonalization; then there are blocks, the irreducible representations, and the total data is much smaller than what you started with, because in the dual, reciprocal space it is compressed into a few small blocks. And whenever you have an exact symmetry of the evolution, the density evolution is a linear operator even though the time integration itself is a nasty nonlinear evolution; when you look at the evolution of the densities, whatever operator you iterate will block-diagonalize, and in the reciprocal lattice you have a spectral representation of your data. This is all in condensed matter and quantum mechanics books, even if they don't teach it to you this way. In a condensed matter experiment, what the experimentalist actually sees is the reciprocal lattice: they show you the Bragg peaks. They don't see the crystal; the data are the Bragg peaks of the spectrum, and they don't do it any other way — irreducible representations of space groups, crystallography. Now, this is the third millennium; we're doing problems that could not be done in 1990 because we didn't have the computational power, but conceptually it's the same thing. And the discrete part of these methods is very simple, because what you do is: you have a space of configurations, like the picture shows, and you say, well, that's a pizza, so I'll cut it in four pieces, and the operation of who gets which piece is just a permutation. Here's the thing: one piece of pizza contains all the information, once you use the irreducible representations of the discrete group. It's not very hard once you do it; it's one of those things that is a graduate thesis until it's done and trivial afterwards, and you don't even write it up in your thesis, but you have to do it.

Audience: I wonder if I could jump in here, following up on Alex's question. I agree it's simple to apply these symmetry operations, the reflects or shifts or whatever you want; it's almost trivial, like you said. But I think the problem is that we're trying to do the time integration in the reduced space, after a nonlinear coordinate transformation, and the symmetry and that nonlinear coordinate transformation don't commute with each other. So I think the symmetries in that reduced space are no longer trivial.

Predrag: Yeah, that's the next step in the discussion, and as you say, it's much harder to think about. You have to think of this problem spatiotemporally; you should not think of space as one thing and time as the other. That separation is very unnatural, because what you have is the spatiotemporal profile of the thing. Then you can understand the shift-reflect — shift in time, reflect in space — as a very nice symmetry in space-time. Now, reducing the equations themselves to the fundamental domain can be awkward and unnatural, because you have to impose various boundary conditions at the edges; but you don't have that problem in this case, because the physical integrator doesn't know anything about the fundamental domain. I'm just saying: that part is too painful, so don't do it. Just post-process the data. Your integrator runs around doing very accurate time integration, because it's a well-developed integrator.
You apply the reflections to the trajectory you obtained by integration; you don't force the integrator to stop and reverse. I think nothing is lost. For example — and I would like people to always explain the Lorenz attractor this way — you can quotient the Lorenz attractor's discrete symmetry and organize the thing with only one ear instead of two. The naive way is that you double the angle at which you are moving, and you get a representation of the attractor which has only one ear. Then you understand everything about Lorenz much better, because the invariant subspace is the z-axis, and you either go around it or you don't. And the integrator is fine; it works.

Audience: I think part of the difficulty in the presentations we've been giving over the last few weeks is that we have these neural networks doing the dimension reduction; they're doing some kind of nonlinear coordinate change, and we don't impose much structure on those coordinate changes. So you can imagine that in general they will break any symmetry: if you take Lorenz and just apply some diffeomorphism, you break that symmetry you just mentioned. Something I think would be useful to think about — and I don't know how to do it — is how you maintain the symmetries that are present in the solutions when you do this sort of dimension reduction, these coordinate changes.

Predrag: I haven't done this myself, so I'm ignorant here, but you showed in the talks that you know how to handle the translations; that part works, and it helps you along, because the Fourier representation has the translation symmetry built in. I agree with you — and maybe Roman's group has done something here — that you would do much better if you could build the symmetry into the neural network from the start, restricting it to the appropriate subspace. You would gain enormously in the efficacy of the neural network modeling. In our case, we have the four pieces of pizza. The neural network has to fit a pattern that is repeated four times over the two-dimensional plane, which is much more work: you need more Fourier components, more depth. And it's a waste of everybody's time, because the network is trying to fit a more complicated object than necessary; if it's polynomials — Chebyshev polynomials, whatever — they have to work much harder. That's what happened with acceleration of convergence by using discrete symmetries: you gain an enormous amount when you do this, but you have to use the irreducible representations of the group. And then suddenly each one of them has a very smooth distribution to deal with.
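[Editor's note: for a cyclic group like the shift-reflect here, the irreducible-representation split being described is the standard character projection; a minimal sketch, redefining shift_reflect from the earlier sketch to keep the block self-contained. The normalization and conventions are illustrative assumptions.]

```python
import numpy as np

N, n = 32, 2

def shift_reflect(w):
    return np.roll(-w[::-1, :], N // (2 * n), axis=1)

def irrep_components(w):
    """Character projection for the cyclic group C_4 generated by the
    shift-reflect S: P_m = (1/4) * sum_k conj(chi_m(k)) * S^k w, with
    characters chi_m(k) = i**(m*k). Each component transforms irreducibly
    under S, which is what makes the per-representation statistics smooth,
    as discussed."""
    orbit = [w]
    for _ in range(3):
        orbit.append(shift_reflect(orbit[-1]))
    comps = []
    for m in range(4):
        chi = [1j ** (m * k) for k in range(4)]
        comps.append(sum(np.conj(c) * s for c, s in zip(chi, orbit)) / 4)
    return comps     # comps[0] is the fully symmetric component

# Sanity check: the four components sum back to the original field
w = np.random.randn(N, N)
assert np.allclose(sum(irrep_components(w)), w)
```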
Predrag (cont.): Whereas if you keep the whole thing, the distributions wiggle all over the place, because PDFs that belong to different representations are mixed together, and then you need higher-order polynomials to fit the same data. So I totally agree with you. Every day I look at another machine learning paper on symmetries, even spatiotemporal ones — has anybody actually done this? I was asking the same thing two years ago, pre-pandemic.

Audience: I don't know. Like we were just saying, these neural networks are some sort of diffeomorphism, and I guess ultimately what we want is a diffeomorphism that respects the symmetries; I guess we haven't been clever enough to get it.

Predrag: The way you do it, for the orbits, is to find a function basis which incorporates the symmetry. If you have translation, we go to Fourier modes; if you have translation with reflection, you only keep the cosines. So in terms of a linear basis for the function space, we know how to do the symmetry: the basis splits under the different group elements, and we can construct projection operators onto the fully symmetric component and the rest. But with neural networks and their nonlinearity, I don't see a way of splitting the inputs into things you can add and subtract. I don't know — maybe there's a way; clearly there must be a way. And I would think there's a lot of money in it, given the problems and struggles machine learning keeps having. Daniel, you need to get that money; a young professor could use it. [Joking exchange about integration by parts versus immigration papers.] So, good luck with it. Thank you.

Moderator: Okay, I think we have group meeting now, so we're going to run. Thank you, everybody, and thank you, Carlos. We meet in a week.