[00:00:05] >> I'm going to tell you today about work that I've done over the past, I don't know, 15 years. By now I've collected a large number of former collaborators, each of whom has produced noticeable and important contributions to this. You'll hear a lot about Katherine Quinn's work, about Mark Transtrum's work, and about Ben Machta's work in particular today. [00:00:33] This grew out of work on systems biology, in which we started looking at big sets of differential equations and discovered very revealing properties about how hard it is to measure parameters from emergent behavior. I'll start out by talking about parameters, about how a few parameter combinations can describe a complex system, and how that's connected with physics, where we're quite used to the fact that continuum limits are simpler than the microscopic theories. Then I'll try to explain how the same kinds of simplicity underlie physics and less and less controlled fields of science, by thinking about the differential geometry of the space of model behavior, which we call the model manifold. [00:01:43] And then finally I will try to make a connection with practical problems in physics. I'll try to see places where I think there are applications in physics of things we've discovered in this journey, in particular in model reduction, removing parameters from complicated models to make simpler ones, and in applications in statistical mechanics, in visualization of parameter and model behavior. So to begin with, I thought I'd give the big picture of physics, and of science in general, and how they are related. In physics we have high energy physics... what happened here? There, I lost my mouse. Ok, so I can't point. Can you see a mouse? Ok. [00:02:52] On the left you see:
high energy physics, with Schrödinger's equation at its lowest energy scale and hierarchies of emergent theories above it; they call them more and more fundamental theories. On the right we see Schrödinger's equation at the bottom, and simpler theories emerging at lower energy scales. The simplicity in high energy physics, the elegance, increases as you go to higher energy scales, [00:03:29] and on the condensed matter side it increases as you go to lower energy scales. But they both have theories that can be derived directly from the neighboring theories, using controlled approximations and small parameters, for us using continuum limits or the renormalization group. What about other sciences? If I go to systems biology, or economics modeling, or power systems, how do I justify my existence? And deriving models from one another is not all that physics does. We also explore new things and try to figure out what's happening; reductionism isn't the whole story. We also have a sort of synthetic method of thinking about things. When you say you understand something, it doesn't mean you've reduced it to Schrödinger's equation, even through an elaborate number of [00:04:49] well controlled approximations. Sometimes you understand it because you've basically figured out what's the key, the spherical-cow kind of model. You make a simple model; in fact, physicists like the simplest models. We don't like all the details about the real world, and we're known for starting out with spherical cows and adding the ears later. [00:05:16] And much of what I'm going to tell you about today is trying to connect this idea that simple models,
which capture the essence of what's going on, emerge also in systems biology, and in dynamics, and in power networks, and perhaps in economics: in every field that we've looked at where you have many parameters, where you have a nonlinear theory that depends on lots of basic parameters. The first example here is a systems biology model. [00:06:05] My grad students dragged me in to talk to these systems biologists, and they would tell me about all these proteins and all these interactions between the proteins, and I gradually learned a few of the names of the proteins, but I never really learned the amino acids. Anyway, this is a systems biology model where there's a membrane, and information is being transferred from the membrane to the nucleus through a series of chemical reactions, and we wrote down all the chemical reactions on the right, in a font that you cannot read. So I'm going to look at one particular one, and you can see it's a nonlinear set of differential equations, and for this particular reaction there are 6 unknown constants, and overall there are 48 unknown constants. And you might be suspicious that with a model with that many free parameters you'd better measure all of them before you start doing anything, because who knows what'll happen.
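As a minimal sketch of the kind of nonlinear rate equations being described (a single hypothetical Michaelis-Menten step with two rate constants, not the actual 48-parameter signaling network), one of these reactions might be integrated like this:

```python
import numpy as np

# Toy version of one nonlinear reaction in such a network: substrate S is
# converted to product P at a Michaelis-Menten rate.  The constants vmax
# and km play the role of the "unknown constants" in the talk.
def step(s, p, vmax, km, dt):
    rate = vmax * s / (km + s)      # nonlinear in s and in the constants
    return s - rate * dt, p + rate * dt

s, p = 1.0, 0.0                      # initial substrate and product
vmax, km, dt = 2.0, 0.5, 1e-3
for _ in range(2000):                # forward-Euler integration to t = 2
    s, p = step(s, p, vmax, km, dt)
print(s, p)                          # nearly all substrate converted
```

The real model couples dozens of such equations, each with its own unmeasured constants; the point of the example is only the functional form.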
[00:07:16] And that was my immediate reaction. I said, so what can you tell me about the 48 parameters? And he said, I can tell you within a factor of a 1000 what they are. And I said, that's interesting; you expect us to make predictions? And he said, I think we can make pretty good predictions if we just fiddle with the parameters until it works. And I said, but you have a 48 dimensional space of parameters; how much data do we have? He said 16; we ended up with 63 data points. So that's part of the story: can we make predictions with a model with that many parameters? Does that make sense? So we fit the model to the data, and then we calculated by various means how much we can trust the parameters we extracted, and on the upper left you can see our parameter errors range from a factor of 50 to a factor of 500,000. You couldn't tell any of the parameters better than a factor of 50, and most of them were essentially unconstrained. Then on the upper right you see [00:08:42] a prediction of an experiment. This was an experiment that was biologically important, and the prediction of my expert colleague was that when you did the experiment, both of these curves would look the same, and they clearly look different; they're basically unchanged from what they were before the intervention. The intervention was a drug to disable one of the proteins. And what's amazing here: [00:09:17] first of all, we did the experiment and we got the right answer. Which is not a big surprise; it would be a big surprise if Rick Cerione were wrong about anything. But,
there was sort of a 50/50 chance, you could say. The amazing thing is that we could extract a prediction with a smaller than 20 percent error range from a set of parameters, each of which we didn't know to within a factor of 50. And the answer turns out to be that certain parameter combinations were well constrained. On the lower left you can see a contour plot where the contours are, roughly speaking, how well the parameters are constrained, and you can see the vertical direction is stiff and the horizontal direction is sloppy, and the individual parameters all have projections along the sloppy direction, so you can't tell what they are, but you can tell this one combination of parameters pretty well. [00:10:29] >> Jim, can I break in here? We had a question from Mike, asking: you're talking about fractional errors here, so the error presumably divided by the parameter itself; are there any parameters that are just very close to 0? >> These are all positive parameters; all the reaction rates and binding constants are positive. And they have different units; there are several different units. But once you take the log... >> That avoids the trouble. >> That's why we use log parameters, exactly right. [00:11:13] That's this particular system; there are other systems where the parameters aren't all positive, but that doesn't seriously change the nature of the sloppiness. It's not something that's needed to apply our methods; it just happens to be true here. I should mention one thing about the lower left picture, in case you haven't noticed: [00:11:42] the axes of the 2 directions are different. The horizontal axis has a 1000 times bigger range than the vertical axis. I usually give this talk on a big screen, where the horizontal axis would have to be stretched a kilometer before it would match; but here it would be only a third of a kilometer or so.
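The geometry of that contour plot can be sketched numerically. This is a toy 2-parameter quadratic cost with eigenvalues a factor of 10^6 apart (my choice of numbers, not the actual fit): each bare parameter is nearly unconstrained, while one combination is pinned down well.

```python
import numpy as np

# Stiff/sloppy axes rotated 45 degrees from the bare parameter axes.
R = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2)   # eigenbasis rotation
H = R @ np.diag([1.0, 1e-6]) @ R.T                     # cost Hessian
cov = np.linalg.inv(H)                                 # parameter covariance

bare_sigmas = np.sqrt(np.diag(cov))        # each bare parameter: huge error
stiff_dir = R[:, 0]                        # the well-constrained combination
stiff_sigma = np.sqrt(stiff_dir @ cov @ stiff_dir)
print(bare_sigmas, stiff_sigma)
```

Both bare parameters have errors of order 1000 times larger than the error on the stiff combination, which is the effect described above.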
[00:12:11] But it's very, very sloppy. We've rotated the axes here so the sloppy direction is horizontal; you shouldn't think of it as being along the parameter axes, it sits at some funny angle to them. This is not new: people with multi-parameter models know that they're ill conditioned, [00:12:36] that it's very hard to measure the parameters given the behavior. What we found that was interesting is that it forms a hierarchy, and to look at the hierarchy of stiff and sloppy directions I want to go to the eigenvalues of the cost Hessian. So I'm going to take the lower left picture, where you have these ellipses, I'm going to take the major and minor axes, and I'm going to look at how many [00:13:07] ill conditioned, sloppy directions there are, and how many stiff directions there are. So here's a bunch of models. You notice cell signalling, which was the model I was just telling you about. Quantum wave functions: the most accurate quantitative calculations for small molecules are done using quantum Monte Carlo, and you need a very good variational wave function to start it out, and the coefficients of the various terms in the wave function are almost completely undetermined. [00:13:50] The cosmic microwave background radiation we will return to.
The Ising model, oxidation chemistry, accelerator design; I should update this, there are now power systems and other models. Notice I've got 6 orders of magnitude on the vertical axis; that corresponds to a factor of a 1000 between the stiffest and sloppiest directions, and most of these models continue far beyond that. And notice that all of these models have roughly equally spaced eigenvalues in the logarithm. They in fact look a little bit like random matrix theory, for those of us who think about those things, except they're not random on a uniform scale; they look random on the log scale. And all kinds of multi-parameter models have this feature. >> Jim, we have a question here, which is more of a comment, from Pendragon: there's a paper with an unintentionally huge mathematical error, and a letter in the same issue stating that it was wrong by just a constant factor, with a 100 orders of magnitude in there, so that the conclusion of the paper is unchanged; with a winky smiley face indicating a nonzero degree of jocularity. >> [00:15:20] Jim Langer, an old mentor of mine, and Vinay Ambegaokar, a colleague of mine, have a real paper, with real science in it, with an error of 10 to the 50th in the prefactor for the vortex nucleation rate. But when you calculate an exponent and predict the exponential, you can make a huge error in the prefactor and have the exponent be quite correct, and so it fit the data, and the error was sorted out later. Never mind. [00:16:03] So far I've been talking about least squares models, and I'm going to tell you a lot about least squares models, but in a little while I'm going to move towards things like the CMB and the Ising model. I need... I'm not sure I'm audible here. I think one thing I can do is silence you, and then it will echo from me. [00:16:46] Just a moment, I'll make you quiet, so you'll have to yell very loudly if you want my attention. So now everything looks good.
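The eigenvalue hierarchy just described can be reproduced on a toy problem. This is a sketch with a sum of six nearby exponential decays (my stand-in, not any of the models on the slide): the Gauss-Newton Hessian of the least-squares cost in log-parameters has eigenvalues spread over many decades.

```python
import numpy as np

# Model: y(t) = mean_i exp(-k_i t), parameterized by log-rates log k_i.
t = np.linspace(0.1, 5.0, 30)              # 30 "measurement" times
k = np.linspace(0.5, 2.0, 6)               # 6 nearby decay rates

# Analytic Jacobian of predictions with respect to log-parameters:
# d/d(log k_j) [exp(-k_j t) / 6] = -k_j t exp(-k_j t) / 6.
J = np.stack([-(kj * t * np.exp(-kj * t)) / 6 for kj in k], axis=1)

# Gauss-Newton Hessian of the least-squares cost (= Fisher metric here).
H = J.T @ J
eigs = np.linalg.eigvalsh(H)[::-1]          # sorted largest to smallest
print("spectrum spans", np.log10(eigs[0] / eigs[-1]), "decades")
```

The eigenvalues come out roughly uniformly spaced in the logarithm, which is the "sloppy" signature the talk attributes to all of these multi-parameter models.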
Anyway, can you raise a thumb if you can hear me? You can hear me and see me? I hear you as well. [00:17:22] I can't hear you; I turned you off, so you'll have to wave. Ok. Now, the Fisher information matrix: it measures the distance between nearby models. In a least squares model, the Fisher information metric just equals the distance between the predictions of your experiment divided by the experimental errors, but more generally it's the natural distance in the space of probability distributions, for things like the Ising model or the cosmic microwave background radiation, which don't have a natural, simple Euclidean embedding for the space of predictions. Ben Machta uses this, and I'll just advertise it because of a really cool result that he's shown: that [00:18:14] a system like a Carnot engine demands a certain amount of entropy cost for controlling the piston that moves you around the path in the engine, and that entropy cost is related to the distance using the Fisher information metric. Ok, so if I look at... [00:18:49] So, I talked about sloppiness in parameters, sloppiness having to do with the eigenvalues of the Hessian of the cost, and the Fisher information metric is likewise given by the Hessian of the cost. And I was claiming that this is true of systems biology, but it's also true in physics: if you think about the Ising model, and you compute the Fisher information metric, and you find its eigenvalues, they are not sloppy.
[00:19:34] The left hand side of this graph shows that they range from around 10 to the 1 to around 10 to the 5, but the 10 to the 5 is lying to you, because that eigenvalue is in different units, so we can move it anywhere we want. We've got about 4 orders of magnitude, which is nowhere near the 6 to 10 which is what we usually get. But we know that for the Ising model, if you measure all the behavior, you can find the parameters back. If you coarse-grain, there are irrelevant parameters, and those irrelevant parameters become harder and harder to measure, and sure enough, if we coarse-grain 4 steps, we now have [00:20:25] 8 orders of magnitude. So the stiff directions of the sloppy model behavior correspond to relevant parameters, and the sloppy directions correspond to irrelevant parameters for the Ising model, to things that don't matter on long length and time scales. So continuum limits, the renormalization group, and things like that tell you that behavior at low energies and long length scales should be described by a much simpler system with only a few degrees of freedom that matter. Second part: behavior. So I'm going to now talk about the manifold of possible predictions of a model, and that manifold is: [00:21:22] if I have a 2 parameter model, I have a 2 dimensional surface in the space of all possible predictions. If I take the model of fitting 2 exponentials, and I fit it to 3 data points at 3 different times, it sweeps out a 2 dimensional surface in 3 dimensional space. [00:21:42] This surface has coordinates given by the original 2 parameters, and so it's forming a kind of manifold in this behavior space. I wanted to mention a couple of things, because they're going to be important as we go along. One thing is that the manifold has edges. So here there's an edge where one of the parameters is infinity, and there's an edge where one of the parameters is 0. Infinity corresponds to one of the exponentials decaying immediately to 0.
[00:22:22] Zero corresponds to one of the exponentials never dying away at all. And then there's a line where the behavior folds over, which is special to this model, where the 2 parameters are equal. The second thing I wanted to say is that there's a metric that's natural in this space, which comes from the distance from a data point to the corresponding best fit, meaning that the model manifold metric, [00:23:05] the quadratic form giving the quality of the fit, is simply inherited from the Euclidean metric in data space, from the sum of the squares of the distances divided by the error bars. Finally, I want to mention something we're going to do: we're going to be slicing this manifold. If I have this manifold [00:23:43] fit to t1, t2 and t3 (these are predictions for the experiments I haven't done at t1, t2 and t3), and I then make a measurement of y2 at time t2, [00:24:05] then that leaves me with a new manifold which has been sliced along the plane of constant y2, and that slice will give a manifold in a space one dimension smaller, and the manifold will have one less dimension as well. Now, the model manifold: it turns out that when the model is sloppy, some directions in parameter space don't matter as much as others, and that sloppiness corresponds to [00:24:50] thin directions on the model manifold. This is a manifold of possible predictions of sums of many exponentials, so this is in principle a 20 or 30 dimensional volume, but it's got one long direction, one medium direction, and one very small direction. (I think we'll turn up the volume a little bit; I think that should probably work.) [00:25:31] And then, I have a mouse now, I can look at the
distribution of widths of this manifold, and Mark Transtrum did this: he calculated the widths of the manifold by taking geodesics starting from a given point, moving in the different sloppy directions, and finding the length of the geodesic all the way until the parameters go to infinity or 0. And sure enough, those widths [00:26:04] showed the same hierarchy as the eigenvalues. So we have now taken that statement about a quadratic form at a given parameter value and turned it into a statement about the manifold of all possible model predictions, which is much more interesting, both from the differential geometry point of view and from the point of view of "maybe I'm just measuring my parameters in some wrong way." This is a clear statement that sloppiness is a geometric feature of the space of predictions. It depends upon what kind of measurements you make, but not on how you frame your model. Any questions? Ok, so now I have to explain why it's ribbon-shaped. Ok, I guess I should mention that stock prices also form a hyperribbon. This is a cool project we did; it doesn't really have to do with sloppy models, since we don't have a model for stocks, but [00:27:24] we can take stock prices and find the sectors of our economy just from fitting hyper-tetrahedra to the system, and the hyper-tetrahedra look very similar. Ok: for least squares models we have a rigorous hyperribbon bound on the model manifold. That is to say, I want to argue that the model manifold is very thin in lots of directions, and quite flat. [00:28:03] And this is the rotating picture of one particular model. And we can bound any model that's measured with the same data points and with the same radius of convergence. So we have to assume that the model
is smooth, not in the behavior as you change the model parameters, but in the behavior as you change the experimental conditions that you're predicting. If you measure things at different times, you should have analyticity in time, and given a radius of convergence R, we can explicitly say this direction is thinner than that direction by a given factor, and they get thinner and thinner geometrically, just as we saw. Apparently I can explain roughly why it forms a hyperribbon: each time I add a data point, I'm slicing the model manifold, [00:29:17] and we saw that in the case of the sum of 3 exponentials. If I'm fitting points to a curve and I add a new data point, [00:29:49] I will constrain more and more the predictions, not only near that data point but also far away. In particular, there's an interpolation theorem: the model predictions are bounded by this particular formula, which is basically just the best polynomial interpolating between the data points you already have, and the error is given by the next order term you could add, up and down. So if you have 5 data points, the fit goes through them, and then the coefficient of the quintic is bounded by the radius of convergence bound. [00:30:27] And every time you add a point, you get another factor of Delta t over the radius of convergence less variation in all the other predictions. And then this dashed line is our simple estimate for the rigorous bound, [00:30:59] the black dots are a calculational upper bound on any possible model, and then these are 3 different kinds of models, one taken from physics (sums of exponentials), one taken from chemistry, and one taken from biology, to show that the actual decays of eigenvalues that we've been talking about in sloppiness are in fact
[00:31:32] simple results of our hyperribbon bounds. Ok, now I'm going to step back, and I again would welcome questions before I plunge into the last part of the talk. >> We do have a question from Pendragon, asking whether this has a relationship to SVD, which I think is singular value decomposition. >> [00:32:00] Most of the pictures I show use SVD to draw the pictures, because you have a high dimensional space, and SVD very nicely sorts out what the important things are. In the later parts of the talk, the intensive embedding that I'll talk about also uses something very closely related to SVD. [00:32:25] But I think of that in the opposite order: I think the reason that SVD is so useful is that the things you're plotting are sloppy. SVD is useful when you have a huge dimensional space of points and a few dimensions dominate the behavior, and that often happens when there's an underlying sloppiness in the system. >> And a follow-up question: so the geodesic computations are kind of like a generalization of an eigenvalue decomposition, for things that can't be written as least squares directly? >> Yeah. Ok. So the first thing I'd like to do is brag about Mark Transtrum's work. The question I got over and over again in the early years of these talks is: can we coarse-grain sloppy models? If most parameter directions are useless, why not remove some? And I used to say: well, first of all, you can't just remove a constant; you can't set it to 0; you can set it to some random constant, but what do you gain from that in model complexity? And secondly, [00:33:57] the experimentalists want to know what will happen no matter what you vary; making a big black box that behaves the same isn't going to be useful to them.
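Going back to the SVD discussion for a moment, the hyperribbon shape can be seen numerically in a toy version of the exponentials example: sample many random multi-exponential models, collect their prediction vectors, and look at the singular values of the point cloud. (The ranges of rates, times, and amplitudes here are my choices for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.5, 3.0, 8)                  # 8 prediction times
n_models, n_terms = 2000, 10

# Each model: y(t) = sum_i A_i exp(-k_i t), with positive A_i summing to 1.
rates = rng.uniform(0.0, 2.0, size=(n_models, n_terms))
amps = rng.dirichlet(np.ones(n_terms), size=n_models)
preds = np.einsum('mi,mti->mt', amps,
                  np.exp(-rates[:, None, :] * t[None, :, None]))

# Singular values of the (centered) cloud = widths of the sampled manifold.
sv = np.linalg.svd(preds - preds.mean(axis=0), compute_uv=False)
print("widths:", sv)
```

The successive widths shrink by roughly constant factors, the geometric decay that the interpolation argument above predicts.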
And then Mark Transtrum figured out how to do it. First of all, he points out that the models with fewer parameters are the ones at the edges of the model manifold. You saw that in the sums of exponentials: when one of the parameters goes to infinity or 0, or becomes equal to another one, then you end up with one less parameter, fast time scales and slow time scales and things like that. And so he said: why don't you just take your initial parameters of the model, find the sloppiest direction, and move along that direction to get to the edge of the model manifold (really, you move in both directions and figure out which one is closest to the edge of the model manifold). Now you have another model with one less parameter. You might think this is complicated, but when you did it, it was a [00:35:06] simple thing. It wasn't typically that one model parameter went to infinity; it was typically that a ratio of 2 of them would stay constant while each went to infinity at the same rate, or something like that. So it was a statement that, for some reaction, the forward and backward rates [00:35:32] were both fast enough that you could basically assume it was in equilibrium. And then you would do it again, and you would remove another parameter, and then you would do it again, and he did it 36 times. So, [00:36:11] you can see in the lower right: he started with the model with 48 parameters and ended up with one with 12, and the new one is not nearly as sloppy. Could you make sense of this? Did the final system look sensible? You remember, in the original system, all those complicated reactions turning off and turning on, all the equations you couldn't read. The one after the coarse-graining does almost as good a job of fitting all the data, because you've just removed the parameter combinations that weren't used when fitting the data. >> Jim, I have another question. [00:36:38] When you say that the parameters are
ignored or removed, what do you mean? How do you remove them? >> So I would take, for example, a reaction where you had a forward and a backward rate that were both divergent. I would remove the forward rate and the backward rate and just set the ratio of the concentrations equal to its equilibrium value, [00:37:11] you know, letting the reactants exchange with the products until they are in local equilibrium. That's not actually what typically happens here, though. Typically it's the Michaelis-Menten reactions, which have a numerator and a denominator and are nonlinear: either the Michaelis-Menten constant goes to 0, or it goes to saturation, and you get rid of things like that. Once, I think he said, there was a combination of 6 parameters that had to be constrained to become 4 parameters, or something like that. Anyway, I hope that helps. >> Ok. >> So in the end you get a set of [00:38:03] reaction laws for which the parameters are given explicitly in terms of the microscopic parameters.
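The kind of limit being described can be sketched on the simplest possible case: a reversible reaction A <=> B whose forward and backward rates both go to infinity at fixed ratio Keq = kf/kb. The two microscopic rates then disappear from the behavior, leaving only the equilibrium constant. (A toy example of my own, not the actual network reduction.)

```python
import numpy as np

def simulate(kf, kb, a0=1.0, b0=0.0, t=1.0):
    """Exact solution of dA/dt = -kf*A + kb*B, with A + B conserved."""
    total = a0 + b0
    a_eq = kb * total / (kf + kb)                  # equilibrium value of A
    a = a_eq + (a0 - a_eq) * np.exp(-(kf + kb) * t)
    return a, total - a

keq = 2.0
slow = simulate(kf=keq * 1.0, kb=1.0)      # modest rates: still relaxing
fast = simulate(kf=keq * 1e4, kb=1e4)      # both rates large, ratio fixed

# In the fast limit B/A pins to Keq; the individual rates are unmeasurable.
print("B/A slow:", slow[1] / slow[0], "  B/A fast:", fast[1] / fast[0])
```

Scaling kf and kb up together changes nothing in the fast limit, which is exactly the sense in which a parameter combination, rather than a parameter, has been removed.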
So this is saying: if you take the 48 original parameters and you extract these 12 effective parameters, all that matters is the combinations in those 12. You can change any one of the microscopic parameters, and the reduced model will make the same prediction that you would have gotten from the original model. I think this is just fantastic; this is what you would want. And you remember this old model Rick Cerione used to tell us about, how this is the main pathway, and this is a side pathway that adds positive feedback, and there was a negative feedback loop here, and that's all that's left. It's just amazing. So you get a system that looks like the same kinds of chemical reactions you had before; you get renormalized parameters, parameters that embody more complicated microscopic things, just like you've got in physics: just like collisions of molecules end up being described by a diffusion constant and a density and a few other parameters, or a bulk modulus describing sound waves. And the resulting model looks like it makes sense, looks like it makes a lot more sense than the original set of equations. >> Jim, there's another question, [00:39:58] maybe not quite about that: an interesting question about something I don't know much about. Are the parameter combinations you obtain this way related to dimensional analysis, like what you could do with Buckingham Pi? >> I have no idea what Buckingham Pi is, but there are a bazillion things that look like this. [00:40:29] In the renormalization group there are a lot of variables under coarse-graining, nonlinear terms, that become unimportant. In dynamical systems there's separation of time scales: you can have fast variables and slow variables, and you can get effective theories with just the slow variables. Even in general relativity you can take limits:
[00:40:58] in a quantum theory of the universe you can take h-bar to 0 and get general relativity, you can take G and send it to 0 and get special relativity, and then you can take 1 over c to 0. It's similar in spirit to all of these things, but it's much more general, and I can say that no single separation-of-time-scales kind of method, or something like that, [00:41:28] can by itself do this kind of model reduction. You can use separation of time scales, but you can make all the time scales the same in this model and it's almost as sloppy; it gets rid of a little bit of sloppiness, but it's no one explanation. You really need to do several different kinds of things. It's emergent, just because the [00:41:58] model is describing things smoothly. If you like, these are boring models that don't do much; but most models in physics with 48 parameters don't span the 48 dimensional space of things that they could do, otherwise they'd be useless. You could describe a clock, an old-fashioned Swiss watch, as a multi-parameter model where every last parameter has to be set right, or else the watch doesn't work. That's not true in biology; it's not true in climate. There are a lot of parameter combinations that could have been different that would still leave things basically the same. >> We have a question that came in from someone who does study biological systems, and you might have already answered this today, but his question is: is your reduction method in effect similar to the familiar pre-equilibrium assumption, that some reaction steps are much faster than others, which can be used to
[00:43:06] reduce the stiffness of the system of equations, and which has been used since before he wrote those equations down? >> Michaelis-Menten is such an assumption; well, it's a different assumption, the assumption that the enzyme is in short supply. But separation of time scales is a mechanism for generating sloppy directions. We can make all of the time scales here the same, and it's still almost as sloppy; we go from 14 orders of magnitude to 9 orders of magnitude, or something like that, I don't remember the details. Eric Siggia told us to do that; he was sure that this was the main reason we were having sloppiness, and it wasn't. [00:43:56] >> We have Ramon with a simple comment: Buckingham Pi is basically the notion that things have to be consistent with dimensional analysis. And then Pendragon with a more irreverent comment: that is how mathematicians make their living, taking something we already know, like scaling, changing its name, and claiming it, even though we don't understand anything more about it. [00:44:18] And then, going back, Jeremy Harris says that this reduction method is greedy in general, and the question came up: if you started along a different ribbon, would you still get the thing reduced to the same effective model? >> I think... the last question: I'm not sure what you mean by starting with a different ribbon, but I always worry that we could get a variety of different models, all of which describe the data, by taking the 2nd sloppiest direction or the 3rd sloppiest direction instead of the 1st sloppiest direction to reduce the model, [00:45:01] and I don't know if anybody has actually tried that.
And also, this one-at-a-time stuff doesn't work if you have a million parameters. One of the things here is machine learning: they have a million parameters to describe, you know, cats and dogs and things on the web, and you would very much like to do model reduction on that. And the reason it works, I think, is again sloppiness: there is a low dimensional effective model of what photographs look like, and you have a high dimensional [00:45:39] space of model predictions, but it's nonlinear, and you fiddle with it until it matches this low dimensional space, and you've got a million parameters for something that only really has some thousand or so important directions. It's still a really impressive job, but wouldn't you like to have a much smaller model? [00:46:03] And I think Mark Transtrum was working on methods of removing parameters wholesale, maybe not doing geodesics. >> Jerry asked another question here that I think might be another hard question: are there different paths... And then Simon, I think, asked a question as well: in most of these systems there's some kind of feedback; do you need feedback to get this kind of behavior? >> [00:46:40] I don't think the early universe, which I'm going to go to next, has any feedback in it.
That, I think, would violate causality. But never mind; here we are with Katharine Quinn's work. Katharine Quinn was looking at the cosmic microwave background radiation, because I was always concerned that it had these 7 parameters and didn't look sloppy. So first of all she calculated the Fisher information matrix for the cosmic microwave background radiation, and sure enough, it was sloppy. Ok, so what do you do next? You make the model manifold: you look at the model of the cosmic microwave background radiation, which is the Lambda-CDM model, and you vary the parameters and you look at the predictions. Except they're not predicting data points, they're predicting curves for the early universe. And I kept asking her to do it, and she kept saying: but the distance between pictures is not a least-squares sum; you don't just take the difference of the pictures and square it, you have to do something else. And she slowly convinced me that we really need to think about the whole probability distribution, and sure enough, [00:48:02] we can calculate the distances between things. She used a very clever method having to do with replica theory, and she got a Bhattacharyya divergence, and then she found a way of visualizing things using the Bhattacharyya divergence, and here was her picture. So this is our universe; I'm sorry, you can't see my mouse, only I can. [00:48:34] Now it's disappeared again, Ok. Then she did the Ising model. She had to do a little bitty Ising model, because of course you're finding the entire probability distribution, and you have lots of pixels here, but they're galaxies... anyway, she did a 4-by-4 Ising model. And then she has this great picture of classifying [00:48:58] handwritten digits, in which I now notice I misspelled "classifying". So let me show you.
This is watching the thing learn the digits, and this is my description of replica theory, but given the time I'm going to show you the sneaky thing we did instead. This is something that Han Kheng [00:49:24] Teoh stumbled across. I was asking him to find the model manifold using different metrics, and he tried not the Bhattacharyya divergence but something some of you have heard of, which is the Kullback-Leibler divergence: Bhattacharyya is very obscure, Kullback-Leibler is what a lot of people in this kind of field use. And when he made his predictions, he got this [00:49:59] amazing embedding into four dimensions. I've always wondered whether probabilistic models also form hyperribbons. A hyperribbon is one that has a long direction, a medium direction, a small direction, a smaller direction, going on forever. And I felt sure that this would be true of the Ising model, because you have all these irrelevant directions, and sure enough the Fisher information metric sort of told me that. But it turns out that for a broad class of probability distributions, which includes all the stat-mech models, you can write an isometric embedding of the model manifold using explicit formulas that are easy to compute. So two of the coordinates in the space are the field plus the magnetization and the field minus the magnetization, and the other two directions are the temperature and the energy. And we're going to have to wrap it up here, so we'll want to limit it to a last quick question. So this is something you should look up if you're interested in understanding [00:51:35] the differences in predictions in stat-mech models; you can draw pictures like this. All right, questions? Ok, first of all, let's everyone thank Jim.
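A hedged sketch of what "distances between probability distributions" means here: the formulas below are the standard closed forms of the Bhattacharyya and symmetrized Kullback-Leibler divergences for one-dimensional Gaussians, chosen as the simplest concrete case, not Quinn's or Teoh's actual computation.

```python
import numpy as np

# Bhattacharyya divergence between N(mu1, s1^2) and N(mu2, s2^2)
def bhattacharyya(mu1, s1, mu2, s2):
    return (0.25 * (mu1 - mu2) ** 2 / (s1 ** 2 + s2 ** 2)
            + 0.5 * np.log((s1 ** 2 + s2 ** 2) / (2 * s1 * s2)))

# Symmetrized Kullback-Leibler divergence: KL(p||q) + KL(q||p)
def sym_kl(mu1, s1, mu2, s2):
    def kl(a, sa, b, sb):
        return np.log(sb / sa) + (sa ** 2 + (a - b) ** 2) / (2 * sb ** 2) - 0.5
    return kl(mu1, s1, mu2, s2) + kl(mu2, s2, mu1, s1)

print(bhattacharyya(0.0, 1.0, 1.0, 1.0))  # 0.125 for equal unit variances
print(sym_kl(0.0, 1.0, 1.0, 1.0))         # 1.0 for unit-variance Gaussians
```

Pairwise divergences like these, computed between all pairs of model parameter settings, are what feed the embedding and visualization methods described in the talk.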
Mark has questions here, so I think we could just have you come in on the audio if you like; real audio would be great. [00:52:08] So Dan asked: how do you decide when to stop reducing the model? Mark didn't only reduce it to 12; he went beyond 12, and when he removed the next parameter it didn't fit the data nearly as well. But then he asked: can I reduce it more if I don't use all the experimental data, but only think of this as an input-output machine? And he got it down to 6, and then at some point he did something else and got it down to 2. So there is no sharp cutoff. In the original model all those eigenvalues are roughly equally spaced; you can see these are the floppy ones and these are the stiff ones. It depends upon how fussy you are; there's no natural break. It's unlike separation of time scales and things like that, where there's usually a cluster of things that are really unimportant and a cluster of things that are important. So I had a question: [00:53:16] when you started, you talked about the more fundamental problems, and then you went into the models that you have, which are for more emergent phenomena. But some of the things you started with on the high-energy side, though you didn't say much about them, for example string theory, those as I understand it have a lot of parameters in them, and even something where you can do calculations practically, like lattice QCD, has a lot of parameters in it. So is it true that some of those are sloppy? And, I mean, nobody has more parameters than Schrödinger: you've got all the masses of all the nuclei. Actually, I guess that's not true, chemistry has more parameters, I'm sure of it: if you go to chemical reactions and bonds and things like that, there are just bazillions of parameters.
[00:54:12] So it's my understanding that one of the things the high-energy physicists are proudest of, and you can ask your colleagues, is the fact that they can explain what the parameters are using the higher-energy theory. Sometimes you get a few more parameters in the high-energy theory, describing things that you didn't have to describe before, but the up-quark and down-quark masses are all you need: those and the electron mass describe everything lower, and all those extra particles after that are [00:54:48] useful for describing things that only happened in the first instants of the universe. So, nothing like string theory, right? String theory has all these parameters, and one of the challenges, I think, is to get the high-energy physics that has the right up-quark mass and the right down-quark mass. But I think they're not parameters; I think the problem is that there are a giant number of different string theories. That's not to say you have a free parameter: each of them is kind of rigid, it's just that you could develop a different topological Calabi-Yau manifold or whatever it is. I'm not sure, this is not my field. But in between, I know about electroweak theory and QCD and things like that: QCD in principle tells us all the masses of all the nuclei, and has, you know, the quark masses, the gluon coupling, and a couple of other things. And it's a marvelous condensation of the number of parameters, similar to how the Ising model has only 2 parameters embodying all the physics of the interacting magnetic moments in iron, not because iron derives from the Ising model, but because whatever symmetry it's breaking, it breaks the same way, so it's very similar to Ising. Thanks. John, do you have any questions for Jim?
[00:56:59] Jim, I wonder whether you've thought about things like turbulence, or chaotic systems like cardiac dynamics; I managed to pick out an example. Would you be able to look at this, where, say in cardiac dynamics, nobody is going to have very precise knowledge? [00:57:32] Ok, so you might want me to look at, say, chaotic behavior and tell you how to analyze that. But I can say a different thing: there are very complicated models for neurons and for heart tissue, with lots of parameters, and then there are simple models that capture the basic physics, that the heart is an excitable tissue, or the neuron is an excitable system that passes things along. And the reduction of the biologically complete model to the physicist's kind of simplified model, that's exactly what we're talking about. [00:58:29] And I could imagine taking... one of my colleagues, Chris Myers, was doing epidemiology, and one of his graduate students had a really interesting idea, which I don't think he ever fleshed out, which was to take one of the really complicated epidemiology models, where you incorporate all kinds of real-world phenomena, and use our model reduction method to say: all of these things can be embodied into one effective parameter, and all of those things into this other parameter, and come out with one of the models people have been studying in the past. You know, the SIR model has susceptible, infected, and recovered; or maybe it looks like the slightly more complicated ones, or maybe it even gives you a simple model that's structurally a little different. But in any case, trying to get the model that embodies the big picture, and maybe gives you a connection to the individual-level models, the same way that you might write formulas for the diffusion constant and for the bulk modulus of a gas. Thanks, Jim, and then we have time for another question.
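The reduced epidemiology model mentioned here, the classic SIR model, has just two parameters: an infection rate and a recovery rate. A minimal sketch (my own illustrative code, with arbitrary parameter values and a simple forward-Euler integrator, not anything from the talk):

```python
# SIR model: susceptible (s), infected (i), recovered (r), as fractions
# of the population. Two parameters: infection rate beta, recovery rate gamma.
def sir(beta, gamma, s0=0.99, i0=0.01, dt=0.01, steps=10_000):
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        ds = -beta * s * i          # susceptibles become infected
        di = beta * s * i - gamma * i  # infections grow, then recover
        dr = gamma * i              # infected recover
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

# beta/gamma = 5 > 1, so an epidemic sweeps through and most people recover
s, i, r = sir(beta=0.5, gamma=0.1)
print(s, i, r)
```

The point in the answer above is that a model reduction method might recover something of this shape, with effective beta and gamma summarizing many microscopic parameters, much as a diffusion constant summarizes molecular collisions.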
I'm wondering whether you can make any observations about the parameters that survive the model reduction. For example, are they associated with active or passive steps, or are they more evolutionarily conserved? [01:00:24] Well, we spent a lot of time thinking about that in the early days, and it did seem like the stiff directions, in the early days before we knew about model reduction, the stiff directions, we decided, were related to, quote, genes and reaction pathways. We haven't been spending much time thinking about that recently, partly [01:00:53] because we're now doing so many different kinds of models, so that, you know, "evolutionarily conserved" no longer makes sense if you're looking at power systems. But I would think that could be an interesting topic. That could be very interesting, thank you. All right, thanks Jim again. Thank you.