Welcome, everyone, to the Georgia Tech Colloquium Series. It's my great honor to introduce Emilio Ferrer, who is giving this week's colloquium. Emilio is a full professor at the University of California, Davis, in their Department of Psychology, where he has spent the vast majority of his post-PhD career. Emilio is a leader in the field of developmental methodology and dynamical systems, particularly dynamical systems as applied in the social and behavioral sciences. He's the author of more than 100 papers, and his work has been cited more than 1,000 times per year for numerous years in a row. He's the editor of at least three books, at least one or two of which are on my shelves currently. In addition to that, he's been on the editorial boards of the top methodological journals in the field, including Psychological Methods, Psychological Science (as a statistical advisor), and Structural Equation Modeling. He's also an elected member of the very prestigious Society of Multivariate Experimental Psychology, and the recipient of that society's Raymond B. Cattell Early Career Award, named after the founder of that society. Beyond this high level of scholarship, he's also an avid lover of the outdoors, and I'm told he has hiked every single trail near the Lake Tahoe area, so you can ask him about his hiking at some point if we run out of questions. In between hikes, he's also made time for excellent mentorship: numerous of his former graduate students and postdoctoral students are in the midst of outstanding careers and are well-known scholars in their own right, which I think is sometimes undervalued in academics, where it's actually the students who may one day outshine even him. Emilio seems to have figured out a way to get 50 years' worth of career done so far in about 20 years, and today he's going to be talking to us about some ways of getting 50 years of data collection done in, say, about five. His talk is entitled "Modeling Developmental Processes Using Accelerated Cohort-Sequential Data." Without further ado: welcome, Emilio Ferrer.

Thank you so much; that was a very kind introduction. I'm glad this is being recorded, because I intend to show it to my kids when they say that I have no idea what I'm talking about. Thank you so much. I'm very pleased to join your colloquium. I gave a talk at Georgia Tech in the School of Psychology many years ago, invited by Chris Hertzog, and I have very fond memories of that visit. Given the current situation, we have to do this online, but it's much better than not doing it at all, and I look forward to the opportunity of visiting you in person.

Okay. So I am going to discuss, or try to summarize, a number of different papers I've been working on, mainly with a former postdoc of mine, Eduardo Estrada, who is now a professor at the Universidad Autonoma de Madrid in Spain. A lot of this work has been done by him, and I want to give him enough credit. It has to do with modeling developmental processes in situations where the data come from an accelerated cohort-sequential design, which I'm going to explain in a second. I hope that you find some of these aspects interesting and useful. So I'll be talking about these types of designs and the type of data that we can extract from them.
I'm going to discuss some models that I think are very useful for examining questions with these types of data, and some problems with these types of data that need to be investigated. Then I'm going to discuss some simulation studies that we've been doing to address those questions. So that's mainly the outline for the talk. I don't know what your protocol is; as far as I'm concerned, if somebody has a question and wants to interrupt, that is totally fine with me, so please go ahead.

So imagine that you are interested in understanding the development of a process, let's say cognitive development. Here I am depicting data; these are actual data from about 450 individuals measured every year from first to 12th grade. Imagine that you are trying to understand something like this: you're trying to study this process, how it unfolds over time, possible underlying mechanisms, et cetera. To obtain data like this, you would need to follow every individual for, in this case, 12 years. That happens very, very seldom, and it is not feasible from a standpoint of funding: NIH is not going to give you 12 years right away (maybe eventually, but you cannot start like that). And then you have a lot of issues related to attrition, retest effects, et cetera.

So then, what can you do? The question is, what can you do when you're trying to study a process that unfolds over a period of time longer than the period for which you might have funding, which is typically four or five years? One approach, proposed many years ago by Bell among others, is what is called the accelerated longitudinal design, or cohort-sequential design. Here, you measure a number of individuals on a limited number of assessments, say two, three, four, or five occasions, but not everybody enters the study at the same age. As a consequence, when you put all the participants' data together, you are covering the entire age span.

This is an example of that. Here we have data from individuals with either one time point (these are the red circles; let me see if I can put a pointer here so people can see better) or two time points, shown by the connecting lines. So with only two time points per person, given the variation in age across individuals, I can address questions that go from childhood all the way to late adolescence and early adulthood, without having to follow individuals from the time they turn five all the way to when they turn 22, which would be really onerous. Now, even though we have two time points, I could ask questions about time one and time two, some sort of pre-test/post-test, if you ignore all the ages in between; here I mean first and second measurement occasion, and you can address some questions about pre, post, and the change in between. But if you are interested in a developmental process and in understanding how this process unfolds, you might want a meaningful metric. The metric could be chronological age, developmental age, or some other form, but not necessarily time point, which is just an index of when you happened to obtain the data. This is even more apparent when the age span covers a much longer period.
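To make the design idea concrete, here is a minimal sketch in R of a cohort-sequential measurement plan in which each person contributes only two annual assessments, but staggered entry ages let the pooled data cover the whole age range. The numbers are purely illustrative and are not the speaker's design or code.

```r
set.seed(1)
n_people  <- 200
entry_age <- sample(5:21, n_people, replace = TRUE)   # age at the first assessment

design <- data.frame(
  id   = rep(seq_len(n_people), each = 2),
  wave = rep(1:2, times = n_people),                  # two assessments per person
  age  = as.vector(rbind(entry_age, entry_age + 1))   # second assessment one year later
)

range(design$age)   # pooled ages span roughly 5 to 22, yet nobody is followed for 17 years
```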
And this is an extreme example: we have two-time-point data for a number of individuals, and when we plot them by age, we have the entire lifespan, with data from individuals ranging from 2 to 98. This gives me a very, very different picture than if I plot them by time point.

Accelerated longitudinal designs have been used a lot in the literature, but I think they are underused, and I think that some of the properties they entail are not fully understood. So here we are trying to address some of this. One of the main issues that I'm going to discuss is the issue of what is called convergence, or equivalence, which is that we need to assume that a common trajectory applies to everybody. In other words, if I were to follow the individuals who are young here until they are as old as these individuals, their data would look like the data over here; and if I had had the chance to measure these older individuals when they were young, they would have looked something like that. In some cases those assumptions are testable; in some other cases, testing those assumptions is a little bit more complicated.

This is the approach that my colleague Silvia Bunge at UC Berkeley and I used in order to collect data. We were interested in the development of fluid reasoning, the ability to solve problems in novel situations, from childhood into adolescence. So we collected data on a number of psychometric variables, and brain data as well. We have three time points of data: the first time point is in blue, the second in green, and the third in pink. We designed this framework with the idea of obtaining a roughly equal number of data points per age. Of course there was attrition: at time one we have 202 individuals, which gets reduced to 123 at time two and to 71 at time three. There was a large variation in age, and we also allowed for variation in the time interval between our assessments. This results in data that look like this: a typical accelerated longitudinal design, in which we have either one, two, or three time points per person, but we are covering a much larger age span, in this case from about five or six to about eighteen or nineteen.

And again, the main, critical assumption here is that all the overlapping cohorts share a common longitudinal trajectory; there are no cohort effects. As I said, the younger individuals would look like the older individuals when they grow up, and the older individuals would have looked like the younger individuals if we could have measured them at that age. If that is the case, then I can come up with a trajectory that is going to look, in this case, something along these lines. This type of trajectory follows a form of the exponential family. It is a common function that you see underlying most developmental processes with regard to cognitive abilities: if you look at memory or any kind of achievement data, it is going to look something along these lines. We have a rapid increase in whatever the process is, and then some sort of deceleration in early to late adolescence. And if we could follow these individuals over time, they would start declining when a little bit older. Depending on the variable, we would observe that decline in the early 20s; for other variables, other constructs such as vocabulary knowledge, performance would be maintained at a high level until a much later age.
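One common way to write this kind of exponential-family trajectory, a rapid early rise that decelerates toward an asymptote, is the following; the notation is illustrative shorthand, not the function shown on the slides:

```latex
y_i(a) \;=\; \alpha_i \;-\; \bigl(\alpha_i - y_{0,i}\bigr)\, e^{-\gamma\,(a - a_0)}
```

Here $a$ is age, $y_{0,i}$ is person $i$'s level at the starting age $a_0$, $\alpha_i$ is the asymptote approached in late adolescence, and $\gamma > 0$ governs how quickly the rise decelerates.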
So what I'm going to argue is that if we want to model data like this, we need a model that can capture this type of form. When we think about growth curve models, I would say that those are not an ideal framework for these data. A latent change score model is, and in fact this model was developed precisely to examine exponential data.

I don't know how familiar people are with these types of models, so let me describe them briefly. In the path diagrams that I'm going to show, squares represent observed variables, circles represent latent variables, and the triangle represents a constant. A one-headed arrow is either a factor loading or a regression coefficient, and a double-headed arrow is a variance (in this case) or a covariance. What we're doing here is saying that at any given measurement occasion, y at time 0, y at time 1, all the way to y at time t, the observed score on the variable that I'm trying to measure (in this case fluid reasoning) is a function of some latent construct. This is the latent, true construct. If I had several measures, I would include them here as separate indicators; in this particular case I don't need to, so I have a single y at each occasion, and I separate the latent variable that represents the construct from the residual, or unique, variance.

Okay, so that's it. Then, for every pair of repeated occasions, from time 0 to time 1, from time 1 to time 2, and so on, I'm going to create a latent variable that represents the latent change. If the regression coefficient from here to here is one, and the regression coefficient from here to here is one, that is, if I am predicting this variable perfectly, then this latent variable represents the difference between the two, and that's exactly what it is. We call it a latent change score; sometimes it goes by the name of latent difference score. These are the outcomes that I want to examine: the latent changes. This model is equivalent to a discrete, first-order difference equation. In this particular case, what I'm saying is that the change at any given time is a function of where the system was the time before, plus some additive component that is added as a constant value at every occasion. So that's one of the possibilities.

Now, this is considered a discrete-time model: all individuals are expected to be measured at the same time points. But in real life that's not what happens; that's not what's happening in our data, and it doesn't happen in any data. We don't measure individuals on their birthday. We don't measure individuals in their classrooms on the very first day of school. There is variation in the day when they're measured; there is variation in whether we measure you when you are six or when you are seven. And yet we have to treat these as discrete intervals. One possible solution is to create what are called age bins. So now, instead of going from time 1 to time 2 to time t, we go from age 5 to age 6 to age n, with some approximation: if you are 5.1, well, we're going to put you into the five bin, and if you are 5.9, we're going to put you into the six bin. The model is then exactly the same thing, but now the underlying metric, the underlying time signature, is age.
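A compact sketch of the latent change score model just described, written in a dual change score form; the symbols are shorthand introduced here, not the notation on the slides:

```latex
\begin{aligned}
y_{t,i} &= \eta_{t,i} + u_{t,i} \\
\eta_{t,i} &= \eta_{t-1,i} + \Delta\eta_{t,i} \\
\Delta\eta_{t,i} &= s_i + \beta\,\eta_{t-1,i}
\end{aligned}
```

The observed score $y_{t,i}$ is the latent true score plus unique variance; the true score accumulates the latent changes; and each latent change is a constant additive component $s_i$ plus a self-feedback term $\beta\,\eta_{t-1,i}$, which is the first-order difference equation mentioned above. In the age-binned version, the occasion index $t$ is simply replaced by age.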
This approach is a little bit closer to the idea that we are trying to understand how these changes unfold over time. However, you might realize that we are introducing some error, some variability, because we are artificially shifting the true ages of the individuals into bins. So another, perhaps better, solution is to model the data in continuous time. This has been proposed in various forms in the literature, and one of the people mainly responsible for us being able to implement it is Mike Hunter, who has actually been very helpful to us on our projects.

The idea of continuous time modeling is, intuitively, very natural. It is modeling differential equations: we're talking about changes that occur continuously, and we can think of any specific measurement occasion as simply a discrete realization of a process that changes continuously. It makes sense to think, from a theoretical perspective, that psychological constructs change (all of them, or if not all of them, many of them) in a continuous fashion, and we just go and measure whenever we can. Between the times we go and measure the individuals, they are changing, they're maturing, they're learning; there are chemical reactions in the brain, there are physiological changes in the brain, and these happen in a continuous fashion.

There are a couple of potential advantages. One is that the estimates from continuous time models can be transformed into any specific time interval. Remember, we're talking about continuous time models, so the change represents an instantaneous piece: an infinitesimally small piece of change that we are estimating, which can then be rescaled into whatever interval I prefer, a month or a year. This, at least in theory, can be done with continuous time models. They also account naturally for different time lags between assessments, across time points and individuals, because not everybody is measured within the same intervals. Some individuals were measured almost two years apart; some were measured one year apart. Some individuals had a one-year interval from time one to time two and a two-year interval from time two to time three, and that needs to be taken into account. And that is what continuous time models do.

In particular, what we did is to specify this latent change score model in state-space form, using state-space models, and this is based on the work by Mike Hunter and Manuel Voelkle, among others. It has some differences, but I think that's not the most important part here. Perhaps the important part is that in discrete time we are thinking about a difference equation, and in continuous time we are thinking about an ordinary differential equation.
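A univariate sketch of that continuous-time counterpart, again in illustrative notation and deliberately simplified (deterministic, without the stochastic diffusion term a full model would carry):

```latex
\frac{d\,\eta_i(t)}{dt} \;=\; \beta\,\eta_i(t) + s_i,
\qquad
\eta_i(t+\Delta t) \;=\; e^{\beta\,\Delta t}\,\eta_i(t) \;+\; \frac{s_i}{\beta}\bigl(e^{\beta\,\Delta t}-1\bigr)
```

The second expression is the solution over an interval of length $\Delta t$, and it is what allows an instantaneous estimate to be rescaled to any interval: a month, a year, or each person's actual lag between assessments.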
Before I move on from that, there's a question in the chat from Chris. Chris, do you want to ask it?

Absolutely. You are talking about changes in, say, the value that a person has on a particular psychological construct. But does this model allow for variation in what that psychological construct means across ages? For example, self-control: isn't a good measure of self-control different for a five-year-old than it is for a 12-year-old? Would you be able to account for those differences in these models, or is it something where we would have to assume that the conceptualization of the construct remains the same over time?

I think that's an excellent question. Let me rephrase it, Chris, to make sure that I'm capturing exactly what you're saying. I am saying that one of the benefits of continuous time models is that they allow us to rescale the change independent of when or how individuals are measured. You are asking: what if the construct itself changes developmentally, as with a self-control task? The answer I will give is that the model doesn't know that; the model applies to the measures that you have. I think that in order to capture what you're saying, you would need to include measures that are presumed to capture the construct even though the measures themselves change. If you're thinking about teaching mathematics: in first grade you may do addition and subtraction; in high school you don't do addition and subtraction. If you still use addition and subtraction, you might be measuring something that is not mathematics ability anymore, so you may want to use, say, derivatives or calculus. You would need to include different measures that represent the construct according to the developmental age at which it is measured, and you would need to account for that yourself. Otherwise, the model is not going to tell you; the model is going to believe whatever you give it as input, that the information you give it is the same thing. So it occurs to me that there are possible ways to deal with that, and one is to include some anchor items or anchor measures across instruments that change developmentally. But I think that's true no matter what model you use, whether an autoregressive model, a continuous time model, or a growth curve. Does that help? Does that answer the question?

Yeah, absolutely. Thank you.

Okay, thank you. And thank you for relaying it, because I cannot see the chat; so if there's something there, please feel free to stop me.

Okay. So this is a representation of our latent change score model in discrete time, written as a vector autoregressive model; that transformation is very easy. The state-space model is a little bit harder to represent, and by that I mean that the path diagrams no longer have a direct algebraic representation, which is particularly true for the continuous time model.

So the question is this: if I really have a continuous time process, and I never measure individuals continuously, I just have these discrete measures here and there, can I recover the true process that is generated continuously? Does it work? This is what we did. We simulated data from individuals ranging in age from 5 to 19, and we measured them every month across those 14 years. The data look like this: we have 500 individuals, and in our simulation we were lucky enough to measure them every month across all those years. Then the idea is, if we drop a lot of the data so that what remains is what we typically collect, can we still recover these trajectories? Is that true if we use a discrete time model? Is it true if we use a continuous time model?
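As a rough sketch of the kind of complete simulated data being described (500 people, monthly assessments from age 5 to 19, following an exponential-type trajectory), here is an illustrative R version. The trajectory function, the parameter values, and the two-wave thinning rule at the end are illustrative stand-ins, not the ones used in the study:

```r
set.seed(123)
n_people <- 500
months   <- 0:(14 * 12)                        # month 0 = age 5, month 168 = age 19
ages     <- 5 + months / 12

sim_one <- function(id) {
  asymptote <- rnorm(1, 100, 10)               # person-specific upper level (illustrative)
  start     <- rnorm(1, 40, 8)                 # person-specific level at age 5
  rate      <- 0.35                            # common rate of approach to the asymptote
  true_y    <- asymptote - (asymptote - start) * exp(-rate * (ages - 5))
  data.frame(id = id, month = months, age = ages,
             y = true_y + rnorm(length(ages), sd = 3))   # add measurement error
}
complete_data <- do.call(rbind, lapply(seq_len(n_people), sim_one))

# Thin to mimic one accelerated scheme: two assessments 12 months apart,
# with entry age varying across people so the pooled data still cover ages 5 to 19.
thin_two_waves <- function(d) {
  entry <- sample(0:(14 * 12 - 12), 1)         # entry month, leaving room for a second wave
  d[d$month %in% c(entry, entry + 12), ]
}
accelerated <- do.call(rbind, lapply(split(complete_data, complete_data$id), thin_two_waves))
```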
We dropped data until what remains represents what we typically collect. For that, we thought about a number of measurement schemes. The first one, design 1, is that we measure everybody every year. That's not typically what we collect, but it can serve as a benchmark, as a threshold; in other words, for design 1 we dropped 11 data points in each year and kept just one measure per year. For design 2 we did the same thing, but now we kept one measure every other year; this is also something of a sanity check. The remaining schemes are more realistic. In the third one, we kept two measures per individual, separated by a year; so, for example, some individuals were measured at 5 and 6, others at 7 and 8, and so on, resulting in data that cover the entire age span. In the next designs, we kept three assessments per individual: either three assessments in consecutive years or three assessments every other year. And the last one is: well, we think that most of the action for these changes happens when individuals are young, so let's put more measures at the younger ages. So imagine that you have these data, this is the reality, and then the data in gray drop out at random, resulting in this sort of arrangement. Can we recover the parameters of the model that generated the data? This is just a repetition of what I said. Oh, and the last condition is that we also measured individuals either in the very first month of every year or anywhere in the year; think of it as January versus anywhere between January and December, which is a little bit more realistic.

So, does it work? For the data analysis, in discrete time we used a latent change score model with a structural equation modeling specification in OpenMx, and in continuous time we used a state-space model, using a number of functions in OpenMx that Mike has been working on for a few years now.

These are some results. Let me explain this, because there is a lot here. We are representing the relative bias for each of the seven parameters of the model: the self-feedback, the mean of the initial condition, the mean of the additive component, the variance of the initial condition, the variance of the additive component, the residual variance, and the covariance (in this case the correlation) between the initial condition and the slope. The first row shows simulations with 200 individuals and the second row simulations with 500; that doesn't matter much. Then, for each simulation, we have 1 through 7, representing the seven measurement schemes, the seven designs. And here we have fixed lags (the first month of every year) for the discrete and the continuous models, and then random lags (anywhere between January and December), again for the discrete and the continuous models. What we see is that both approaches can recover the parameters very well, with very, very little relative bias. The exception is the correlation between the intercept and the slope for designs 3 and 4, that is, two time points one year apart and three time points in consecutive years. Other than that, the models, especially the continuous time model, show almost no bias at all. Now, there is variability, a lot of variability, in the relative bias. And the rates of coverage are very, very good, except for the residual variance when we use a discrete time model.
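For readers less used to these simulation summaries, relative bias and coverage are typically computed along the following lines. This is a generic sketch, not the authors' code; `est` and `se` stand for hypothetical replication-by-parameter matrices of estimates and standard errors, with named columns.

```r
summarize_simulation <- function(est, se, true_values, level = 0.95) {
  z <- qnorm(1 - (1 - level) / 2)
  # Relative bias: (average estimate - true value) / true value, per parameter
  # (assumes no true value is exactly zero)
  rel_bias <- (colMeans(est) - true_values) / true_values
  # Coverage: proportion of replications whose confidence interval contains the true value
  lower   <- est - z * se
  upper   <- est + z * se
  covered <- sweep(lower, 2, true_values, "<=") & sweep(upper, 2, true_values, ">=")
  data.frame(parameter = colnames(est),
             rel_bias  = rel_bias,
             coverage  = colMeans(covered))
}
```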
That is the case when there is convergence among the cohorts; in other words, when there is equivalence among the cohorts, when we can assume that the data of the individuals who are older were generated by the same mechanism as those of the younger individuals. What we mean, in reality, is what we were saying at the beginning: if we could follow the younger individuals until they are old, they would look like these ones, and if we could have measured the older individuals when they were younger, they would have looked like those.

Now, we also simulated data where that was not the case, where there was no equivalence. And when we looked at the results for the non-convergent conditions, the biases are all over the place. I would say that the bias is apparent in most of the parameters, and that seems to be the case for both the discrete time and the continuous time models. The rates of convergence drop a little bit as well. So let me summarize so far, unless somebody has a question.

Yes, a question. In what sense are your non-convergent folks non-convergent? Do they develop to a different level, or do they develop faster or slower, or how are they different from one another?

Thank you. So the question is how these are different, right? Well, we generated the data with a given model, so they have to differ in some aspect of the model. At the time, we only knew two ways in which they could differ: one was based on the initial level, the starting point, and the other was the asymptote, the level they approach. Those are the two conditions under which these individuals were non-convergent. Now we've learned a little bit more, and we've learned how to include non-convergence in other components of the model; that is something we are about to send back for publication. But what I just described is what we did here.

Thank you. Are there more questions? Certainly. On the preceding slide, and I realize it's just a simulation, but in real data do you assume some sort of homogeneity of variance? In the data that you simulated, the trajectories kind of get pinched down at the ends. Would that affect your model?

Yes. So the question is: does the variability that I'm showing in the data resemble real data? And the answer is yes. In fact, the conditions for this simulation, the parameters that we used, were based on real data. Let me go back here very quickly: they were based on these data, data on reading across a similar time span, with a representative sample of US individuals at the time.

But those data, to me, and maybe it's just my eyes fooling me, look like the variability for older kids is less than that for younger kids, whereas the simulated data look pretty constant throughout.

Well, it depends on the construct. My experience is that for most cognitive abilities I've been dealing with, there is less variability at early ages (they cannot vary that much) and there is a divergence over time. So yes, these are simulated data, but in order to inform our simulation we used parameters extracted from real data, so I wouldn't say they are unrealistic at all.
Of course, we had to introduce sources of variance because we simulated 500 replications in each condition, and this is just one of them, so it's possible that what you see also reflects that sampling variability. But I would say that in general these are not unrealistic characteristics for the data. Perhaps in this case the variability in the initial level is a little too high; perhaps that is because we were including it as a source of differences between people. Am I answering your question?

Yes. I guess the one thing I'm still unsure about is: does your model make any assumptions about homogeneity of the data across your time span?

No, no, we're not making that assumption. Okay, thank you. You're welcome.

Okay. So, just a couple of main points from these findings. The population trajectory can be adequately estimated without measuring all cases at all ages. From a standpoint of design this can be very informative: if you have five years of funding and you want to measure something that covers a larger span, you can use this information to help with the design. Now, if there is convergence among the cohorts, both types of models, discrete and continuous, perform well, but the continuous model performed better than the discrete in all cases. If there is no convergence among the cohorts, both types of models show bias in the parameters. And modeling the non-convergence is a little bit tricky, because it requires knowing where the non-convergence is coming from, and it requires the right model. In other words, if I am facing data analysis with this type of data, I don't know whether there is convergence or not. First, I need to use the right model, which I almost never know. Second, I need to have some guesses about where the non-convergence could come from, in order to accommodate it or to test it. And third, with data like this there are really no cohorts as such; everybody is measured at a different age, so you can think of each individual as being his or her own cohort.

Okay, so we go until one o'clock, right? Let me see how much I can say about this, because this idea of testing for convergence is a little bit tricky. What we wanted to know is this: in the discrete-time model that we used for approximating these data, we used age bins, but we wondered about the stability of that approach. I'm going to skip some analyses; if somebody is interested, I'm happy to talk about them. These are our analyses using the models I explained, examining data on fluid reasoning and potential effects of covariates related to brain variables. I'm going to go through them quickly, and I'm sorry, but we really don't have time and I want to talk about something else that is more related; if somebody's interested, we can talk about these afterwards. The idea is, we thought: maybe instead of having measurement occasion, I'm sorry, age bins, as the underlying metric... First of all, we don't have a whole lot of data in each of the potential bins. And, true, the continuous time model is the most natural way to pursue this.
But most researchers are not familiar with continuous time models, and I think it's fair to say that many might feel intimidated, perhaps, when trying to implement those models in the software that is available. So we were thinking: is it possible to capture those trajectories, which were generated based on an underlying age metric, using a model that is based on measurement occasions?

Here, this is the same latent change score model, but now we have time 1, time 2, time 3, a discrete set of measurement occasions, and we introduce age as a covariate. When we do that, this is what we obtain. In the background, the gray lines are the real data. The blue line is the exponential function fitted to those data. And the black lines with the red, green, and blue dots are the approximations, or predictions, from this model: red based on the first time point, green on the second, and blue on the third. So if an individual enters at five, based on this model the predictions are that this person is going to be here; if a person enters at 16, these are the three predicted values. That is the prediction when we enter age as a linear covariate. However, when we also enter, say, a quadratic component, we are capturing a little bit more of that curvature, and that approximates the function a little bit better. And when we enter a square root, it gets a little bit better still. So what we did is take a number of different trajectories from the exponential family and try to approximate them using linear age and several transformations of age (one over age, the square root of age, age squared, exponential functions of age) for a number of different trajectories. If the true trajectory is linear, of course, a linear approximation is enough. But if the true trajectory is highly exponential, highly curvilinear, a linear approximation is not going to do it, although many of the other, nonlinear approximations do capture the trajectory.
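The approximation exercise just described, entering age or transformations of age to track an exponential trajectory, can be illustrated with a toy comparison like the one below. It uses plain regression and illustrative values for simplicity; the actual study entered these terms in the latent change score model, not in an ordinary regression.

```r
age  <- seq(5, 19, by = 0.25)
true <- 100 - 60 * exp(-0.35 * (age - 5))      # an exponential-family target trajectory

fits <- list(
  linear    = lm(true ~ age),
  quadratic = lm(true ~ age + I(age^2)),
  sqrt_age  = lm(true ~ sqrt(age))
)
sapply(fits, function(m) summary(m)$r.squared)
# The more curvature the target has, the worse the purely linear term does,
# while the nonlinear transformations of age track it much more closely.
```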
Okay. So, to conclude, I want to give a couple of general summary findings, and hopefully we'll have some interesting questions. One: these types of models, latent change score models, are helpful for characterizing nonlinear developmental changes, in our particular case fluid reasoning from childhood to adolescence, using these types of data. They are also useful for examining the contribution of other covariates; in our case, we had data on brain structure and function. Another general point is that the methodology needs to shift from descriptive models to mechanistic models of dynamics if we are interested in understanding the underlying mechanisms. Something I said before and will repeat: the population trajectory can be estimated adequately without measuring all cases at all ages, if there is equivalence among all possible cohorts in the data; but if there is no convergence, these types of models show bias in the parameters. And finally, a couple of things about this idea of non-convergence, based on some new results that we have. Examining cohort equivalence is not as straightforward as it may seem: as I said, it requires knowing the possible sources, and it requires the right model, and that's anybody's guess.

Once you come up with what you think is the right model, then the bias might come from different sources, from the different parameters of the model. In our case, in the latent change score model, we've looked at possible sources due to the initial intercept, the initial condition, and due to the asymptote, or the additive component. What we found in recent simulations is that both the discrete and the continuous time models can estimate cohort effects very well. In other words, we have a model with a parameter that accounts for differences among cohorts in the intercept, and another for differences among cohorts in the asymptote, and when cohort differences exist in the data, by including these parameters we can recover the original estimates very well. But if there are cohort differences in the data and we do not include the parameters that are presumed to account for them, we get biased results, and those results are only tenable when the differences among the cohorts are very, very small.

Okay, I think that's all I am going to say, because it is five to the hour. I just want to acknowledge a lot of people who have helped with this, mainly Eduardo Estrada, but also a number of other collaborators, including Silvia Bunge, as well as a number of sources of funding. Thank you, everybody, for listening, and I'm happy to try to answer any questions that you might have.

Thanks so much for that talk. I have some questions, but let me let someone else dive in first, if anyone is ready.

This may be a question asked out of pure ignorance, but would you be able to use these models to estimate change in a construct that does not show a monotonically increasing sort of change over time? For example, how satisfied you are with your job comes to mind, or how engaged you are: something that is not consistently increasing but fluctuates over time.

Yeah. The answer is that these are not the best models for that. These models are meant to examine changes that are systematic over time, and that may show individual variation in those changes, but where there is a general trend. For the types of changes you're talking about, the fluctuations, the ups and downs, I think there are other models that might be much better, such as time series models, for example. In some of those cases, the most important aspect is precisely the ups and downs and the time dependency in those. So I would look into those types of models as opposed to the ones I presented, if I understood your question correctly.

What I was curious about from the first half of the talk was: what's the problem with the discrete time models with fixed or with random lags? It looks like in that first simulation everything was working well except for that one case.

Yeah. The idea is that this design, design 3, is giving us a little bit of a problem, and I think it is mostly, if you look at the results, in the correlation of the intercept and the slope, but we also find it in some of the residual variances. I'm not really sure what the source of that is, to tell you the truth.
All I can say is that introducing that jitter makes the estimation a little bit more difficult.

Is that just because some of the misspecification of the lags gets pushed into measurement noise? Is that part of it?

Yeah, that's what we thought. Although it's not just in the measurement noise; it's showing up elsewhere too, so it's hard to know what's driving it. I think we talked about this a little bit in the paper; I just forgot, because that was two years ago, which seems like an eternity. That was pre-pandemic. The introduction of that variability is, of course, a little closer to reality than the fixed lags.

Yeah, the other one is sort of everyone being measured on their birthday.

That's right. Which makes for an unpleasant birthday. Exactly. Are there more questions?

Well, if people would like, let me see, I have the papers here. If people are interested in these papers, I'm happy to distribute them, to make them available; these are the papers from which the information for the presentation was taken.

That's great. Well, we are about up on time, and I want to be conscious of other folks' schedules and our presenter's schedule. So thanks again; I really enjoyed this talk, and I hope others did as well.

Thank you so much for having me, and I believe I'm meeting with the students now.

Oh, great.