Recording has started. Welcome to today's brown bag, everyone. It's my great pleasure to introduce somebody that many of you already know. Ashok Goel is a professor of computer science and human-centered computing in the School of Interactive Computing at Georgia Tech. He plays a number of other roles as well: he's the chief scientist with Georgia Tech's Center for 21st Century Universities, and he's the executive director of the NSF National AI Institute for Adult Learning and Online Education, which we'll be hearing about today. I should say this is a very large and very prestigious effort that the NSF has supported, and it's wonderful to hear about it. He's a fellow of AAAI and of the Cognitive Science Society, editor emeritus of AAAI's AI Magazine, and a recipient of AAAI's Outstanding AI Educator Award. We will hear about this new national AI institute today. Let me just remind you, if you have questions, be sure to post them in the chat or the Q&A, and we will get to them at the end of the talk. And with that, let's welcome Ashok. Thank you very much, Kate, and thank you very much for the invitation. I'm really looking forward to this dialogue today. I will give a high-level, big-picture talk, not getting into technical details, about the vision, goals, and plans for this new National AI Research Institute for Adult Learning and Online Education, about which we are very, very excited. So let me begin with the vision. In 2019, the World Economic Forum projected that in the United States alone something like 133 million new jobs will be created by 2030. What it meant was that this is the number of people who might have jobs but might be looking for new jobs because of changes going on due to automation. This was in 2019, before COVID-19 occurred. Last year alone, in 2021, approximately 25 million Americans left their jobs in the so-called Great Resignation.
In November 2021 alone, about 4 million Americans left their jobs. What happens to all of these workers? How do we help them reskill and upskill? Now, these adult learners have very unique circumstances. Many of them have families; many of them have had jobs, or still have jobs, perhaps part-time jobs. So this is not the typical population that you can bring to a college. Instead, we have to take college to them. Hence, online education. And what do we mean by adult learning here? By adults in particular, we are really talking about people who are 24 years or older, anywhere from 24 up to 60, 70, 80. The technologies and theories we're going to build will be useful for other adults too, the 18-to-24-year-olds of typical college-going age, but the focus is on older adults. And when we talk about older adults, additional factors come into play: cognitive factors, social factors, cultural factors, emotional factors. To take a few examples: in K-12, a lot of learning is teacher-directed, but for older adults, a lot of the learning they're interested in is typically self-directed. In K-12, most of the learning is general, while older adults are interested in skill learning. In K-12, most of the learning has to do with very well-defined problems, such as problems in arithmetic or algebra; for older adults, the problems are often open-ended. That raises several challenges: not only the challenge of a very large number of workers, not only the challenge that they're adults and we are talking about online learning, but also the challenge that they're interested in things quite different from the typical student in K-12 or even in undergraduate college. And that brings us to the vision of our AI institute, AI-ALOE: to foundationally research the unique circumstances and specific science of adult learning and online education, and thereby transform the American workforce. And that's a big jump.
Now, to do this, we're going to take a sociotechnical systems approach in which humans and AI agents will be working together, living together, learning together. The important thing about sociotechnical systems approaches is that the focus is not so much on optimizing AI algorithms, which is the sort of typical, traditional AI research focus; it really is on the humans, and on optimizing the system as a whole according to human values and human goals. So here AI is very much in the service of humans. Now, when we're talking about humans, we're interested in things like accessibility, personalization, and scalability. Interestingly enough, the same kinds of issues are arising in AI itself: in AI, too, we want AI agents to be personalized, we want them to be scalable, and we want them to be accessible to everyone. So in some ways the problems of education and AI are converging. This also raises the question of what kind of impact we want to have and how we would measure that impact. Typically, when we talk about measuring impact, we talk about the items on the left of my slide: learning efficiency, learning effectiveness, and cognitive engagement. We can measure learning efficiency, for example, by the time it takes to reach a certain degree of competency, and learning effectiveness, for example, by whether what is being learned can be transferred to new problems. On the other hand, we are equally interested in the three measures on the right-hand side of the slide: availability, affordability, and equity. By going to online education, we're hoping that we can make learning available to a very large number of people and make it affordable, which we hope, expect, and intend will also make it more equitable. In between, there is the recognition that learning is not just a cognitive process. Learning is fundamentally a social process with very strong affective and emotional aspects, and I'll talk a little bit about that as I go along.
Learning is at the same time personal, in the sense that we want to use AI to give personal feedback, and of course we want to do it at very large scale, because the scale of the problem we're talking about is very large. I hope that gives you some sense of the vision of the new AI institute. Let me gradually transition to goals, now that we have some understanding of the vision. There are both use-inspired goals and foundational goals. This is a national AI institute, so a lot of the focus is on AI itself. But we're talking here about AI in two senses: foundational advances in AI, which I'll come to a little later, as well as use-inspired advances in AI situated in education. And we're going to talk about both technological and methodological contributions. So what I'm going to do for much of the rest of this presentation is walk you through many of the advances that we want to make, both in use-inspired and in foundational AI. On the use-inspired side, a lot of the theory-making comes from the community of inquiry framework, which has identified several major problems that occur in online education. How do we make the quality of online education on par with in-person education? How do we make it superior to in-person education? We know there are problems there. In terms of cognitive presence, online learning is often not as engaging as in-person education; in terms of teacher presence, access to the teacher is limited; and in terms of social presence, there is social isolation, because social interactions are really limited. That leads to additional considerations. Let's first talk about use-inspired AI; that will set up goals and issues for the foundational AI. In this particular case, the use of AI in education is going to motivate why we want to address certain problems in foundational AI. Let me begin with cognitive presence in online education.
Often there is no access to physical labs. In in-person education, you can go to a physics lab, a chemistry lab, a biology lab, and so on; in online education there is no such access, which of course puts online learning at a major disadvantage. If you're teaching something in manufacturing, for example, and the online student does not have access to any manufacturing lab, that is a problem. Well, one way of addressing this, to make these labs available at least to some degree, perhaps to a large degree, is to create virtual labs. This is the work of Sungeun An, who is a PhD student here at Georgia Tech in human-centered computing. So VERA is the Virtual Ecological Research Assistant; actually, we now call it the Virtual Experimental Research Assistant because it works in multiple domains. Using this virtual lab, you can do data-driven, evidence-based scientific thinking about systems, in this particular case ecological systems. As an example, there are beautiful sea turtles on the coast of Georgia. As the temperature rises because of climate change, what happens to the sea turtles? Imagine that someone, in a self-directed learning manner, is interested in asking that question. As he or she does so, they can start building a conceptual model on the left and then run a simulation on the right. I'm not showing you the entire process here; the point is that this enables systems thinking in much the way scientists do it. This really arises from cognitive theories of how scientists think about complex systems, and now we're embedding those theories into real learning environments. Scientists identify some phenomenon, for example, what will happen to the sea turtles if the temperature rises by a degree, and they generate hypotheses: maybe the gender balance in the population of sea turtles will change as the temperature rises.
Then they create a model that elaborates on that hypothesis, evaluate the model, and revise the model. This is the typical process of scientific thinking. So our virtual lab enables a self-directed learner to engage in the same kind of scientific process. Now, what we have done is connect VERA with the Encyclopedia of Life at the Smithsonian Institution, and that is partially the work of Dr. Jennifer Hammock, who is a director of the Encyclopedia of Life at the Smithsonian. In this way we provide access to a very large amount of domain knowledge. So now the learner not only has access to this online lab, but through the online lab has access to a large amount of domain knowledge: the world's largest digital library about biology. We introduced this in classes, this is with Dr. Emily Weigel in the School of Biological Sciences, in her classes in biology and ecology. What we found is that because we enable learners to both generate conceptual models and evaluate them in a data-driven, evidence-based manner, and because we provide access to large amounts of domain knowledge in a contextualized manner, so it's not just like reading an encyclopedia, you get the knowledge you need, when you need it, in the context in which you need it, learners create deeper, richer, more complex, and in fact, by some measures, more creative models. But so far, if we are talking about a classroom, it is still a pedagogical context. We have also made VERA more widely available: we have provided access to VERA through the Encyclopedia of Life, and now thousands of learners across the world are using VERA through the Encyclopedia of Life on their own. Sorry, I went too fast to the next slide. There is a challenge here, because these learners are distributed across the world. We don't know what their learning goals are. We don't know what the learning assessments are; there are no learning assessments. How then do we measure the quality of learning?
And how then do we decide what kind of cognitive scaffolds we should provide to a self-directed learner when we don't know anything about their learning goals, outcomes, or assessments, and we don't even know much about their demographics? This is characteristic of adult learning and online education: you don't know who's going to use your tools, your technologies, your theories, but you still need to be able to provide the right kind of cognitive scaffolds and make sense of how learners are actually using VERA. Here is an example of a model someone has constructed and a simulation someone has run. What we're doing is looking at the whole sequence of actions they have taken, and Sungeun is now using Markov chain models, among other models, to make sense of these sequences of actions, classifying the learners into different groups and trying to understand their learning behaviors, so that for each learning behavior we can provide the right kind of scaffolds and the right kind of feedback. I hope that gives you some idea of what we're talking about when we talk about self-directed learning here. For the time being, let me move on to teacher presence. This is the work of David Joyner, whom I expect many of you in this audience already know. We can create video lessons for online learning, and these video lessons can go on for hours and hours. And we all know that not all of these video lessons are very engaging. If we're going to make adult education and online learning on par with in-person education, then we have to provide additional structures and scaffolds in these video lessons to make them meaningful to the learners. So now, after every particular lesson in a video lesson, there is a small exercise, which of course is a heuristic that many people have been using. But we have gone beyond that: behind every exercise there is now a small intelligent tutor.
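As a toy illustration of the Markov-chain analysis of learner action sequences mentioned a moment ago (a sketch under invented action names, not the actual VERA analytics pipeline), one can estimate first-order transition probabilities from a logged sequence of actions:

```python
from collections import Counter, defaultdict

def transition_probabilities(actions):
    """Estimate first-order Markov transition probabilities from a
    sequence of logged learner actions (e.g., clickstream events)."""
    counts = defaultdict(Counter)
    for current, nxt in zip(actions, actions[1:]):
        counts[current][nxt] += 1
    probs = {}
    for state, successors in counts.items():
        total = sum(successors.values())
        probs[state] = {s: c / total for s, c in successors.items()}
    return probs

# A hypothetical log: the learner builds a model, simulates, revises, ...
log = ["build", "simulate", "revise", "simulate", "revise", "simulate"]
model = transition_probabilities(log)
```

Transition profiles like `model` can then be compared across learners, for instance by clustering, to group them into the behavior patterns the talk describes, with scaffolding chosen per group.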
So as a student watches a video lesson and learns a particular skill, he or she comes across an exercise and starts working on it. As the student gives answers, in this case an incorrect answer, and we don't need to worry about the exact nature of the exercise, the intelligent tutor behind it is able to provide an explanation, until the student reaches the correct answer, at which point the agent can confirm, for example, what the expression for y is and that this is the correct answer. So this supports mastery learning, or it could be competency-based learning. Now, how did we do this, given that this is an open-ended problem and not a closed-world problem like, say, the arithmetic or algebra of the kind we do in K-12? This is the way it works. We make an inventory of the concepts that we want learners to learn in a particular class. For every concept, we make an inventory of the typical misconceptions that students have; most teachers know both the concepts they want to teach and the typical misconceptions. So we build these exercises and give multiple question choices for students to pick from. As students pick different choices, we map them, behind the scenes, to one of the particular misconceptions we already know about, because we know what the typical misconceptions are, and for each misconception there is an explanation that we have already pre-compiled; the agent simply gives that explanation. So this is an example of taking teacher presence and scaling it up: in one particular class there are about 100 such tutors, and that particular course has by now been taken by, we estimate, eight to nine thousand students. So that's 100 tutors in a single course with eight to nine thousand students, which is an example of enhancing teacher presence in this online class.
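The concept-and-misconception scheme just described can be sketched roughly as a lookup from answer choices to pre-compiled explanations. The concept, choices, and explanation texts below are invented for illustration; the deployed tutors are of course richer:

```python
# Hypothetical inventory for one exercise: each distractor choice maps
# to a known misconception, and each misconception to a pre-compiled
# explanation written by the teacher.
MISCONCEPTIONS = {
    "B": "confuses_correlation_with_causation",
    "C": "ignores_carrying_capacity",
}
EXPLANATIONS = {
    "confuses_correlation_with_causation":
        "Two variables moving together does not mean one causes the other.",
    "ignores_carrying_capacity":
        "Populations cannot grow indefinitely; resources impose a limit.",
}

def tutor_feedback(choice, correct_choice="A"):
    """Return tailored feedback: confirmation for the correct choice,
    otherwise the explanation pre-compiled for that misconception."""
    if choice == correct_choice:
        return "Correct."
    misconception = MISCONCEPTIONS.get(choice)
    if misconception is None:
        return "Incorrect; please review the lesson."
    return EXPLANATIONS[misconception]
```

The key design point is that the tutor never has to understand the open-ended answer itself: the multiple-choice distractors are authored so that each one already names the misconception behind it.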
And my colleague Chaohua Ou at our center for teaching and learning has done a lot of evaluation of this and found that it is in fact very effective. Let me move on to social interaction. Many people here have either taken classes online or taught classes online, and we know there is a lack of social interaction that can occur online. In a physical classroom, you can look at your neighbor and talk to him or her; that's much harder in an online class, especially if the class has 500 students or more and you don't know anyone. Learning being a social process, what do we do about it? Here's one way in which AI can help. We ask students to introduce themselves, and as the students introduce themselves, an agent called SAMI, for Social Agent Mediated Interactions, a play on Vygotsky's theories, reads their responses and replies. This is the work of research scientist Ida Camacho. But not only is SAMI giving responses; more interestingly, it is also building links between students. The links could depend, for example, on time zones, hobbies, or interests. Or, if we're talking about a 40-year-old adult, perhaps there's an adult learner who has a child in elementary school, and he or she may be interested in meeting other learners in a similar kind of social context, whose children also go to elementary school. Or it could be based on chess playing or other hobbies. The point is that there is a social matching process going on, and we have found that even simple social matching goes a long way toward giving students an ice-breaking way to begin talking to their online classmates. I'm now going to move on and talk about feedback loops. I have talked about cognitive presence, teacher presence, and social presence, but what we really want to do is not only build these AI systems but think about how they can live with teachers and learners together.
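The social matching just described can be sketched as attribute overlap between student profiles. The names and attributes here are made up for illustration; SAMI's actual matching over free-text introductions is far more sophisticated:

```python
def match_students(profiles, min_shared=1):
    """Link pairs of students who share at least `min_shared` attributes
    (time zone, hobbies, family circumstances, ...)."""
    links = []
    names = sorted(profiles)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = profiles[a] & profiles[b]
            if len(shared) >= min_shared:
                links.append((a, b, sorted(shared)))
    return links

# Hypothetical profiles extracted from self-introductions.
profiles = {
    "Amara": {"EST", "chess", "parent_of_elementary_schooler"},
    "Ben":   {"PST", "chess"},
    "Chloe": {"EST", "hiking", "parent_of_elementary_schooler"},
}
links = match_students(profiles)
```

Each link records what the two students have in common, which is exactly the ice-breaker the agent can surface when it introduces them to each other.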
So, if you think about going from left to right: an online teacher comes in, and he or she already has some ideas about theories of human cognition and learning and about the constraints and affordances of online learning. Then learners learn. We're going to collect a lot of data, from the low level of clickstreams to the high level of assessments. Once that data has been collected, we want to mine it for learning progressions and learning trajectories, so that we can identify learners who are having specific kinds of difficulties with specific kinds of concepts. We then want to feed that back both to teachers and learners and to the AI systems I have talked about so far, which are not currently getting this information. As we start getting this information, the AI agents can personalize their interactions with people, which I'll show you in a minute once I shift to foundational AI. But this is also going to be useful to teachers. As a teacher myself, the amount of information I have available about my students' learning is actually very limited; if only I had more information. In fact, one of the big wins of online education is that we have access to data, data, data. Can we not use this data to improve the quality of online learning? And why do it only for pedagogical situations? We can also do it for self-directed learning. Here there are no teachers; this is an example where someone might be using VERA in a self-directed manner. So there is only VERA, the AI system, and the learner; there is no teacher in the loop. A lot of learning for adults is of that kind; the way you and I learn, a lot of adult learning is self-directed. The same thing can happen here: we're collecting data about people's learning and then feeding it, in this case, directly into the AI agents, as well as providing access to some of it to the learners themselves.
We hope to collect data from a few million adult learners, and this will be the largest data set about adult learners in the United States in the public domain. No doubt there are industrial companies that have larger datasets, but that is proprietary information. Now, when I say "in the public domain," I do not mean that we will provide access to all of this data to everyone. We will build a tiered system, because there are very important issues of privacy. In this tiered system, only a very small number of core researchers will have access to identifiable student information; the next tier will have only anonymized data; and in the tiers further out, anyone in the learning sciences community can send us their algorithms, we will run them on our data, and we will send only the results of running those algorithms back to the outside investigators. In this way we maintain student privacy and address issues of that kind, which of course are absolutely a part of responsible AI. Okay, so I hope I have set up some of the issues that have to do with use-inspired AI and how AI can have an impact on education. Based on our past experience with deploying AI agents like these, we have found, through quasi-experimental studies, that the quality of learning in these online classes is comparable to that in the residential counterparts. But in order to really scale this up, we have to address some foundational issues in AI, issues for which AI right now has only partial answers. Let me begin by talking about personalized feedback at large scale. I expect some of you already know about Jill Watson. Jill Watson was an automated virtual teaching assistant that we built a few years back. It can automatically answer some subset of the questions students ask in online discussion forums.
Most of those questions have to do with logistics: when will I hold office hours this semester, for example. But we have now moved on to a new version of Jill Watson that can answer questions about content. As an example, I talked to you about VERA, the Virtual Ecological Research Assistant. It comes with a reference manual, but who actually reads the reference manual? You and I don't read reference manuals. But now Jill Watson knows enough about the reference manual that when a user asks a question, Jill Watson can immediately give an answer based on it. Here's an example: "What is a primary producer?" Jill Watson finds an answer to what a primary producer is. This is a content-based answer, no longer a question just about class logistics. Which raises the question: if we can make a Jill Watson for a reference manual, can we not make a Jill Watson for a textbook? That's exactly what we're trying to do now. We have an interactive ebook for a particular course, and we're trying to build a Jill Watson that can answer questions based, initially, on just one chapter of that book. And notice how this can lead to personalization. If Jill Watson can answer questions about content, about concepts, then as a student asks questions, whether the same kind of question or different questions about the same concept, Jill Watson can alert the teacher, or Jill Watson can change its answers. And that's what leads to personalization. We have some evidence of this happening. Initially, as people interact with Jill Watson in an online discussion forum, they think that Jill Watson is very intelligent; over time, they start thinking that Jill Watson is not very intelligent. There is a dip that you can see here in the middle; the red arrow is capturing students' perceptions of Jill Watson's intelligence. Initially it is very high; in the middle it becomes lower, and they think that Jill Watson is not intelligent.
And towards the end of the semester, it reaches a happy medium. What this means is that as students' perceptions change, the way they address Jill Watson also changes: they start using a different set of linguistic utterances. And that's nice, because now Jill Watson can look at these linguistic utterances and quickly make out how students are perceiving it. That is exactly what I meant by personalization: if Jill Watson can figure out how different students are perceiving it from the questions they ask it, or how they are struggling with a concept based on those questions, then she can tailor her answers. This gets me to the next topic, mutual theory of mind. This is the work of Qiaosi Wang, who is building out theories of communication between humans and AI. Theory of mind is a social-cognitive construct. When two people are communicating with each other, each of the two people, or more than two people, has a theory of the other person's mind: we can ascribe goals, intentions, knowledge, and beliefs to others. You and I do that all the time; you and I are doing it right now. But it's not only that I have a theory of your mind and you have a theory of my mind; it's also the case that I have a theory of your theory of my mind. Note that this is a recursive thing. Unfortunately, humans and AI do not have such communication. Humans typically do not have a theory of the AI's mind, AI agents typically don't have a theory of the human's mind, and AI agents certainly do not have a theory of the human's theory of mind, or the other way around, with the recursion. The question now is: can we understand the communication breakdowns that occur between humans and AI through this theory of mind? That's what theory of mind provides: it's a lens for understanding how communication occurs and how communication breakdowns occur. So, for example, person A may say something, and person B may think, well, based on what she said, my mental model of her is such-and-such.
And then person B might give some feedback, and person A might think, well, he really doesn't understand me. We want to do the same kind of thing for humans and AI. So now, instead of two people, there is a human being and an AI system. The human being says something to the AI system; the AI system makes some inferences based on the human's digital footprint, builds a mental model of the human being based on that, and then makes a recommendation. For example, in the case of SAMI, it might want to connect you with a few people who have similar interests. And the human might say, well, that's not me, this is incorrect. The human gives this feedback to the agent, and the agent, we expect, will be able to use that feedback and correct its model. So this is an example of using mutual theory of mind to enable enhanced communication between humans and AI, and we expect this might result in new work on human-AI interaction. Okay, let me move on. Another problem that comes up when we talk about AI is what I'll call machine teaching. Here's the problem. We can build a Jill Watson or a SAMI or a VERA, but actually constructing these agents takes a long time. Suppose I were to give Jill Watson to a teacher, a college teacher or an adult-learning teacher or a teacher for lifelong learning, and say: you can use Jill Watson, but it will take 50 hours to build a Jill Watson for your particular class. If Jill Watson then saves 100 hours of answering questions, 50 to 100 is not necessarily a very good ratio of payoff, and no one wants to make a 50-hour investment of time right up front. What do we do to make this time shorter? We want to be able to make it much shorter: not 50 hours, but perhaps two hours, so that anyone can build a Jill Watson for his or her class in two hours. So this is how Jill Watson works; I'm not going to go into details, but questions come in and they get classified.
And there is a domain knowledge base; Jill Watson searches the domain knowledge base, retrieves a response, and gives the response to the user, going from left to right in the top row and then the bottom row. What we are doing now is adding a machine teaching interface that we call Agent Smith. A human, say a teacher, sits in front of this interface; the teacher already has the knowledge base. There is a typology of questions, and for each question type the teacher says: to this question, I want to give this answer. Because there is a typology of questions, the teacher just selects some questions and some answers, and AI does the rest. It builds a large number of questions based on those question templates and a large number of answers based on the selections made by the teacher, which can be done in a few hours. Then Agent Smith trains the Jill Watson agents on its own, and Jill Watson ends up giving as good a performance using this machine teaching as it does normally. So these correspond, in foundational AI, to three major advances. On the x-axis of this diagram is learning: moving away from well-defined problems to open-ended problems, with systems thinking and scientific thinking as examples. On the y-axis is mutual theory of mind: much better communication between AI and humans. And on the z-axis, going into the screen, is machine teaching: the first generation of AI was hand-crafted knowledge, and the new generation is going to be human-taught AI. This should interest the HCI community a lot, because we are really talking a lot about human-AI interaction; machine teaching and mutual theory of mind are both about human-AI interaction. Of course, there are huge issues of responsibility here. We are going to collect large amounts of data and train AI agents on that data, and there are going to be biases.
And we understand that these biases are of many different kinds and occur throughout the process, so there is no simple solution to identifying ethical problems and biases. But we are building all of our technologies through large-scale participatory design. That means we involve all stakeholders: learners, teachers, administrators, and in some cases the partners of the learners, because they all play a role in building these technologies. That way, if there are ethical problems or bias problems, we can identify them early and not leave them until after the fact. This is in fact a major part of the sociotechnical systems approach as well. And then visual analytics is a major part of it, because after we have done all the analysis and presented the data, we human beings can still have biases, implicit biases, and there now are visual analytics techniques that try to address some of those cognitive biases, including implicit biases. Okay, so let me move on to plans. The plan is what you would expect for design-based research, the usual cycle you expect in the learning sciences: initially you do design, then you do interventions, and then you do formative assessment and collect data, and the cycle repeats itself. We have changed the wording a little: instead of just design, deployment, and intervention, we think in terms of design and development, then deployment and intervention, then analysis and assessment and feeding it back. We can also think of this in terms of foundational AI research, use-inspired research, and learning analytics. We're defining the cycles as one-year cycles. The first semester is design and development; the second semester is deployment and intervention; the third semester is analysis, assessment, and feeding it back. In a five-year period, we think we can do about 13 to 15 cycles, depending on how you count a year, and each cycle is the same sort of iterative process. Now, of course, this works very well for some kinds of intervention.
But this particular notion of a one-year cycle does not necessarily work for everything we're interested in; some things require more than a one-year cycle. So I'm just settling on this as the basic unit, while recognizing that, for example, mutual theory of mind and machine teaching are not going to get done in a one-year cycle. Now, on the first cycle: AI-ALOE was launched on November 1st, 2021, and last fall was the first cycle. We picked four design elements: educational contexts, coaches and instructors, AI technologies, and learning assessment, and we started building the technology infrastructure for the feedback loop I was showing you earlier. The important thing here is that we're thinking of these four elements as interlocking gears. Typically, when people develop AI technologies, they just develop AI technologies. But if you're going to deploy AI technologies to enable effective, efficient learning for adults, then all of these things have to connect and work in conjunction with each other, and that of course becomes more challenging. In the first cycle, we introduced some of these AI technologies in three classes at the Technical College System of Georgia. The Technical College System of Georgia, TCSG, has more than 300 thousand students. These are typically two-year programs on topics, for example, like manufacturing or nursing, to take two examples, but all kinds of disciplines are there. Part of the reason for going to TCSG rather than a place like Georgia Tech is that the demographics there are ones that often do not get as much attention in AI or computing or other high technologies.
So by starting there, we want to make sure that the learning technologies and models that we're developing are useful not only for adult education and online learners of the kind that are at Georgia Tech, but for all adult learners. And this is a serious issue that goes back to the issue of fairness, bias, and equity: many learning technologies and learning models do not necessarily generalize to different kinds of educational contexts. By starting in varied educational contexts, we are trying to make sure that our learning technologies, learning models, and learning theories in fact generalize across a large number of educational contexts. Now, as I've mentioned, these are one-year cycles, and the first cycle started in the fall of 2021. Each cycle has three phases: the first phase is design, the second is deployment, and the third is analysis and assessment. So we're in the second phase of that first cycle, but we're also in the first phase of the second cycle, which started in January 2022. And there we have introduced four more elements. Now we're not only doing the four on the top left, we're also doing four additional elements on the top right: learning analytics, interaction design, participatory design, and personalization at scale. Of course, this is all at an early stage. We haven't, for example, really done a lot of interaction design or participatory design yet, but that process has started. And again, the problem is the same. We know how to attend to any one of them, participatory design or personalization at scale, very well; these things have been done for some time, some of them. But how do you do all of them together? How do you do them in such a way that one feeds into the other: that participatory design feeds into AI technologies, that learning assessment feeds into learning analytics, which feeds into interaction design, and they all inform each other?
And as we go along, we'll add additional things, for example social and emotional processes, and very soon this is going to become a very complex machinery. So in some sense, at AI-ALOE, we're going for a moonshot, whether we succeed, fail, or partially succeed. It is a moonshot in the sense that a large number of things have to work in conjunction, and work well. Now, I mentioned that we already started introducing things at the Technical College System of Georgia, and some technologies are already running. In the summer, this will go to Georgia Tech. We will continue with TCSG, we're not going to abandon it, but we'll introduce the technologies in more and more classes and also start at Georgia Tech. In the fall, we're going to start at Georgia State University. And come next year, we're going to start at Wiley, Boeing, and IBM, which are three of our industrial partners. Each one of them has access to hundreds of thousands of learners, and that's how we hope to get to 2.5 to 3 million learners over the five-year duration. Because we have industrial partners who want to use our technologies, and more importantly, the technology infrastructure for that feedback loop that we are building, we have access to all of these learners. We'll be collecting data, as I tried to indicate. Of course, we have to do a large amount of assessment, and there are all the different kinds of assessment that you would expect; I won't go through the details here: formative and summative, efficiency and effectiveness. And not only learning efficiency and learning effectiveness, but teaching efficiency and teaching effectiveness, which are equally important, not just for learners but also for teachers. Now, here's an example, an incomplete table, of the kind of learning-centric assessments we are talking about.
Learning retention, cognitive engagement, social interactions, learning behaviors, learning efficiency: this table goes on; I just pulled five rows of it to give you an indication of the kinds of things we're thinking about. And for each one of them, there is a notion of what the measured variable is, what the data sources are, and how we're going to collect and analyze that data. This is a long table, as you would expect it to be. But in addition to learning assessment, there's the issue of program evaluation. So we have an external program evaluator who's going to evaluate this from many different perspectives: the science of learning, the science of cognition, the science of AI, the science of computing. But also from a pedagogical perspective: is the learning efficient and effective, both for learners and for teachers? And then of course there is DEI, the diversity, equity, and inclusion I indicated earlier. I'll talk just a little bit more; I'm coming towards the end of the talk so that we have some time for question answering. I'd love to hear from you. So let me just talk a little bit about our organization. We have a dozen partners; the Georgia Research Alliance is the prime on this particular grant. We have partners from educational institutions like Harvard, Drexel, Arizona State, Georgia Tech, Georgia State University, and the University of North Carolina at Greensboro. We have industrial partners: Boeing, Wiley, IBM, as well as Accenture. And we also have nonprofit organizations like IMS Global. So this is a really large consortium. There are 25 core researchers: approximately one third in AI, one third in learning technology, and one third in education. Now, these numbers are approximate; some of these people in learning technology, for example, could be counted in education or in AI, so we can move these people around to some degree.
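The shape of the learning-centric assessment table described above, one construct per row, each with a measured variable, data sources, and an analysis method, can be sketched as a small data structure. The constructs come from the talk; the measured variables, data sources, and analysis methods are illustrative placeholders, not the institute's actual table:

```python
# A sketch of one possible representation of the learning-centric
# assessment table described in the talk. Field values below are
# illustrative assumptions; only the construct names come from the talk.
from dataclasses import dataclass

@dataclass
class AssessmentRow:
    """One row: what is assessed, how it is measured, and from what data."""
    construct: str              # e.g. learning retention
    measured_variable: str      # hypothetical example
    data_sources: list          # hypothetical examples
    analysis: str               # hypothetical example

table = [
    AssessmentRow("learning retention", "delayed post-test score",
                  ["quizzes", "exams"], "pre/post comparison"),
    AssessmentRow("cognitive engagement", "depth of discussion questions",
                  ["forum logs"], "content coding"),
    AssessmentRow("learning efficiency", "time to criterion",
                  ["clickstream"], "time-on-task analysis"),
]

for row in table:
    print(f"{row.construct}: {row.measured_variable} "
          f"from {', '.join(row.data_sources)} via {row.analysis}")
```

The real table is much longer, of course; the point of the structure is that every construct is paired with an operationalization and a data pipeline, which is what makes the feedback loop measurable.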
The boundaries are fuzzy. I won't go through all the names, but this gives you an idea that we're really bringing together a community of researchers, a very interdisciplinary community, all the way from education and psychology and public policy to AI and machine learning and data science, and a lot of things in between. And one of the opportunities here is doing transdisciplinary research, because one of the aims here is not just building theories and technologies; we are also thinking about the new kinds of methodologies that emerge when you bring together 25 researchers from very different disciplines and very different perspectives. This is the institute leadership. A colleague from the Georgia Research Alliance manages the institute administratively, and another colleague from the Georgia Research Alliance and Georgia Tech serves as project manager. And we have an executive committee, and again, the executive committee is highly interdisciplinary. We also have an advisory board consisting of ten members, and the advisory board, too, is extremely interdisciplinary: there are people from artificial intelligence and machine learning all the way to education and psychology. And I'm sure you recognize at least Ayanna Howard in this group, because she was until recently our colleague in Interactive Computing. And with this, I'm going to end the talk. We have a website, and you're welcome to visit it; it gives a lot more information, and it also provides a mechanism for keeping in touch. We're going to start an affiliate program very soon that will go way beyond these 25 core researchers. An affiliate program means that anyone can become an affiliate and then have access to information and, potentially, depending on various criteria, access to some of the technologies and some of the data that we are collecting. Thank you very much for your attention. Thank you so much, Ashok, what a great talk.
I'm always just kind of blown away by the scale of what you're working on here, in terms of the number of partners, the number of students, the amount of data, and all this. I hope we get some awesome questions; please post in the chat or the Q&A. I wanted to kick us off, though, and this is very much a general question, and I know it's early days, so this just may not have happened yet, but I'm wondering what sort of interest you're seeing from the course management companies or the learning companies. You mentioned Wiley as a partner. Are you starting to see that sort of interest, or is there a kind of transition plan, maybe at the end of this effort, for how these things might roll out and be adopted? Will we ever see a Jill Watson module in Canvas, for example? Thank you for asking that question. First of all, let me begin with the last part of the question. Kate, should I stop sharing so that we can look at each other a little bit more, or should I keep presenting? I think it's fine to stop sharing, and unfortunately in BlueJeans Events you can't see the participants anyway, but at least I can see you. So, we have just built a Jill Watson LTI for Canvas, and this summer we hope to introduce Jill Watson in many more courses at Georgia Tech, because now it's no longer just in Piazza discussions, now it's in Canvas. And by fall we are hoping it will go to an order of magnitude, if not two orders of magnitude, more classes. So that's exactly what we're trying to do, exactly the direction you're pointing at. We are seeing a lot of interest from our industrial partners, Wiley, Accenture, Boeing, and IBM. But one of the things I'm hoping for, and this is just a hope right now, understanding full well that it is not a plan or a policy, is that at some point some university like Georgia Tech, or TCSG, will say: we want this infrastructure for all of our classes.
Because for all of our classes, whether online or in person, we want to collect this information and feed it back to learners and teachers. And if that can happen, then that becomes a win of its own kind. That's great. That's great. Please do type in questions. I've got one more comment, and this is a plug for me: we have really good connections with the Tennessee technical college system; they've reached out to us a number of times and would be very excited about this. Now, you have a lot of partners and don't need more, I think, but whenever you start to grow, they would be very, very interested, just like the Technical College System of Georgia. So, I remember that where I went to high school there were two tracks: you could do the college-prep track, or there was a vocational track, which was very hands-on. And of course, a lot of learning is embodied, or has an element of physicality to it. Do you think these sorts of online learning systems have a role to play there at all? Or is it always going to be the case that online systems have to stop before they get to that sort of core embodied learning? Yeah, great question, thanks. What I think might happen as we go along is this: there always will be in-person education; that's not going to go away. But online education is very important for some segments of learners, and even for them, it might become a mixture of online and in-person. In particular, someone working in industry might first take an online class for, let's say, six months, then come to a university like Georgia Tech for an intensive short course, then go back to his or her place and continue the rest of the education online. So there's a different way of thinking about hybrid: usually we think of hybrid as half in-person and half online.
But this is hybrid in the sense of a temporal sequence: online, then in-person, then online. And that matters because, to take the example of nursing, you would not want nurses to learn everything online; you want real patients in real hospitals. But at the same time, you can do a lot of things online. That's great. Justin has posted a question: "Thank you for your talk. With regard to social and ethical impact, is there a risk that these systems, targeted for adult online learning environments, will impact traditional learning environments in ways that are not intended? If so, how can such risks be mitigated?" That's a good question, Justin, thank you for raising it. There are both opportunities and risks. One opportunity here is that although we are doing this AI for online education for adult learners, one could imagine that many of these techniques, video lessons, for example, could also be used in in-person classes with more traditional students. Many of the classes at Georgia Tech, for example, after COVID-19, really are using a similar model where we have hybrid learning; the educational materials we created during COVID-19 are now used even for in-person classes. In that sense, there are opportunities here, but you're right, there are potential risks also. As an example, we know that as people become older, their knowledge increases, but their speed of recall and the amount they can recall can diminish over time. So if we build some technology for older adults, it may not necessarily be useful for an 18-year-old or a 20-year-old in the same way. On one side, a 20-year-old may know less than, say, a 50-year-old person, but he or she may have better recall, both in terms of speed and amount. So there's a risk here.
So what that means is that all of the technology we're developing will have to be contextualized, will have to be labeled in some way: here are the characteristics, here are the restrictions that go with it. It's like when you buy a book: you may get a book in different font sizes for different age groups, or you may get a book in a different medium for different kinds of learners. In a similar way, we'll have to think about how we put qualifications on each of these technologies so that we can mitigate some of the risk factors you are touching on. That makes sense. We're a few minutes, actually, over time, so to be respectful of people that have to go on: Ashok, I wanted to thank you again for giving this great talk, and I hope we can have you back, maybe in a year or two, to give us an update on what's happened. And I'm going to encourage our audience members to please feel free to reach out to you, I hope that's okay, if they have questions or any follow-up. Absolutely, they're welcome to, and thank you very much, Kate, for the invitation. Thank you again, and I'll see you next week, everyone.