Thank you so much, and thanks everybody for coming. I'm very happy to be back.

[A brief pause to sort out a projector issue.]

Apologies for the delay. Let me start by acknowledging what artificial intelligence has achieved, especially in the last ten years, and even more in the last few months. Still, I know for a fact that AI is behind brains in several important ways. One is language interpretation: finding the true meaning of language, decoding the meaning of linguistic interactions. Others are reasoning, understanding the world, invention, and in general continual learning, understanding the emotions of others, understanding society. And it is not biologically plausible, which is another flaw. It is intelligence, but it is artificial intelligence.

By the way, I found this out in my own fashion, the hard way. This slide is output from ChatGPT from a couple of weeks ago; this is the challenge right now, and there is a lot of output like it. The question is, how do we fix this, and where do we start?

Attacking language directly is a very attractive option, because that is where most of the acceleration has happened in the past two years. For me there is also an important fact about learning: in these systems, learning is not adaptable and continual, whereas we are good at learning throughout our lives. But I believe the clever thing to do is to go to the root, because in some sense the last bullet is not one more defect; it is the overarching reason. AI is behind brains because AI systems are not biologically plausible. So this is the bullet we have been trying to attack. Why? Because you have to start from the beginning: understand neuroscience.

That brings us to our common interest. Neuroscience has its own little hiccups, its own odd cycles, but we are all after this overarching question: how does the brain beget the mind? And obviously we are not very close to answering it. Here are two books that we know very well, the standard textbooks of cognitive neuroscience and of experimental neuroscience, and essentially these are two great works of science that don't reference each other. There is a huge gap between the students of the brain and the students of the mind. This gap is not only one of scale (ten to the eleventh is a big number) but also a gap of experimental subjects, of point of view, of methodology, of mindsets.
Richard Axel recently put it this way: what we are missing is a logic for the transformation of neural activity into thought and action, and he identified this logic as the major open question in science today. When Santosh Vempala and I read this, we felt as though we had been blessed by the Pope, because this is what we have been after: trying to bridge this gap. There are two surprises in this statement. One is that Axel, who is the least mathematical of the famous neuroscientists I know, uses the word "logic," which to my mind means a formal system, automated computation. The other surprise is that almost nobody is working on this; we are not aware of many people trying to build this bridge.

Here is our approach. We defined a mathematical model of the brain. Yes, you read correctly: a mathematical model of the brain. Except that this model is reasonably sanctioned by neuroscience: it does not contradict what we know, and it tries to capture the essence of it. In other words, it addresses the criticism of biological implausibility. And the model is able to implement cognitive phenomena. So let me describe it.

Basically, it is a finite number of brain areas, and we assume that each contains n excitatory neurons; think of n as ten million. These areas are connected by fibers, so some pairs of areas are connected by random synapses between their neurons, and within each area the neurons are also randomly connected. We use the simplest kind of random graph, the G(n, p) model: any ordered pair of neurons has the same probability p of having a connection from the first to the second. We know that the brain is not wired like that, but we have tried different models of random connectivity, and the behavior seems to be robust.

We assume that neurons fire in discrete time steps, which we all know is not how it happens, but we don't think it is a distortive assumption. And here is the major assumption: at each step, in each area, only a small number k of neurons fire, namely those k that have the highest synaptic input from the previous step. This, of course, models local inhibition. This is getting close to a full description of the model: areas can also be inhibited, and disinhibited, from one step to the next. Finally, plasticity. We have the simplest Hebbian plasticity: if two connected neurons fire consecutively, that is, the presynaptic neuron fires and the postsynaptic neuron fires at the next time step, then the weight of their synapse is multiplied by (1 + β); it increases by, say, five percent.

So in many ways the only things this model captures are plasticity, randomness, and selection, and if you think about it, these are the three main forces of life. Typical parameters: think of n as perhaps one million or ten million; k is roughly the square root of n; p, the probability that two neurons are connected, is about one in a thousand; and β is five percent. So this is the model, and these are its parameters.
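Since the model is this simple, one step of it fits in a few lines. Here is a minimal sketch of a single area in Python, a toy with illustrative names and sizes, not our actual simulator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; in the talk, n is in the millions and k ~ sqrt(n).
n, k, p, beta = 1000, 31, 0.01, 0.05

# Random connectome within one area, as in G(n, p):
# W[i, j] > 0 means a synapse from neuron i to neuron j.
W = (rng.random((n, n)) < p).astype(np.float32)

def step(prev_winners, W):
    """One discrete time step: k-winners-take-all plus Hebbian plasticity."""
    # Total synaptic input to each neuron from the k neurons now firing.
    inputs = W[prev_winners].sum(axis=0)
    # Only the k neurons with the highest input fire (local inhibition).
    winners = np.argpartition(inputs, -k)[-k:]
    # Hebbian update: synapses from firing neurons to next-step winners
    # are multiplied by (1 + beta).
    W[np.ix_(prev_winners, winners)] *= 1 + beta
    return winners

# Drive the area from a random initial set of k firing neurons.
winners = rng.choice(n, size=k, replace=False)
for _ in range(10):
    winners = step(winners, W)
```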
Okay, so there are some details I'm not mentioning that are not very crucial. The point is that this defines a dynamical system. The state is which neurons spiked, the synaptic weights, and which areas are inhibited; the next-state function is well defined. So now you have a nervous system. And what you notice is an emergent behavior, a very strong, robust, measurable behavior, which is what we call assemblies of neurons: the representation of objects or ideas of the outside world by subsets of the neurons of an area.

By the way, do I have any questions? Am I losing people?

[Audience question about p.] Sorry, p is an important parameter. It is how dense the synapses are in the network: the probability that two random neurons are connected by a synapse. And it is uniform; it doesn't change. We know the brain is not like that, but using more sophisticated models that imitate the brain better does not change much of the behavior I'm describing.

[Another question.] Yes. In other words, many variants of the model work the same way. By the way, the simulator is available online; to start it, you import the brain module and load a brain, which is a great thing to do at six in the morning. And at this point I usually get the question: how is computation initiated? Typically by external stimuli, and we have a simple way of modeling external stimuli as input.

Assemblies of neurons, as I told you, are an emergent behavior of this simple mathematical model. I assume you are familiar with what they are, but let me recall what has happened. They were hypothesized by Hebb in 1949. Neuroscientists tried to find them in the brain for a long time, until technology became good enough: Buzsáki's group, with Kenneth Harris, were among the first to measure them. By now many people study them, create them, manipulate them, excite them, so they are a well-established feature of brains, and one that many people believe is very important for the way the brain works and creates the mind. György Buzsáki, for example, whose recent popular book calls them "the alphabet of the brain." This is very apt, because I think assemblies are where computation becomes symbolic.

Now, assemblies have some interesting behaviors, which we have been studying for many years. The simplest is projection. It means that if you have an assembly in one area, and there is synaptic connectivity, a fiber, to another area, then this assembly, by firing a few times, can create a copy of itself in the other area. This copy is a stable assembly in its own right, and it has the property that every time the original assembly fires, the copy fires next. There is also reciprocal projection, which goes either way: whenever the copy fires, the original fires too.
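In the toy code from before, projection is just this: fire a fixed stimulus into a downstream area repeatedly until the winners stabilize. A sketch, reusing the illustrative names (np, rng, n, k, p, beta) from the earlier block:

```python
def project(stim, W_in, W_rec, rounds=20):
    """Repeatedly fire a stimulus assembly into a downstream area;
    the winner set converges to a stable assembly -- the 'copy'."""
    winners = None
    for _ in range(rounds):
        inputs = W_in[stim].sum(axis=0)                    # afferent input
        if winners is not None:
            inputs = inputs + W_rec[winners].sum(axis=0)   # recurrent input
        new = np.argpartition(inputs, -k)[-k:]
        W_in[np.ix_(stim, new)] *= 1 + beta                # Hebbian updates
        if winners is not None:
            W_rec[np.ix_(winners, new)] *= 1 + beta
        winners = new
    return winners

W_in = (rng.random((n, n)) < p).astype(np.float32)   # fiber between two areas
W_rec = (rng.random((n, n)) < p).astype(np.float32)  # recurrent, target area
stim = rng.choice(n, size=k, replace=False)          # stimulus assembly
copy = project(stim, W_in, W_rec)
```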
There is also association. If two stimuli occur together a few times, for example my brother and the pyramids, because I see my brother in front of the pyramids, then the two assemblies that have been created in my brain are going to change a little and intersect more. That is association. What makes all of this work is that assemblies are stable enough: if you ignite half of one of them, pattern completion takes over and something like 99 percent of it is going to fire (a toy sketch of this follows in a moment).

Then there is merge: if you have two assemblies in two areas, you can create an assembly in a third area that essentially represents the pair. This is how you can do trees in the brain. And sequence memorization, an important property of assemblies that we discovered recently: basically, if you have three stimuli and you project them into the brain in a particular sequence, then when one of these new assemblies fires, the sequence will complete itself. Very robust, and an interesting way of remembering order. All of this, of course, through synaptic plasticity.

These primitives give you many other behaviors; you can get a set of brain areas to act like a finite-state machine, and things like that. Also few-shot learning of simple classification tasks. Of course, if you have a stimulus, you can memorize it; the hippocampus does this all the time, several times every second. But here is the point: if you have a class of stimuli, for example the face of somebody you know, after a lot of processing by the visual cortex, stimuli that are almost the same and have a lot of overlap, then you can learn this class of stimuli and in the future recognize this person, even if this person appears at an angle you have never seen before.

So, all these behaviors of assemblies: how do we know about them? With Santosh Vempala and Wolfgang Maass, we prove theorems that convince you that these behaviors will happen with high probability. You realize that "with high probability" is necessary, because the whole basis of this is the probabilistic method, the random connectome: with some astronomically small probability, the random connectome has no synapses whatsoever, so you have to say "with high probability," because otherwise it's just not clear what you're saying.

[Audience question: is an assembly limited to one brain region?] Normally, every area has multiple assemblies. Soon I'm going to talk about a brain region, the lexicon, where the knowledge of words resides; tens of thousands of assemblies live there. Of course they can overlap by chance, and they can overlap deliberately, encoding some semantic connection, and so on. Is a single assembly limited to one area? In our model, yes, though that's a great question, and the picture can be revised; I didn't talk about that.

[Audience question about recurrent connections within an area versus connections between areas.] In the model, two neurons in the same area and two neurons in different areas have the same probability of being connected; it's the simplest possible model. Its magic is that you can prove theorems with it. Then you can ask: what if it's a little more complicated? And you see that the same behaviors emerge anyway, so you stop worrying.
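Here is the promised toy sketch of pattern completion, building on the previous blocks (the `copy` assembly and `W_rec` weights are the illustrative names from there; the recovered fraction should be high once the assembly is well trained):

```python
def completion(assembly, W_rec, fraction=0.5):
    """Ignite only a fraction of an assembly; one step of the dynamics
    recovers most of it. Returns the fraction recovered."""
    part = rng.choice(assembly, size=int(k * fraction), replace=False)
    inputs = W_rec[part].sum(axis=0)
    winners = np.argpartition(inputs, -k)[-k:]
    return len(np.intersect1d(winners, assembly)) / k

print(completion(copy, W_rec))   # typically close to 1.0 after training
```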
There is one part of my narrative that I left unexplained: I told you that areas can be inhibited and disinhibited from time to time. How is this done? Here is, as promised, the last element of our model: long-range interneurons. I lifted them from the neuroscience literature; we know they exist, and they get heavy use in our model. There is criticism about this. A neuroscientist recently spoke his mind to me: rather than what we call long-range interneurons, there are other mechanisms in the brain that could do this work and replace them, thalamocortical control loops, for example. I agree these could have similar effects, but here, to fix ideas, we use this mechanism of long-range interneurons.

Let me tell you how they work. These are the green populations in the figure, and they have the following properties: they can inhibit brain areas, or other populations of long-range interneurons; and they can receive synaptic input, that is, they can be tied to particular assemblies in particular areas, which is what this arrow says. So they can be recruited by specific assemblies to do their remote work: inhibition, and disinhibition by inhibiting another inhibitory population. They are the control flow, the programmer, so to speak, of the whole system. The computational system in this view is essentially a hardware programming language, and you now know all of its components: you set up some of these populations, then wait for stimuli and see what happens. How are these long-range interneurons recruited? They are primed to target specific assemblies and whole areas, and typically they are recruited by the firing of particular assemblies.

[Audience question: is there spontaneous activity in these networks?] Not in ours. You're right: once we understand what spontaneous activity accomplishes, we should have some way of modeling it, but we have not. So I'm not sure about the answer to your question; tell me if anything I'm about to say is contradicted by it.
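To make the "programming language" view concrete, here is a sketch in which the long-range interneurons are abstracted into inhibition flags that a schedule toggles around firing steps. The names are hypothetical, and the firing step itself is elided:

```python
class Area:
    """A brain area whose inhibition flag is set externally; toggling it
    stands in for the model's long-range interneurons."""
    def __init__(self, name):
        self.name = name
        self.inhibited = True   # everything starts inhibited
        self.winners = None     # the area's current assembly, once formed

def disinhibit(*areas):
    for a in areas:
        a.inhibited = False

def inhibit(*areas):
    for a in areas:
        a.inhibited = True

# A 'program' is a schedule of inhibition changes around firing steps:
lex, subj = Area("LEX"), Area("SUBJ")
disinhibit(lex, subj)    # open the lexicon and the subject area
# ... fire a few steps: projection happens only between open areas ...
inhibit(subj)            # protect SUBJ from later, inadvertent projections
```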
So what I'm telling you is that if you put together the neurons, the assemblies, the whole model I showed you, all of these together give you full computation. We have proved that. Remember, n is a huge number and k is its square root, and with these numbers you can carry out arbitrary computation in the computer-science sense: arbitrary space-bounded computation with roughly square-root-of-n space. That is a lot of computation, much more, I believe, than any one of us can do mentally. So if you want a system that does this, there is no obvious upper limit; we are not limiting ourselves with this choice.

What I want to emphasize is that the model I showed you is something we simulate; that is how we do our work. And it is software-implemented neuromorphic computation. I don't know if any of you are interested in neuromorphic computation, but notice: this is software-implemented neuromorphic computation, simulating the steps of the model, where every step is the spiking of some neurons.

The obvious way to simulate it costs computing time proportional to the number of synapses, which, even though p is small, is something like ten to the minus three times n squared, and n squared is ten to the fourteenth or so, times t, the number of steps. The number of steps is the number of seconds times fifty, because we assume this runs at about 50 Hz, the gamma oscillation. That is very daunting. But we have found a clever algorithm, something we call lazy simulation. Computer scientists will recognize the idea: if a neuron has not spiked so far, ignore it, pretend it doesn't exist, and introduce it when it first spikes. A simulation technique based on this brings the cost down to about k squared times t squared. So we pay a factor of t for a gain of n squared over k squared, which is huge. This is our breakthrough: it allows us to simulate a few seconds of brain activity. Three seconds is a very short life for a brain, but three seconds is enough to simulate some cognitive phenomena, and this is what I'm going to show you.

[Audience question: does the t-squared term mean you can't run for long?] Yes, exactly. Eventually we won't do it this way, we know that. For now, if you want to do something more complicated, you run it for three seconds, then start again and hope that nothing that happened before is needed.
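Here is the lazy-simulation idea as a sketch: only neurons that have fired at least once are stored, and a never-fired neuron's input is drawn from a binomial distribution on demand. Names are illustrative, the Hebbian bookkeeping is omitted, and for clarity the sketch samples all the fresh inputs, where the real algorithm samples only the largest k of them:

```python
import numpy as np

rng = np.random.default_rng(0)

def lazy_step(n, k, p, prev_winners, W, num_real):
    """One lazy step: neurons that never fired are represented nowhere.

    prev_winners : ids (< num_real) of the k neurons that fired last step
    W            : dict (i, j) -> weight, over materialized neurons only
    num_real     : how many neurons have fired at least once so far
    """
    firing = set(prev_winners)
    inputs = np.zeros(num_real)
    for (i, j), w in W.items():        # input to materialized neurons
        if i in firing:
            inputs[j] += w
    # A never-fired neuron would receive Binomial(k, p) unit synapses
    # from the winners; sample those inputs without creating the neurons.
    fresh = rng.binomial(len(prev_winners), p, size=n - num_real)
    totals = np.concatenate([inputs, fresh])
    winners = []
    for j in np.argpartition(totals, -k)[-k:]:
        if j < num_real:
            winners.append(int(j))
        else:                          # first spike: materialize it now
            winners.append(num_real)
            num_real += 1
    return winners, num_real
```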
All right. So now I'm going to show you cognitive phenomena, and how you can implement them with this model. We have done a few things here, but what I'm going to show you is language, because to me it is the most exciting: it is the hardest thing, arguably the hardest thing our species has ever done, and so it must tell us something. Let me tell you why I think language is essential, because I know that language is not popular in neuroscience. Language is a living fossil of the brain. By this I mean: language did not exist, say, three thousand generations ago, and now it exists, and thousands of different languages have evolved. Why, and how? What was the fitness that drove this evolution? Every step made languages more conducive to learning by baby brains, more expressive, and so on. Basically, language's affinity with the human brain is what led its evolution. So language is a mirror, an amazing mirror of the brain, and you ignore it at your peril if you are interested in the brain; linguists have been thinking about what language says about the mind for 150 years.

And now we have some great experiments. I'm going to show you one of them, the one that actually inspired me; I heard a talk about it by chance, and it captivated my interest immediately. Here is what they did. They had many subjects, in several languages, listening to recordings of single-syllable words, spoken not at the speed I'm talking now but rhythmically, four words every second. They took all the data, the MEG and ECoG recordings, and they took the Fourier transform. And guess what: there was a peak at 4 Hz, obviously, because four times every second the brain had to fetch a word and see what it meant. Then they did something clever: they repeated the experiment, except that now every four words made a sentence. And guess what happened: three peaks now. Of course the one at 4 Hz; but also one at 1 Hz, because once every second the brain created a sentence, which has a different signature; and one at 2 Hz, because twice every second the brain created a phrase, which again has a different signature. So what I believe happened, essentially, is this: the brain was creating trees as it was hearing the words. I don't know where the trees resided, but there is no other reasonable explanation, and they ran several side experiments to rule out the alternative explanations. Which is amazing, because it means that as you hear me speak a language you know, your brain is creating these trees, right now, at 4 Hz. In other words, every dozen or so gamma cycles your brain creates a new node of the tree. How is this done? That is the question we got out of this: can we simulate parsing? It is something I had been wondering about for a long time.

So here is what I'm going to tell you: we did implement a parser of English, and it is implemented by spiking neurons. Now, parsers of English have existed since I was a graduate student; they were very clumsy then, in the 1990s you started to have more sophisticated parsers, and by now there are incredible parsers. Ours is one of the worst of them. Its only advantage is that it is implemented by neurons, and implemented in a biologically plausible way.

Here is the picture. You have these brain areas. There is a special brain area called the lexicon, where you have all the words, and there are fibers connecting the lexicon to the other areas, which are suggestively labeled by syntactic roles. In this and subsequent work, we have tried to be compatible with the consensus in neurolinguistics. There are a few subjects on which neurolinguists do have a consensus, for example that there is a lexicon, that there is a place where words find their syntactic role, and several other things about how semantics is represented; wherever they agree, we have tried to be consistent.

The parser parses a sequence of words, and we assume that phonetics has been solved. Here is what I mean. When you hear me speak, your brain does something incredibly difficult at the very beginning: it takes my sound waves, finds the word boundaries, segments the stream into words, and then finds the appropriate place of each word in the lexicon. We have assumed all of this away: what arrives, in our model, is a sequence of excitations of word assemblies in the lexicon. We make this assumption because phonetics is an amazing miracle; three-month-old babies can find word boundaries, nobody knows how this happens, and it is a problem computers struggled with until recently. So: each word is an assembly in the lexicon, and each word, in addition to its representation, has an action set.
The action set of a word is a set of inhibitions and disinhibitions, implemented by the long-range interneurons, and the action set of every word reflects the word's part of speech: nouns have a different action set than transitive verbs, which have a different action set than intransitive verbs. When a word's assembly fires in the lexicon, its action set is executed. And if you think about it, if you are looking for a word to call the sum total of all the action sets in the lexicon, I know of no better word than "grammar." Grammar is this collection of long-range interneurons.

Suppose the input is "cats chase dogs." We start like this; this is the initial state of the machine. Why is this the initial state? Because this is English, and we are likely to hear the subject before the verb and the verb before the object; the first noun is likely to be the subject. If this were German, it would be different. German is more flexible; the verb or the object may come first, so we would be starting in a different state. German babies have a different start state of the parser.

So, when "cats" is heard, it is projected to the subject area, because that is the only available fiber; then the fiber is inhibited again. Next, the action set of "cats" opens up the fiber from the lexicon to the verb area, because a verb is now expected. When "chase" comes, it is projected there, and because "chase" knows it is a transitive verb, it opens up the fiber to the object area, because an object is now expected. If the verb had been "run," this would not happen, because "run" is not a transitive verb. Then "dogs" comes, and it is projected to the object area. And now you may be disbelieving: how do we know that the sentence was parsed correctly?

Here is how. This processing has side effects, namely high synaptic weights, which can be recovered afterwards and which convince us that we have computed the dependency tree of the sentence: that this really was its structure. The dependency tree is how NLP people and statistical linguists like to treat sentences. There is also the Chomskyan, universal-grammar way, and it turns out that our system also constructs a basic constituency tree; by this I mean the parse tree of the main constituents of the sentence. We don't have it in full detail: for a longer sentence, such as "cats very often chase even dogs that are fierce," we would create only the tree of the top constituents, the top few nodes.

So, if you look at it as a piece of computation, it is implemented exclusively by stylized neurons; nothing more is at work. It needs maybe in excess of several tens of millions of neurons and trillions of synapses, which is more than the most advanced neuromorphic chips can do. It parses sentences like the one on the slide, finding the main subject and the main verb quite reliably, and its speed is roughly the speed at which I am speaking now. We also have running parsers for a couple of other languages, and there is no reason to stop there. The code is available online, so you can play with the parser.
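Here is the control structure of the parser reduced to a sketch: the grammar as per-word action sets of inhibitions and disinhibitions, with hypothetical names and just enough of English for "cats chase dogs":

```python
LEXICON = {"cats": "noun", "dogs": "noun",
           "chase": "trans_verb", "run": "intrans_verb"}

def parse(words):
    """Parse a simple SVO sentence by opening and closing fibers from
    the lexicon; each word's action set decides what is expected next."""
    open_fiber = "SUBJ"           # English start state: expect a subject
    roles = {}
    for w in words:
        roles[w] = open_fiber     # project the word through the open fiber
        pos = LEXICON[w]
        # The word's action set: inhibit everything, then disinhibit
        # the fiber to whatever syntactic role is expected next.
        if open_fiber == "SUBJ":
            open_fiber = "VERB"   # after the subject, expect the verb
        elif pos == "trans_verb":
            open_fiber = "OBJ"    # a transitive verb expects an object
        else:
            open_fiber = None     # e.g. 'run': nothing more expected
    return roles

print(parse(["cats", "chase", "dogs"]))
# -> {'cats': 'SUBJ', 'chase': 'VERB', 'dogs': 'OBJ'}
```

In the neural parser, of course, there is no Python: the branching is carried out by long-range interneurons recruited when the word's assembly fires, and the dependency tree is read off the elevated synaptic weights afterwards.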
[Audience question: you mentioned the fibers connecting the different parts of speech; different languages order these roles differently, so is there a way to learn this, or do you have to wire it in?] Yes, this is a wonderful question; it has been bugging us for a long time, and now we know how to face it. In fact, my last subject is language acquisition: how do you acquire this mature language organ? [The questioner elaborates: perhaps the diagram of areas and connections could be rewired, say through synaptic depression, so that the order of subject and object changes direction.] Right. And by the way, a bilingual brain is not a problem: you can host both English and Spanish, or English and Arabic, in the same lexicon and the same areas. We can do this, and it will probably become clearer as I go on. But yes, babies do it, so we have to do it too; that's the challenge.

[Audience question: do you need more neurons for more languages, or for a version that is not specific to one language?] Don't worry, we are not wasting any neurons on extra languages; I think the model has enough neurons to host them all. But notice that we do need many more neurons than the bare representations, because you need to know where to project, and more importantly, when: while you are doing your projections, you must project only to the area that is currently expected. When you expect a subject, the subject area. This is how we do it; there are different ways to do it, and this is a place where there is no consensus. Linguists don't agree.

Frankly, long-range inhibitory neurons are something that I like, and they have been observed in the hippocampus, but I'm not insisting this is exactly what the brain uses; anything that works like this would be enough for us to do language. We call them long-range interneurons, but don't take it literally: whatever the brain has that can do this work, that can protect areas from inadvertent projections, will do. [Follow-up.] Yes, exactly; as I told you, there are those who believe it must be something like thalamocortical loops.

How much time do I have, five, maybe ten minutes? Okay. So I'm going to skip most of these slides, but let me note that I mentioned Chomsky and I mentioned statistical linguists. One of the tenets of the Chomsky school is recursion: sentences contain sentences. How does the brain do that?
Here is the example I always use. "The young couple in the next house" is fine; "the young couple who lived in the next house" would crash our parser, because: wait, a second verb, "lived"? What is this doing here? But obviously this is a fine sentence. So we worked hard, and we found out how to do it. One thing that may interest any language theorists in the audience is that, in order to do this, we had to introduce the whole apparatus of context-free languages, in a way that, to my mind, justifies Chomsky for coming up with this rather restricted, and at the same time overly general, notation for language sixty or seventy years ago. In any event, that is how we parse such sentences; if anybody wants the details, I'll be happy to talk afterwards.

All right, let me show you some visuals of the parser parsing sentences. Actually, the new parser can do so much more, and we now have a better way of showing these parses. I think I answered this already, but let me say it again: our parser is biologically plausible, implemented exclusively with stylized spiking neurons, and that is what is important. We see no fundamental limit to how far we can take it, by working harder, by introducing more features, and so on. I don't see why we cannot capture all of language this way. That's my point.

[Audience question: how are you going to test this theory in humans?] That's a good question, but this is a proof of concept. In other words, I'm trying to convince you that language does not require divine intervention: it can be had from neurons, stylized and abstracted, of course. That is what I'm after. It's not a theory in the sense that I am not claiming that this is how the brain does parsing, even though I would love to be so lucky, in which case I also have some predictions.

Now, I'm very interested in the question that was asked before: how does this complicated mechanism come about? A baby is born in Medellín, and another baby is born in Japan. How come the same brain learns these two very different languages? This is what we have been thinking about, and here is something interesting: doing both theory and experiments entitles you to ask new questions. So I asked a question that I don't think was considered askable before: can there be a biologically plausible account of the neural basis of language acquisition? And it's really hard, because the hard part is filling in the blanks. How do you learn the words and their meanings, which we haven't talked about? How do you build this lexicon? How do you determine the roles of these areas? And what about semantics, and generation, the opposite of parsing, what makes us produce language, what I'm doing right now? I think the neural basis of language is something like this: a bunch of areas, and the synaptic connections between them.
So: billions of neurons; hardware for semantics and for generation; plus the ability to learn all of these things. Basically, it must start from a near tabula rasa. So what we are doing now is biologically plausible language acquisition. What are we trying to do? The hardware is a tabula rasa of a couple of dozen brain areas, fibers, sub-populations of neurons, and a modest amount of innate structure. The input is grounded language. By this I mean the following: the baby is exposed to language, and it does not have to be directed at the baby by a mother who knows what she is doing, but the language is grounded. It happens in the presence of objects and actions in the world, with shared attention, so the learner knows what the speaker is looking at. This is the only input. And the output is a mature language organ, with representations of syntax and semantics, a parser, and generation. This is what we are working on, and as I've said to people before, it's a little scary that I don't see an obstacle. I'm pretty sure that within a few months we will have something, for a baby language of, I don't know, maybe a hundred words.

So let me tell you where we are. Phonetics, again, hasn't been solved, so we start from the symbol. We know how to implement the learning of nouns. We know how to represent semantics; you see, when you do parsing you don't have to worry about semantics, but here semantics comes to the front: the representation of nouns and verbs and their meanings. Then, how to generate two-word sentences like "Daddy left," which is the first syntactic thing that babies do. Then, learning the word order of the language; we have two ways to do that. And then the hard case, which is learning the syntactic roles of the words of the language. In other words, when a baby hears just the sentence "dogs jump" and sees dogs jumping, it does not know whether this is English, and "dogs" is the animal, or whether this is Arabic, where the verb comes first, and "dogs" is the action. And it turns out that by presenting a few dozen such grounded sentences, the model can actually figure out the syntactic roles of the words; here is a toy sketch of the flavor of this.
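In the sketch, grounding tells the learner which word denotes the object and which the action, and position statistics then reveal the language's word order. In the model this is carried by synaptic weights from the sensory areas; the code below is just counting, over made-up data:

```python
from collections import Counter

# Grounded input: each utterance comes with the perceived object and
# action (shared attention). Toy, hypothetical data.
utterances = [
    (("dogs", "jump"),  {"object": "dog",  "action": "jump"}),
    (("cats", "sleep"), {"object": "cat",  "action": "sleep"}),
    (("birds", "fly"),  {"object": "bird", "action": "fly"}),
]

lexicon_roles = {}
position_votes = Counter()
for words, scene in utterances:
    for pos, w in enumerate(words):
        role = "noun" if w.startswith(scene["object"]) else "verb"
        lexicon_roles[w] = role           # each word's part of speech
        position_votes[(pos, role)] += 1  # which slot carries which role

print(lexicon_roles)   # {'dogs': 'noun', 'jump': 'verb', ...}
print(position_votes)  # position 0 is the noun slot: noun-first language
```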
Let me quickly tell you how semantics is represented. "Dog" is a very concrete object. Its word assembly has a phonetic root, which I'm not talking about; but it is also rooted very forcefully in vision, because concrete things, concrete nouns, are grounded in the visual cortex, and it may have some connections with motor cortex. These are all peripheral semantic carriers, close to the corresponding cortices: vision close to the visual cortex, to inferior temporal cortex; we know this from experiments. Maybe the word has an auditory signature; maybe it has an emotional signature: I love dogs, I'm afraid of dogs. All these things together define the semantics of the word "dog" for this particular brain. So what we have implemented is essentially the hub-and-spoke theory of semantics from neurolinguistics: the neurolinguists' idea that the word's representation is a hub, the symbol, and the spokes are the various aspects of its meaning. In some sense, this is not unlike word2vec, where every word is a large vector that represents aspects of the world.

Note that assemblies can overlap, that is, words in the lexicon overlap as symbols, and that the representations are dynamic: the details of a word's representation, including the strengths of its connections, can change dynamically, reflecting experience. In other words, your semantics of the world, your lexicon, changes every day; many of your words will change their meaning slightly after my talk.

Things like word association, which word2vec answers automatically, work here as well: ask somebody "man is to king as woman is to what?" and the word "queen" comes to their mind, and it comes to our model's mind too. That's something we have implemented. Also, an ambiguous word will be disambiguated correctly by its context: the reading whose semantics overlaps more with the context receives stronger synaptic flow, and the word is assigned that meaning. And, of course, you could be wrong.

To conclude: as Max was very kind to say at the beginning, I have ventured into many fields, trying to solve their problems, and none of them has been as incredibly bottomless as the brain, as difficult, but also as rewarding. As I said before, I think that neuroscientists ignore language at their own risk, because language hides a lot of information about our brain. One of the most gratifying things that working on language implementation in the brain has given us is that sometimes we get to ask questions that have not been asked before, because they were considered blasphemous; and we have made progress on biologically plausible language acquisition. And here is the promised way forward for AI, for the small part of the audience that is interested in that question: I do think that this approach is the right way to go about finding Axel's logic.

These are my collaborators: Santosh Vempala, Wolfgang Maass, Dan Mitropolsky, a PhD student at Columbia, and Mike Collins, an NLP researcher, also at Columbia. Thank you very much.

[Host:] Thanks, Christos, for the talk. We're going to forgo questions for the sake of time, but I invite anyone with questions to come up to the podium afterwards. Thank you.