Thank you, Paul. I'd like to thank Paul for a very kind introduction, although the "twenty years" and all the rest of it makes it sound like I need a walker, so I'm not so sure anymore. Actually, given all the things Paul mentioned, I may tell you about something completely different. Now, in a physics department, as we all are, you might be wondering why I'm talking about neurons, and I hope by at least slide number three to convince you that there's an interesting problem in statistical physics buried in this system. What I'd like to tell you about is actually not, as the abstract says (which is a little bit of a lie), thinking and consciousness and all those deep ideas, but instead something underlying all of them, I believe. Essentially, I'd like to tell you about some of the subroutines that our brains run. And hopefully the computer will go. OK, there we go. It jumped twice; we may have all kinds of amusing computer-related events today, it seems. What I'd like to do is tell you two related stories. First of all, I'll tell you about a central pattern generator. It turns out that we have many pieces inside our central nervous system that are essentially autonomous, hardwired programs, and these autonomous circuits produce, out of simple neurons, complex dynamical properties; it's an interesting physical question to understand how they work. I'm going to tell you about one in particular, called the preBötzinger complex, which is actually responsible for the inspiratory rhythm in breathing. You're all sitting here looking at me; I'm not so exciting that you're breathing hard. And when you're not breathing hard, your breathing is in fact controlled by an inspiratory circuit.
That circuit activates your diaphragm and a bunch of other things to make you inhale, and you essentially exhale because of the elasticity of your chest at that point. The preBötzinger complex is, in fact, an evolutionarily fairly new advance: it turns out it came about with mammals. Unlike many academics, mammals in general are high-energy sorts of creatures, and because of that they need a lot of oxygen. So the evolution of the mammal involved the evolution of the diaphragm and the evolution of the preBötzinger complex, which apparently drives it. What I'll tell you about is this collection of about a thousand neurons, effectively all identical. They're not all exactly identical the way atoms are, but they're very close in their dynamics, and none of them is a clock. Yet somehow they conspire to produce this metronomic, clock-like signal which controls our breathing. After that, I'd like to tell you about something which is in fact related: it turns out that when we're not thinking too hard, and in particular in anesthetized animals, the interesting part of our brains, the part that actually does things like figuring out how the preBötzinger complex works, namely the neocortex, produces its own sort of rhythmic signal. I think we can show that simple models of interacting excitatory neurons naturally produce that sort of rhythm, and they do so because of a certain type of nonlinearity built into the response function of a single neuron. So the theme of the talk is this picture of simple neurons, interacting in networks, generating complex electrical patterns, and I'd like to give you at least two vignettes of how this goes. So here's the basic story. I'll start with the first one, and here, I promise you, there will be a slide explaining why there is some interesting physics here. Well.
The key thing that's interesting to me about this problem is that it's a classic example of emergent behavior in a fairly simple physical system. Each one of these preBötzinger neurons is not a clock at all. It's not a bad clock; it's simply not a clock. But somehow, when you couple them together in sufficient numbers, they spontaneously and collectively oscillate in this metronomic, clock-like way. And if you start removing neurons from the circuit, you will actually see dynamical phase transitions between clock-like behavior and quiescent behavior, or between clock-like behavior and chaotic behavior, depending on where you are in a phase diagram that I will explain to you. Where's the new idea here? The thing I'm really trying to pitch to you is the following. A great deal of work has been done on making simple low-dimensional models of neurons. Of course, neurons are highly complicated devices with many, many independently opening channels that let ions in or out, and these interact with each other in very complicated ways. If I were talking to people at a drug company, which has in fact never happened, they would want to know how to manipulate those channels: how you make them open and close, how you get more neurotransmitter in or out, things like that. But if you don't need that, or you want to take a step back, you can actually do very well understanding the dynamics of a single neuron by treating it essentially as a nonlinear dynamical system; and this, of course, is bringing coals to Newcastle, coming to Georgia Tech and talking about nonlinear dynamics.
But here is what I think is the new feature. People have spent a great deal of time, particularly Eugene Izhikevich, building nonlinear dynamical models of different classes of neurons and asking what sort of low-dimensional representation of a neuron will reproduce their characteristic behaviors; in fact, here is his preBötzinger version. What I think is the new feature is that there is a great deal of information stored not just in the nonlinear dynamics of the individual neurons, but in the network in which they sit. So what I'd like to show you is that topological properties of the network by which these neurons are connected actually control the collective nonlinear dynamics of the system. By the way, you're probably wondering why this picture is here. This is actually from manifesto number one of the Italian Futurists. Aside from being artists, they issued manifestos. In fact, according to Wikipedia, which is my source of all facts outside of physics, the Italian Futurists issued five hundred and some manifestos in one year; by my count that's better than one a day, so I figure I can have one manifesto per lifetime. My manifesto, if I can join the Futurists for a minute, is to say that there is interesting physics in looking not just at nonlinear systems, but at nonlinear systems coupled on topologically complex networks, and I think features of the network topology are really what should be focused on, because they haven't been so far. So there: I now have a manifesto. So let me actually show you a picture of the preBötzinger complex. I work on this problem with a gentleman named Jack Feldman at UCLA, and with Robert Burns in the physics department. I don't know which end of the rat has the brain, to be honest with you, but Jack does, and if you remove a slice of the brain stem of a rat,
what you can do is take out the preBötzinger complex and have it work in a dish, in isolation, with no connection to lungs or the rest of the rat. What's nice about this is that it tells you the rhythmic properties of the system are truly endogenous: I don't need the rest of the rat to make this thing work. I don't need a brain; I don't need anything else. What's also nice is that the collective output of this preBötzinger complex happens to go out on a cranial nerve which is in the same slice. So the system comes pre-wired for the experimentalist: you can hook up your lead to one of these little bits coming out and see the integrated activity of the system, and you can also stick needles in at different spots and look at the activity there. And if you're clever (let's see if this works) you first have to find exactly where the preBötzinger complex is, and then you can actually change its chemistry. Well, I don't know how to do these things myself, but Jack does, and Jack knows how to do animations: what he's showing you is that he can actually apply drugs to the preBötzinger complex and watch how that changes things. I'll come back to that point a little later in the talk. At this stage, I think the movie is not going to run, so we will decide to live with that. So at this stage, what I'd like to do is give you a very simple mathematical description of how I want to think about this obviously very complicated system. I'm going to take two elements that people have looked at before and lump them together. The first element is what people call the leaky integrate-and-fire model of a neuron. Basically, this takes the complex structure of a neuron and reduces it to one module. This module is essentially an integrator, but it's a leaky one, so it has an RC time, which is typically on the scale of milliseconds; in fact, that tells you the size of the spikes.
The width of those spikes, I should say. Basically, if I try to charge this thing up, it just drains charge back out to ground on a timescale tau. When it receives a signal from one of the neurons it's connected to, it receives some voltage increment at the cell. And now, here's the key feature. Actually, I should go through all the terms first. You'll notice the sum is over all neurons other than yourself; V is the voltage of the neuron; and this M I'm going to call the adjacency matrix, or the connectivity matrix. It's a matrix of ones and zeros: if M_ij is one, it means that neuron j sends signals to neuron i, and if it's zero, it doesn't. And f is the firing rate, the number of spikes per unit time, of the neuron. Now, the key feature, the reason that neurons can perform a nontrivial computation, is that they have a nonlinear element in them. The fundamental nonlinear element that everyone worries about is the fact that the firing rate, as a function of the potential measured relative to the outside of the cell, depends strongly on that potential. Neurons are in the business of maintaining their potential at something like minus sixty or seventy millivolts relative to the outside world, and they do this by pumping things like calcium and sodium and potassium. However, when you raise their potential to about minus forty or fifty millivolts, all of a sudden they start firing action potentials: they very briefly push their potential up by about one hundred millivolts, it comes back down, and there's a short refractory period during which they can't do anything, the cell sort of resetting itself, and then it does it again. If you take a preBötzinger neuron out of the complex and put it in a dish by itself, you can change its potential by hand and ask what its firing rate is. And in fact you get a sigmoidal curve, like so.
Here you see data from one lab where they inject current into the cell to raise its potential. What you see is that its resting potential is about minus fifty-nine millivolts, I guess. If I raise it to what turns out to be about minus forty-five or minus fifty, it produces an action potential, discharges itself a little bit, produces another action potential, and essentially this goes on for a long time. In fact, you can see the spacing between these pulses is quick; it's on the scale of ten milliseconds or so. And if I were to keep stuffing current into this thing, it could keep firing for a very, very long time. So in a way it's a very simple computational device. It is simply asking itself: has my potential been raised to this threshold point? If so, I'm going to make lots of signals, and when it falls below that, I'm not going to make signals. Typical firing rates are something like sixty or seventy hertz for the preBötzinger system, maybe a few hundred hertz at the fastest for some other neurons; and for the preBötzinger complex, even at the resting potential of about minus sixty millivolts, there is a firing rate of a hertz or two. So that's really what it does for a living. How is it that you can take a network of these things and make a clock out of them? You might imagine a room filled with a bunch of amplifiers, each with a speaker, and you could easily imagine a type of exponential growth of sound in the room due to feedback, feeding back and feeding back and so on: one neuron excites another, which excites two more, and you get this sort of runaway effect. So you could imagine these guys all mutually exciting each other and making everything active. But it's not easy to see how you then shut the system off, wait a fixed period of time, turn it back on again, shut it off, turn it on again, and so on.
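To make the leaky integrate-and-fire picture above concrete, here is a minimal numerical sketch. The parameter values (rest near minus sixty millivolts, threshold near minus forty-five, a rate saturating around seventy hertz, an RC time of twenty milliseconds, a resting rate of a hertz or two) are illustrative numbers taken from the ranges quoted in the talk, not a fit to preBötzinger data, and the function names are my own.

```python
import numpy as np

def firing_rate(v, v_thresh=-45.0, f_max=70.0, width=2.0, f0=2.0):
    """Sigmoidal firing-rate curve: about f0 (a hertz or two) at rest,
    rising steeply near v_thresh and saturating at roughly f_max.
    All numbers are illustrative, from the ranges quoted in the talk."""
    return f0 + f_max / (1.0 + np.exp(-(v - v_thresh) / width))

def leaky_integrate(v_start, drive, tau=0.02, v_rest=-60.0,
                    dt=1e-4, t_end=0.5):
    """Leaky integrator: dv/dt = -(v - v_rest)/tau + drive.  With no
    drive, the voltage leaks back to rest on the RC time tau (~ms)."""
    v = v_start
    for _ in range(int(t_end / dt)):
        v += dt * (-(v - v_rest) / tau + drive)
    return v
```

Below threshold the rate is a hertz or two; above it, the cell fires near its maximal rate, and an undriven cell relaxes back to rest on the RC time.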
How do I make a clock out of such a simple model? The short answer is: you can't with just this picture. If we want to understand how it's possible to couple purely excitatory neurons together into something that acts like this metronome, we have to do something more, and the something more means we have to think a little more carefully about how neurons actually communicate with each other. So, if I blow up that point model of a neuron and actually look inside to see what's going on, I realize that neurons actually have many compartments. First of all, the neuron has the one point I was talking about, which people call the soma, because apparently biologists all learn Greek at some point in their lives. Neurons have a big antenna array where they receive signals, which they call a dendrite, or dendritic arbor; again the same classical education pays off, I guess. And they typically have one ramified output, which is called an axon. The axon hillock is actually where the action potential is produced; it travels down the axon as an electrochemical signal until it reaches what's called the synaptic junction. (As I said, this picture you can download yourself from Wikipedia.) At that point, the electrochemical signal, on reaching the synaptic junction, causes the release of neurotransmitters into this nanoscale synaptic cleft, which actually opens membrane channels in the other neuron, what's called the postsynaptic neuron, allowing the potential of the dendrite of that other neuron to change; that change then propagates to the soma, where the integration, if you like, happens. So that's really all we need to know. The key feature here, though, is that the currency of this exchange is potential, but neurons don't directly control their potential. They control their conductivity.
The only way the neuron actually knows it got a signal is by locally changing the conductivity of the dendritic membrane. That actually means the dendrites are inherently nonlinear signal processors. If I have a signal entering the dendrite here and propagating down this way, and in fact another signal enters, if you like, downstream, then the local conductivity of that section of the dendrite is enhanced, and some of my signal here is now going to be shunted to ground. So if you think about it, I actually have an antenna, but the way the antenna receives a signal is by locally manipulating its own conductivity. This has been known, of course, for a very long time, and people said, well, that's of course unfortunate, because it makes the problem much harder; and in fact people have said in the past that this sort of nonlinearity in dendrites is something the cell tries to avoid. In fact, if you look very carefully at a variety of neurons, you'll find that on the nanoscale there are lots of little buttons, what they call spines, and the synapse sits not on the main dendrite but on a little button separated from it. The idea is that this way you're not actually changing the conductivity of something that is in series with your transmission cable. Now, it turns out preBötzinger neurons don't have these spines. So in fact they don't work to avoid this, and my claim is that they use this dendritic nonlinearity to produce a signal. If I want to include this idea, which Christof Koch calls dendritic shunting, I'm going to need an extra variable in my model. This extra variable I could call an adaptation parameter, or, to be bold like Mr. Feldman, I will call it calcium. The reason is that Jack Feldman has done the following experiment: he injected a chelating agent for calcium into the soma. Essentially, he sucked up the spare calcium which is in there, and he asked:
does that change the ability of the neuron to receive signals? And the answer is no, not at first; nothing happens. But if you wait for the chelator to diffuse out into the dendrites and suck up the calcium there, what you find is that the dendrite membrane becomes much less conductive; in other words, it transmits signals better, because less of the signal leaks away. So he claims the key ingredient here is calcium; as a theorist, I can remain agnostic on this point. What I need is a second scalar variable to make this work, really. So if you look at it this way, I call this the Feldman-Del Negro model, because all I really did was take what they told me and turn it into a set of nonlinear first-order differential equations. What they said is the following: the response of the somatic potential to an incoming action potential depends on the state of this calcium variable that lives in the dendrites. Every time I receive a signal from another cell, I increment my calcium level a little bit, and the cell actively pumps calcium out, so there is some sort of exponential decay of it. And what we know is that the timescale for pushing calcium out is long compared to the RC time of the cell: we think the calcium timescale is about half a second, whereas the RC time is, again, on the scale of milliseconds. So what you have is a very slow buildup of calcium because you receive signals in the dendrites, and as the calcium level builds up, the voltage increment you get per action potential starts going down; once you hit some threshold, calcium-sensitive channels open, which makes the membrane leaky. So if you like, you can say the cells are exchanging a currency of information, action potentials, but the value of a unit of currency, one action potential, starts getting small once you hit a certain critical calcium level.
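The shunting argument from a moment ago can be made quantitative in the simplest possible setting: a single passive compartment in which a synapse acts by opening a conductance rather than injecting a fixed current. The steady-state voltage is then the conductance-weighted average of the reversal potentials, so two simultaneous inputs depolarize by less than twice what one input does. All values here are illustrative choices of mine, not measurements.

```python
def steady_voltage(g_syn, g_leak=1.0, e_leak=-60.0, e_syn=0.0):
    """Steady state of a passive compartment with synaptic conductance:
    V = (g_leak*E_leak + g_syn*E_syn) / (g_leak + g_syn).
    The synapse changes a conductance, not a current, which is the
    source of the shunting nonlinearity."""
    return (g_leak * e_leak + g_syn * e_syn) / (g_leak + g_syn)

rest = steady_voltage(0.0)
one_input = steady_voltage(0.2) - rest    # depolarization, one synapse
two_inputs = steady_voltage(0.4) - rest   # two simultaneous synapses
```

Here `two_inputs` comes out smaller than twice `one_input`: the second synapse partly shunts the first, which is exactly the sublinear summation described above.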
So now we have a system that actually has a chance to turn off again. You can go back to the picture of a room full of amplifiers with speakers and microphones. First of all, I can build up a lot of activity, because when all the cells are in this regime here, all the gains are turned up on all my amplifiers, so I start producing a lot of noise in the room. However, at that point every single microphone... [Audience question about the threshold.] Yes, I can rescale my c to put c-star wherever I want, so all I really need is for this curve to be fairly sharp. You can do a little bit of extra, essentially numerical, checking of how sharp it needs to be, and it's not actually that sensitive. The thing is, essentially we know these curves are sigmoidal because most of these channels are essentially two-state systems: they are either open or closed, and for a population of them you essentially get a Fermi-function type curve, with most of them open in this state and most of them closed in that state, so there is some sort of threshold-type behavior. And that is important: you definitely need this to be nonlinear. But the degree of the slope of the curve here, and so on, is actually pretty irrelevant for what I'm going to tell you. OK, please, please do ask questions along the way; there's nothing worse than being bored for the rest of a talk because you didn't like some point along the way. So, the key feature: what I'd like to do is take the plain old integrate-and-fire model but now say that the value of what I'm integrating goes down when I receive lots of signals. And the key point is going to be that it matters whether you get tired, that is, whether the gain gets turned down, because you receive signals or because you send signals. I'm going to try to convince you that's an important feature. So this is the basic model.
And my claim is that it's physiologically motivated, inasmuch as most neurons work hard to avoid this sort of dendritic nonlinearity, but our friends in the preBötzinger complex don't do that. So let's do what I'll call a simple mean-field analysis: let me say that every neuron is coupled to every other neuron. Now I have two nonlinear equations, and I can numerically integrate them to my heart's content. And of course you find metronomic behavior. You see that the neurons excite each other; the potential goes up to where they're spiking a lot. However, when they're spiking a lot, their calcium levels in the dendrites go up, and once calcium crosses the threshold they can no longer excite each other effectively. Now the potential of the neurons collapses: on an RC time it falls very quickly to the subthreshold level, and they stop spiking. The calcium slowly decays, falls below the threshold, they can start exciting each other again, and the process repeats over and over and over. So if I reduce the problem to a system where I throw away all the details of the connectivity of the network, it's very easy to get this to work. In fact, if you do this, it's very easy to show that there is no difference between having the calcium level depend on how many signals I receive and having it depend on how many signals I send; there is no way to tell the difference in this sort of picture, because what you've done is lump the whole circuit into just one effective neuron. If you do that, of course, you lose some of the fun. If you want to think about this in a little more detail, in simple Strogatz-style terms,
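Here is a minimal sketch of that mean-field integration: a fast voltage driven by all-to-all recurrent excitation, and a slow dendritic calcium variable that shuts the synaptic efficacy off once it crosses a threshold. The parameter values and functional forms are my own illustrative guesses in the spirit of the talk, not fits to preBötzinger data.

```python
import numpy as np

def simulate_mean_field(n_neurons, dv0, t_end=10.0, dt=1e-3):
    """Two-variable mean-field sketch: fast voltage v (RC time tau_v,
    milliseconds) excited by the whole population, and slow calcium c
    (decay time tau_c, about half a second) that collapses the synaptic
    efficacy once it crosses the threshold c_star.  dv0 is the maximal
    voltage increment per spike; all numbers are illustrative."""
    tau_v, tau_c = 0.02, 0.5           # seconds
    v_rest, v_thresh = -60.0, -45.0    # millivolts
    c_star, alpha, f_max = 1.0, 0.05, 70.0

    def rate(v):                       # sigmoidal f-V curve, ~2 Hz at rest
        return 2.0 + f_max / (1.0 + np.exp(-(v - v_thresh) / 2.0))

    def efficacy(c):                   # voltage increment per spike (mV)
        return dv0 / (1.0 + np.exp((c - c_star) / 0.05))

    v, c, trace = v_rest, 0.0, []
    for _ in range(int(t_end / dt)):
        f = rate(v)
        v += dt * (-(v - v_rest) / tau_v + n_neurons * efficacy(c) * f)
        c += dt * (-c / tau_c + alpha * f)
        trace.append(v)
    return np.array(trace)
```

With enough neurons and moderate excitability the population bursts rhythmically, exactly the relaxation cycle described above; with few, weakly coupled neurons it just sits quiescent near rest.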
you can ask yourself what kind of fixed points this nonlinear system has. What you find is that this mean-field picture makes a prediction in terms of just two free parameters: the number of neurons, and, if you like, the basal excitability of a single neuron, in other words how much of a voltage increment you get when the calcium level is set to zero. That is another way of saying that in the real world you don't actually change the size of the action potential; you change either the threshold for spiking or the basal potential of the neurons. But it doesn't really matter in my model which way I do it: I'm going to fix the basal potential and the threshold and change the size of the voltage increment, the unit of currency of information transfer. So another way to say it: if delta-V-zero is a big number, very few spikes from neuron one are needed to excite neuron two, whereas if delta-V-zero is a small number, I need many, many spikes from neuron one to excite neuron two. So fine. If I'm in the regime with a sufficiently high number of neurons and what I'll call moderate delta-V, I actually find this metronomic limit cycle: I have a stable limit cycle around this fixed point here. These curves, by the way, are the nullclines of the potential variable and of the calcium, or adaptation, variable. However, if I go to small enough numbers of neurons, or low enough levels of excitability, this little piece here plunges through and produces a stable and an unstable fixed point. And at this point, what I find is that there is only one stable fixed point, corresponding to a low level of potential and a low level of calcium; I have an effectively quiescent system in the mean field, where things spike very rarely, but not enough to get anyone else excited. Much like many of the students in my classes: they don't really do a lot, and they don't get terribly excited.
However, occasionally, you know, I'll trip or something, and everyone will get excited and everyone will start talking; that is this high-activity fixed point. So if I have a medium number of neurons with very high levels of excitability, I find only one stable fixed point, and now I don't have any sort of limit-cycle behavior: I just have all the neurons on all the time. So the options you have in this model are essentially an on-off switch that's being flipped back and forth metronomically, which is the physiologically desirable state; a quiescent state, which corresponds to not breathing at all, and that's, you know, less desirable; and a state with all the neurons firing all the time, which is actually seen in experiments too, and which is again not physiologically desirable, of course. So the question now is: this is a very simple mean-field picture; is it actually right? Well, here's the phase diagram. This is the limit of stability of the stably oscillating phase, in the Strogatzian language of nonlinear dynamics, and I suspect many of you are fluent in it; in fact, you're probably better at it than I am. This boundary is actually a type of second-order transition, where the period of the stable oscillations diverges smoothly as you approach the line, assuming you could adjust the integer N smoothly; in fact it's a SNIC, a saddle-node on an invariant circle, if you like those words. This other boundary, on the other hand, is a subcritical Hopf bifurcation; it's more like a first-order phase transition, where you jump from the stably oscillating phase to the high-activity phase. And again, my axes are the number of neurons and this excitability, the maximal size of a voltage increment. Now let's look at the stably oscillating phase a little more carefully. If we numerically integrate these equations for, say, a few tens of neurons using this model, what you find in the stably oscillating phase is that
you can easily identify two populations of neurons. There are the neurons that Mr. Feldman likes to call emergent pacemakers. These are the ones with a voltage trace that looks like this: it shoots up very quickly at the beginning of a pulse, then tires itself out as the others talk to it, and decays away. The rest of the neurons essentially amplify that signal by coming up a little later and then drifting back down; in fact, you can see that the shoulder in each one of these pulses is due to this subpopulation being added on top. Why is this interesting? Biologically, it's interesting because there has been a question in this field about whether you can actually produce this circuit with every neuron being identical. The reason this is an open question is that in the real-world preBötzinger complex, we know there is a subpopulation of intrinsically oscillating neurons, which Feldman and others who study this can turn off biochemically; that is where they add the drugs to shut those down. So there has been something of a controversy in the business about whether you need these intrinsic pacemakers or whether, in some sense, they are a backup system. The strong version of the intrinsic pacemaker hypothesis would say the system cannot physiologically oscillate without some sort of periodic input signal coming from these pacemakers. The alternate view, which is Feldman's, is that the intrinsic pacemakers are in some sense a backup system and are not necessary for the physiological oscillations. The problem with that view, at least previously, was that people said: if that's the case, how is it that we always see certain guys starting the process?
I claim the reason you always see certain guys starting the process, or at least this is what we believe, is not that these neurons are special biochemically, but that they are special topologically: they're special because they're connected to the right people. So let's try to understand that. Well, clearly, in our model, since all the neurons are strictly identical, it must be the connectivity; but can we actually do a better job of explaining that? The answer is that you can do a fairly good job in the following way. You can go through a numerical example of, say, sixty neurons, look at all of them, and color in the ones that are the pacemakers, the ones that shoot up first; so, for instance, this blue one is a very good pacemaker, whereas this orange one is not. And you'll see... yes, this is a question? Yeah. OK, so in my model, none of the neurons are different. If you're a pacemaker, in my view, it's because of features of your connectivity; every neuron, if you were to cut it out and put it on the table, would be identical to every other neuron. Why? Because it's my model, and I made it that way. Yeah. Now, in the real world, we know there is a subpopulation of pacemaker neurons, and depending on whom you talk to, between twenty percent and eighty percent of them are pacemakers; and again depending on whom you talk to, they are more active in the neonatal animal than in the adult animal. Those things are still being debated, but you can turn them off in the experiment. Yes. Yes, so... I guess I should have told you that; I'm sorry. So how did I make my network? It's a great question. What I do is I take N neurons and connect every neuron to every other neuron with a fixed probability. This is in fact a simple way of producing what's called an Erdős-Rényi graph.
So you end up with these exponentially decaying (binomial) distributions of the numbers of connections; it's not like László Barabási with his power-law distributions, it's not like the World Wide Web in that sense. In the World Wide Web, there are, in a sense, too many nodes that are really well connected, whereas here you just have an exponential tail of well-connected neurons. Now, it's actually very hard in the preBötzinger complex to know precisely who is connected to whom. What you can do is stick a needle in one neuron, make it fire, then stick another needle in another neuron and look for the voltage signal. But it's a little like fishing: you can't see the neurons, so you stick one needle in, find a neuron, and then fish around to find which other ones it talks to. From that, essentially, you get statistical information: if I hit twelve neurons, how many of them typically showed the signal? The answer is that each neuron seems to be connected, on average, to about a sixth of the other neurons in the system, and the connection scheme seems to be independent of distance; there is no real metric in the space. The whole system is a millimeter-scale object or less, and the probability that neuron A is connected to neuron B seems, as far as they know, to be independent of where they are in space. So that's how you build the network. Now you can go in and quantify what you mean by "pacemaker-ness" in the following way: every time a burst starts, you record who fired first, who fired second, and so on, and call that the ranking. So if you're here, you're a very good pacemaker: you fired first. If you're here, you got on the train at the very end, as it was leaving the station.
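Here is a sketch of that construction: a directed Erdős-Rényi graph with the measured connection probability of about one sixth and no notion of distance. The convention (M[i, j] = 1 means neuron j sends to neuron i) matches the adjacency matrix defined earlier in the talk; the specific sizes and the seed are my own.

```python
import numpy as np

def erdos_renyi_adjacency(n, p, rng):
    """Directed Erdos-Renyi graph: each ordered pair (i, j), i != j,
    is connected independently with probability p, independent of any
    spatial arrangement.  M[i, j] = 1 means neuron j sends to neuron i."""
    m = (rng.random((n, n)) < p).astype(int)
    np.fill_diagonal(m, 0)                    # no self-connections
    return m

rng = np.random.default_rng(42)
m = erdos_renyi_adjacency(300, 1.0 / 6.0, rng)
in_degree = m.sum(axis=1)                     # inputs per neuron
```

The in-degrees are binomially distributed around (n - 1) * p, about fifty inputs per neuron here, with the short exponential-type tail mentioned above rather than a power law.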
Now, what you can do is calculate how popular a particular neuron is — how well connected it is. This is of course a very old experimental subject which goes by the name of high school, but you can do it mathematically a little more carefully. What you do is assign an eigenvector-centrality ranking to every neuron in the following way: you put a positive number on every neuron and define those numbers self-consistently, so that your popularity is equal to the sum of the popularities of everyone who is connected to you. If you think about it, this looks like calculating eigenvectors of your connectivity matrix; if you do it carefully, the centrality is the eigenvector associated with the largest eigenvalue of that matrix. Now, if we had a purely linear system, this would control exactly who fires first; the non-linearity seems to mess this up in a way that I don't understand very well. What we plotted here is the centrality ranking, set up so that a low ranking means you're the most popular, and so that if everything were perfect, all the points would lie along a line of slope one. What you see is that the centrality ranking does a very good job of predicting the laggards, while the guys that fire early seem to be more spread out than I can fully explain. And it has to be because, if I want to say that eigenvector centrality controls everything, I would need a linear system of equations, and that would mean linearizing the firing-rate function, which is clearly nonphysical. So I suspect that's the reason it fails here.
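The self-consistent popularity computation can be written down directly. A toy sketch, assuming the Perron (largest-eigenvalue) eigenvector is the one intended; the three-neuron chain is my example, not from the talk:

```python
import numpy as np

def eigenvector_centrality(A, iters=200):
    """Self-consistent popularity: x_i proportional to the summed
    popularities of the nodes connected to node i. Power iteration on
    (A + I) shifts all eigenvalues up by one without changing the
    eigenvectors, so it converges to the largest-eigenvalue vector."""
    x = np.ones(A.shape[0])
    M = A.T + np.eye(A.shape[0])
    for _ in range(iters):
        x = M @ x                # popularity flows along connections
        x /= np.linalg.norm(x)
    return x

# Toy check: in a 3-neuron chain 0-1-2, the middle neuron is most central.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
c = eigenvector_centrality(A)
ranking = np.argsort(-c)         # position 0 = most "popular"
```

Ranking the neurons by this centrality is the comparison plotted against the firing order on the slide.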
There's still a strong correlation, but the fact that it fails in detail must have something to do with non-linear interactions that I don't understand right now. I should mention in passing that if you are Google, you use eigenvector centrality to rank web pages as well. So now you could say: you've been simulating this network, Alex — what does the phase diagram actually look like for a real numerical simulation, as opposed to your mean-field theory? The answer is, roughly speaking, you get what you want: if you have small numbers of neurons and not too much excitability, they're quiescent — what else would they be? If you have a sufficient number of neurons and they're fairly excitable, you get stable oscillation. However, the phase boundary between the stably oscillating regime and what the mean-field theory predicts to be the high-activity phase does not look like the mean-field prediction at all. If you quantitatively compare the numerically calculated boundary between the stably oscillating and quiescent phases to our mean-field prediction, you actually do pretty well; the blue line, on the other hand, is horrible — a C-plus at best. And the interesting thing is that, as usual, the errors are the most interesting part. Look at it: it isn't that this curve was drawn jerkily — there are actually steps in it. In fact, I'm going to show you that we can predict where these steps occur by looking at purely topological features of the network, features which are lost when you do the mean-field theory. So what's the feature we're looking at? What are these magic values of N at which the phase boundary seems to get hung up? Of course I'll come back to these magic numbers, but I can't keep a secret or tell a joke properly — I always give away the punchline. The answer is the k-cores, which I will try to explain now.
So, we're all familiar with percolation theory at some level: I take a breadboard and start putting in resistors at random, and at some critical density I will find a conducting pathway of resistors connecting across my system, even in the thermodynamic limit of really big breadboards and lots and lots of resistors. Mathematicians — in the nineties it appears, or perhaps the late eighties, I should say — generalized this idea. What it really means to be in the percolating core of the system is that you are connected to at least two other nodes, who themselves are connected to at least two other nodes, who themselves are connected to two other nodes, ad nauseam. Given that recursive definition, you needn't just talk about percolation via the two-core; you can talk about three-cores or seventeen-cores. You can say: let me isolate the piece of the network in which the nodes are connected to, say, seven other nodes, who themselves are connected to seven other nodes. The way I like to think about it is that these are the hubs of the airline system, if you like. If you think of airports as nodes and flights as the edges of my graph, then LAX and Atlanta and Chicago and a few others are part of some sort of global thirty-core. Why? Because you can fly from Atlanta to thirty other airports, from which you can fly to thirty other airports. But if you're in Santa Barbara, that's not true: you can fly to LAX and you can fly to San Francisco, and they're probably part of the thirty-core, but Santa Barbara isn't — it's not connected to thirty other airports who themselves are connected to thirty other airports. So if you want to look for a particular core in a network, you can follow a simple algorithm, and I think explaining it helps you understand the idea of a k-core. The algorithm goes as follows: in my initial network, circle all the nodes that are connected to fewer than three others.
So we'll look for a three-core. Here we go: this guy is connected to one — no good. This guy is connected to two — no good. This guy — no good. This guy is connected to four, but we'll see what happens to him in just a minute. Now remove all the guys that were not connected to at least three. Now you have this system here. Go through it and do it again: all of a sudden this guy fell below the threshold, because two of the friends he needed to be connected to have vanished. So now this guy vanishes too, and this process could continue for multiple steps, but it wouldn't fit on a slide. So in fact this is the remaining three-core of the original network. In some sense it's the hub system of this network, which I conveniently drew at the top, whereas these other guys are more peripheral. What's interesting is that in the thermodynamic limit, where I don't draw little pictures like this but imagine letting the system become infinite, you can ask yourself: at what concentration of bonds do I lose a particular k-core? It is in fact a phase transition, but a funny one. Define the order parameter, just as you do in percolation, as the probability that a node chosen at random is part of the percolating cluster. When you can't span the system, that probability is zero, and at the ordinary percolation (two-core) transition the probability grows smoothly from zero — a continuous phase transition. However, if you go to a three-core rather than the two-core, what you find is that the probability of being part of the spanning three-core has a discontinuous jump to zero at the appropriate density. What's unusual about this system — and something I don't fully understand, and I'm not sure anyone else does at this point — is that there seems to be universal scaling behavior here, and yet a finite jump.
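The pruning algorithm just described translates almost line-for-line into code. An illustrative sketch — the five-node test graph is mine, not the one on the slide:

```python
import numpy as np

def k_core_nodes(A, k):
    """Boolean mask of the nodes in the k-core of the undirected graph
    with adjacency matrix A: repeatedly remove every node with fewer
    than k surviving neighbors until no node is left to remove."""
    alive = np.ones(A.shape[0], dtype=bool)
    while True:
        deg = (A * alive[None, :]).sum(axis=1)  # count only live neighbors
        drop = alive & (deg < k)
        if not drop.any():
            return alive
        alive &= ~drop

# Complete graph on nodes 0..3 plus a pendant node 4 hanging off node 0:
A = np.zeros((5, 5), dtype=int)
A[:4, :4] = 1 - np.eye(4, dtype=int)
A[0, 4] = A[4, 0] = 1
core3 = k_core_nodes(A, 3)   # the pendant is pruned, the K4 survives
```

Removing a node can demote its neighbors below threshold, which is exactly the cascade described in the talk.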
So it has this sort of mixed character, something which I think has been seen in simulations but has not been well understood — an interesting stat-mech problem in its own right, frankly, but not quite what I want to talk about today. What matters for me right now is that there's a discontinuous jump in the probability of being part of a k-core, and that jump gets to be quite large when k is large. If I look at the network and ask what the probability of being part of the seventeen-core looks like: essentially the probabilities are almost one. When I have such a seventeen-core, the network is so multiply connected — everyone is connected to at least seventeen other guys, who are connected to seventeen other guys — that the probability of picking a node at random and it not being part of that core is just a few percent. Now I start killing off nodes — you could imagine I start fogging in airports around the world — and at some point the entire seventeen-core collapses; there are no seventeen-core nodes left. But what I'll find is that the probability just before the collapse was something like eighty-five percent. So I go from almost everybody being part of the seventeen-core to nobody being part of it, and all it takes is one more node removed. You can kind of see that even here — it's fun to play with: if you kill one more guy here, you'll find that the three-core vanishes. I think. I'm not sure it does on that one, but here I'll show you an example where it does. Here's a little test case; let's look at the five-core. Since it doesn't matter where the nodes are, I put them on a circle. By the way, this was drawn with a network graphing package that comes out of Bloomington, and it's very nicely set up: you give it a connectivity matrix and it draws these pictures for you.
So here we go — here's the little test case. I have forty-three nodes, and the red ones are part of a five-core; you can see most of the boundary is red. There are a few guys in blue that are only part of the four-core, which are peripheral, and there are actually four guys that are only part of the three-core. Now I kill off one of these at random, and what I find is that there's more four-core, but there's still a five-core. I kill off one more guy, and all of a sudden everything that was red is only part of the four-core, with a couple of three-core nodes. I kill off one more and nothing in particular happens — it just so happens that the network I had had a magic number of forty-two, which of course makes for a good story, given Hitchhiker's and such. Now if you want to get rid of the four-core, you might have to go down to thirty-six nodes, and then all of a sudden all the blue ones vanish and you have nothing but green ones, and perhaps some two-core left over, something like that. So what's going on here is that, because of the first-order nature of the transition, even for a finite-size system you get this discontinuous collapse — the whole thing going from over half the nodes to none in one step. So if you take one network of our preBötzinger model and you start killing off neurons, what you'll find is that you cross over from a stably oscillating phase like this to the high-activity phase, where you have all kinds of complex dynamics with different neurons firing at different times, and as you cross this line you can map out this boundary. Now you do it again at a different level of excitability, and you'll get the same thing, or a slightly different thing. And now you can go in and, purely topologically, apply this sort of analysis to figure out where the cores vanish, and put those lines on the graph separately.
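One can watch this discontinuous collapse numerically: remove nodes one at a time from a random graph and track the size of the k-core. This is a sketch under assumed parameters — the graph size, k, and removal order here are arbitrary, not the network from the slide. Note that a nonempty k-core must contain at least k+1 nodes, so when the core dies, it dies in a jump of at least k+1:

```python
import numpy as np

def k_core_size(A, k, alive):
    """Size of the k-core among the 'alive' nodes of adjacency matrix A."""
    alive = alive.copy()
    while True:
        deg = (A * alive[None, :]).sum(axis=1)
        drop = alive & (deg < k)
        if not drop.any():
            return int(alive.sum())
        alive &= ~drop

rng = np.random.default_rng(1)
n, p, k = 60, 0.25, 5
upper = np.triu(rng.random((n, n)) < p, 1)
A = (upper | upper.T).astype(int)       # Erdos-Renyi, undirected

alive = np.ones(n, dtype=bool)
sizes = []
for node in rng.permutation(n):         # "fog in airports" one at a time
    sizes.append(k_core_size(A, k, alive))
    alive[node] = False
sizes.append(0)
collapse = next(i for i, s in enumerate(sizes) if s == 0)
jump = sizes[collapse - 1]              # core size just before it vanishes
```

The core size shrinks monotonically and then vanishes in a single step of size at least k+1, the finite-size shadow of the first-order transition.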
The dashed lines here: this dashed line might be the disappearance of the twelve-core, this might be the eleven-core, this the ten-core — I don't have the numbers in front of me — and this one is the nine-core, apparently. What you see is that there's a very good correspondence between where the horizontal steps are and where you lose the cores. Now, if you redo the network with a different connectivity, for a small sample you will find that the magic numbers shift a bit; but when you go calculate the phase diagram for that network, you'll find that these boundaries also shift a bit, and they keep lining up. So this is an example of a general effect. Now, what's going on? Why is it that the k-cores matter? Well, let's think about the problem physically a little bit. You have all these active neurons firing; how is it that you can synchronize the turning off of all of them? What you need is for the calcium levels in the dendrites of all of them to reach threshold, so that they can't receive signals efficiently; their voltages relax, they reset, and everything starts over again. The way you end up in the chaotic high-activity state is that some of them reset while others keep going, and then those reset while the first ones keep going. It's a little bit like a riot: if two people sit down because they're tired, they'll get re-energized and keep rioting — it doesn't turn off the riot. So if you want this to work, you need the neurons to turn off in such a synchronized way that anyone left over can't really talk to anyone. The key feature is that if you pick two neurons at random that are not part of your highest k-core, the probability that they can communicate with each other without passing through that core is vanishingly small. It's a little bit like saying: if you want to fly from Champaign–Urbana to Paris,
you're almost certain to fly through Chicago or New York or Atlanta or one of the other high-k-core hubs. So if I were to shut down all the big airports, you pretty much can't go anywhere anymore. So you don't need everyone to turn off in a synchronized fashion — you need the entire k-core to turn off in a synchronized fashion. As I increase the level of excitability, I need to be able to drive the calcium levels up sufficiently in my highest-k-core neurons. Those are the ones with the most connections, so they receive the most signals, so they'll be the ones that get tired first. If, for a given level of excitability, I have a sufficiently high-k core left, then I can shut down communication in the network, and the system can reset. Those guys on the fringes, if you like, can fire away for a little while, but then they'll get tired too, because no one is exciting them anymore. So if you shut down the k-core, it appears, you essentially shut down the communication network of the system. OK, I see that I am running out of time. Let me give you a very brief vignette of a slightly different story, retold with almost the same model. If we switch from the brain stem to the cerebral cortex — the part where you actually do the thinking — you find that the majority of the neurons are these excitatory neurons, with a complex axon going off and a dendritic tree; this is a pyramidal neuron. And there are a few interneurons, which are inhibitory. So now we have neurons that actively de-excite the other neurons: we have a network of guys with ferromagnetic couplings and antiferromagnetic couplings.
What's interesting about this system, schematically, is that you have excitatory neurons that excite themselves and excite inhibitory neurons, and inhibitory neurons that inhibit the excitatory neurons and also inhibit themselves. So I can build a slightly different schematic picture. What's amusing is that if you have an anesthetized animal — so presumably it's not thinking about very much — and you look at the potential in its neocortex, you'll find that it has a periodic, metronome-like behavior. This is data from my UCLA colleague Mayank Mehta: he has essentially tapped the phone line of one of these neurons in the cerebral cortex, and what you see is that its potential rides up, rides down, rides up, rides down. The reason is that there's enough firing from other neurons in the system that the whole potential — the whole ground level — is floating up and down a bit, while the actual neuron itself fires only rarely: it fires one spike, or none, per what's called an up-state. So these are up-down state oscillations; you can actually measure these things from outside the brain with an EEG, where you get this sort of brain-wave pattern. So the question is: does this come out naturally from a very simple model, and why do I want to include dendritic shunting? The reason I want dendritic shunting is that I want to shut down the network when a guy is firing only once or not at all. It can't be that this neuron decides to de-excite because it has fired too often — it may not have fired at all. However, these pyramidal neurons have on average tens of thousands of synaptic junctions on them, so they're sampling a lot of incoming signals even though they fire very rarely. So if the system is somehow deciding it's time to de-excite, it can't be because of the signals a neuron sends; it has to be because of the signals it's receiving.
So you can do exactly the same sort of mean-field analysis, and what you'll find is that within mean field you can use the same sort of dendritic-shunting mechanism to explain the oscillations. What you find is that you now have three differential equations: one for the potential of the excitatory neurons, one for the inhibitory neurons, and one for the adaptation parameter, which we think of as calcium. If I fix the calcium at a sub-threshold level, then I have a fixed point corresponding to the down state and a fixed point corresponding to the up state, separated by a saddle point in between. Now, if I start increasing the level of calcium, what happens? Guess. As I increase the calcium, this unstable fixed point rides up along the curve and annihilates the up-state fixed point, so the system has to fall into the down state. At that point the calcium levels decay, a saddle-node bifurcation happens in reverse, and the up-state fixed point is produced again — which I didn't draw for you, so I won't belabor it. But essentially the way to think about it is that the up-down state oscillations within this model occur because this unstable fixed point alternately shuttles back and forth to annihilate one or the other stable fixed point, and it does so in a periodic fashion. So here it is. I don't want to wear out my welcome, so I just want to hit the main point. If you look at this result, you'll find that it's actually not so good. The reason is that you see this ringing in the up state, and there's no evidence of this sort of high-frequency ringing in the data. So you might say, well, maybe that's the end of it. Why is there ringing? If you look at orbits near the up-state fixed point, they spiral into the fixed point, and that spiraling sets this frequency for you.
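The saddle-node mechanism can be illustrated with a one-dimensional caricature — this cubic is my stand-in, not the talk's actual three-equation mean-field model. Treat calcium as a slowly varying parameter c in dV/dt = V − V³ − c and count the fixed points as c grows:

```python
import numpy as np

def count_fixed_points(ca):
    """Count zeros of dV/dt = V - V**3 - ca via sign changes on a grid."""
    v = np.linspace(-3.0, 3.0, 20001)
    g = v - v**3 - ca
    return int((np.sign(g[:-1]) * np.sign(g[1:]) < 0).sum())

# Low calcium: down state, up state, and the saddle between them (3 roots).
# High calcium: the saddle has ridden up and annihilated the up state (1 root).
low_ca, high_ca = count_fixed_points(0.1), count_fixed_points(0.6)
```

For this cubic the saddle-node annihilation occurs at |c| = 2/(3√3) ≈ 0.385; as calcium then decays, the reverse bifurcation restores the up-state fixed point, giving the relaxation oscillation.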
And the reason is that this fixed point is very unusual — certainly different from the preBötzinger one. In the preBötzinger case you can think of it as a pencil that flops from one side to the other: it has a stable point lying this way, you throw it over here, it's stable there, you throw it back, and so on. Here you have something kind of amusing happening. What I didn't tell you is that the potential at which you are stuck in the up state actually corresponds to the high-susceptibility point — the point where the firing rate is changing most rapidly with potential. So the system goes quiet in the down state, throws itself up into a state of maximum sensitivity to external signals, and falls back into the ground state again. It's as if the pencil balances itself on its tip for a while and then falls back. The reason it can do so is that you have excitatory neurons exciting inhibitory neurons, which in turn inhibit the excitatory neurons, and the system is tuned in such a way that it reaches what Sompolinsky calls a chaotic balanced state. So with the dendritic shunting we can go from this Sompolinsky-style balanced state of high susceptibility back into a quiescent state, and then back up into the funny balanced state. The shunting weakens the effect of the excitatory neurons, so the system falls back off. It's like balancing a pencil on its tip, but because you've tilted gravity just a little bit, it always collapses in the right direction for you, if you like. Let me skip this — it turns out there's chaos in all of these things, which I don't have time to tell you about. But let's do the following. It turns out that the up-down state oscillations in your cerebral cortex are remarkably synchronized over the whole cortex, and in fact we know that these excitatory pyramidal neurons have very long axons that connect millimeters away. So there is the possibility of long-range coupling. Let's take the simplest possible model: I take one up-down-state oscillator unit and weakly couple it to another one. Now let me tune the strength of this coupling and ask: is it possible to tune it so that the global up-down states are synchronized, but all the fast oscillations — the ringing in the up states — are desynchronized? To look for that, we can compute the correlation function of the potential in one unit with the potential in the other. What you find is that if I turn the coupling off, they're not correlated at all — that makes perfect sense; I have up-down states in the two modules independently. Now if I tune the coupling, there's actually a window of about an order of magnitude — which, if I had more time, I could explain — where you get this partial synchronization. If you look at it, what happens is that all of these up-down-state modules turn on together, have non-synchronized dynamics in the up state, and then all collapse together. I believe this is essential: if you actually want your neocortex to perform computations, you can't have complete synchronization of everything — otherwise you just have one circuit element. What you seem to want is to be able to go into the up state, do something non-trivial, and then go back into the down state. In fact, if you mess chemically with the system so that it doesn't go back into the down state, you end up with things that, at least electrically, look like epileptic seizures. We can talk about that at the end if you want, but in fact I'm very close to the end. Let me just review the main point — the only thing I've added to the models that other people have talked about.
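The correlation measurement described — global up-down envelopes locked while the in-up-state ringing stays out of phase — can be mimicked with synthetic traces. These signals are entirely made up, just to show that an equal-time correlation picks up the shared slow envelope even when the fast components are desynchronized:

```python
import numpy as np

def correlation(x, y):
    """Normalized equal-time correlation of two potential traces."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum()))

t = np.linspace(0, 10, 2001)
envelope = (np.sin(2 * np.pi * 0.5 * t) > 0).astype(float)  # shared up/down states
v1 = envelope * (1 + 0.3 * np.sin(2 * np.pi * 40 * t))       # fast ringing, phase 0
v2 = envelope * (1 + 0.3 * np.sin(2 * np.pi * 40 * t + 2.0)) # ringing out of phase

# The shared slow envelope dominates: the traces correlate strongly even
# though the fast in-up-state dynamics are desynchronized.
c = correlation(v1, v2)
```

In the talk's model the analogous correlation is computed between two weakly coupled oscillator modules as the coupling strength is tuned.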
It is what I would call the multi-compartment nature of neurons, with dendritic shunting. Rather than saying that a neuron is some sort of nonlinear processor of signals that it integrates up, I'm saying you have to take into account the dendritic tree and note that incoming signals interact with each other non-linearly, through this adaptation parameter, calcium. Now, that leads to a much more sophisticated question: what if, instead of looking at spiking-rate models, I actually followed the action potentials — the spikes themselves? Is it possible that the topology of the dendritic tree actually hard-wires in a particular signal-processing algorithm? If you go back and look at what people have done in neurophysiology for a hundred years, you see vastly different sorts of connectivity strategies: here you have a whole bunch of long lines all connected at one base level, and here you have something highly ramified, like a Bethe lattice. So is it possible that this sort of structure encodes a different sort of integration of signals than that sort of structure? In this sort of structure, a synapse down at the bottom has, because of shunting, power over all the information coming in upstream of it. Here, on the other hand, you have a much more democratic integrator, because if every synapse is on a separate dendritic process, then you really do have a point-model integrator. So it's possible that neurons are actually using the structure of the dendritic arbor to do non-trivial computations, and the idea of a single neuron being particularly simple — with only complex networks of neurons doing the thinking — may be oversimplified in itself. Maybe the right picture is that a neuron is endowed genetically with some sort of extra non-linear signal processing long before it makes a decision about whether or not to fire.
So I guess I should thank my collaborators next. Let me just say that I would never have heard about the preBötzinger complex if Jack Feldman hadn't given a physics colloquium at UCLA, and likewise the up-down states come from Mayank Mehta. Robijn Bruinsma and I have been collaborating on this project, and a former graduate student, David Schwab, did all the work on the k-cores; he's now working with Bill Bialek and company at Princeton. So I'd like to thank you for your attention, and take questions if I may. Thank you very much. [Question.] You know, that's a really interesting question, because it depends what kind of animal you look at. You might say that we're smart, so we should have some clever connection scheme, and lobsters are dumb — we eat them, so they can't be too clever — so they must not have a very clever connection scheme. And you have it exactly backwards. There are many people who study central pattern generators in things like lobsters and mollusks of various sorts, and what they find is that when you have very few neurons, as a lobster does, you actually have a genetically programmed connection scheme: the circuit diagram is apparently prescribed, and each neuron in it has very different properties. For instance, there is a central pattern generator that makes the lobster's stomach grind up its food properly — a pattern of muscle contractions to grind up lobster chow. This is a circuit of ten or twelve neurons, each neuron fundamentally different in its input-output characteristics, and they're all wired the same way, so every lobster you fish out of the tank has the same circuit. As far as we know, that's not true for higher mammals or us or anything else; our preBötzinger complexes, as far as we know, are well described by saying they're randomly connected.
[Question.] It's possible — we just don't know. But one thing you can think about is the following: if you have lots of synaptic junctions and lots of neurons, there's actually not enough genetic material in your body to encode the entire circuit diagram. If you have enough complexity, you can't prescribe who's connected to whom in a reproducible way anymore. One question people have asked in the past is: if you believe in dendritic shunting, is it still possible to create a network that will learn a particular signal-pattern-recognition program or something? And the answer is that it can, at least for certain patterns. In fact it can do something that's kind of clever — a summer student and I worked on this a bit, and I think we understand the basic idea. You can build small networks of neurons of the kind people like Minsky and others called perceptrons, though the word goes back further. These things can essentially act as logic gates, but with shunting they can do a funny type of time-ordered logic. So taking in two bits of information, A and B, you can have an AND gate if signal A comes before signal B, but an OR gate if signal B comes before signal A. You can do some clever things like this, and in fact you can build circuits that will do three bits and four bits with time ordering as well. So there are a couple of fun things you can do if you prescribe the system very carefully, and what's interesting is that at least some of these things are learnable using Hebbian learning rules, for instance. That's maybe more than you wanted, but it's the right answer, and the rest of it too. Questions? [Question.] Yes — so the receiving of a signal is somewhat stochastic; the firing is apparently quite robust. If I take a neuron and raise its potential by hand, it will fire like a nice clock.
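The time-ordered gate idea mentioned above can be caricatured in a few lines. This is my toy with made-up parameters, not the summer-student circuit: a proximal input A that shunts any later-arriving distal input B, so the unit's response depends on arrival order.

```python
def shunting_gate(t_a, t_b, w=1.0, shunt=0.8, theta=1.5):
    """Toy two-synapse dendrite. Input A sits proximally: once A has
    arrived, any later input B is shunted down by the factor (1 - shunt).
    The unit 'fires' when the summed drive crosses threshold theta.
    t_a, t_b are arrival times; None means no input."""
    drive = 0.0
    if t_a is not None:
        drive += w
    if t_b is not None:
        shunted = t_a is not None and t_a < t_b
        drive += w * (1.0 - shunt) if shunted else w
    return drive >= theta
```

With these (hypothetical) parameters the gate needs both inputs and needs B to arrive before A — an order-dependent AND, of the flavor the talk describes.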
It fires roughly at some fixed spiking rate, as far as I understand. Now, the reception of signals is in some cases stochastic: when an action potential arrives at the synaptic junction, there's only some probability that enough neurotransmitter will be released to actually cause the signal to reach the postsynaptic neuron. We don't take that into account, but it's certainly something you can think about. I should say that there's someone else in the physics community who has thought in particular about the preBötzinger system: Paul Bressloff and his collaborators, now at Oxford, have built a model where stochasticity is very important. Their version is that you have a quiescent fixed point, and noise throws you off that fixed point; you go around some long, almost-stable limit cycle, and then fall back to the fixed point, where you wander around until you're kicked out again. The up-state is the excursion. That's a perfectly good model for the preBötzinger complex, and, much like our mean-field model, it will give you roughly stable oscillations if you have enough noise in the system to make it work. The thing I think is important is whether, as you change the number of neurons, you actually see these plateaus in the phase diagram. Jack Feldman is trying to do that experiment now, so if all goes right, one of the key signatures will be these k-core steps, which should not come out of these other models. I should point out that even if you change the model so that the neurons get tired because of their own firing, you do not get the k-core steps either. So in the sense of statistical physics,
I believe that whether your dendrite gets tired or your soma gets worn out is actually a relevant perturbation, and it gives you different collective dynamics. [Question.] The strengths of the couplings — now I'm really out of my depth, but my understanding is that they don't change; as far as I know this is not a system that is plastic in the adult. There's a neonatal training period, apparently, but how that works I don't know anything about. And if you want the full story, it turns out there are two copies of this oscillator, one that does exhalation and one that does inhalation, and they seem to interact with each other; the interaction changes between the neonatal animal and the adult breathing animal. Right now, because you're not running around, you're not actually using that exhalation circuit, but it matters, because opiates turn off the preBötzinger circuit but they don't turn off the exhalation one. So it's possible that's what rescues you — it's maybe the reason the Rolling Stones are still around. [Host:] He also works on membranes and on red blood cells, and he's with us for two more days — he's leaving early Friday — and although the schedule is relatively full, there are a couple of slots open here and there. If you would like to see him, please send e-mail to me at the Georgia Tech School of Physics. And with that: thank you very much.