[00:00:05] >> It's my pleasure to introduce our speaker. When I was thinking about it, I thought this makes a really good theme: what makes a good theory, and what makes a good theorist. [00:00:35] A little bit about what makes a good theory: it's attacking the problem from the right angle, matching the appropriate scale of the problem, and then working down to the details and really wrestling with them. [00:01:05] I would also urge you to read the papers; they are well thought through. [00:01:34] [inaudible] [00:02:02] [inaudible] [00:03:03] >> Thank you. It's my first time back in a while; I haven't been to Atlanta in a long time, so it's great to see how the city's changed. It's great to see how vibrant the neural community is here, at many different scales, and it's great to see a collaborative spirit between experiment and theory. This is something that I try to bring to my own lab, and it's nice to see that spirit in a large community like you guys have; consider yourselves fortunate. So today I'm going to [00:03:39] talk about a problem I think a lot of us are thinking about. Like Garrett alluded to, we're sort of drowning in data. There's connectomics data that's giving us this really rich view of how the cell types and the wiring work within cortex and other brain areas, and I think a lot of us are working towards the goal of trying to take this kind of connectomics information
[00:04:04] and figure out how that drives the large-scale population recordings, which have a life of their own now and are popping up all over the globe. This idea of going from structure to function is not new by any means; people have been thinking about this for understanding the nervous system for a long time. This is a paper by Ashby in Nature, 1962. It's a fun paper, it's only a page and a half, so I'd encourage anybody to read it, and this, to my knowledge, is one of the first papers to really ask: can we take the way we think the brain is wired and predict dynamics in the brain? What they did is take a bunch of binary neuron models, so the neuron is in a low state or a high state, an inactive state or an active state, and it transitions between the states if the number of inputs it receives is above some threshold. Each neuron receives input from a bunch of other neurons, there's N of them, and they close the loop: given the inputs there's some probability of being active, and then they close this loop back. They wrote down the mean-field theory of this collection of purely excitatory neurons, with P as the dynamical variable, and they made the following observation. They wanted to know whether or not the system could live in a state where a fraction of the network is active, and in principle it can: there is a fixed point in the system at around 42 percent of the network on. But unfortunately it's unstable. If you initialize the system with less activity than that, the network dies, and if you do it with more activity than that, the network blows up. [00:05:53] And in their paper they had the following conjecture: evidently there must be factors or mechanisms that stabilize the network, rather than just having the integrate-and-threshold dynamics. It's sort of funny now; I think a lot of us already know the answer to this question, and actually in 1962 people knew the answer to this question. It's inhibition, right? Hartline and Ratliff had seen inhibition in the fifties.
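Here is a minimal sketch of that Ashby-style mean-field argument. The input count K and threshold are illustrative stand-ins, chosen so the unstable crossing lands near the 42 percent active state the paper reports, not values from the 1962 paper itself.

```python
# Ashby-style mean-field map for an all-excitatory binary network: each
# neuron gets K inputs, each active with probability p, and turns on iff
# at least THETA of its inputs are active.  Parameters are illustrative.
from scipy.stats import binom

K, THETA = 50, 22

def f(p):
    """Fraction of neurons active on the next step."""
    return binom.sf(THETA - 1, K, p)      # P(Binomial(K, p) >= THETA)

# locate the middle fixed point f(p*) = p* by bisection
lo, hi = 0.05, 0.44
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < mid else (lo, mid)
p_star = 0.5 * (lo + hi)
print(f"middle fixed point: p* = {p_star:.3f}")   # around 0.41 here

# start just below / just above p*: activity dies out or saturates
for p0 in (p_star - 0.02, p_star + 0.02):
    p = p0
    for _ in range(30):
        p = f(p)
    print(f"p0 = {p0:.3f}  ->  {p:.3f} after 30 steps")
```

Iterating the map from either side of the middle fixed point reproduces the knife's edge the talk describes: slightly less activity collapses to zero, slightly more runs up to saturation.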
[00:06:22] This idea permeated the theory community; actually, the very next year somebody corrected them, Griffith, but here I'm looking at Wilson and Cowan. They put forth a model that's now relatively well known and well used, where they think of a population of excitatory neurons and a population of inhibitory neurons that are bidirectionally coupled: there's coupling between the excitatory neurons and themselves, the inhibitory neurons and the excitatory neurons, and so forth. They wanted to include inhibition because they basically said, look, there's lots of evidence for inhibition, and anything that's interesting in the brain is going to be this interplay between excitation and inhibition. That was the motivating philosophy Wilson and Cowan had. [00:07:07] And this is the way they liked to think about their work. This came from my colleague at the University of Pittsburgh, Bard Ermentrout; he found it in his desk one day. Whenever a theorist puts down a pair of differential equations, what they want to know is: what are the zeros of these functions? What are the sets of (r_E, r_I) values that make the dynamics of r_E zero? That's called a nullcline, and the nullcline for the E population in r_E-r_I space is this cubic nullcline here. The same thing is true for the I cells, you get this cubic nullcline, and the intersection points, where both dynamical processes have zero velocity, are the fixed points of the system. The reason this picture looks so funny is that back in the late seventies monitors weren't really around, so they took the computer output and projected it on an oscilloscope, and you got this kind of cool output. I think that's interesting mainly because theorists really haven't moved past this description in the last forty years; a lot of us still rely on this model to understand how the brain works. So one thing we know is that when you put inhibition into a system you can get a stable fixed point. Here's that nullcline structure again, and you get this stable fixed point here: if you have initial conditions near this fixed point, the system collapses onto it; you're not going to blow up to infinity or die like in the Ashby model. Inhibition saves the day. [00:08:41] People have used this basic nullcline structure, this two-dimensional framework, to propose a lot of interesting aspects of cortex. One of them is the paradoxical effect of an external drive; this is work from Ken Miller and Misha Tsodyks on so-called inhibitory-stabilized networks. The idea is that instead of a push-pull dynamic, where inhibition suppresses excitation, as long as the fixed point, the intersection, lives on this middle branch, then if excitation increases, so too does inhibition. The fixed-point values of both are tethered together, so they go up and down as a unit, which is paradoxical when you think about push-pull inhibition. Push-pull would be if your intersection were over here or over here: then when inhibition gets higher, excitation goes down. That's not true when you live on the middle branch, and there's lots of evidence that cortex might live on this middle branch. [00:09:43] Another triumph of these sorts of models is work that was initially proposed by Shadlen and Newsome, but really Carl van Vreeswijk and Haim Sompolinsky took it far in the nineties, where they talked about balanced excitatory-inhibitory networks. The amount of excitation a neuron receives is huge, and if left unchecked the system would fire at crazy rates, but this is dynamically tracked and cancelled by inhibition, so that the total input a neuron receives sits around threshold. The variability can be quite large, and that variability translates into irregular spiking. The nullcline structure in these models becomes linear; a feature, or a failure, of balanced networks is that in the thermodynamic limit these are linear models, and I think a lot of us like to think of cortex as having some nonlinear properties. [00:10:36] And the last area is rhythms: if the timescale of inhibition is longer than that of excitation, you can destabilize these fixed points and get limit cycles, so you get oscillatory behavior in this framework, much like gamma oscillations, which have been well documented in the brain. People like Nancy Kopell, Nicolas Brunel, and Xiao-Jing Wang made early careers thinking about how networks of E-I neurons can produce rhythmic dynamics.
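Pulling those Wilson-Cowan pieces together, here is a small sketch of an E-I rate model; the sigmoid, weights, thresholds, and time constants are illustrative choices, not Wilson and Cowan's originals. With fast inhibition the intersection is stable even though the excitatory subnetwork alone would run away, the inhibition-stabilized story above.

```python
# A Wilson-Cowan-style E-I rate model; all parameters are illustrative.
import numpy as np

def phi(x):                                  # sigmoidal transfer function
    return 1.0 / (1.0 + np.exp(-x))

wEE, wEI, wIE, wII = 12.0, 10.0, 12.0, 4.0   # E->E, I->E, E->I, I->I strengths
thE, thI = 2.0, 4.0                          # thresholds
tauE, tauI = 1.0, 0.5                        # inhibition is fast here

rE, rI, dt = 0.3, 0.2, 0.02
for _ in range(5000):                        # Euler integration
    dE = (-rE + phi(wEE * rE - wEI * rI - thE)) / tauE
    dI = (-rI + phi(wIE * rE - wII * rI - thI)) / tauI
    rE, rI = rE + dt * dE, rI + dt * dI
print(f"settles to rE = {rE:.3f}, rI = {rI:.3f}")

# E nullcline: solve drE/dt = 0 for rI as a function of rE (the cubic shape)
rEs = np.linspace(0.01, 0.99, 99)
rI_nullcline = (wEE * rEs - thE + np.log(1.0 / rEs - 1.0)) / wEI

# the E->E Jacobian entry exceeds zero at this fixed point: the excitatory
# subnetwork alone is unstable, and inhibition is doing the stabilizing
print("E->E Jacobian entry:", round(-1.0 + wEE * rE * (1.0 - rE), 2))
```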
[00:11:04] OK, so that's inhibition and stability: inhibition stabilizing a network and producing variability is something a lot of us believe inhibition does. Another thing people think inhibition does is control the gain of the system, and here's some evidence that gain is definitely modulated. These are classic experiments from Sclar and Freeman looking at orientation tuning in the visual cortex of a cat; they change the contrast of the visual scene and you get this multiplicative scaling of the tuning curve. This is contrast invariance of orientation tuning, but what's happening is you get an overall gain change. [00:11:47] That's sort of bottom-up: the stimulus itself, the contrast of the visual scene, is changing. We've also seen multiplicative scaling of orientation tuning in experiments where attention is manipulated. This is work from John Maunsell's lab, where there's some presumably top-down process that pushes attention into the receptive field of the neuron being recorded, and when that happens the tuning properties of that neuron are multiplicatively scaled. So gain control is another thing we think cortex does: cortex isn't just stable, it's also malleable in terms of its gain, and people have been thinking about inhibition as a large player in that for a while. Work from Christof Koch [00:12:28] really started to treat the nervous system in these linear frameworks, where Ohm's law at the level of the voltage, which we know is true, propagates to firing rates, which was later shown to be false. But the idea still holds: basically, what inhibition is doing is changing the overall shunting in the system, and that will divide inputs, and that can give you gain control at the output. [00:12:55] Larry Abbott and others have worked on controlling the gain of a neuron, in this case in a dynamic clamp setup, a sort of open-loop framework. By controlling the amount of balanced excitatory and inhibitory fluctuations the neuron experiences, you can go from a high-gain regime, where the neuron's sensitivity to a driving input is high, to lower gain as you crank up that noise. So inhibition is a clear player here too: inhibition can control the gain of responses, not just stabilize networks.
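Here is a sketch of that fluctuation-driven gain control, in the spirit of, but not reproducing, the dynamic clamp experiments. For a rectifying neuron, the mean response to a drive I with Gaussian background fluctuations of size sigma has a closed form, and its slope, the gain, drops as sigma grows; the numbers are illustrative.

```python
# Gain of a rectified neuron under balanced background fluctuations.
from scipy.stats import norm

def rate(I, sigma):
    """Mean of max(I + sigma*xi, 0) for standard Gaussian xi (closed form)."""
    z = I / sigma
    return I * norm.cdf(z) + sigma * norm.pdf(z)

def gain(I, sigma):
    """d(rate)/dI, which reduces to the Gaussian CDF at I/sigma."""
    return norm.cdf(I / sigma)

I0 = 1.0                                   # operating point above threshold
for sigma in (0.5, 1.0, 2.0, 4.0):
    print(f"sigma={sigma:>4}: rate={rate(I0, sigma):.3f}, "
          f"gain={gain(I0, sigma):.3f}")
# cranking up the fluctuations lowers the gain at this operating point,
# the divisive effect described above
```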
[00:13:32] And the last thing I'll talk about for inhibition is variability. One thing we know is that cortex is quite variable, and that variability is distributed across neurons, so there's some shared variability over a population. This has come to a head now that we're recording simultaneously from large groups of cells. Here's an example experiment from Adam Kohn and Matt Smith, where each dot is the trial response of one neuron plotted against another neuron for that trial. You get a scatter plot when you plot all the trials, and there's some eccentricity to it; the correlation coefficient for this particular example is 0.24. So the brain is correlated, and that correlation is plastic. Here's a review I published a couple of years ago looking at a lot of different examples of correlations, in monkeys and in mice as well, and even my old electric fish work, I couldn't leave that out. What we're looking at is the system in one of two states: the network is in an unattended state or an attended state, a lower arousal versus a higher arousal state, an untrained versus a trained response. In all cases I'm plotting the ratio of the correlation coefficients in state B over state A, where state B is chosen to be the smaller one, and we see that across changes in state, correlations are quite variable. So not only is the brain correlated, in that its response variability is shared; the amount of shared variability is itself plastic and can change depending on state. So where does inhibition fit in here? [00:15:16] There's some really nice work from Alfonso Renart and colleagues on balanced networks that are densely wired. In the previous incarnation of balanced networks, from van Vreeswijk and Sompolinsky, they ignored correlations by assuming the cortex is sparse: the connection probability is epsilon, the proportion of shared inputs is epsilon, neglect epsilon. This is an ideal gas model of the brain: correlations, who cares? But we now know that cortex is quite dense in its wiring, especially in how excitatory neurons project to inhibitory neurons; connection probabilities for neurons that are close to one another can even be 0.5. That's not a sparse network, so it wasn't clear how you get correlations [00:16:01] to be so low. Correlations are quite low in the brain in many states, correlation coefficients of around 0.05, five percent, but the connection probability is 50 percent. So what's the magic? How do you get a system with lots and lots of interactions to look like it's not interacting? That's a hard problem. What they said is, well, we can use balanced theory. If we have a pair of neurons, they're going to have lots of correlated excitatory input just from shared excitation; that's the green curve, the total correlation from excitation going into the pair of neurons. The same thing holds for the correlation between the inhibition going into these two neurons, and there's some feedforward input, that's in blue. So there are lots of sources of correlation in the system. But if we look at how excitation into one neuron and inhibition into another neuron are correlated, we see they're strongly anti-correlated, and the reason is that inhibition has a minus sign. That's all it is: there's a minus sign.
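Here is that minus-sign arithmetic in a toy pair of neurons. This is a sketch in the spirit of the Renart et al. argument, not their actual model; the tracking coefficient and noise levels are made up. Excitatory inputs are strongly shared across the pair, inhibition tracks excitation, and the net inputs end up nearly uncorrelated.

```python
# Cancellation of shared input correlations by tracking inhibition.
import numpy as np
rng = np.random.default_rng(0)

T = 200_000
shared = rng.standard_normal(T)             # common fluctuation
E1 = shared + 0.3 * rng.standard_normal(T)  # E input to neuron 1
E2 = shared + 0.3 * rng.standard_normal(T)  # E input to neuron 2
# inhibition tracks excitation (the dynamical cancellation), up to noise
I1 = 0.95 * E1 + 0.3 * rng.standard_normal(T)
I2 = 0.95 * E2 + 0.3 * rng.standard_normal(T)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print("corr(E1, E2)       =", round(corr(E1, E2), 3))           # large
print("corr(I1, I2)       =", round(corr(I1, I2), 3))           # large
print("corr(E1, -I2)      =", round(corr(E1, -I2), 3))          # large, negative
print("corr(E1-I1, E2-I2) =", round(corr(E1 - I1, E2 - I2), 3)) # tiny
```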
[00:17:02] And when you get large correlations between excitation and inhibition across pairs, you put it in a pot, you stir it up, and you get really, really tiny correlations. That's the idea: it's balanced excitation and inhibition that cancels at the scale of the covariability, not just at the scale of the averages. [00:17:22] And that's how you get very weakly correlated responses; inhibition plays a huge role in this. And prior to that, like all good theories, somebody had already done the test for them: Okun and Lampl in 2008. And they absolutely saw this. They did whole-cell recordings in pairs of neurons, a very heroic experiment, an in vivo test, and they saw exactly this: [00:17:47] inhibition was correlated with inhibition, excitation was correlated with excitation, and excitation and inhibition tracked one another so that the net inputs were only weakly correlated. So this is proof of principle. But here's the thing: I've said inhibition stabilizes networks, inhibition controls the gain of neural responses, and inhibition is important for controlling or determining the variability in the system. When you're a modeler trying to get inhibition to do all these things, and you're doing it with a model that has only one class of inhibition, I often feel like I'm Houdini: I have a lot of constraints on me and I have to somehow escape this very difficult [00:18:25] situation. And the goal of this talk is to say that I don't want to be Houdini anymore. Experimentalists have told me there are lots of classes of inhibition, and by diversifying our notion of what inhibition is and how inhibition is connected, I don't have to play this game where one concept of inhibition has to do all of these things. That's the central goal; if you want to leave now, you can, that's the point. [00:18:52] So why am I concerned about this? First I want to show you that in networks with only one class of inhibitory neuron, these functions are fundamentally tethered together; there are chains that tether these things. I'll do that with an example from a collaboration in my group. This is joint work with Marlene Cohen; she's also at the University of Pittsburgh, and Marlene does multi-electrode recordings in the visual system of primates during attention-modulated tasks. She did her postdoc with John Maunsell, and this initial slide is from data collected when she was with John.
[00:19:29] She has a task where a non-human primate fixates on some dots, and then a drifting grating stimulus is presented in the receptive fields, and there's a cueing of locations: the animal is cued to pay attention to this grating here, because at some random time it's going to change orientation, and the animal has to report when that orientation changes. We can look at neurons whose receptive field contains the cued input; those neurons are said to be in the attended state, since the animal has been directed to pay attention to their receptive field. Or we can look at neurons on the other side: they're getting a visual input, but the animal is being told to ignore that visual input, so those neurons are in the unattended state. The animal is always attending; it's just a question of whether the attention falls within the receptive field of the neurons we're talking about. What Marlene and many, many others have seen is that you get this transient, which is interesting, but in the sustained response in the attended case the firing rates go up a bit, and what was new here is that the trial-to-trial shared variability goes down with attention. Even though the system is more active, the neurons look more independent of one another: the noise correlations have gone down. And this is a ratio; any time you see a ratio change, you should ask whether it's the numerator or the denominator. In this case it's the numerator, the joint variability between neurons, that's going down; if anything, the variances are going up [00:21:04] slightly. So there's an active decorrelation between neurons. So a grad student in my lab, Tatjana Kanashiro, built a classic Wilson-Cowan-style model of E and I cells, where there are some shared fluctuations, and we're envisioning some top-down attentional signal. We're envisioning this as a change in the overall set point, so we're still treating it as a static input to the neurons, and this static input is going to be biased towards the I cells. Why is that? There's evidence that PV cells in particular are more sensitive to cholinergic modulation than other neurons. Here's an example, a slice experiment covering all the different classes of interneurons, where they bath-apply muscarine: the polarization of the excitatory cells doesn't really change, but all the different classes of interneurons tend to depolarize. So if we want to think of a top-down input acting through cholinergic modulation of the system, we might think it's biased towards the I cells. So our model is one of these Wilson-Cowan-style models where I have some attentional modulatory bias, these are just static terms; here's the coupling within my network; and here's some fluctuating term that I put inside the brackets, but I'm going to linearize the system anyway, so it will pop out. [00:22:35] And what we're imagining attention does is drag the system from living in one state to another state. So even though I'm going to linearize the system, the point at which I'm linearizing is going to change: I always analyze the system like a linear system, but I'm going to have attention change the overall slope of the transfer function, so the system is truly experiencing the nonlinearity.
[00:22:58] And when we do that, we can account for scenarios where firing rates go up, the excitatory rates really go up, but the overall population variance, the variance of the summed population activity, which is proportional to the shared variability amongst the cells, goes down. In this plane you have high variability here; actually, if you ever live up here, you're unstable and the system blows up; the system is very stable over here. And we have a path along which firing rates can go up and nevertheless the population variance goes down. The reason the population variance goes down is that I'm re-linearizing my system over here, and the system is more stable there: if you do a linear analysis, there are only two eigenvalues in my network, one of them is minus one because I've chosen some symmetries, and the other one is this one, and it becomes more negative, which means a more stable system. So here I've basically linked variability control and stability in this system: I'm making the system better able to dampen fluctuations over here. [00:24:10] So what about gain? We know that variance goes down with attention, but gain is supposed to go up with attention; people have known for a long time that the gain of pyramidal cells, or putative pyramidal cells, to their inputs goes up with attention. If we go along this path, you can do a little bit of math and show that the gain is going to be proportional to the square root of the variance, because this is a simple mechanism, only an E and an I population, that's it. So if I want the variance to go down and the gain to go up, I'm kind of locked in by this equation, because the square root is monotonic. What we can do is put different feedforward gains on how the E cells see the stimulus and how the I cells see the stimulus, and that uncouples these components, and by fine-tuning parameters we can get a scenario where the gain goes up in this system. And it's not that fine-tuned: basically, all my attentional paths, in the space of the change in firing rate for the E cells and the I cells, have to live in this green region here, so it's not a single point. [00:25:29] But I'm going to argue that what I have done here is link variability, stability, and gain in a way that has put some handcuffs on me: there are some areas in response space I'm not allowed to go, and I had to build in the symmetries that I mentioned, some of which are important. So even though this paper is published, I can say I don't like it, in the sense that there were too many constraints I had to deal with, and I think biology doesn't want to deal with these things, [00:26:01] or if it does, it does it through special mechanisms that I don't know. So basically: are we asking inhibition to do too much? Is that really the problem here, that inhibition is overtaxed in this simple model where there's just one class of inhibition?
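Here is a minimal sketch of that linearized analysis, assuming a Wilson-Cowan-style two-population model with illustrative weights and gains: an attentional boost to the I-cell gain makes the leading eigenvalue more negative, more stable, while the stationary E-rate variance, from the Lyapunov equation, drops.

```python
# Linearized E-I dynamics: d(dr)/dt = A dr + noise, A = -I + G W,
# where G holds the transfer-function slopes at the operating point.
import numpy as np
from scipy.linalg import eigvals, solve_continuous_lyapunov

W = np.array([[ 8.0, -6.0],    # onto E: from E, from I
              [10.0, -4.0]])   # onto I: from E, from I
D = np.eye(2)                  # covariance of the input fluctuations

def stats(beta_E, beta_I):
    A = -np.eye(2) + np.diag([beta_E, beta_I]) @ W   # Jacobian
    lam = eigvals(A).real.max()                      # leading eigenvalue
    Sigma = solve_continuous_lyapunov(A, -D)         # stationary covariance
    return lam, Sigma[0, 0]

# attention modeled as moving the operating point, here boosting the I gain
for label, betas in [("unattended", (0.10, 0.10)),
                     ("attended  ", (0.10, 0.16))]:
    lam, varE = stats(*betas)
    print(f"{label}: leading eigenvalue {lam:+.3f}, E-rate variance {varE:.3f}")
# and since gain scales like the square root of variance in the
# one-inhibitory-class model, lowering variance drags gain down with it:
# the handcuffs described above
```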
And I guess that's going to be the case. We now know, from the last fifteen years or so, that there's a real zoo of inhibitory neurons in there that differentiate themselves in lots of different ways. People have known for a very long time that there's tons of morphological diversity in the kinds of inhibitory neurons: we see Martinotti cells, we see chandelier cells, and so forth. These are all different kinds of inhibitory neurons, and they all look really different; anatomists have known this for decades. [00:26:51] We know that these different neurons not only look different, they wire into the circuit in different ways. In particular, some might project to the dendritic region of the pyramidal cell, others to the somatic region. There are also, and I'll talk about this, different interactions amongst the inhibitory neurons themselves, which is really one of the things that got theorists so excited about this. [00:27:16] We know there's molecular diversity; this has been very important in developing the special mouse lines that allow us to manipulate these cells in isolation from other cells, so you have cell-specific manipulations, which I'm sure a lot of people in the audience know more about than I do and use on a daily basis. And then, like I was saying, these days we know there's wiring amongst the cells that is quite specific. So this is the new landscape we theorists have to think about for how we should think about inhibition, and I think the modal response from a theorist is something like: forget about it, let's just go back to the two-dimensional framework. Even though these data sets have been known for a long time, it's really only in the last three or four years that theorists have started accepting this as the new way to think about cortex, and there are very few of us, because we're comfortable plotting things in planes. And that's really the third reason: theorists are pretty simple people. [00:28:16] So I want to push a little bit past this today, and I want to do it by considering models with three different interneuron classes. There are the parvalbumin (PV) cells; in visual cortex and sensory cortex they tend to be the dominant player. There are the somatostatin-positive (SST) neurons, which are similar in number or a little less; as you go down the pathway in the brain, into associative cortices, that dominance sort of flips. And then there's a bunch of other cells [00:28:50] that subdivide into many other classes. I'm really going to talk primarily about these two cell classes, but I will also talk about VIP cells, which also sit in this circuit. Yes? [question inaudible] Sure. So I think we've explained things really well when we've focused on one thing at a time. If I want to talk about a model of oscillations, no problem. If I want a model of gain control, no problem. If I want a model of stability, no problem. If I want a model that does all of these things in a robust way: problem.
[00:29:35] So yes, we've been focused on explaining specific data sets while ignoring other data sets; that's right, that's basically what we've been doing. In a similar way, if you want to explain something, projecting into a higher-dimensional space makes it easier, the machine-learning sort of idea; this is an analogy to that. But then we can't just have arbitrary connections between these cells: experimentalists are telling us there are specific wiring patterns between them. So how do those specific wiring patterns facilitate a division of labor between the different inhibitory classes? That's the idea here. So how are they wired together? This is work from Massimo Scanziani's lab, in mouse V1, where they looked at the normalized interaction between the different inhibitory cell classes and pyramidal cells, and amongst the cell classes themselves. I'll go through this slowly. It's all normalized to this connection, the connection of the PV cell onto the pyramidal cell. PV cells definitely make that connection; they strongly connect to themselves, and they weakly connect to SST, it's not that strong, it's moderate. SST cells inhibit everything except themselves: they inhibit PV cells quite strongly, and they can also inhibit the VIP cells. [00:31:07] And then the VIP cells inhibit SST, and they weakly inhibit PV; there's no arrow here, though some data says there is, so this is subject to change as new data comes out. And, like a true theorist, I'm just going to ignore the connections that are dashed and keep the strong ones, and I'm actually also ignoring this other connection here, so that I really have a feedforward framework where VIP only acts as a modulator.
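Written as a signed matrix, the reduced wiring logic looks something like the sketch below. The numbers are qualitative stand-ins encoding strong, weak, or absent, not measurements from the mouse V1 data.

```python
# Reduced E / PV / SST / VIP wiring logic; magnitudes are illustrative only.
import numpy as np

pops = ["E", "PV", "SST", "VIP"]
#                 E     PV    SST   VIP    (columns: presynaptic population)
W = np.array([[  1.0, -1.0, -0.5,  0.0],  # onto E
              [  1.0, -1.0, -0.5,  0.0],  # onto PV  (SST inhibits PV)
              [  0.6,  0.0,  0.0, -0.6],  # onto SST (driven by E and VIP)
              [  0.0,  0.0, -0.4,  0.0]]) # onto VIP (SST->VIP dropped below)
# the zeros carry the message: SST->SST is absent, and in this reduced model
# VIP touches only SST, so it can act as a pure feedforward modulator
```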
[00:31:44] OK, so how does a circuit like this produce responses? So I've now set the story up in the first more-than-half-hour, which is good, that's what I wanted to do. This is work that was started by Ashok Litwin-Kumar and Rob Rosenbaum, and really spearheaded later by Hannah Bos, who's a great postdoc in my lab, and we're studying this circuit. So what do we want to explain? Let's start with some data. We know that thalamic input drives E cells and PV cells, and we don't think thalamic input in V1, or in many of the circuits recorded, actually drives SST cells; SST cells only get stimulus drive through the E cells, through that E-to-SST connection. OK, so this is work from Hillel Adesnik when he was in Massimo Scanziani's lab. He was recording firing rates from neurons driven by a drifting grating of a certain speed through a certain aperture, and he changed the physical size of that stimulus, so it was a tiny input or a large input, always recording from neurons whose receptive field center was driven by the input. The putative principal neurons, the pyramidal cells, like inputs of a certain width, but when the inputs get larger and larger and larger, they have suppressed responses. That's also true of PV cells: they get suppressed responses. What this paper was about was showing that a reasonable mediator of the suppression is the SST cells: as the image gets larger and larger, their firing rates go up, and they putatively act as a suppressor of these two cell classes. So what Massimo and colleagues are saying is that as the input gets larger, SST rates go up, they suppress the E cells, and they probably also suppress the PV cells; but when they suppress the E cells, you remove excitation from the PV cells as well, so it's sort of a double suppression, if you will. [00:33:44] We think of this as the direct inhibitory path, this motif in the circuit driving the response we see here. So that's the story from Massimo and Hillel. Then Bernardo Rudy, working not in layer 2/3 of visual cortex but in layer 4 of somatosensory cortex, did another experiment. He was recording [00:34:04] FS cells, these fast-spiking neurons, putative PV cells, and the principal neurons, and what he did was optogenetically silence the SST cells: they expressed halorhodopsin, you activate it, and you suppress those cells, so their firing rates unsurprisingly went down. The FS cells' firing rates went up, and then [00:34:28] the pyramidal cells' firing rates went down in this experiment. So he's arguing for this pathway: he hyperpolarizes the SST cells, which removes inhibition from the PV cells, which lets them inhibit the E cells more effectively. This is a classic disinhibitory path. [00:34:46] So these are the two experiments: to explain one set of data we invoked this pathway, and to explain the other set of data we invoked that pathway. So how can cortex, is it possible to write down models that allow the system to toggle between these two? That's one question. So let's do this. I think if we want to start studying these circuits, we have to reckon with the circuit in its full complexity, with all the connections; we can't dissect the system this way and ignore all the other connections that are there. It's convenient to ignore them because then the narrative makes sense, but we have more than narrative, we have math, so we should be able to put this down in terms of mathematics. Here's a scenario where I have the recurrent model I showed at the beginning, with SST cells, and now I imagine perturbing that population so that there's a change in their firing rate, Delta R_S. I'm going to use my linear response tricks to say that the perturbation of the E rates is proportional to the perturbation of the SST rates; I just have to figure out what's in the bracket, the slope. So how am I going to do this? It's relatively straightforward, but you get cumbersome equations, so it's useful to decompose this in terms of path lengths within the network. A dominant component is the direct connection between SST and E cells; that's going to be a big player in determining the transfer of modulation between these populations. But there are also all the pathways from an SST cell to a pyramidal cell that involve two synaptic connections,
[00:36:27] and the ones that involve three synaptic connections, and four synaptic connections, and so forth; you get the whole infinite series of paths. The one thing mathematicians are good at is summing over things to give compact expressions; that's half of our day. [00:36:42] And when you do that, this is what pops out: a relatively compact expression that in principle involves, of course, all the connections, where W is the full connectivity matrix of the system. The overall magnitude is scaled by every connection, but the sign depends on only two motifs in the system, and these are exactly the two motifs my experimental colleagues inferred from their data sets: this was the one from Bernardo Rudy's data set, and this was the one from [00:37:17] Massimo's data set. They trade off against one another, and what determines who wins the tradeoff is the PV cells, which is somewhat non-intuitive: the PV self-connection, J_PP, determines things, and so too does beta_P. The betas, which I haven't explained yet, are the gain of each cell class, the slope of the firing-rate curve: if you think of some power-law response function for a neuron, as the firing rate gets larger, beta goes up; it's just the slope of that neuron's response around the operating point I'm linearizing about. So here, [00:37:59] if this direct pathway dominates, then we always get that this transfer is negative, like Hillel reported. But if we're like Bernardo Rudy's data set, where the disinhibitory pathway dominates, then depending on what beta_P is, if it's sufficiently large, we can have a scenario where the disinhibitory pathway wins, and when we optogenetically suppress the SST cells we see the E cells go down, like they saw. [00:38:52] And here we're going to work in the regime where the system is stable; if the system is unstable, I shouldn't be linearizing anyway, we shouldn't be linearizing the system at all. You're right that all of this is just looking at the perturbation; I haven't talked about whether the system goes unstable, and I'll do that next. It's important that none of this math be applied to a system that's unstable, because then perturbations run away.
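Here is a sketch of that path-sum bookkeeping, with illustrative weights and gains: the sum over paths of every length is a geometric series that collapses to a matrix inverse when the network is stable, and sweeping the PV gain beta_P flips the sign of the E-cell response to an SST perturbation, the tradeoff between the two motifs just described.

```python
# Linear response of an E / PV / SST circuit to an SST perturbation,
# computed both as a truncated path sum and as the compact matrix inverse.
import numpy as np

#                E     PV    SST      (columns: presynaptic population)
W = np.array([[ 1.0, -1.2, -0.4],   # onto E:   E->E, PV->E, SST->E
              [ 1.0, -1.0, -0.8],   # onto PV:  E->PV, PV->PV, SST->PV
              [ 0.6,  0.0,  0.0]])  # onto SST: E->SST only, no SST->SST
dI = np.array([0.0, 0.0, 1.0])      # perturb the SST population

for beta_P in (0.3, 0.9):           # PV gain at the operating point
    beta = np.diag([0.5, beta_P, 0.5])
    A = beta @ W
    assert np.abs(np.linalg.eigvals(A)).max() < 1   # stable: series converges
    # paths of length 0, 1, 2, ... form a geometric series = matrix inverse
    path_sum = sum(np.linalg.matrix_power(A, k) for k in range(200)) @ beta @ dI
    compact = np.linalg.solve(np.eye(3) - A, beta @ dI)
    assert np.allclose(path_sum, compact, atol=1e-6)
    print(f"beta_P = {beta_P}: E response to SST drive = {compact[0]:+.3f}")
# small beta_P: the direct SST->E motif wins (suppression, surround style);
# large beta_P: the disinhibitory SST->PV->E motif wins (layer-4 style)
```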
[00:39:21] So now let's look at these two pathways in isolation, and talk about the modulation controlling firing rates. Here's an example where we give some positive modulation to the SST neurons, so their firing rates go up. The green curves are the theory, the dots are numerical simulations of populations of integrate-and-fire neurons. The SST cells go up, naturally. The PV cells go down for a while, but then they switch and go up, and that's because of the inhibition-stabilized dynamics: eventually, if you [00:39:59] inhibit the PV cells a lot, you remove inhibition from the E cells, which lets them provide excitation back to the PV cells, and that's why the PV rates go up; the E cells' firing rates really go up. But now look at the gain, which here is the rate of change of the E-cell firing rate with respect to a change in the stimulus. The gain goes up for a while, it's non-monotonic, but you shouldn't trust this: there's a broken axis here, and this is the simulation data point and this is the theory, and they're way off, way, way off. So what's happening? If we scan along, down here we have [00:40:39] an asynchronous, low-activity state; here we start to get very high activity, as you would expect, rates are high, but the system starts to synchronize; and if we live out here, you get this pathological instability, the fixed point of the system is unstable, and you get this really strong rhythm, which is sort of silly if this is a model of cortex. [00:41:00] If instead we modulate along this other pathway, things are really nice. Here we're giving a negative input: imagine these SST cells firing at high rates and we're suppressing them; this would be like the VIP cells inhibiting the SST cells. We inhibit them as our modulation grows, and the rates smoothly go up together, the gain smoothly rises; we don't have unbounded gain, but it can change a lot. And if we sample the system along here, we get nice irregular, asynchronous firing throughout a large range of the gain. So we're saying that even though both of these pathways can manipulate the E cells, if you want to control their gain you should go through this pathway rather than that one. And that's another main point: we're going to charge PV cells with being the stabilizers of the circuit, and if PV cells are the stabilizers, you shouldn't make them the middlemen, the middle managers, for gain modulation. The gain modulation should circumnavigate the PV cells, or project equivalently to E and PV; you shouldn't ask the PV cells to be both a stabilizer and a modulator, because you can push the system into unreasonable regimes. So let's take this a little further and put both connections back. [00:42:24] Here I'm envisioning the network in a state with no stimulus; eventually I'm going to stimulate the system. The E cells are firing at lower rates, the PV cells at higher rates, and the SST cells at higher rates. A stimulus comes in and drives the E and PV cells to fire at higher rates, but here I haven't modeled the E-to-SST connection, I will in a second; that connection is absent, so the SST cells don't see anything. And now a modulator comes in, and the modulator, in this extreme scenario, fully suppresses the SST cells, and that removes inhibition from the E cells and the PV cells, and their firing rates go up.
So that's a scenario where a modulator increases the gain: we can see that the response to the stimulus goes up when the modulator is on. But now let's put the E-to-SST connection back. The first component is basically the same, because the E cells are barely firing, so who cares about this connection. Now the stimulus comes in, and the E cells feed that stimulus to the SST cells, so there's negative feedback: the SST cells fire at higher rates, and that SST activity suppresses these two populations, making them fire at lower rates than had that connection been absent. [00:43:50] And now the modulator comes in and knocks the SST cells down; this bar and this bar are the same, and this part and this part are the same. So we can see that with this connection present, with the E cells talking to the SST cells, a modulator that affects the SST cells can be more impressive: the change between here and here is larger than the change between here and here, and that's because the SST cell is part of the recurrent pathway that sees the stimulus. These are the games you can play with these sorts of circuits. And then, in the last couple of slides, let's talk about noise correlations in this circuit: the noise correlations between a pair of E cells in our circuit. [00:44:32] They have two sources. There are noise correlations due to the interaction between excitatory and PV cells: there's this recurrent loop, and the system is not perfectly balanced in this case, so you get noise correlations that are internally generated. This is basically the input correlation, and it gets mapped through the gain of the excitatory cells onto the excitatory cells, so it's gain squared times the correlation, and these A's are basically the autocorrelations of the E cells and the PV cells; if these were Poisson processes, they would equal the firing rates. So that's the internal correlation. There are also correlations due to the fact that SST cells project in a way that provides common input to the E cells, and that's this term here, which contains the two recurrent motifs we talked about earlier. So if we choose p1 to be zero, which is tantamount to saying the SST cells do not project to the PV cells, we've removed this projection, then the internally generated correlations, as a function of the modulator, go up, [00:45:54] and the external correlations go up for a while and then go down. The reason they go down is that once the modulator starts to really suppress the SST cells, you start turning them off, and eventually you turn off the shared input they provide; the shared input goes away, so this whole component is non-monotonic as well. What I want to say now is that if you add the SST-to-PV connection, then effectively you dampen this effect, because you're adding common input here, [00:46:32] but then those same common fluctuations go through the PV cells, inherit a minus sign, and come back and cancel. It's the same cancellation argument, but at the level of three cell classes: you have common fluctuations arriving directly and indirectly, the indirect ones inherit a minus sign, and those common fluctuations cancel. This can really be exaggerated if we put a connection from here to here: everything just gets squashed, because now I have cancellation mechanisms happening here, cancellation mechanisms happening the way I just said, and another cancellation mechanism happening here. So all these recurrent loops between excitatory and inhibitory neurons can support very, very weak correlations, in the same way we talked about with [00:47:18] Renart and colleagues' model, but extended to three classes of neurons.
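Here is a toy version of that three-class cancellation argument, with illustrative numbers, not the model's: common SST fluctuations reach a pair of E cells directly, with one minus sign, and through PV, with two, and when the SST-to-PV projection is present the two routes nearly cancel, squashing the E-E correlation.

```python
# Direct vs. disinhibitory routing of a common SST fluctuation to two E cells.
import numpy as np
rng = np.random.default_rng(1)

T = 200_000
s = rng.standard_normal(T)             # shared SST rate fluctuation
w_ES, w_EP, w_PS = 1.0, 1.2, 0.8       # |SST->E|, |PV->E|, |SST->PV|

def net_input(sst_to_pv):
    """Common input to an E cell, plus private noise."""
    pv = -sst_to_pv * s                # PV inherits the SST input, minus sign
    common = -w_ES * s - w_EP * pv     # direct (-) and disinhibitory (+) routes
    return common + rng.standard_normal(T)

for label, p1 in [("no SST->PV  ", 0.0), ("with SST->PV", w_PS)]:
    e1, e2 = net_input(p1), net_input(p1)
    c = np.corrcoef(e1, e2)[0, 1]
    print(f"{label}: E-E correlation = {c:+.3f}")
```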
So the take-home message is that PV neurons really act as the stabilizer. And what's a stabilizer supposed to do? The danger in cortex is this connection, the recurrent excitatory connection; this is what wants the system to run up and up, and inhibition's job, for stabilization, is to track and cancel that recurrent excitation. So if you want to be a stabilizer, you have to, first, inhibit the E cells a lot, [00:47:52] and you have to receive input from them, to know when they're running away. But a stabilizer should also be subject to the stabilization it provides, and that means the PV cells have to self-inhibit: when the E cells are going lower in firing rate, the PV cells should follow, and the easiest way to do that is to receive the same inputs. Basically, the E cells get E input and they get PV input, and the PV cells have the exact same connections. That's what you want if you want to track and stabilize: have the exact same input as the thing you're trying to cancel. SST cells don't do that; they don't have self-connections. [00:48:32] And if you want to be a modulator, the worst thing you can do is have a self-connection: if you're a middleman for some modulatory signal, why would you receive a modulatory signal and then inhibit yourself? That just lowers the gain of that modulation; it doesn't make sense. SST cells are known not to connect to one another, so they're perfect for modulation, and their modulation can exist over a large range of modulatory states, so long as the PV cells are taking the job of the stabilizer. If you try to modulate with just SST cells, it's not going to work over a broad range; somebody has to stabilize the system. So there's this division of labor that allows cortex to have a robust solution over a wide range of states in an easy way. That's basically my take-home message. So hopefully I haven't broken these chains, but I've weakened them, to maybe rope chains or something, [00:49:28] where effectively stability and gain are conducted by different interneurons; we can increase stability and suppress noise correlations if we have SST cells projecting to both E and PV cells; normally stability and gain are opposing frameworks, with noise correlations as a measure of stability, but here we can decouple them, and we can amplify gain as well.
[00:49:53] So, just to tell you who did what: most of this was spearheaded by Hannah, who inherited an initial model from Ashok and Rob, and I didn't actually talk about some of the other work here. This is in collaboration with Anne-Marie Oswald at Pitt. I'll take your questions, thanks. [00:50:40] Right, so why are VIPs the middleman? We do know, from my initial plot, that the VIP cells are inhibited by the SST cells, so you're right, it would be kind of silly to have VIP cells just provide feedforward inhibition to SST cells and have the SST cells do all the work; there is this interaction loop. So one conjecture would be, and Adesnik has published data roughly on this, that if there are different subgroups or assemblies of excitatory neurons, they might share similar assemblies of SST cells and VIP cells, and this inhibition of inhibition [00:51:18] is a good way to get winner-take-all dynamics. If you have multiple modulatory signals coming into cortex, this VIP-SST interaction could mediate a winner-take-all and allow one of the two modulators to win. Whenever I see inhibition onto inhibition, I tend to think winner-take-all, so one low-hanging-fruit conjecture is that it allows the system to toggle between two different modulatory states. [00:52:03] Yeah, that's right. It's an applied mathematician's lemma: how small is epsilon, right? So I guess the general philosophy is, if you can explain a lot without appealing to what you're describing, then I'm happy; I'm happy with linear models because I can understand them and make predictions. If there were sets of data I couldn't explain with a linear model, that would preclude me. But that's not really answering the question you're asking, which is: do these intuitions hold over to larger fluctuations? [00:53:28] That would be something someone could do numerically. Our theory and simulations wouldn't match, but we could still ask, because a lot of what I was talking about was qualitative: firing rates went down, gain went up; those are qualitative behaviors, so it's possible they would persist for larger and larger fluctuations, because I'm not really asking for quantitative things so much as qualitative ones. So we could try that; we haven't tried to push [00:53:54] outside of our linear regime, to push the fluctuations larger. That would be the direct way to answer the question: pick a phenomenon and ask whether it persists when the fluctuations get larger. Until we do that, I guess I can't answer your question. [00:54:22] Here. So as my modulator changes, I'm always analyzing my system with linear techniques, but the point around which I'm linearizing changes as a function of the modulator. So I am exploiting the nonlinearities; if I had a truly linear system, none of this would change. I'm exploring my nonlinearities by changing the operating point. So in machine learning, you're specifically talking about the threshold nonlinearity in deep networks? [00:55:03] Sure, sure. Those sorts of nonlinearities suppress unwanted noise and allow signals to come through, and we saw that in this example here: when my modulator is on and drives my SST cells,
that's when they shut completely off, so I need a nonlinearity there to make them go to zero and not negative. So there's a nonlinearity that is intrinsically playing a role here, in that I'm basically saying a modulation can completely silence these cells. [00:56:02] Right, so the timescales of my model will change as we sample this, because I'm changing the linearization point. I haven't looked at how these modulations change timescales yet, just static responses, so that could be something interesting: asking how the nonlinearity that's inherent here changes the timescale of response in the modulated state versus the unmodulated state. The operating point changes, the eigenvalues change, the timescales change; how they change is a good question. [00:56:40] And a follow-up, one last one. Yes, well, the intuition, the take-home message here, was focused on the lack of a connection here and the presence of a connection here; that's basically what I would focus on. The fact that there are strong PV-to-PV connections suggests they're stabilizers, and the lack of them for SST cells suggests they're not, and hence are modulators. So as people get me more and more data, I always tell my experimentalist friends: when you're building up a connectivity matrix, my favorite number in the connectivity matrix is zero. [00:57:27] Find the lack of connection for me, and then I can start building. If the data tell me everything is connected to everything, then, one, it would be a very simple cortex, and two, it's not going to be an interesting theory. So as I start getting more and more data sets, I'm going to want to find the absences of connections, because the key insight that allowed us to push this was the lack of a connection here and the presence of a connection there. [00:57:59] Yeah, as long as, as I add more neuron classes, the number of edges I'm adding to my graph, it would be great if it scaled in a way that wasn't N squared. I don't know if that's what's going to happen; if it does, I could have a job for another five years.