[00:00:05] >> Welcome, everyone. I'm Simon, and welcome to the neuro seminar series here, with Will. Today we have a special treat. Professor Grace Leslie is a relatively new professor here at Georgia Tech; she's been here for a little more than two years now. She's in the School of Music, but her training is very interesting, crossing from music performance to the sciences. She got her PhD at UC San Diego, where she worked at the Swartz Center, and she's done a whole bunch of amazing things since then. She worked as a researcher at IRCAM in Paris, which is this scientific institute for the study of avant-garde electronic music. She's a performing flutist, she calls herself an electronic dance improviser, which is very cool, and she looks at how she can integrate brain-computer interfaces both into her performance and into her research in neuroscience. She worked at the interdisciplinary [00:01:06] Neukom Institute for computational science at Dartmouth, where she worked with people at the Media Lab at MIT, and she was studying how you can expand and enhance the expression of music through BCIs. So it's going to be a great opportunity to hear about the diversity of neuroscience and its interfaces here at Georgia Tech. So, Dr. Leslie, please take over for us. [00:01:32] >> Thank you so much, Simon. This is definitely a special treat for me. As a music technology professor, I tend to live in the world of applications of neuroscience concepts, seeing how we can apply the little that we know about the mechanisms behind how some of these theories work to things like music creation, and to understanding a little bit more about music perception. So this is a real treat for me, to branch out to all of you and see if we can get some conversation started about understanding a little bit more of what is happening in the projects that I'm going to be describing to you today. The title of my talk today is "Inner Rhythm: Music Technology Informed by the Brain and the Body." I'm visiting you from the Brain Music Lab at Georgia Tech, which is housed in the Couch Building in the School of Music, where we have an EEG lab set up, a 64-channel actiCHamp system with a recording sound booth. We use that technology, and also consumer-grade EEG, heart rate monitors, and breathing monitors, to send live physiology signals to software that we can then use to analyze how people respond to music that they're listening to, or to take those signals and turn them into music, so that people can hear [00:03:18] the changes going on in their physiology. So we also investigate biofeedback software that we can create using music. I took the title of my talk from a quote by Edgard Varèse, who was one of our forefathers in the field of experimental music. He wrote: "I dream of instruments obedient to my thought and which with their contribution of a whole new world of unsuspected sounds, will lend themselves to the exigencies of my inner rhythm." For Varèse, back then, that meant developing new kinds of technology to incorporate new kinds of sounds into the orchestral compositions he was creating at the time. But I would like to think that if Varèse were working with this creative concept today, knowing [00:04:20] what we do about brain signals and how we can record them in real time and turn them into sound, he would be thinking about work similar to what we're doing in our lab. So, almost 100 years later, we're taking this concept and trying to reveal what's happening with these inner rhythms by converting brain and body signals into sound.
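The lab work described above starts with sending live physiology signals to software. Here is a minimal sketch of what that streaming step can look like, assuming a Lab Streaming Layer (pylsl) EEG source is already being published on the network; the stream type and the bare printout are placeholders, not the lab's actual pipeline.

```python
"""Minimal sketch: pull live EEG samples from a Lab Streaming Layer
stream, the kind of real-time physiology feed described above. Assumes
an acquisition program is already publishing a stream of type 'EEG'."""
from pylsl import StreamInlet, resolve_stream

# Find an EEG stream on the local network and open an inlet to it.
streams = resolve_stream('type', 'EEG')
inlet = StreamInlet(streams[0])

# Pull a short run of samples; a music system would hand each
# multichannel sample (plus its timestamp) to analysis/mapping code.
for _ in range(1024):
    sample, timestamp = inlet.pull_sample()
    print(timestamp, sample)
```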
[00:05:32] So in this field of research that I'm going to be describing to you today, I'm working with all of the different phases of the feedback loop that you see here. We're developing methods to stream physiology data in real time to custom software, and that software is tasked with finding ways to classify and describe what is going on in the participant's body, in terms of their cognitive state and their affective state. We then take those classifications that we develop and find ways to control algorithmic music and video software based on the measurements that we're taking, and we can actually use the way that we map these classifications to the audio and video to try to influence the participant's cognitive and affective state. So there's the whole feedback loop, and what I'm going to be describing to you today [00:07:12] is a number of projects, each of which encompasses one, two, or even three parts of this loop. All along the way you'll see a common theme: we're looking at the shared mathematical properties that exist between the physiological signals that we're measuring from the brain and the body and the audio signals that the participant is perceiving in time. You can see an example here. On the left we have a spectrogram of the Alban Berg Violin Concerto, where we're looking at what the [00:07:55] frequency content is over time, and on the right we have the EEG readout from a participant who is opening and closing their eyes over time. And since I'm a teacher, I will ask the crowd: can we think of a couple of ways that we're seeing some shared physical properties in these two signals? You can write that in the chat window if you're too shy to call it out, and it's okay to think creatively. [00:08:46] I'm going to call on some of the names that I recognize. Okay, I'm asked to repeat that: I'm showing these two spectrograms here. On the left we have a spectrogram of the music that somebody is listening to, and on the right we have a spectrogram from an unrelated participant [00:09:08] who is opening and closing their eyes in time, and I would like to ask you if you see any similarities in this data, in the signal properties that we see in these spectrograms. Someone asks whether the participant is blinking in time to the beats; this is actually not [00:09:35] right. Okay, so Christopher says "rhythm", so yes: we can actually extract from this spectrogram what the tempo of this violin concerto is, by looking at how many seconds elapse between these high-amplitude vertical stripes, and we can see the rhythm in the alpha register as well, which is what [00:10:08] Mr. Wu has said. So yes, in addition to this vertical, temporal information that we can extract from both of these spectrograms, we can also extract spectral information by looking at the horizontal stripes. We can see not only the relationship between the events that are happening in time along the x axis, but also the relationship between the various frequency components in our signal. There are a number of harmonics of our fundamental frequency, which are the evenly spaced stripes in the left spectrogram, but we can also see harmonics in our EEG signal on the right. When somebody opens and closes their eyes, we register that as a large amplitude shift in the alpha range, in this case centered around 11 Hz, and we can see an indication of that in the 22 Hz range as well, so we're actually seeing an integer multiple of our fundamental frequency occurring there.
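To make that concrete, here is a small sketch of the same spectral observation on synthetic data; the 11 Hz rhythm, its weak 22 Hz harmonic, and every parameter below are stand-ins for a real eyes-closed recording.

```python
"""Sketch: locate the alpha peak in an EEG power spectrum and check for
power at an integer multiple of it, as in the eyes-closed example above.
Synthetic data; a real pipeline would load a recorded EEG channel."""
import numpy as np
from scipy import signal

fs = 250  # assumed EEG sampling rate in Hz
t = np.arange(0, 30, 1 / fs)

# Synthetic "eyes-closed" EEG: an 11 Hz alpha rhythm plus a weaker
# 22 Hz harmonic (alpha waves are not pure sinusoids) and noise.
eeg = (1.0 * np.sin(2 * np.pi * 11 * t)
       + 0.3 * np.sin(2 * np.pi * 22 * t)
       + 0.5 * np.random.randn(t.size))

# Power spectral density via Welch's method.
freqs, psd = signal.welch(eeg, fs=fs, nperseg=2 * fs)

# Find the fundamental within the alpha band (8-13 Hz)...
band = (freqs >= 8) & (freqs <= 13)
f0 = freqs[band][np.argmax(psd[band])]

# ...and compare power at the fundamental and at its first harmonic.
p1 = psd[np.argmin(np.abs(freqs - f0))]
p2 = psd[np.argmin(np.abs(freqs - 2 * f0))]
print(f"fundamental ~{f0:.1f} Hz, harmonic/fundamental power = {p2 / p1:.2f}")
```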
This is just an extremely simple example, but it illustrates the fundamental underlying assumption that we take to a creative extreme in all of the works I'm going to show you: we can start to think creatively about interchanging the sonic signals that we have over time and the [00:12:11] inner rhythms that we can extract from the brain and body signals that we're recording. As a first example, I want to describe some of the first work that I did as a graduate student, which encompasses the first part of this feedback loop: setting up a controlled experiment, playing musical samples to participants, and then trying to see what we can extract about their affective and cognitive state from that experiment. [00:13:00] In this experiment, we played musical samples to our participants and asked them to move their hands expressively to the music in a very simple conducting motion; you'll see in this animation what that looks like. We connected a motion capture system to our participants' hands and played back this real-time animation for them on a large TV screen in front of them, and we gave them the task of imagining that they have a friend in the next room [00:13:41] who cannot hear the music that they're listening to, but who is really longing to know what the music is making them feel inside. What we found was that, just by tracking this single point-light animation of our participants' movements, an internet audience who saw these animations afterward was able to correctly classify what emotion was in the music sample that the participant was trying to express. They were also able to determine whether the participant was fully engaged in the music they were listening to, or whether they were [00:14:33] distracted by a dual task, which in this case was performing a difficult math problem in their head at the same time.
When we compared the two different sets of expressive movement performances, that is, the trials where the participants were fully engaged in the music and very attentive to what their friend was actually experiencing from their expressive movements, against the trials where they were performing the dual task, we saw a difference in an independent component that was elicited at the beginning of each one of these expressive movement cycles. This came through as a burst of synchronization in the temporoparietal junction area on the right side of the brain, and related research has shown that this area of the brain becomes particularly active during tasks that require theory of mind, or basically placing your mind into the mind of someone else. So we saw an indication here that the [00:16:06] synchronization leading into each expressive movement was integrating motor planning with this theory-of-mind process: the idea of expressing something internal through a cyclical movement was actually tied to the motor planning of that movement itself. [00:16:37] At the same time that I was performing this work, trying to see how we can describe and classify internal affective state during music engagement, I was also tasking myself with figuring out how we might take those judgments that we make of people who are listening to music, and what they're experiencing internally, and translate that into creating new music. [00:17:07] At the time, my colleague Tim Mullen and I found a way to create a model in the lab of high focus, or high engagement, in music listening, and we used it to control an algorithmic composition engine. All we had available to us then was a single-channel NeuroSky headset, so this is showing the original instantiation of this project, from around 2010. This is a very classic example of the classic BCI approach, where we try to classify what is going on for a user and what their intent is, [00:18:02] and we translate that into some kind of feedback, so that over time the user can learn how to control a cursor on the screen, or the way that a musical composition unfolds. The performance [00:18:28] example that I played for you at the beginning of this talk took a very similar approach, using this exact same EEG calculation to control music generated on a very complicated synthesizer program. What I found was that this kind of approach, where I go on stage as a performer and make these kinds of [00:19:06] classifications of what I am experiencing cognitively or affectively on stage, and then translate that into music, actually became a very limiting thing to do. I would end up building a model of my experience in the lab so that, using alpha biofeedback, I could control my level of focus and create changes in a musical composition on stage, and that did not give me a lot of expressive bandwidth to work with when I was on stage. I ended up finding that, as a musician, it is a lot more satisfying to bypass this classification stage altogether.
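For concreteness, the kind of alpha-focus calculation behind that classic mapping can be sketched as a toy, assuming a single EEG channel; the band edges, the names alpha_index and to_tempo, and the tempo range are invented for illustration, not the actual system's code.

```python
"""Toy sketch of a classic alpha-biofeedback mapping: estimate relative
alpha power in a sliding window and map it to one musical control value.
Illustrative only; not the code of any system described in the talk."""
import numpy as np
from scipy import signal

FS = 256  # assumed headset sampling rate

def alpha_index(window: np.ndarray) -> float:
    """Alpha (8-13 Hz) power as a fraction of 1-40 Hz broadband power."""
    freqs, psd = signal.welch(window, fs=FS, nperseg=min(len(window), FS))
    alpha = psd[(freqs >= 8) & (freqs <= 13)].sum()
    broad = psd[(freqs >= 1) & (freqs <= 40)].sum()
    return float(alpha / broad)

def to_tempo(index: float, lo: float = 60.0, hi: float = 120.0) -> float:
    """Map relaxed (high alpha) to a slower tempo, focused to a faster one."""
    return hi - (hi - lo) * float(np.clip(index, 0.0, 1.0))
```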
[00:20:00] Rather than classifying cognitive state, I now work with algorithms that take the raw data, perform the necessary filtering to get rid of muscle artifacts and other kinds of time-series information that we don't want to hear in the signal, and then find ways to more or less convert that raw time series directly into music. [00:20:27] With that algorithm I basically started a new kind of music performance practice: I go up on stage and, instead of using a brain-computer interface where I've trained myself to control certain indices in the software, I have developed ways to control lower-level EEG features, and to have the information that comes through in the spectral and temporal structure translate more directly into the sound signal. I'll play an example of what that sounds like. [00:21:15] [music plays] I can describe a little bit of what's going on musically in this sample. You can see that, even compared to the previous brain music performance that I showed you at the beginning of this talk, there is relatively little movement here. There's a limit to how much muscle artifact I can take out of my signal in real time, and that forced me to learn how to perform without moving very much, or to move in very specific ways that I know are not going to interfere with my EEG. [00:22:43] You can also hear that, as far as the music goes, this has a very relaxing, calming affect. The limitations placed on me as a performer, in needing to have the ability to influence the EEG data that I'm sending to the software, actually force me to play music that sounds much slower, much more contemplative; all of the [00:23:12] events are spread out in time, to provide me with the space to be able to have this kind of control over the signal. And so as a performer this ended up being a much more satisfying system to work with. I knew that I had come up with something that was musically unique, because it made the music sound a very particular way in response to how the algorithm had been engineered. [00:23:59] Okay, this is a shameless plug for my music website, where you can stream all of this music for free if you would like to hear it; this is my page on Bandcamp. I'm going to shift themes a little bit here, because for me the most satisfying part of this work is trying to think ahead into the future and see how [00:24:29] the kinds of engineering that I've been able to do in service of my creative impulses as a musician might actually be useful in a clinical setting. I had the chance to work with a few epileptologists and neurologists at Dartmouth-Hitchcock when I was a fellow up there, to see, in one case, how I might be able to take the exact algorithm that I developed as a performer and apply it to the diagnosis of different seizure types. [00:25:17] This is the algorithm in a simplified nutshell: we take the spectrum of the incoming EEG signal, we take the spectrum of incoming sound (or sound stored on a computer hard drive), and then we multiply those two spectra, which is equivalent to convolving the two signals in the time domain. What we end up getting is a combined EEG-sound, which almost sounds like we're smashing together the spectral and time-frequency information from the EEG and imprinting it onto some kind of sound.
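A minimal sketch of that cross-synthesis step, assuming one frame of EEG and one frame of audio; a real system would window successive frames and overlap-add the outputs, and the normalization here is just to avoid clipping.

```python
"""Sketch of the spectral cross-synthesis described above: multiply the
spectra of an EEG frame and an audio frame (equivalent to circularly
convolving the frames in the time domain), then resynthesize."""
import numpy as np

def cross_synthesize(eeg_frame: np.ndarray, audio_frame: np.ndarray) -> np.ndarray:
    """Imprint the EEG frame's spectrum onto an audio frame."""
    n = min(len(eeg_frame), len(audio_frame))
    eeg_spec = np.fft.rfft(eeg_frame[:n])
    aud_spec = np.fft.rfft(audio_frame[:n])
    # Multiplying spectra == circular convolution of the time-domain frames.
    out = np.fft.irfft(eeg_spec * aud_spec, n)
    return out / (np.max(np.abs(out)) + 1e-12)  # normalize to avoid clipping
```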
In the case of my performance work, I use vocal sounds that I'm recording in real time, but we can use any kind of sound for this. In the work that I did in the hospital, we also applied spatial filtering: we looked at what the 10-20 location was for every channel that we recorded from a patient's EEG while they were undergoing monitoring in the hospital, and we applied spatial filtering so that when you heard the resulting sound on a pair of headphones, you would actually hear each channel of the EEG coming from a similar location [00:26:48] as that channel's position on the participant's head. So if the EEG was recorded from a frontal channel, you would hear that sound coming from the front of your head. We found that when we played these resulting sounds to people with [00:27:17] seizure identification training, and to people without, we had a surprising level of efficacy in how well they were able to distinguish seizure activity from background activity in the EEG, and they were also able to classify different subtypes of seizures. So I think that this could be a useful clinical tool. In hospitals, people with seizure classification training are paid to watch a video EEG readout over time while a patient is being monitored, and the one thing that the auditory system is very good at identifying is when time-series data starts to change from some kind of background rhythm. [00:28:19] So in a lot of ways, being able to identify seizure activity through the ear might actually be a more efficient way to do this than looking at a visual readout. This is a technique that I have proposed to the epilepsy community; perhaps in the future it might be a useful thing to use.
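The spatialization step mentioned above can be sketched as simple constant-power stereo panning by electrode position; the azimuth table below is a rough stand-in for real 10-20 geometry, and true binaural rendering would use head-related transfer functions instead.

```python
"""Sketch: pan each sonified EEG channel to a stereo position that
roughly matches its 10-20 electrode location, so that, e.g., left
temporal channels are heard on the left. Positions are illustrative."""
import numpy as np

# Approximate left-right positions for a few 10-20 channels, in [-1, 1].
AZIMUTH = {"F3": -0.5, "Fz": 0.0, "F4": 0.5, "T7": -1.0, "T8": 1.0}

def pan(mono: np.ndarray, channel: str) -> np.ndarray:
    """Constant-power pan of one sonified channel; returns (N, 2) stereo."""
    theta = (AZIMUTH[channel] + 1.0) * np.pi / 4.0  # [-1, 1] -> [0, pi/2]
    return np.column_stack([mono * np.cos(theta), mono * np.sin(theta)])

def mix(channels: dict[str, np.ndarray]) -> np.ndarray:
    """Sum panned channels {name: mono sonification} into one stereo mix."""
    return sum(pan(sig, name) for name, sig in channels.items())
```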
[00:28:48] I'm also going to talk today about a few projects that involve the other side of the feedback loop that you see here: how can we take the kinds of assumptions that we have about music, knowing the way that the auditory system works, and engineer new kinds of music to elicit particular kinds of physiological signals in the brain or in the body? During the same fellowship that I mentioned before, I was working with Dr. Barbara Jobst, who is one of the inventors of the NeuroPace [00:29:29] electrical stimulator, which is designed to be implanted in the brain, detect seizure activity in an epilepsy patient, and then produce a kind of flat square wave of electrical stimulation at 60 Hz to interrupt that seizure activity. As an electronic music composer, when I learned about how this technology works, I thought: this is very similar to the way that I work with square waves in sound. And I knew that as listeners we generate auditory evoked potentials in the brain when we hear musical sounds, so I thought, even if mostly as a creative idea, it would be neat to see what happens in the brain when we produce these kinds of regular auditory pulses, and to see if we can actually entrain beneficial stimulation. Now, at the same time, I had been a postdoc at the MIT Media Lab, where our colleague Annabelle Singer was a postdoc working with Li-Huei Tsai. They had [00:30:51] figured out that you can create a train of visual stimulation at a gamma-band frequency, and that it can actually create beneficial changes in Alzheimer's pathology in mice. I was working in the Affective Computing group a few doors down when I heard about this research, and I instantly thought: well, we have to find some way to [00:31:23] see if this can work in sound as well. So when I moved out to Dartmouth, we started these experiments where I was engineering musical sounds, and I can play an example of what one of these sounds like for you. [00:31:52] [music plays] This sound is engineered with a fundamental at about 40 Hz, and the amplitude of the harmonics is also modulated at 40 Hz. This is actually one of the more pleasant stimuli that we used in our first experiments, which we published last year. In our results we were able to show, first, that this entrainment is actually happening in the brain; this is an example of what we're seeing in our intracranial data here, where we see a 40 Hz spike in response to these 40 Hz engineered tones. But more interestingly, Robert Quon, a PhD [00:32:42] student there, applied one of the template-matching filters that they've developed in the lab to count how many IEDs, or interictal epileptiform discharges, we see in the patients' intracranial data while they're being stimulated with these sounds, versus the baseline periods when they're not. We found that for the patients who tended to exhibit very high numbers of these spikes in their data, [00:33:27] this stimulation actually reduced those spikes. So it showed that there was an effect of this stimulation on one of these markers of epileptiform activity in the brain. What we're doing right now is investigating different forms of stimulation, trying to figure out what the best kind of signal is to use, and whether [00:33:59] other frequencies besides gamma have a greater or lesser effect for these patients. During my time at the Media Lab, I was also really interested in seeing how we might be able to affect somebody's breathing, and in particular whether somebody needs to attend to that signal very strongly or not. We ran an experiment for which I produced music that [00:34:32] imparts the feeling of really idealized breathing. We know from previous research that breathing six breaths per minute creates the most ideal restful state, so I produced music that breathed at six breaths per minute and sounded something like this. [music plays]
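Both kinds of stimuli lend themselves to a simple synthesis sketch. Here is a rough illustration of the two ideas just described, a 40 Hz amplitude-modulated harmonic tone and a sound whose loudness swells at six breaths per minute; the carrier frequencies, durations, and file names are invented for illustration, not the published stimuli.

```python
"""Rough sketch of the two engineered stimuli described above:
(1) a tone with a ~40 Hz fundamental whose harmonics are amplitude-
modulated at 40 Hz (gamma entrainment), and (2) a sound whose loudness
"breathes" at 6 breaths per minute (0.1 Hz). Illustrative only."""
import numpy as np
from scipy.io import wavfile

fs = 44100                    # audio sampling rate
t = np.arange(0, 10, 1 / fs)  # ten seconds of signal

# (1) Gamma stimulus: 40 Hz fundamental plus harmonics, AM at 40 Hz.
carrier = sum(np.sin(2 * np.pi * 40 * k * t) / k for k in range(1, 6))
gamma = 0.5 * (1 + np.sin(2 * np.pi * 40 * t)) * carrier

# (2) Breathing stimulus: a 220 Hz tone under a slow 0.1 Hz envelope.
envelope = 0.5 * (1 - np.cos(2 * np.pi * 0.1 * t))
breathing = envelope * np.sin(2 * np.pi * 220 * t)

# Normalize and write each stimulus to a 16-bit WAV file.
for name, sig in [("gamma_40hz.wav", gamma), ("breathing_6bpm.wav", breathing)]:
    sig = sig / np.max(np.abs(sig))
    wavfile.write(name, fs, (sig * 32767).astype(np.int16))
```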
[00:35:06] We piped this music into our experiment chamber while subjects were doing a really boring reaction time task, and we told them that the music was in there simply to provide a little bit of stimulation for them, because the task was going to be so boring. We recorded breathing, we recorded galvanic skin response, and we recorded EEG during this reaction time task, and we found that there was a very strong effect on their breathing, as compared to baseline, once they were introduced to this music. We also found that it was very important to personalize the [00:35:56] tempo of the way this music was behaving to their own natural baseline rhythm: if we introduced music that was breathing at 75 percent of their normal rate, they actually slowed their breathing down more. We were really interested in knowing, as a design principle, whether it makes sense to continue personalizing these kinds of biofeedback [00:36:23] systems, or whether it would be just as effective to simply play a CD of relaxing spa music, which would be the case of the orange bar that you see here. Personalizing it did affect them more, and we saw a similar effect in the galvanic skin response data, and also in the ERP elicited by the contingent negative variation task that we had given them, which showed that they were focusing more on the task; we saw it in their heart rate as well. [00:37:05] Here is a third example of how we might be able to use musical sounds to affect the body. My PhD student Mike Winters, who graduated just last spring, was interested in the case where music performers actually play their heartbeats to the audience. I know that, for myself personally, playing my heartbeat to the audience has often gotten the most [00:37:36] feedback from them, where they basically say that it is a very powerful experience, and some people have mentioned empathy as a possible mechanism for that. So Mike wanted to investigate that hypothesis further, to see whether hearing other people's heartbeats has an effect on one's own heartbeat, and also on any empathy involved in that process. I'll just play this quick example for you. I'm going to play the heartbeats of two different people, and I want to ask you which of these people you would rather sit down to have coffee with. [00:38:30] This is the person on the right. This is the person on the left. Maybe Joyce can't hear the heartbeat audio; did anybody else have trouble hearing it? Okay. What you can hear is that on the left, somebody has a much slower, regular heartbeat, [00:39:08] and on the right there is a much faster, erratic heartbeat. Garrett says that he likes the person on the left, and that was the answer that I was going for. So you can actually gather a lot of information about a person just by listening to their heartbeat, and you are probably gathering that as well. Mark is asking for a replay, so okay: I'm going to play the person on the left here. And then this is the person on the right. And Joyce said that they can hear it, but just had to turn up the volume.
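Here is a hedged sketch of how heartbeat stimuli like these two can be synthesized: a "lub-dub" pulse train with a rate knob and a regularity knob, so the beat can be made slow and steady or fast and erratic. The burst frequencies, decay, and jitter model are invented for illustration, not the experiment's actual stimuli.

```python
"""Sketch: synthesize heartbeat stimuli with a controllable rate (bpm)
and regularity (jitter), like the calm vs. erratic examples just played.
All parameter values are illustrative."""
import numpy as np

def heartbeat(bpm: float, jitter: float, seconds: float = 10.0,
              fs: int = 44100) -> np.ndarray:
    """Return a mono signal of synthesized 'lub-dub' heartbeats."""
    def thump(freq: float) -> np.ndarray:
        # One heart sound: a short, decaying low-frequency sine burst.
        tt = np.arange(0, 0.12, 1 / fs)
        return np.sin(2 * np.pi * freq * tt) * np.exp(-30 * tt)

    out = np.zeros(int(seconds * fs))
    rng = np.random.default_rng(0)
    t = 0.0
    while t < seconds - 0.5:
        for onset, freq in [(0.0, 60.0), (0.18, 45.0)]:  # "lub", then "dub"
            i = int((t + onset) * fs)
            s = thump(freq)
            out[i:i + s.size] += s
        # Next beat: mean interval from bpm, perturbed by the jitter knob.
        t += max(0.3, (60.0 / bpm) * (1 + jitter * rng.standard_normal()))
    return out

calm = heartbeat(bpm=60, jitter=0.02)      # slow and regular
erratic = heartbeat(bpm=100, jitter=0.25)  # fast and erratic
```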
[00:40:23] These are actually the exact signals that Mike used for his experiments. They are heartbeats that he synthesized; similar to the breathing work that you saw before, they are synthesized to impart a particular personality. Mark says the right one is him when he opens the grant notification letter. [00:40:49] And Garrett says the person on the right feels more like him, and he doesn't want to have coffee with himself. Okay. So we can see how much personality information we get from listening to this very simple body signal. Mike wanted to look at what the interaction here is with empathy, so we used the classic empathy-measuring task called Reading the Mind in the Eyes; I'm sure most of you have seen it before. He was attempting to look at two different kinds of empathy. On the left we have what is called cognitive empathy, the ability to give a correct name to what somebody else is feeling. In this task there is a ground truth, in that each of the actors was asked to portray a particular feeling, so we can score how correct our participant was in rating what the actor was feeling. [00:42:01] On the right we have what is sometimes called affective empathy, which Mike ended up calling emotional convergence: the idea of how much you're able to connect with that person and feel what they are feeling. So these are two different forms of the phenomenon that we call empathy. [00:42:26] On the left is how well we're able to put a name to what that person is feeling (thank you, David), and on the right is how much we open up the information channel and are actually able to feel what that person is feeling. We gave them this task of judging these eyes while listening to only the audio, while seeing only the eyes, and while seeing the eyes and listening to the audio, and we gave them slow heartbeats and fast [00:43:03] heartbeats as well. So there is a fairly complicated experiment design here, but what we found was that the presence of the heartbeat changed the way the participants rated these actors' eyes, so it had an effect on cognitive empathy. Affective empathy also increased in the presence of the visual stimulus: there was no difference between the visual-only and the audio-only conditions, but when we introduced the heartbeat sound while [00:43:46] somebody was viewing those eyes, we found a large increase in the reported feeling that they were feeling what that person was feeling. We also saw an effect of whether the heartbeat speed was congruent with the ground-truth emotion of those eyes: a mismatch in that congruency very often changed how the person rated the feeling that the eye stimulus was portraying. And in addition, when the heartbeat sound was concurrent with the visual stimulus, we had a large increase in the affective empathy rating. [00:44:41] There was an effect on the participants' heart rate as well: when they heard the heartbeat in the audio-only condition, their heart rate was slower during the slow stimuli and faster during the fast stimuli.
These two plots, however, show that we got a more significant result when we looked at only the first half of the trials, so this was a kind of short-lived effect; it only lasted for about 30 seconds, but we did see an effect there. And when we compared people who measured [00:45:24] as more empathetic on a personality test, we actually showed that they had a larger heart rate shift during this kind of heartbeat listening. The most interesting part of this for me was when I looked at the heartbeat-evoked potential in the EEG. There is a peak that we can look at when we time-lock our data around the participant's own heartbeats, and that heartbeat-evoked potential is thought to index somebody's interoception at that time, which is basically the degree to which they are aware of their own bodily processes. We found that when we introduced this audiovisual stimulus, as compared to the visual-only one, we saw a drop in the independent component that we attributed to this heartbeat-evoked [00:46:33] potential, showing that there is a reduced peak there, and possibly that we have less of this interoception happening when you are hearing somebody else's heartbeat. To take this one step further with a bold interpretation: it basically suggests that when we're listening to somebody else's heartbeat, we are putting our own mind into that person's body and experiencing their heartbeat as our own. We are paying less attention, and are less aware, as to what's going on in our own body when we are more aware [00:47:16] of what is going on in somebody else's body.
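The heartbeat-evoked potential she describes is essentially an epoch-and-average computation, time-locked to each heartbeat. A minimal sketch, assuming R-peak times have already been detected from a simultaneous ECG; the window bounds and function name are illustrative.

```python
"""Sketch: compute a heartbeat-evoked potential (HEP) by averaging EEG
epochs time-locked to each heartbeat (R-peak). Arrays are placeholders
for a real EEG channel and real detected R-peak times (in seconds)."""
import numpy as np

def hep(eeg: np.ndarray, r_peaks_s: np.ndarray, fs: int = 250,
        tmin: float = -0.2, tmax: float = 0.6) -> np.ndarray:
    """Average EEG around each R-peak; returns the HEP waveform."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for t in r_peaks_s:
        i = int(t * fs)
        if i - pre < 0 or i + post > len(eeg):
            continue  # skip beats too close to the recording edges
        ep = eeg[i - pre:i + post]
        epochs.append(ep - ep[:pre].mean())  # subtract the pre-beat baseline
    return np.mean(epochs, axis=0)
```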
So I'd like to wrap it up here, but I hope you can see, when we look at all of these projects together, that we're basically trying to elucidate more about how this whole feedback process works. What I'm really trying to work towards is figuring out not only how we can create these new kinds of music that are going to [00:47:46] bring new experiences to audiences, but also how we can move towards bringing them well-being as well. And I want to thank all of the people who are on the slide here today. >> All right, great, thank you, Grace. Everybody is silently applauding; I wish we all had EEG monitors on so we could see it. [00:48:17] We have a bunch of questions in the chat, and it seems like you can see the chat fine, so if you want to, just take those in whatever order you prefer. >> Okay, so maybe I will start with the most recent first and then move back to the previous ones. [00:48:35] Richard Nichols says (thank you, Mark; I'm sorry, I'm going to be thanking everybody along the way), so Richard says: to what extent are other autonomic variables affected in association with the effect on the heart rate, such as cortisol secretion? That is a very good question. I don't work with technology that allows me to monitor cortisol secretion in real time; I'm really interested in [00:49:06] technology where I can monitor in real time and then convert that into feedback right away, because I started out as an EEG person and then moved towards the physiology after that. But I can tell you that several lab studies have shown [00:49:27] that things like cortisol secretion, and dopamine as well, can be tracked, showing that there's a tight link between neurotransmitter release and the kinds of spikes in autonomic activity that we see during peak musical experiences. I guess that's the most I can answer that question, given the kind of technology that we have right now. [00:49:58] Here's Eric: how would you expect the closed-loop feedback for music to change if we had different recording methods for human physiology, so better resolution or access to more localized brain signals? That's a very good question. When you're bringing up this closed-loop versus open-loop effect, [00:50:23] there's a very strong effect of training in this. We have to understand that when a performer goes on stage (just like if you saw Anne-Sophie Mutter go on stage and perform the violin), this is a very complicated physical system that she has had decades and decades to perfect through this kind of closed-loop system: the very subtle changes that she's going to make with her finger position and bowing to create a sound. In a very similar way, somebody learning brain music performance develops this kind of simple algorithm, but most of the expressive potential comes through a kind of operant conditioning, where the human is learning how to make changes in their EEG to produce a particular sonic difference in the sound, so that they can create an expressive arc over time. So if we had more resolution, I wouldn't necessarily say that we would get a better performance out of that; it would probably take more time for the person to figure out, through that process of experimentation, how to develop ways to control it. But one type of experiment that I would love [00:51:52] to work on with somebody here at Georgia Tech would be to take a group of people and develop an fMRI biofeedback system, using one tiny mechanism there, and see if they can learn through this process to control their feedback, just by having that biofeedback present [00:52:21] in the scanner. So I will end Eric's answer there. Just reading back to some earlier questions, I don't think there are any that I didn't answer yet; there were a couple of impressions. Let's see: someone mentioned this really interesting research examining how well people can detect their own heartbeat. This is getting back to the interoception idea of how aware we really are of our autonomic nervous system operating as it's operating, and they mention deficits in this ability in disorders like autism. So yes, in autism we have a deficit of theory of mind, right, but we also have a deficit of this personal interoception as well, and it's very interesting that you would see that link between those two deficits; and also, in the result that we have here, where we were witnessing this interoceptive process happening, with attention being transferred away from one's own body, we're seeing the process of interoception being interrupted.
[00:53:52] So Joyce says: have you ever thought about trying to produce a brain effect with a choir or a group of performers, versus just an individual? I have done some performances in small groups, but I also need to mention that I'm not the only person doing this work. There are some colleagues of mine, for instance, [00:54:16] who created the company that makes the Muse headsets, InteraXon; they actually gave a performance where they handed out dozens of these to people at the opening ceremony for, I guess it was the Olympics, in Toronto not too long ago. So that was a very early brain performance; you can [00:54:45] run some interesting calculations looking at the correlation between people's brain waves. So there has been some work on that before, but I've really been focusing on this kind of solo work. Someone is asking: during EEG or other signal recording, is there anything else you try in order to remove motion artifacts, besides trying to be steady? Yes. I mentioned that I did my PhD at the Swartz Center for Computational Neuroscience; they release and maintain EEGLAB, and they also have some real-time [00:55:23] software that they've developed called BCILAB, and there's an algorithm in there called Artifact Subspace Reconstruction, which runs something like a sliding-window PCA on the signal, and you can actually extract the motion artifacts from the EEG that way. I don't use that software for every performance that I run (it usually takes an entire laptop just for itself, so I have to be willing to work with three or more computers at the same time), but I have done it before. [00:56:00] So there are definitely methods that we can use; of course, they work a lot better offline than in real time, so the limitation is what we can do on the stage. Sherman asks: can seizures be induced by specific music? I would say that there is actually musicogenic epilepsy, so there are some people who have seizures when they listen to a Mozart piano concerto. [00:56:37] Epilepsy tends to have a very individualized etiology in terms of what presents for every individual, and music and sound can play a part. >> Right, these are excellent questions; they made my job really easy, because they were all up there. Thank you. But I think we need to wrap it up now so that people can get on to the next thing; they no doubt need to get moving as well. So let me thank you again. Everybody is applauding; I wish you could see everybody's mental waves. [00:57:22] I think this really celebrates the diversity of neuroscience and its various applications that we have on campus, and so I really enjoyed this; thank you so much. And join us next week. >> I just typed my e-mail address in the chat window as well, so I would love to hear from everybody about your thoughts and about collaboration opportunities. Thank you. Thank you, Simon; thank you, everyone.