Okay. Good morning, everybody. Thank you so much for coming. It is my pleasure to introduce Dr. Vivek Srinivasan. He is a leading expert in the fields of OCT of the eye as well as diffuse optics of the brain. He did his doctoral training at MIT, followed by a postdoc at the Martinos Center at Harvard, where our paths briefly crossed. He then went on to a faculty position at UC Davis, and now he is an associate professor at NYU's Grossman School of Medicine. I'm so excited to have him here today and to hear all about the cutting-edge work he's been doing. So please join me in welcoming Dr. Srinivasan.

Okay. Thank you so much, and I hope everyone can hear me in the back. If not, just let me know, and feel free to stop me with questions if you like. If this can be interactive in this big room, I'd like it to be.

So my work is in the field of biomedical optics and photonics. You have a lot of great faculty here in biophotonics, Erin, Shu Jia, as well as Paco Robles, whom I'm meeting with later. So the field is well known on this campus: basically the combination of biology and optics. I work more on the medical imaging side of things. And it's actually really interesting to think about how broad a role optics plays in medical imaging. If you take all of the optics, including ophthalmology and endoscopy, and lump it all together, the market is actually larger than what we think of as conventional medical imaging, meaning radiology, MRI, and PET. That's not to say that we're better or larger than them, just that optics is important and ubiquitous even if it's not recognized as a single field.

One of the themes that ties through my research is that in this field we can use advances from areas like telecommunications, these big economies of scale that have nothing to do with medical imaging, to benefit medical imaging technologies. In my class I often ask students: how much do you think 1,000 ft of single-mode optical fiber costs? Any guesses? About $200 from the Thorlabs website, and likely even cheaper if you buy it wholesale.

As Erin mentioned, the work in my lab spans several different areas. When we think about optics and light-tissue interactions, it's typically divided into microscopy and macroscopy. No one says macroscopy; they'll often say diffuse optics or near-infrared imaging. These two scales are really determined by how the light propagates through tissue. For microscopy, we're typically looking at the light that travels straight through the tissue. Think of a two-photon microscope: the light goes through the focus and comes back, or rather it goes to the focus, excites fluorescence, which comes back. So this light travels through more or less without scattering. Diffuse optics, on the other hand, uses the multiply scattered light that penetrates more deeply into the tissue, but with lower resolution.

One of the advantages of working in several areas is that I'm able to look at the same organ from different perspectives. This is an example of OCT angiography of the rat brain showing the microvasculature in a lot of detail; I'll actually come back to this picture in a moment. And then recently, as Erin mentioned, we've also moved into applying similar technologies, or new technologies, to look at human brain imaging through the scalp and skull. This falls under the regime of diffuse optics, where we're using multiply scattered light.
So one gives us nice, pretty pictures; with the other, we get blurrier images, but in human subjects, which is really, really great. I'm going to do a little bit of a dance back and forth between the micro and the macro. My talk is focused on the human brain, but I'm also going to intersperse some microscopic images from our work in mice that can help shed light on some of the brain functions we're looking at in humans.

So my work, as was mentioned, falls into three different areas. We have a deep brain imaging side of the lab; here's an example of imaging cell bodies in the mouse brain through a relatively non-invasive preparation. In the mouse you can image all the cortical layers and even some subcortical layers. A great deal of our work is in eye imaging, which I won't talk about today but can hopefully touch on in conversations with some of you if you'd like to discuss it; here we use visible-light OCT to make high-resolution pictures of the retina. And finally, the focus of my talk is applying some of these interferometric tools to image the human brain.

Okay, so I'm going to talk a little bit about OCT, optical coherence tomography, because it's a tool we use in our lab, but also because it gives some ideas and foreshadows some of the later work I'll be presenting. Optical coherence tomography is basically a technique that was developed as the optical analog of ultrasound. The idea is we take a pulse of light and we visualize the backscattered or reflected echoes of light. Now, because the speed of light is very fast, we cannot actually do this directly, so we need to build what's known as an interferometer. The basic idea is we have a reference path and a sample path, and we interfere the light on a detector. Here's a simple illustration: when we interfere two fields on the detector, we have the sample field, the reference field, and then a cross term between them. That cross term is actually very important. In OCT we can make the reference field very large, so if the sample light is small, we can boost it up to a level that can be measured with a relatively inexpensive detector.

The second aspect of this picture I want to highlight: here I've shown an interferometer with a coherent light source, so that if I change the path length, I get a series of fringes. Coherent light is very much like this laser pointer I'm using; if you're on Zoom, you'll just have to imagine the laser pointer. Now, if I take that coherent light source and replace it with an incoherent light source — incoherent light is very much like the room lights shining down from above — then I'm actually able to resolve different depths in the sample. Here's an example of resolving depth in the sample with very fine spatial resolution by using incoherent light. That occurs because interference will only appear when the paths are matched to within the coherence length. In OCT, you can also focus the light beam on the sample.

One of the great aspects of this field is that if you look at a typical OCT system, it has fibers, filters, modulators — all fiber optic components, telecom components, with big economies of scale behind them, so they're relatively inexpensive. As a result, this technology, since its invention, is now widely used. This is a $1.5 billion market as of 2021, and it's likely even larger today.
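To make that cross term concrete, here is the standard two-field interference relation in generic notation (a minimal sketch; the symbols are mine, not taken from the slides):

```latex
I_{\mathrm{det}} \;=\; \bigl|E_r + E_s\bigr|^{2}
\;=\; \underbrace{|E_r|^{2}}_{\text{reference}}
\;+\; \underbrace{|E_s|^{2}}_{\text{sample}}
\;+\; \underbrace{2\,\mathrm{Re}\!\left\{E_r^{*}E_s\right\}}_{\text{cross term}\;\propto\;|E_r|\,|E_s|}
```

Because the cross term scales with the product of the two field amplitudes, a strong reference field amplifies a weak sample field above the detector noise floor; and with a low-coherence source, the cross term only survives when the reference and sample path lengths match to within the coherence length, which is what gives the depth gating just described.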
Many of you, if you go to an ophthalmologist or even an optometrist, have likely had an OCT scan — I see some nodding heads. Okay, good. So this is widely used and very useful, and for the rest of my talk I want to come back to this example in terms of how we can use some of these concepts to build a brain imaging device.

Okay, but let's talk about the human brain; let's get on to the interesting part. So why measure cerebral blood flow? The first reason is that the brain is about 2% of the body weight but consumes 15 to 20% of the body's energy. That energy demand is perturbed in brain diseases, and we'd really like to monitor it over days to weeks. I have several examples here: traumatic brain injury, subarachnoid hemorrhage, and stroke. You already have a great faculty member here, Erin Buckley, who is studying a lot of these conditions, so I don't need to make the argument for optics. The basic idea is that our goal is not to do better than what an MRI can do or better than what a CT can do; the goal is to go places where these conventional imaging modalities cannot go. In the case of, for instance, traumatic brain injury, which we're pursuing, the argument is that the goal is to monitor patients for days to weeks after an injury, whereas with a conventional imaging device you'd have a single snapshot. With optics, we can monitor for long time periods and tailor the treatment specifically to the patient.

Right. The second reason to measure cerebral blood flow is that it's a great reporter of brain activity. This is the basis of BOLD functional MRI. You already have Shella Keilholz, a faculty member who is really getting at the fundamental mechanisms of the BOLD signal, but basically it's a blood flow signal. So by measuring blood flow, we can actually infer what's going on with neuronal activity.

Okay, so there are a number of surrogates for brain monitoring, all of which are imperfect in some way. Transcranial Doppler measures velocity; perfusion MRI and CT we've talked about. There are a number of other surrogate techniques which do not directly get at the cerebral blood flow we'd like to measure.

Now, as I mentioned, in my group we use OCT to image the mouse brain. This is an example — excuse me, of the rat brain — with the skull and the dura removed. So this is a relatively invasive preparation, but it gives us beautiful pictures of the microvasculature. You can see here the arteries, which arise from the great cerebral arteries that supply the superficial cortex, diving down into the tissue, the veins coming back up, and this mess of capillary spaghetti, which is actually where the oxygen delivery to the tissue takes place. However, our goal is to measure the human brain. If we think about the penetration depth of OCT, that ballistic light is really only going a millimeter or 2 mm. Looking at this picture, we really want more penetration depth to get through the scalp and skull to reach the brain, so we have to use a different approach.

This approach is known as diffuse optics or near-infrared spectroscopy. It's a great field with wide utility, perhaps because of its simplicity. All you need is a source and a detector, and you can measure the light that travels between the source and detector. So very simple, very useful. But that simplicity comes at a cost, and I really want to highlight two issues. The first is that the source and detector are separated.
Because the light has to travel between the source and detector, the spatial resolution is somewhat compromised, and you want to go to larger separations to get deeper. The second issue is that the light has to travel through the superficial tissues to reach the brain. The scalp and skull have vessels; there's blood flow in the scalp and skull, and that blood flow fluctuates according to the vagaries of systemic physiology, which may have nothing to do with the brain. This is a contamination on top of the brain signal we'd like to measure. I would say this issue of scalp contamination is not often mentioned when we talk about these methods, but for people who do diffuse optics, this is the problem in the field. If we could solve this problem, or at least mitigate it if not solve it, we could really extend the range of applicability of these technologies.

One other approach I'd like to mention, also a diffuse optical approach, is called DCS, diffuse correlation spectroscopy. This was developed at UPenn originally. The basic idea is that I take laser light, very similar to this laser pointer, shine it on the head, and observe the light fluctuations that occur when the light scatters off moving red blood cells. Basically, slow fluctuations mean less flow; faster fluctuations, more flow. We can quantify this with an autocorrelation function that tells us how fast the blood is moving. There are a number of models that allow us to do that, and you come up with a blood flow index. This is the basis for a lot of work; this is a growing field. Erin has made great strides in pushing this technology into the clinic and doing it in patients, which is very impressive. So this is already an area that has had impact.

One of the things I like to think about — again, going back to the micro scale — is that we can also look at the vasculature from the standpoint of a diffuse optics system. This is actually an example in the mouse. This is an image of the scalp — excuse me, of the skull — and the brain, using OCT angiography, so contrast is enhanced from moving red blood cells. Basically, we can see the superficial vasculature is a net or meshwork on top of the deep brain vasculature we'd like to measure. As I mentioned, the superficial vascular tree is noisy. As neuroscientists, I assume we're not interested in the scalp and skull; we're interested in the brain. So the real problem in the field is how to deal with this.

One of the great advantages of the DCS signal is that it's less sensitive to the scalp. Here I'm showing a plot of the brain-to-scalp sensitivity as a function of the source-detector separation. With the conventional NIRS signals, which are based on absorption, shown in the black line, we can improve a little bit by going to larger source-detector separations, but we never really get above a 40% brain-to-scalp sensitivity ratio, which means we're actually more sensitive to the scalp than the brain. With the flow signal, with this fluctuation signal, we actually do much better. The red line shows the DCS signal; at the larger source-detector separations we can get even higher brain specificity, and in fact make our measurements more sensitive to the brain than to the scalp. However, current DCS measurements, for the reasons I've mentioned, are actually limited to short source-detector separations. We'd really like to go even further. So how do we do this?
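Before that, since the talk only sketches the DCS processing, here is a toy numerical illustration of the idea that faster motion means faster intensity fluctuations and a faster-decaying autocorrelation. The speckle model, the sampling rate, and the crude 1/e decay-time estimate are all my assumptions for illustration, not the diffusion-model fitting actually used to derive a blood flow index.

```python
import numpy as np

def simulate_speckle_intensity(n, dt, tau_c, rng):
    """Toy speckle intensity: a complex field with exponential correlation time tau_c
    (AR(1) process), standing in for coherent light scattered off moving red blood cells."""
    rho = np.exp(-dt / tau_c)
    steps = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt((1 - rho**2) / 2)
    field = np.zeros(n, dtype=complex)
    for i in range(1, n):
        field[i] = rho * field[i - 1] + steps[i]
    return np.abs(field) ** 2

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t) I(t+tau)> / <I>^2."""
    mean_sq = intensity.mean() ** 2
    return np.array([np.mean(intensity[:-lag] * intensity[lag:]) / mean_sq
                     for lag in range(1, max_lag + 1)])

def decay_time(g2_vals, dt):
    """Lag at which g2 - 1 falls to 1/e of its first-lag value (crude decorrelation time).
    Faster flow -> faster fluctuations -> shorter decay time."""
    excess = g2_vals - 1.0
    below = np.where(excess < excess[0] / np.e)[0]
    return (below[0] + 1) * dt if below.size else len(g2_vals) * dt

rng = np.random.default_rng(0)
dt = 1e-6                                   # 1 microsecond sampling interval (assumed)
for tau_c in (100e-6, 20e-6):               # slower vs. faster decorrelation ("flow")
    I = simulate_speckle_intensity(200_000, dt, tau_c, rng)
    tau_hat = decay_time(g2(I, 500), dt)
    print(f"field correlation time {tau_c:.0e} s -> estimated g2 decay time {tau_hat:.1e} s")
```

In real DCS the decay of this autocorrelation at a given source-detector separation is fit with a photon diffusion model to yield the blood flow index; the sketch only shows the qualitative link between motion and decorrelation. With that picture of what the autocorrelation encodes, back to the question of how to push to larger separations.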
One example of why this is a key question: this is taken from a DCS study done in patients, I think in the neuro intensive care unit, with simultaneous measurements of cerebral perfusion pressure and intracranial pressure. The goal was to look at autoregulation. If we look at one epoch, we can see that as the arterial blood pressure changes, the cerebral blood flow is more or less constant; this is a marker of intact CBF autoregulation. If we look at another epoch, we now have a case where cerebral blood flow changes with the mean arterial blood pressure. So the question is: is this impaired CBF autoregulation, or is this simply contamination of the superficial tissue onto the deep signal? This is a key question we'd like to solve. With DCS, as I mentioned before, we're limited in the source-detector separation. You can add more detectors, but these are relatively expensive detectors and difficult to scale, although some groups have gone in this direction.

Our idea, which we tested about five years ago now, is to use an interferometer to make us more sensitive to the deeper light. We add a reference arm to our NIRS setup, just like we have in OCT. If we have a reference arm, then the sample arm becomes the object, or the brain. And I will argue that the result is a new class of methods for making non-invasive in vivo measurements of the human brain that are both more informative and less expensive than other techniques. Again, we get this benefit: we're now multiplying a weak sample light field by a strong reference field, so we can boost up that signal, and we can use less expensive detectors. I will show us progressively moving towards cheaper detectors — CMOS detectors, eventually even detectors similar to those used in your cell phone. If we can make each of those pixels as sensitive as a photon counting detector, we can scale the measurement accordingly. And the second thing, which is kind of interesting, is that we can use some of the low-coherence concepts borrowed from OCT to give us time-of-flight resolution, which will allow us to distinguish layers more directly.

Now to the actual setup. This is the basic idea: we have a reference and a sample arm, and we interfere the light onto a CMOS camera chip. For the first studies, we used a fairly expensive camera chip, around $5,000. It's a line-scan CMOS camera, but with a very high frame rate; we need a frame rate in the range of hundreds of kilohertz in order to sample the very fast field fluctuations from brain blood flow. This is a postdoc in the lab, Wenjun Zhou; this is the first system he developed. I hope the movie plays. Okay, this was obviously done at Davis, as you can see. One of the interesting aspects of this invention is that we can do it with the lights on: when we apply a very strong reference field to boost up the sample signal, we're actually less sensitive to ambient light, so we don't worry too much about stray light affecting the measurements, which is not the case with photon counting. Okay, so this is during the pandemic, as you can see. The blood flow index is shown up here; this is pulsatile blood flow averaged over time, we can get the heart rate from it, et cetera.

Okay, so now to pushing the limits: with this technique, we're actually able to push the source-detector separation. Again, each of those pixels is now acting more or less like a photon counting detector, and we have many pixels, in this case 500.
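To give a rough sense of why having many pixels matters here, below is a toy simulation (my own construction, not the actual processing pipeline): each pixel sees an independent speckle realization riding on a strong static reference, and averaging the per-pixel autocorrelations of the interference fluctuations reduces the estimator noise roughly as one over the square root of the number of pixels. All numbers are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_pix = 10_000, 500                    # time samples and camera pixels (assumed)
dt, tau_c = 3e-6, 60e-6                     # ~333 kHz frame rate, 60 us field decorrelation (assumed)
E_ref = 100.0                               # strong static reference field (heterodyne gain)
rho = np.exp(-dt / tau_c)

# Independent complex speckle field per pixel: AR(1) toy model of red-blood-cell-driven fluctuations.
E_s = np.zeros((n_t, n_pix), dtype=complex)
for i in range(1, n_t):
    step = (rng.standard_normal(n_pix) + 1j * rng.standard_normal(n_pix)) * np.sqrt((1 - rho**2) / 2)
    E_s[i] = rho * E_s[i - 1] + step

I = np.abs(E_ref + 0.05 * E_s) ** 2         # detected intensity; a weak sample field rides on the reference
dI = I - I.mean(axis=0)                     # remove the large DC reference level, pixel by pixel

lags = np.arange(1, 60)
def norm_ac(x):
    """Normalized autocorrelation of one pixel's interference-fluctuation trace."""
    return np.array([np.mean(x[:-l] * x[l:]) for l in lags]) / np.mean(x * x)

per_pixel = np.array([norm_ac(dI[:, p]) for p in range(n_pix)])
theory = np.exp(-lags * dt / tau_c)         # expected field-correlation decay for this toy model
rms = lambda a: np.sqrt(np.mean((a - theory) ** 2))
print(f"rms error of the correlation estimate, 1 pixel:       {rms(per_pixel[0]):.3f}")
print(f"rms error of the correlation estimate, {n_pix}-pixel mean: {rms(per_pixel.mean(axis=0)):.3f}")
```

That pooling across pixels is essentially why a camera with many modest pixels can stand in for a bank of expensive photon counting detectors.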
Okay, so we can now push beyond 3 cm to around 4 cm source-detector separation, shown at the bottom, and we still get reasonably good pulsatile blood flow. I'll mention we can even push it up to 5 cm source-detector separation, although we do run into problems with the sampling rate of the camera, so we decided not to go beyond that in practice. And importantly for us, as we increase the source-detector separation, we get more sensitivity to the brain. I should mention that brain blood flow is about six times higher than scalp blood flow, so as we get more sensitivity to the high-flow region, we should expect the blood flow index to go up, and that's actually what we see.

With any new technique, how do we validate it? It's difficult to validate. As I mentioned, there's no real gold standard for measuring cerebral blood flow continuously, but we chose a test of cerebrovascular reactivity using a hypercapnic challenge. A carbon dioxide challenge dilates the blood vessels and increases cerebral blood flow. Here's the protocol, and, skipping ahead, here are the final results. We have end-tidal CO2 going up and a corresponding increase in cerebral blood flow; every time we have a CO2 challenge, we can see blood flow increase. This is in human subjects, I should mention.

Okay, so blood flow increases during hypercapnia, but other brain measurables change as well. Going back to the mouse models, this is showing what's happening to the brain vasculature, zoomed in, during hypercapnia. I should have mentioned this is using visible-light OCT with oxygen saturation contrast. We have an artery coming in, in red, and the vein draining the cortex, in blue. When we apply carbon dioxide — let's neglect changes in oxygen metabolism, if they occur, for a moment — with hypercapnia we have an influx of cerebral blood flow. The increased blood flow will increase the oxygenation. The arterial oxygenation doesn't change too much, but if we look on the venous side, this vein, which was previously blue, indicating low oxygenation, is now red. The arteriole, I should mention, also dilates; that's the reason for the increased blood flow coming in.

I have a question. Yeah, go ahead. Is there shunting between the two that happens, or is it just that oxygen extraction decreases because blood flow goes up and metabolism stays the same? Are there arterioles that go straight to venules? That is a good question. I mean, obviously there are exceptions, but that is not the standard route here. I think this is really a matter of oxygen extraction decreasing: there's more flow going in, and I think the majority of the change in the veins is due to decreased oxygen extraction as more oxygenated blood shows up. I'm just trying to think whether there's shunting in the mouse; I'm going to say probably not, but someone might correct me if I'm wrong.

Okay. So there's an absorption change that goes along with this blood flow change, which we can also measure. With our system we can also measure the conventional NIRS signals: we can measure an absorption change, which is kind of a convolution of the oxygenation as well as the blood flow changing, and we can plot that as well. So we see here — if you squint a little bit — the absorption change in red and the blood flow change in black.
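As a side note on that exchange about oxygen extraction, the standard Fick-principle bookkeeping (not from the slides) makes the vein-color change quantitative: if metabolism and arterial saturation stay fixed while flow rises, the oxygen extraction fraction must fall and venous saturation must rise.

```latex
\mathrm{OEF} \;=\; \frac{\mathrm{CMRO_2}}{\mathrm{CBF}\cdot C_aO_2},
\qquad
S_vO_2 \;\approx\; S_aO_2\,\bigl(1-\mathrm{OEF}\bigr)
```

For example, a 50% increase in CBF with CMRO2 and arterial oxygen content unchanged reduces OEF by a factor of 1.5, pushing venous saturation up, which is why the draining vein turns from blue toward red under hypercapnia.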
Coming back to those traces: the absorption change lags the blood flow change by a little bit, which you would expect from a washout effect. Now we can do this across many subjects and plot the end-tidal CO2 against the different potential biomarkers: end-tidal CO2 on the x-axis, and the blood flow change versus the absorption change. We can see that blood flow is actually much more tightly coupled with our hypercapnic challenge. If you look in the cerebral blood flow literature, you expect about a 4 to 6% change in blood flow per millimeter of mercury of end-tidal CO2 change — so, for example, a 5 mmHg rise in end-tidal CO2 should produce roughly a 20 to 30% increase in blood flow. We do see a fairly linear relationship, which is a good sanity check on our measurements. Now if you look at the absorption — this is the NIRS signal, what conventional NIRS measures — we don't see as tight a coupling. This could be because we're not measuring oxygenation or blood volume directly, but some convolution of the two. Another reason is that the NIRS signal has more scalp contamination; as I mentioned, the fact that the scalp is kind of doing its own thing during these measurements may reduce the tightness of this correlation.

Okay, so the second validation we can do is to look at a brain activity measurement. This is a mental arithmetic task. Again, I'm showing the blood flow index measurements on the left — the BFI, or CBF, measurement — and the absorption measurement on the right, which is more like a hemoglobin measurement. During this mental arithmetic task, we see a very clear — I will say very clear from my point of view — blood flow index increase on virtually every trial, whereas if we look at absorption, the story is a bit more complicated. So again, this is a good validation — not a perfect validation, but a good sanity check — that we're able to do some of these measurements in human subjects. And I should mention this was at 3.5 cm source-detector separation.

Okay, so now going back to the microscale, just because I want to keep people awake with some nice images. This is an example of what's happening to the brain vasculature during activation, in a mouse during forepaw stimulation — let's not worry too much about the details. The main thing I want people to see is that during the stimulus, which is from 0 to 2 seconds, this vessel along the top increases in diameter. There we go, it happened. So this is, at the microscale, what's driving the signals we're measuring: it's really neuronal activity causing arteriolar dilation, creating this influx of cerebral blood flow.

Then we can also do monitoring through hair. This is again one of the things that people in the field don't talk about too much: measuring in regions with hair. This is an example over motor cortex; we see a difference between contralateral and ipsilateral finger tapping experiments. This is still only a 2.5 cm source-detector separation, so the technology still has a ways to go, but on the other hand, we can monitor brain regions with hair, which I think is a good advance. So this is both motivating, but also says that further improvements are needed.

To get those further improvements, we next asked a question. We started with a high-end camera sampling these very fast field fluctuations arising from blood flow.
Is there somehow a way we can take an existing economy of scale — CMOS camera technology — and use one of those cameras to make our cerebral blood flow measurements? Here's a plot showing the growth in CMOS cameras; I would guess there are at least twice as many CMOS cameras in this room as there are people, possibly more. So this is a widely used device. The answer turned out to be yes. The same postdoc who did the previous study discovered that if you vary the exposure time of the camera, you can sample the autocorrelation function in a slightly different way. You're not actually measuring the autocorrelation function directly; you're measuring a weighted integral of the autocorrelation over the exposure time. So by varying the exposure time, you can get information about how the autocorrelation function is changing. He was able to do these measurements with a 2D camera with megapixels, using two exposure times — I'll show a small sketch of this multi-exposure idea below. One of the big advantages of this approach is that with a 2D camera you can image different source and detector positions onto the camera to take advantage of the two-dimensional array. So we looked at a long and a short source-detector separation. Here is an example of measuring the blood flow index during breath holding. At the short separation, 1 cm, we get a lot of signal from the scalp, particularly during the breath hold. He showed that at 3 cm he's measuring brain and scalp, and he did a regression, shown on the lower right in black, to isolate out the brain signal.

Okay, so with this approach we can now claim a potential four-order-of-magnitude reduction in cost, or improvement in the performance-to-cost ratio. On the y-axis here we have cost. We have an MR scanner — I actually don't know how much one costs, I just assume it's a million dollars. Right, really expensive. Yeah, exactly. So that's a point of reference. Then we look at the different technologies. With any of these optical technologies, we can always add more detectors; the name of the game is really to make the detector itself less expensive. The DCS technology, which again is the gold standard and has been incredibly useful already, is difficult to scale: if you just buy more photon counting detectors, that's how the technology is going to scale, and around the point where you get, I don't know, maybe 1,000 detectors or 1,000 channels, you're at the cost of an MRI scanner. With some of these new technologies, now we're using pixels instead of detectors, so we can reduce the cost of scaling. It's still expensive once you scale, but you can see that with our previous approach we can get to around 10,000 channels before we reach the cost of an MR scanner. With the multi-exposure approach, we can even get to 1 million channels and still be less expensive than an MR scanner.

To me, as an engineer, this is really exciting. Whenever you've improved the cost and scalability, you can ask: can I now make a device that is relatively cheap, maybe does something similar to this, but can be used in settings outside of a hospital or medical environment? I'll talk about that more in a little bit. At the same time, I can make a high-end device that might perform better, might measure more brain regions, or measure at larger source-detector separation.
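Here is the sketch referred to above: a generic multi-exposure speckle-contrast model showing how two exposure times constrain the speckle decorrelation time. The exponential field-correlation model, the exposure times, and the ratio-based inversion are my assumptions for illustration, not the estimator actually used in the study.

```python
import numpy as np
from scipy.optimize import brentq

def contrast_sq(T, tau_c, beta=0.5):
    """Speckle contrast K^2 for exposure time T, assuming |g1(tau)| = exp(-tau/tau_c).
    Standard relation: K^2 = (2*beta/T) * integral_0^T (1 - tau/T) |g1(tau)|^2 dtau,
    which for an exponential g1 evaluates to the closed form below (x = 2T/tau_c)."""
    x = 2.0 * T / tau_c
    return 2.0 * beta * (x - 1.0 + np.exp(-x)) / x**2

# Forward model: a "true" decorrelation time gives two measured contrasts at two exposures.
tau_true = 50e-6                            # assumed decorrelation time (faster flow -> smaller tau)
T1, T2 = 100e-6, 1000e-6                    # two camera exposure times (assumed)
K1, K2 = contrast_sq(T1, tau_true), contrast_sq(T2, tau_true)

# Inverse problem: recover tau_c from the ratio of the two contrasts (one unknown, one equation).
def mismatch(tau_c):
    return contrast_sq(T1, tau_c) / contrast_sq(T2, tau_c) - K1 / K2

tau_est = brentq(mismatch, 1e-7, 1e-2)      # bracket the root over a wide range of decay times
print(f"true tau_c = {tau_true:.1e} s, recovered tau_c = {tau_est:.1e} s")
```

In practice one would fit several exposures and use a diffusion-based model for g1 rather than a single exponential, but the bookkeeping is the same: each exposure time weights the autocorrelation differently, so a few exposures pin down how fast it decays.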
So I'll say that although we haven't solved the scalp problem, I think we've improved the situation considerably.

Taking a slightly different direction now: the next question I wanted to ask is, now that we have this interferometer setup, can we get other kinds of information from it? One of the useful dimensions of information that has been explored in the field is time-of-flight resolution, the idea being that longer times of flight travel proportionally more through the brain and less through the scalp, so by measuring time of flight we can distinguish some of those signals. Here we don't actually have a pulsed laser; we have a continuous-wave laser. And we found that by using an approach very similar to swept-source OCT — tuning the laser in time, changing its wavelength — we can actually encode different depths as different frequencies of the optical signal. By tuning the laser, we generate an interference pattern: the scalp interference pattern will have one frequency, and the brain interference pattern will have another frequency. Then, by reconstructing through a Fourier transform, we can obtain time-of-flight resolution, or a TPSF, a temporal point spread function, which characterizes the time response of the tissue.

One of the advantages of this approach is that we can get time-domain information, a time-resolved measurement of the tissue, and at each time of flight we can also get a DCS-like measurement: at each time of flight we can measure the fluctuations of the light that travels through tissue with that particular time of flight. I should mention there are other approaches in the field, such as time-domain DCS, which try to do similar things.

Now, some of you — maybe a few of you who have some background in optics — might say, hey, this is just swept-source OCT. It's true, it's very similar to swept-source OCT, but we should think about the timescales of these measurements. For a NIRS measurement, we're really measuring up to nanosecond timescales; with OCT, we typically only measure up to around a ten picosecond timescale. So the range of times of flight is much, much broader, and the depth of penetration of the light is much greater, allowing us to reach the brain.

Okay, so now we take this time-of-flight-resolved measurement and look at different media. One relatively unexciting medium to look at, a common phantom in the field, is Intralipid. We can see that the decay rate increases linearly with time of flight; this is expected based on theory. If we look at the mouse brain, the decay rate, or autocorrelation decay rate, also increases linearly with time of flight. This is again expected for a uniform, homogeneous medium; as it turns out, the superficial tissues in the mouse are relatively thin compared to the brain. Now, applying this to the human head, the first thing we notice is that the decay rate versus time of flight has a relatively nonlinear characteristic. Actually, I would say it's a triphasic characteristic, where the early times of flight have one slope, then the slope appears to decrease, and then at the later times of flight, where we're sensitive to the brain, the slope increases. We puzzled over this for a little bit: what would possibly give us this kind of behavior?
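As an aside before coming back to that puzzle, here is a toy numerical illustration of the depth-to-frequency encoding just described. All numbers are assumed, and two discrete delays stand in for a continuous diffuse time-of-flight distribution; the point is only that a linear frequency sweep makes each path-length difference beat at its own fringe frequency, which a Fourier transform then separates.

```python
import numpy as np

delta_nu = 8e9                              # total optical frequency sweep, Hz (a few GHz, assumed)
T_sweep = 100e-6                            # sweep duration, s (assumed)
N = 4096
fs = N / T_sweep                            # detector sampling rate over one sweep
t = np.arange(N) / fs
sweep_rate = delta_nu / T_sweep             # Hz of optical frequency per second

# A short "scalp-like" delay and a longer "brain-like" delay relative to the reference arm.
tof_scalp, tof_brain = 0.5e-9, 1.5e-9       # times of flight, seconds
amp_scalp, amp_brain = 1.0, 0.05            # deep light is much weaker

# A linear sweep makes a delay tau beat at frequency f = sweep_rate * tau.
fringe = (amp_scalp * np.cos(2 * np.pi * sweep_rate * tof_scalp * t)
          + amp_brain * np.cos(2 * np.pi * sweep_rate * tof_brain * t))

spectrum = np.abs(np.fft.rfft(fringe * np.hanning(N)))
tof_axis = np.fft.rfftfreq(N, 1 / fs) / sweep_rate   # convert beat frequency back to time of flight

for label, tof in [("scalp-like", tof_scalp), ("brain-like", tof_brain)]:
    k = int(round(tof * delta_nu))          # expected FFT bin: beat frequency / frequency resolution
    k_peak = max(k - 3, 0) + int(np.argmax(spectrum[max(k - 3, 0): k + 4]))
    print(f"{label}: true TOF {tof*1e9:.2f} ns, recovered {tof_axis[k_peak]*1e9:.2f} ns")
```

The frequency resolution of the sweep, 1/T_sweep, maps to a time-of-flight resolution of 1/delta_nu, which is why a sweep spanning a few gigahertz is what is needed to resolve sub-nanosecond differences in photon path length.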
We looked through the literature, and it turns out that people have argued about and debated the merits of different models for the extracerebral tissues in diffuse optics. There's a two-layer model, which says the scalp and the skull essentially have the same blood flow, and there's a three-layer model, which segregates the scalp and the skull, with the scalp having higher blood flow than the skull. We fit both of these models to our experiments, and we found that the three-layer model gives a remarkably good fit to the data, whereas the two-layer model does not. So this says, at least to us, that the brain, scalp, and skull are best modeled as a three-layer system as opposed to a two-layer system. And I think this is very important even for the conventional techniques that seek to get quantitative information from light measurements, to know exactly how to model the superficial tissues.

Okay, so the last idea I wanted to present builds on this. I've shown a way to obtain massive parallelization of the measurements using CMOS pixels, and I've shown a way to get time-of-flight information. Now, is there a way we can combine these two approaches into one single system and get both the time-of-flight information and the parallel detection? To do this, we went back again to the basics of OCT. The idea is that we want to create a low-coherence light source, but if you think about the timescales involved, you actually need a linewidth of around a few gigahertz, and it turns out such sources don't really exist. So we had to make one, and the way we made it was kind of interesting. We took a narrow-linewidth laser and tuned its wavelength very rapidly in time, on the order of megahertz, and that generates something that looks very much like a low-coherence source; the power spectrum is shown on the right. That power spectrum has a coherence function, as shown here. So, very similar to how OCT uses low-coherence light to generate images, here we're using an effectively low-coherence light source to filter for photons that travel deep in the tissue.

I won't go through the math; I think that's probably less important. The basic idea is, if you look in panel D, I start with a distribution of times of flight, shown in blue. I apply a time-of-flight filter, shown by the black dotted line, and the filtered time-of-flight distribution is shifted significantly to the right, to longer times of flight. That means I'm actually getting deeper photons by applying this time-of-flight filter. I'll skip over the details a bit. Some people may have heard of time-domain NIRS or frequency-domain NIRS; this is actually kind of a different approach, maybe even a different class of approaches, that uses a variable time-of-flight filter to obtain time-of-flight information.

But again, we get to the problem of how to validate this. In addition to doing all the other validations I mentioned, I want to highlight a pressure modulation experiment. Here we basically say: we know our scalp signal is a confound. If we squeeze on the scalp, we reduce blood flow in the scalp. So if we're getting a lot of blood flow signal from the scalp, our measured blood flow signal should go down; if we're getting less of the scalp signal, that blood flow signal should go down less.
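Before the pressure experiment, here is a small toy picture of what such a coherence-gate filter does to the photon time-of-flight distribution in panel D. The distribution shape, gate position, and gate width are all assumed for illustration; the point is only that gating late delays shifts the retained photons toward deeper, longer paths.

```python
import numpy as np

# Toy time-of-flight (TOF) distribution for diffusely scattered photons: most photons arrive
# early (shallow, scalp-weighted paths), with a long tail of late, deep-traveling photons.
t = np.linspace(0, 4e-9, 2000)                       # time of flight, 0 to 4 ns
tpsf = (t / 1e-9) ** 1.5 * np.exp(-t / 0.4e-9)       # gamma-like temporal point spread function (assumed)
tpsf /= tpsf.sum()

# Coherence-gate "TOF filter": only photons whose delay matches the reference arm to within
# the synthesized coherence time contribute; modeled here as a Gaussian gate at a late delay.
gate_center, gate_width = 1.5e-9, 0.25e-9            # assumed gate position and width
gate = np.exp(-0.5 * ((t - gate_center) / gate_width) ** 2)

filtered = tpsf * gate
filtered /= filtered.sum()

mean_tof = lambda p: np.sum(t * p)
print(f"mean TOF, unfiltered: {mean_tof(tpsf)*1e9:.2f} ns")
print(f"mean TOF, filtered:   {mean_tof(filtered)*1e9:.2f} ns  (distribution shifted toward deeper photons)")
```

With that picture in mind, back to the pressure test.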
And so here we asked the question: can we measure the reduction in superficial contamination from the scalp by, basically, just squeezing on it? So we did a pressure modulation experiment, with an approach where we turn the time-of-flight filter on and off very rapidly in time. We shifted from measuring with CW, which is continuous wave, to time-of-flight filtered, which is deep photons — so superficial photons and deep photons, switching back and forth every second. And we're able to get a curve like this during a pressure modulation experiment.

Again, I'm squeezing on the scalp, at a 1 cm source-detector separation. The blue line shows the conventional measurement: when I squeeze on the scalp, I reduce the measured blood flow to almost 20% of its value, which says that most of the signal I'm measuring is coming from superficial tissue. If I turn on my time-of-flight filter — again, it's the same experiment, because I'm switching back and forth — I measure a significantly smaller decrease; in other words, I maintain that blood flow level during the scalp occlusion. We can see other features here, like the release after occlusion, where you have fresh blood flow coming in. If you're just squeezing on the scalp, you presumably don't occlude the brain unless you squeeze really hard, so we don't expect the brain to have any response; any increase in blood flow after the occlusion is presumably a response in the scalp. We can see that the scalp response is large with the continuous-wave, or superficial, photons, but it's reduced with the deeper photons. And we can do these measurements across subjects and show that we have a significantly reduced perturbation caused by the scalp.

Now, one of the great advantages of going to low source-detector separation and using time-of-flight filtering is that we can improve the spatial resolution of the measurements. The last thing I want to show is this concept of high-density forehead mapping. We're now down to about 1 cm resolution at the brain. For optics, this is pretty good. I know the MRI people may scoff at this, but again, we're not trying to do what MRI does; we're trying to do it in a more cost-effective way. We can image these different parameters over the human forehead. What's actually interesting is this light intensity ratio: if we compare continuous-wave and time-of-flight-filtered light intensity, we get a very symmetric distribution, which I find interesting — symmetric across the two hemispheres — which says there's some kind of structural symmetry in the measurements.

Okay, so future directions of this approach. We've shown we can improve the scaling of the blood flow measurements. Of course, we're interested in neurointensive care and rehabilitation; we have a particular project on traumatic brain injury, but there are certainly other applications that are very interesting. Now that we have a scalable technology, there are other kinds of questions you want to ask. Can we start to monitor this outside of a hospital setting, for instance in the doctor's office? There are a lot of questions about cerebrovascular health in Alzheimer's disease and other vascular dementias, and some of these reactivity tests could potentially be done with a less expensive instrument. Maybe you don't need quantitative measurements; maybe you just look at changes.
And then, even further, can we do this monitoring at home? With a cell-phone camera sensor, it's feasible to start thinking about measuring cerebral blood flow even at home, outside of a doctor's office. And can we then close the loop with preventative measures — can we use some of these data to develop preventative approaches to stave off some of these debilitating diseases?

So in summary: does interferometry help diffuse optics? I've given you a positive viewpoint on this. I believe it should improve the brain-to-scalp sensitivity of blood flow measurements, particularly in cases where we're photon starved; I think it will help there. We get some interesting ways of getting time-of-flight information, which is useful just for understanding the extracerebral tissues and how they contribute to the blood flow signal. Another question which has come up quite a bit is whether this technology, or its variants, can compete with high-throughput continuous-wave NIRS for functional studies and brain-computer interfaces. I think here the answer is possibly, but it will require further advances. On the other hand, we are making advances every year, so I am optimistic that it can compete with continuous-wave NIRS in terms of scalability.

More broadly, biomedical optics is a field that spans from macro to micro. I hope I've given some sense of the flavor of the different techniques that can be used to aid both basic science discovery as well as clinical diagnosis. One of the keys in my research is to think about how we can use larger economies of scale from communications or consumer electronics to advance some of these medical device ideas. And there are numerous unsolved and challenging problems here for physicists and bioengineers. So lastly, I'd like to thank my research group and funding sources, and I look forward to questions. Thank you.

Right there. We're going to be meeting later, but I think there's a question that's worth putting to the entire room, just getting into the weeds a little bit in terms of the signal you're extracting with your interferometric approach. Are you using only the intensity fluctuations over time, or are you bringing in phase differences? What information do you get from each of those different sources? And if you're not, what opportunities do you think there would be in combining both the intensity and the phase fluctuations that you get in your interferometric signal?

Yes. So there's something called the Siegert relationship that relates the intensity fluctuations to the field fluctuations. If the Siegert relationship holds, we're not really getting anything fundamentally new, although there are interesting regimes where the Siegert relationship does not hold — where there's less light scattering, maybe few scattering events. I will say that, generally, the signal-to-noise ratio is better for the field measurements in terms of photon efficiency, but field measurements are sensitive to phase noise, so that's a potential issue. And so we do expect, and we've seen, that these measurements are a bit more sensitive to, say, stomping on the ground than a DCS measurement. It's all intensity based then? Sorry, no — we are using the field, we have no choice. Okay? So we are sensitive to phase noise. Got it. Yeah.
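For reference, the Siegert relation mentioned in that answer is, in its standard form (with β the coherence factor):

```latex
g_2(\tau) \;=\; \frac{\langle I(t)\,I(t+\tau)\rangle}{\langle I\rangle^{2}}
\;=\; 1 \;+\; \beta\,\bigl|g_1(\tau)\bigr|^{2},
\qquad
g_1(\tau) \;=\; \frac{\langle E^{*}(t)\,E(t+\tau)\rangle}{\langle |E|^{2}\rangle}
```

When it holds, the intensity autocorrelation and the field autocorrelation carry essentially the same dynamical information; when it breaks down — few scattering events, non-Gaussian field statistics, or a strong static component — intensity-based and field-based measurements can genuinely differ.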
So in terms of patient motion: if the patient is moving, I think the motion sensitivity is similar, and that's also what we've seen in the collaboration with Erin's group. But if the nurse comes in and stomps on the ground, then you're probably going to see more noise in our system than in a DCS system; it's like the room vibrates. Yes. And that has to do more with how we build our interferometer, and it could be solvable with engineering approaches. I'm just giving you my observation based on what we've seen so far. Because that's kind of tricky to know, when a patient might be perturbed by vibrations of the bed? Yeah. So here's the way I'll put it: you could make the same criticism of OCT — that's also an interferometric system — yet it's used everywhere. But they also use phase information. Right, yeah. I wonder if they extract different information from it; it might be related to SNR. We can definitely talk about that later. Yeah.

The mic is not working, so if anyone has any questions — yeah. That's a good question. I don't think I said we'd put it on a mobile phone. It seems like everyone wants to put everything on your mobile phone now; this may not fit on your mobile phone, although you could certainly process the data and use a sensor like that. I mean, the idea was to use the economy of scale that created your mobile phone to make a device. So I think the idea is really to be rigorous about this and validate it in clinical settings first — that's the academic in me speaking. For a lot of these applications, if you're talking about commercial applications, it's tricky. Companies have been down this road before, measuring cerebral blood flow, and it has not been a good road to go down. But for me, I'm excited about just the scientific implications of the technology, and I think when there's real validation and acceptance, those other applications — putting it in other places, other settings — will open up. But right now, I think basically what Erin is doing is the way to go about it. Yeah.

Okay, that's a good question. I think there are a number of sanity checks that I would do; we've done some of them. I think transcranial Doppler is another good sanity check. If you ask what's the gold standard, it's perfusion MR or CT, but it's a bit challenging because of what we're measuring. Is that not the case? There's a drawback there; I'm curious what your thoughts are. Yeah. So, right, I think that's a good direction you're pursuing. There is no ideal validation; there's no real technique that measures cerebral blood flow directly. Even some of these MR techniques are essentially measuring plasma and correcting for hematocrit, whereas we're measuring red blood cells. So even at that basic level, the core CBF techniques are not really measuring CBF. I think it will be a question of a body of evidence that accumulates over time, and then people will start to accept it, and soon they won't even care whether it has been validated — the measurement will just be useful in its own right. But I think maybe we're not quite there yet. Perfusion MRI is not exactly easy either.

I have a question about your setup: what's your camera's frame rate? This one is at 333 kHz.
So you're just catching the tail. Right, so there is some decorrelation. The question is about the sampling of the autocorrelation function, which I should also mention is a limitation of this technique: with DCS and photon counting, you can achieve arbitrarily fine sampling of these fluctuations, whereas if you have a camera, you're limited by the sampling rate that the camera manufacturer provides to you. That is a limitation, and it's one of the reasons we're not pushing to larger source-detector separations. It turns out that with the time-of-flight-filtered approach, the decay rate is actually a little slower for the same level of brain specificity, so it does help a little bit, but that is a concern that will need to be worked out.

Yeah, I think the scales are just so different that it's hard to get useful information. Ideally you'd want to measure the scalp with the exact same weighting that your deep measurement of the diffuse light gives you, and that's just very difficult to do with any technique that I know of. But having the ability to isolate early times of flight is actually pretty good, because you're using the same source and detector; you're just getting early times of flight and late times of flight, and you can use them to regress out the superficial tissues.

Well, thank you so much. This was wonderful. Thank you.