So thanks very much, it's great to be here. I'm going to talk to you today about some work we're doing in my lab that relates to behavioral medicine. This is an aspect of health and medicine concerned with the behaviors of an individual that affect their health outcomes. Just some motivation for this focus: increasingly, our health care system faces a burden of disease that comes from essentially behavioral sources. You have behaviors like smoking, with the long-term health consequences we all know about, and poor diet and physical inactivity, which are another source of challenges for health outcomes. You can see it in the public health analyses, both in terms of the dollar costs to the health care system and in measures of preventable death resulting from these health-related behaviors. This has been known for some time, and among the long list of factors, smoking and poor diet with physical inactivity are two of the biggest contributors to negative health outcomes. What's important to notice here is that in many cases the individuals producing these behaviors have a desire to change them. A large number of smokers engage in a quit attempt each year that unfortunately does not succeed, and people have a desire to get more exercise and eat a healthier diet, but in the stress of their daily lives they're not able to accomplish that. And so the opportunity we have here is to think about on-body sensing and other ways of collecting behavioral data in an automated fashion, on a large scale.
How could those provide a new set of tools, new ways to help someone bring about a change in their behavior that they're not able to accomplish completely on their own? That's a big area of research focus for us. Another place where this comes up is in the context of developmental disorders. Children that have a condition like autism, for example, if left to develop in a typical fashion, are not going to acquire the skills they need to succeed in life, so there's a desire to identify the children that are at risk and then begin early, intensive behavioral interventions with the goal of changing their developmental outcomes. This is another case where it's the behavior of the individual, in this case the child, that ultimately contributes to a potentially adverse outcome. Autism and other childhood developmental conditions receive a lot of attention because of their growth in prevalence and the costs driven as a consequence. In the case of autism there's actually been a lot of room for optimism: some studies have now shown that early intensive behavioral interventions do have an impact on outcomes, and there's also been progress in laws, on a state-by-state basis, requiring insurance providers to cover autism treatment. Georgia joined the ranks of those states a few years ago, and because treatment is now reimbursable, that's created growth in the market for autism treatment. But the challenge here, and it's a similar challenge for adult behavioral interventions, is how to ensure that all the folks who need some kind of intervention are able to get it, without access being limited by the number of people available to provide services, because that's inherently limited. And then there's a big problem in general in understanding whether or not a treatment is working, because behavior change tends to happen over a long period of time, and relapses and other setbacks are very common. So understanding the status of the person in terms of their behavior, their health-related behaviors, their mental health, and understanding whether the intervention they're getting is working for them, is a real challenge. The goal of my lab is to look at using technology to address these questions. One way to think about it is that it's all about behavior change: understanding the behaviors being produced, the things you want to change; understanding the antecedents of a behavior, how it is produced, where and when it is produced.
And then understanding the underlying causal modeling: why are certain things happening, why is someone having difficulty making the right things happen in their life? The goal of all of these research questions is really to provide a better set of tools, tools that will enable a parent who has a child with autism to participate more effectively in that child's treatment, or allow a lifetime smoker who wants to quit to have a better shot at a quit attempt that might succeed, and then maintain abstinence over a longer period than they could unaided. Those are the goals that motivate the work in my lab. And just to add: if there are any questions or comments, or if there's something you violently disagree with me about, feel free to jump in at any point; I'd love to have your questions and comments. Any comments or questions so far? I'm just curious, as a sample: how many people know someone either with autism, or someone that has a child or a relative with autism? So, a few people; that's pretty common. OK, so let's go forward and start talking about some of the technology. There are some interesting ways that AI has a chance to have an impact in this space, and I think in turn these health applications can help drive AI in certain directions, as you'll see. So this is not just a public health goal that's good to work on; these topics will actually drive our technology in interesting ways. You can ask what is needed to accomplish these goals, and I think there are at least three things. We need new sensors: better ways to measure the behaviors that are important, especially in daily life environments, not just in laboratories. We need better methods: ways to predict what's going to happen and identify sources of risk. And we need platforms that will allow these technologies to be deployed on a large scale, so they can be available to many people. We're doing work in my lab that addresses all three of those needs, and I'll illustrate each of them in the rest of my talk with what I think is a particularly interesting example of each requirement. So I want to talk to you first about sensors. It's a very exciting time to be in the sensor data analytics business, because the number of sensors is growing at a phenomenal rate and their adoption is growing at a phenomenal rate. We're drowning in data to some extent, and that's an opportunity we can take advantage of. I want to focus your attention on one particular kind of sensor which is less common and still, I think, at an early stage of adoption and development, but has a lot of potential to be really useful in these health-related areas: wearable cameras. These let you get a sense of what are called a person's visual exposures: what are they seeing as they go about their daily life, what kind of inputs are they receiving? You can also use the sensor to determine what's happening: what are people doing, what kind of things are they involved in? This is a new way to get this kind of data that's just becoming feasible.
And so there are a lot of platforms coming on the market that incorporate a camera into some kind of wearable device. Some of these are no longer on the market, some are doing fine, and some are unclear, but this is a very dynamic area. In my lab, with a couple of PhD students, we were involved in helping to create a subfield of computer vision (computer vision being all about analyzing imagery to learn about the world) called first-person vision, which is the analysis of video coming from a wearable camera: video containing images of the world captured from the first-person perspective. What is the first person seeing? I just want to share some statistics with you. There was a study done recently at CMU looking at very short, so-called micro-videos, the kind of thing you might exchange on Snapchat or Instagram, for example, and their survey showed that about twelve percent of these videos come from some kind of head-mounted or wearable camera system. You can see examples here, and the exciting thing is that these are not just the things you would expect, like some kind of extreme sport (whatever that person is doing, I'm definitely never going to do that), but also more mundane things like having a meal or interacting with friends. This is very exciting to us: people are willing to capture these daily life moments, they're willing to share them, and in doing so they give us a unique lens into behavior: how is it being produced, what are the things that affect it? This in turn will hopefully allow us to do a better job of understanding these behaviors and providing new sets of tools. So what can we learn from these kinds of videos? There are lots of ideas; you can come up with a ton of them pretty quickly. One set of questions has to do with simply what's going on: what is actually happening, what is the person doing? Here you can see some examples of health-related answers to those questions; there are lots of other interesting things to ask from a non-health perspective as well.
What I want to share with you now is a technical story: how this modality of first-person video is really useful, and what distinguishes the analysis of these videos from other kinds of video analysis you might have seen before. The key point is that this camera is physically on your body. That's the magic: it's on your head, and as you go about your daily activities, as you move your body through space, as you attend to different parts of your environment, you're taking that camera with you. The camera is being moved around in the scene, following your own body movements as you form different goals and accomplish them. So we have an implicit record, in this video, of how a person decides to approach their environment and what they're doing to accomplish their goals. My students have done some work to identify specific cues, specific signatures if you will, in the video that give us information about the participant. One set of cues has to do with motion: as you move your head around, the camera moves, and that video motion, which we can easily extract with computer vision algorithms, tells us something about how the person is allocating their gaze and attention in space. There are also the hands, which tend to be very visible when you're looking forward at your workspace; you see your hands a lot, and it turns out we can find the hands very reliably and use them as a cue to understand what people are doing. And if we have other kinds of hardware on the glasses, we can do more than capture the first-person view of the scene; we can actually capture the eye movements, and I'll show you some examples of how we might use that going forward. Here's an example of what I'm talking about: the ability to pull out a hand mask in real time. This is a mask that segments the person's hands in the image. We can get these masks very reliably, and this gives us a very powerful cue for understanding what a person is doing with their hands, which of course covers a large set of the things we do in daily life that describe our activities. And this detector is quite accurate, certainly good enough for use. Another important signal is the gaze. Here you can see the result of gaze analysis done by a pair of wearable glasses that track eye movements; the colorful little donut is the eye movement measured by cameras in the frame of the glasses looking back at your eye. One of the earliest pieces of work we did in this space was a paper where we showed we could actually get useful information about the participant's visual attention without having those glasses at all. I'll show you a result here. This is a typical video clip from our Aware Home lab at Georgia Tech, where someone makes scrambled eggs as part of our research activities.
What you see on the right is two outputs. The green circle is the output of the eye tracking glasses I just showed you: a gold-standard measurement of where the person is looking, based on literally tracking their eye movements over time. The colorful little donut pattern is the output of my student's algorithm, which estimates where they're looking without using their eye movements at all. It's not looking at their eyes, it doesn't use the other cameras; it just uses the video here on the left, just these pixels, but by analyzing them appropriately we can actually predict where they're looking. To see why this is possible, consider that your activities and your attention are all coordinated: your brain is coordinating your eyes, your head, and your hands to accomplish tasks, and we can exploit those cues to make predictions. I'll just hit the highlights and show you some of the cues we're using, so you get a sense of what's going on. One thing to point out: whenever you do an eye tracking study using a standard monitor-based eye tracker like this one, you can build a prior model of where you're looking on average. That kind of prior model fitting amounts to literally fitting a Gaussian distribution to a bunch of fixation data from the tracker. If you do that for a monitor-based eye tracking study, you get a big, broad Gaussian like this. If you do the same analysis, fitting a Gaussian to the fixation data, for a setup where you're wearing a wearable eye tracker like this one, you get a much more compact Gaussian. That's because in real life we don't keep our head still and move our eyes a lot; we do just the opposite. We move our head, we orient to the part of the scene we're going to look at, and then our eyes make small adjustments to fixate what we're interested in. It's interesting to think about the difference between these paradigms: the wearable case is real life, ninety-nine percent of your life, while the monitor is in some sense a fake laboratory setup, and they're actually very different in how your attention is allocated.
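To make that center-prior idea concrete, here is a minimal sketch, assuming you have an array of recorded fixation points (the data below is a synthetic stand-in, not real tracker output): fit a single 2D Gaussian to the fixations and use its density as the prior. A broad covariance is what the monitor-based setup gives you; a compact one is the wearable case.

```python
import numpy as np

# Hypothetical fixation data: (x, y) gaze points in normalized image
# coordinates, one row per fixation. Real data would come from the tracker.
fixations = np.random.rand(5000, 2)

# Fit the center prior: a single 2D Gaussian over fixation locations.
mu = fixations.mean(axis=0)            # mean fixation location
cov = np.cov(fixations, rowvar=False)  # 2x2 covariance matrix

def center_prior(xy):
    """Density of the fitted Gaussian at a query image location."""
    d = xy - mu
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

# The spread of `cov` is the whole story: compact for a head-mounted
# camera, broad for a monitor-based eye tracking study.
print(center_prior(np.array([0.5, 0.5])))
```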
One consequence of this is that if you want to know where someone is looking and they're wearing a camera, you can guess right at the center, and it's not a bad guess. It's not perfect, of course, but it's not bad: even a wearable camera by itself gives you useful information about where somebody is likely to be looking, even in the absence of an eye tracker, which is kind of interesting. Then head motion plays a role. Here's an example where the center prior is not very helpful: you make a big gaze excursion, and it turns out that can be modeled in a linear fashion. We can take the velocity of the head in the horizontal direction and predict the amount the gaze is going to shift away from the center prior with a simple linear model, and that becomes another feature we use to predict the gaze. The last cue, which is the most important, is the hands. We can localize the hands in the image and then predict where the gaze is going to be located relative to the hands, and that makes a really nice compact cluster. So it turns out you can do a nice job of predicting where someone's looking by tracking their hands and following them around in the video. We built a system with a temporal model describing how the gaze is being generated, using the features I described to predict the gaze locations, and we collected a large data set to validate this as well. I'll show you some results. Here's a successful case, a bimanual activity, where we're predicting the gaze location quite accurately. Here's a less successful case: someone is searching for an object and picking one up, so they're doing two things at once, which people are very good at; our algorithm doesn't know they're doing two things, so it doesn't get them right.
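Here is a toy illustration of that head-velocity cue, with synthetic numbers standing in for real tracking data: fit a linear map from horizontal head velocity to the horizontal gaze offset from the center prior. The actual system combines this cue with the hand locations inside a temporal model, which this sketch does not attempt.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-ins: per-frame horizontal head velocity (recoverable
# from camera motion) and the observed gaze offset from the prior center.
head_vx = np.random.randn(1000, 1) * 20.0                    # pixels/frame
gaze_dx = 1.5 * head_vx[:, 0] + np.random.randn(1000) * 5.0  # pixels

# The cue is roughly linear: gaze offset ~ slope * head velocity.
model = LinearRegression().fit(head_vx, gaze_dx)

# A large rightward head motion predicts a large gaze shift away
# from the center prior.
print(model.coef_, model.intercept_)
print(model.predict([[30.0]]))
```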
The numbers are pretty good as well. These are PR curves; I won't go into too much detail here, except to say that the blue curve is the center prior: the accuracy in predicting the gaze location if you always say the person is looking at the center of that Gaussian. Below this curve, meaning worse performance, are a lot of methods you might have seen before that use what's called visual saliency. They try to analyze the image, look for places that might be brighter or shinier or have high contrast, and say you'll look at those places because they're visually distinctive. That's almost an industry in my field, people working on this visual saliency problem, and some of that work is of questionable value; it does a very poor job of predicting where you're going to look in an actual scene where there's a task. So the task is really important: it obviously drives our behavior, and without it you're not going to make accurate predictions. If you then layer the hands and the other cues on top of that, you get the best performance. The summary here is that these cameras have real power. We can deploy them, and they can give us accurate measurements about things of basic importance, like where you're looking and what you're doing. The challenge then is to find ways to utilize these sensors in interesting new health-related applications, and I'll give you an example in a second. We've also done some work in action recognition. I'm not going to belabor this too much, except to say that there is deep learning underlying this as well, which is not shocking news to any of you, I'm sure. The challenge there is to utilize all the different modalities we have: the raw pixel data, the motion data we can compute, and the hand information. We have a multi-stream architecture right now that does that fusion and gives us the best results, and these are some new numbers that are quite good. I want to move on now and talk about an application of this technology that's more health-related, but I can stop first and see if there are any questions. Any questions or comments before we move on? Yes, please. Yeah, that's an interesting question. Just to restate it: to what extent does this build on earlier work in analyzing features and proposing ways to represent images, and what's the history of the trajectory of this kind of work? I think the desire to look at these wearable cameras is fairly recent, and really was led by our group in a certain way. The earlier results that I showed you, in fact the cues that I showed you, were produced by a method using the kind of basic visual features that people in computer vision invented and explored some time ago. One specific example is what's called improved dense trajectories, which is a way of describing the motion in a video by analyzing the flow of the pixels; that feature was invented by Cordelia Schmid's group, was widely used for action recognition, and we used it as well in our earlier studies.
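As a rough illustration of the kind of motion feature these trajectory-based methods start from (this is not the improved dense trajectories code itself, and the video filename is a placeholder), you can compute dense optical flow with OpenCV and summarize each frame with a magnitude-weighted histogram of flow orientations, a distant cousin of the descriptors used in that line of work:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("egocentric_clip.mp4")  # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

descriptors = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow between consecutive frames (H x W x 2).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Crude per-frame motion descriptor: an 8-bin histogram of flow
    # orientations, weighted by flow magnitude.
    hist, _ = np.histogram(ang, bins=8, range=(0, 2 * np.pi), weights=mag)
    descriptors.append(hist / (hist.sum() + 1e-6))
    prev_gray = gray

X = np.array(descriptors)  # one 8-dim motion feature per frame
```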
But now of course we're all embracing deep learning tools and using features that are learned from data. We do what everyone else does: we start with a pre-trained model from some standard dataset like ImageNet, some big image collection, which gives the pre-training for the weights so you have a starting point, and then we fine-tune on our dataset to adapt it to the specific properties of our setup, the specific properties of our first-person videos. So I'd say it's a pretty similar pipeline to many other computer vision problems, but our modality is a bit unique, and it produces models with some unique properties. That's a great question, thanks for asking. Other questions or comments? All right, so now let me show you how this might be used in a specific application, and I'll take one from our work with autism. This work got started in 2010. We were very fortunate to get a large NSF grant, with Georgia Tech as the lead institution and me as the lead PI, and we had a great team of collaborators from many different institutions. We all came together to look at this problem of autism and try to understand how sensing and modeling and machine learning could be used to address this condition in different ways. There was a lot of great work done by this team; I'm not going to have a chance to go into hardly any of it, but I'll show you one finding from our group that I think illustrates what might be done. Just to give you a bit of background on autism: autism is known to be a disorder of the brain; it's a developmental disorder that begins early in development, and it has no cure and no biological marker. With autism, the key thing to understand is that if you want to know anything about the condition, you have to observe behavior. If you want to know whether a person has autism, you have to observe their behavior; if you want to know whether their treatment is working, you have to observe their behavior; and if you want to give an intervention, you're typically doing a one-on-one social interaction, working with a child. It's kind of like playing, but the adult who's doing it has very specific therapeutic goals, and they're using this play context to try to bring about, over time, a change in the child's behavior. So this is a field where the condition is behaviorally defined, and all the crucial problems involve essentially one-on-one interactions with humans. There's nothing inherently wrong with that; that's actually how development goes. You develop by interacting with people, peers, caregivers, and so on in your social environment. The challenge is when you want to get data, when you want to measure things quantitatively. Clinical medicine is driven by data: the ability to measure, to get a lab test done, to quantify, and over time to do what's called evidence-based medicine, where you give treatments based on objective data that tells you those treatments are working.
And that's the challenge in this field: the data is so hard to come by, because the behaviors you're talking about are very complex and they're produced in natural settings, like playing with a child at home, where it's very hard to go in and do intensive, in-depth data collection. So the question is: could technology be used to provide the kinds of data we'd like to have but are difficult to obtain otherwise? I'll show you an example in a second of how that can be done. And just to point out, in autism in particular there really is no gold standard right now for measuring things like treatment response, so there's a real need to develop new ways to measure treatment response, and especially to do it objectively, based on data. Autism is a complex condition, and the set of behaviors of concern is very broad. There's no one thing that is the key to autism, such that if you just get that one thing right, you're handling it properly. But when you're faced with a condition that is multifaceted, a reasonable strategy is to pick something important and work on it, then leverage that success to work on other things, and that's the approach we've been following. I'll show you one result, by now an older finding, around a very specific social behavior known as eye contact. This is looking at somebody in the eyes. It's a really important social behavior: we use it in lots of different ways, to build social relationships and to coordinate conversations; it's widely used in interaction. And it's known from a large literature to be affected in autism: kids with autism make much less eye contact, and may even, later in their development, avoid it in an explicit way. We know that if a child is getting an intervention, one of the goals would be to increase eye contact, so that it's used more often and becomes part of the child's set of social skills.
I should mention, I guess I did, that the primary deficit in autism is social interaction: the ability to build social relationships with others. All the behaviors connected to social interaction, including language, are typically impaired in autism. We picked this problem in part because it's important, and in part because we know it's hard to measure, so we can do something here; we don't have a lot of competition from other technologies or approaches. The gold standard for eye contact is a setup like the following. In a research setting, if you want to answer this question, you get a bunch of undergraduates, you train them to reliability, and they become a sensor. They can look at a picture, because they're humans, they're amazing, and say: yes, she's making eye contact. That ability to manually code videos frame by frame allows you to collect data on this behavior. From a research standpoint that's a scalable approach, in a way, because undergrads aren't that expensive, so you can make it work. But in a clinical setting, a practical setting, it's totally not scalable; it makes no sense, actually, and it's not done. So if you're getting an intervention in a clinical setting, all you're going to get is a qualitative impression. The clinician or the person providing the intervention may make some notes as they're working with you, "increased today over last week, slightly more," some kind of qualitative notes. There's really no hard data about this social behavior, which is so difficult to measure. To get a sense of how hard this is: later this week, when you're talking with a friend, try to keep track, while you're talking to them and engaging them, of how many times you make eye contact. If you try that, you'll realize how hard it is to really measure this. So our idea was: let's use a technology-based solution to measure this important social behavior, and see how that goes. The idea is actually very straightforward. We have these glasses I showed you, and we put them on the examiner. The examiner is our partner in this enterprise: we can essentially load them up with sensors, and the child, on the other hand, is left unencumbered. We're not trying to put any sensors on the child. That's a strategy we adopted early on, and it's important if your approach is to be scalable and usable outside the laboratory. The child just does what they'd do normally, and the examiner wears this pair of glasses with a camera, so we get these really nice images of the child. You can see some examples on the right: all kinds of interesting social behaviors, all kinds of interesting presentations. This is in contrast to prior datasets on eye and gaze behavior in the literature, which are just not very interesting in terms of the social behaviors and the contexts in which the eye movements and eye gaze are captured. So there's an important element here of getting naturalistic behavior. And we have another deep architecture, developed by one of my students.
It's basically about predicting eye contact, which is essentially a binary classification problem. If you think about the setting a little, it's sort of obvious what's going on. The camera is right here at the bridge of my nose, in the pair of glasses, so if you're looking at my eyes, you're looking right at the camera, and if you're looking somewhere else in the environment, you're not looking at the camera. So we can make this a binary classification problem, looking at the camera or not, and that becomes our definition of eye contact. It's very simple once you think about it the right way, and it lends itself very naturally to a solution. The only part that's a little non-trivial is the need to handle head pose: you can't just look at the eyes to tell eye contact, you need to know how the head is oriented as well. We do that with a two-stream architecture. We have a stream up here that's predicting head pose, and a stream down here that's predicting the binary output, eye contact or not. During training, we use a software package that does head tracking, and we use the good sequences, where the head can be tracked reliably, to provide a training signal, the loss on this branch of the network. That branch is trained to regress and produce the pose, so we use a regression loss, and down here we use the standard cross-entropy loss for the classification label. The good thing is that this exposure to head pose information gets the downstream features tuned up so they're sensitive to head pose, and that produces a better classifier than you'd get without doing that. At testing time you don't have access to the head pose; we can't use the software package, because it fails too frequently and isn't reliable enough to support our pipeline in general. So at test time we just use this stream, and that gives us the output we're looking for. The ROC curve is pretty good: below the red curve, which is ours, are a bunch of previous works, including some previous work of our own, and we're beating them pretty handily now. The green curve is what you get if you don't do this head-pose-based training, so that's pretty important to do.
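Here is a hedged PyTorch sketch of that two-branch training idea. The backbone, layer sizes, and batch below are invented placeholders rather than the actual architecture; the point is the auxiliary pose regression loss, masked to the frames where the head tracker succeeded, trained jointly with the cross-entropy loss on the eye contact label.

```python
import torch
import torch.nn as nn

class EyeContactNet(nn.Module):
    """Sketch of the two-branch idea: a shared backbone feeds a
    head-pose regression branch (supervision at training time only)
    and a binary eye-contact branch (the one used at test time)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in for a real CNN
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.pose_head = nn.Linear(256, 3)        # yaw, pitch, roll
        self.contact_head = nn.Linear(256, 2)     # eye contact or not

    def forward(self, x):
        feats = self.backbone(x)
        return self.pose_head(feats), self.contact_head(feats)

model = EyeContactNet()
xent = nn.CrossEntropyLoss()

# One hypothetical training step on a fake batch.
imgs = torch.randn(8, 3, 64, 64)                   # cropped face images
pose_gt = torch.randn(8, 3)                        # from the head tracker
pose_ok = torch.tensor([1., 1, 0, 1, 0, 1, 1, 1])  # tracker reliable?
labels = torch.randint(0, 2, (8,))                 # human-coded eye contact

pose_pred, logits = model(imgs)
# Regression loss on the pose branch, applied only where the head
# tracker succeeded; cross-entropy on the eye-contact branch.
pose_loss = (pose_ok.unsqueeze(1) * (pose_pred - pose_gt) ** 2).mean()
loss = xent(logits, labels) + pose_loss
loss.backward()
# At test time only `logits` is used, so no head tracker is needed.
```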
One nice property, on the right, is that we divide the accuracy up by kids that have autism and kids that are typically developing, so we're looking at the performance of this method on the autism subpopulation, and there's essentially no diagnostic effect: it doesn't matter whether you have autism or not, the model works equally well, which is what you'd expect. It would actually be very strange if that were not the case, but it's good to check. As far as I know, this is probably the first result of its kind using computer vision technology like this with kids with autism. We're really proud that, through the collaboration with our colleagues in Cathy Lord's group at the Weill Cornell Medical Center in New York, we had a large enough sample of kids with autism to actually do this evaluation. Here's the method running: when it turns green, that's eye contact. This is a typically developing child in our lab at Georgia Tech; we have a little lab where we can bring kids in and play with them as part of our research, and you can see this boy exhibits quite a bit of eye contact throughout this little interaction with the examiner. This is a boy with autism at Weill Cornell in New York, and he makes less eye contact, but we do get a few moments where we can find his eye contact behavior; there's one there, for example. The technology is now working well enough that you can imagine using it in a variety of settings, and we're beginning to explore that with some collaborators here in Atlanta and other places. I think this is a nice model for the goal here, which is to define some construct, some behavioral condition of interest like eye contact, and then think about a technology solution where you insert some technology into the interaction that allows you to measure the behavior automatically, and hopefully densely, so over time you get a very fine-grained measurement, but do it in such a way that it doesn't change the interaction. The interaction is still natural, it still occurs the way it normally would, but now you have this sensing giving you really objective data. Then you can think about producing, for a child in this setting over time, something like a growth chart, where you show that eye contact is increasing over time, or plateauing, and hopefully not decreasing. By having that objective data you can track, in a really fine-grained way, how this behavior is being produced, and then look at how it's affected by things like a change in treatment or other things that impact development. That's our long-term goal: to have this as a kind of commodity technology that could be used in a variety of settings. Any questions or comments about that? Yes.
Yeah, that's right, that's a good question. The question is about reactivity: basically, do the glasses, or the interaction with the glasses, change the child's behavior? We actually looked at that in a study I didn't mention. We had a project with Wendy Stone's group where we looked at three subject conditions, kids with autism, kids with another developmental delay, and typically developing kids, and three glasses conditions: the glasses I showed you, some ordinary fashion glasses with big frames, and no glasses. In that three-by-three design we looked for any effects where one of the conditions affected the amount of eye contact more than the others; in that study, eye contact was measured by human coding, just coding the video to see if eye contact took place. We didn't find any effects in that study, and that let us conclude that the glasses are not a source of reactivity. It makes sense, because these are kids that are pretty young, eighteen to thirty-six months, and they don't see the examiner outside of the clinical context, so they don't really know how the examiner would look in daily life, and kids that age habituate pretty quickly to these kinds of things. You can imagine older kids might notice something more readily, and they might perseverate on it more. But in general, we'd expect these to be things you'd use for the long haul; you'd use them and keep using them for quite a while, and kids typically habituate to things that are always there. They're no longer interesting once the novelty wears off. So I'm optimistic about this being a usable technology even in homes, where mom or dad could say, "Let's put the glasses on for our play session, Johnny," and after a while Johnny is like: yeah, OK, those are weird, but they're not going to affect my behavior. So I think there's reason to believe we can get away with that, but it's a great question; you should always ask about reactivity whenever you introduce some new technology into a setting for the first time. Anything else? OK, so let's move on. I want to talk a bit about methods. We've talked about sensors, but there are other challenges we face as well, having more to do with the nature of the data that gets produced in these settings; we tend to get a lot of interesting machine learning problems in this domain.
A lot of the signals are produced in time, so we rarely have IID data, which has been the bulk of the experience in machine learning so far; we rarely have that, actually. And not only are we interested in modeling temporal data, we're interested in making predictions: before there's a risk of some adverse outcome, we'd like to know it's going to happen, and we'd like to take action, actually intervene if we can, in time to affect the outcome. So not only are we modeling temporal data, we're trying to build predictive models, and not only predictive models, we're actually trying to do interventions. We'd like to close the loop and use the data to drive how we interact with participants. I want to show you a little example of that in some machine learning work we've done to address the challenges that arise in this setting. It's very common in mobile health to use smartphones and ask participants to answer questions throughout the day. This is an example of some questions you might get asked in the context of a research study. You might also do intervention: you might suggest things the person could try, or get them to use an app to practice some particular skill they're trying to develop. So this is a platform for both collecting data and providing feedback and interventions to participants. The basic kind of modeling you'd like to do is the following. There's some risk you're concerned about; this could be the risk of relapse to use, in the case of participants in a smoking cessation attempt or quitting some other kind of drug use, where you're essentially worried about their likelihood of relapse, and you're trying to provide them with tools so they can change the course of events before they relapse back to use. To really support that kind of intervention, you'd like to have some risk measure, some variable that captures risk, and to track it over time, shown here for every hour, and to make predictions based on the risk about future lapse probabilities: the likelihood of lapsing three or four hours in the future. This is a future prediction based on current data, and the need for it comes from wanting to intervene. You'd like to know that four hours from now, the way things are going, you're going to be at risk, so let's do something now, before the situation gets out of hand, to try to prevent or hold back that outcome. So this is really a predictive modeling problem, but it's challenging because of the need to make these long-range predictions, and we'd like to do it in real time, using data collected from the phone, for example, or from other wearable sensors. As I said: intervene if the risk is high, bring the participant to a lower state of risk, and decrease the likelihood of these adverse outcomes. One of the tools that's very classical in machine learning, but is actually very relevant for this kind of study, is latent variable models. People in psychology who deal with relapse and dependency tend to think in terms of behavioral constructs.
These are constructs like risk, whether someone is engaged with the intervention or not, and others like craving; there's a long list of constructs people have come up with, and it's very attractive if you can measure those constructs in some way. Latent variable models are an attractive way to do that: every state of interest can be modeled as a latent variable, and then you can do your modeling work, connecting those latent states to the data that's available to you, whether that's answers in an EMA context, where the participant is just answering questions, or more complex things like on-body sensors producing real-time data that might also speak to some of these states of risk. There are tools that exist for doing this, but there are some unique challenges in this domain that really need to be addressed. One unique challenge is the temporal structure of the data: a lot of the data we care about in these mobile health settings can really be thought of as event data. Often you've got a continuous sensing component, where you're able to wear an accelerometer or a gyro and get continuous data about movement or activity, but a lot of the time that data doesn't really speak to the things you care about, and what happens is there are moments when something really important occurs. For example, you give someone an EMA, a random prompt, or they decide to provide some input, and the answers they provide to those questions are really valuable data, but that happens at one point; it's an event, not something happening all the time. Or somebody gets into a state of high stress, and we know that stress is a possible risk factor for relapse.
Those high-stress events don't happen all the time; they're not happening continuously throughout your life. Each is a significant event, and once it happens you need to take it into account and decide how to react going forward. So there's a real need for models that handle this kind of event data, where the important things don't happen on a clock but can happen at any time, driven by continuously evolving processes. There are tools out there for handling this kind of data that really haven't been utilized much by the community so far, and we've been working in my lab to make these tools more usable and to develop better algorithms and software for working with them. One example is the continuous-time hidden Markov model. HMMs in general are a good tool for building latent state models of temporal data, the go-to tool for that task across lots of applications in speech, activity recognition, and so on. Classical HMMs essentially work with regularly sampled data: there's a clock, every time the clock ticks you get a new frame of data, and a key property of these models is that all the state transitions occur at the clock ticks; every tick is a new chance for a state transition. Event data is fundamentally different. You get observations infrequently in time, whenever one of these events occurs; whenever the subject fills out an EMA, you get an observation. But the latent state is actually changing continuously in time, and in fact it can transition at arbitrary times between observations. So you've got a fundamentally more challenging modeling problem, with a latent state that's changing in ways that are not necessarily captured by the observations. Without getting into all the technical details, the basic challenge is that whenever you fit these models, using EM for example, you've got to account for the unobserved transitions through some kind of expectation-maximization process, and that process is especially challenging here because there could be very many transitions, an infinite number actually, which creates computational challenges in working with this type of model and fitting it to data. I'm not going to get into all the gory details, but we had a paper at NIPS a couple of years ago that showed how to do this very efficiently: we studied the computational structure of this learning problem and found some very efficient algorithms for updating the parameters of the model given data.
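To give a feel for what makes the continuous-time version different, here is a minimal sketch with toy numbers (this is not the algorithm from the paper). A CT-HMM is parameterized by a rate matrix Q, and the probability of moving between latent states across an arbitrary gap dt between events is the matrix exponential of Q*dt, which is exactly what lets the state change at any time between observations.

```python
import numpy as np
from scipy.linalg import expm

# Toy rate matrix Q for 3 hypothetical latent states (say low/med/high
# risk). Off-diagonal entries are transition rates; rows sum to zero.
Q = np.array([[-0.3,  0.2,  0.1],
              [ 0.1, -0.4,  0.3],
              [ 0.0,  0.2, -0.2]])

def transition_matrix(dt):
    """P(state j at t+dt | state i at t) for an arbitrary gap dt.
    Unlike a discrete-time HMM, dt is whatever the event spacing is,
    and any number of unobserved transitions can occur inside it."""
    return expm(Q * dt)

print(transition_matrix(0.5))  # half an hour between events
print(transition_matrix(6.0))  # six hours with no observations

# Filtering then looks like a standard HMM forward pass, but with
# expm(Q * dt_k) in place of a fixed transition matrix at each step.
belief = np.array([1.0, 0.0, 0.0])  # start in the low-risk state
for dt in [0.5, 2.0, 6.0]:          # irregular gaps between events
    belief = belief @ transition_matrix(dt)
print(belief)
```

The expensive part, which the efficient algorithms address, is running EM through these matrix exponentials while accounting for the expected number of unobserved transitions inside each gap.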
Since then, we've done some work on using this to make predictions of future risk. Here the idea is that the latent states play the role of a feature; you can think of them as features that tell us something about the future state of risk, and we'd like to use those features to make predictions of risk and compare against more standard techniques for risk prediction from the behavioral literature. Again, without going into all the details, this is from a paper we had last year, showing our predictive model making predictions of risk using our latent state approach versus other, more standard predictors, either LSTM or logistic regression risk predictions, and we're doing a better job than these other methods at maximizing the accuracy of the risk prediction. So this is an interesting problem, we have a nice solution, we've submitted a paper with a refinement of this, and we're going to use it now in some other studies. And just to give you a sense of what the model is saying, this is a kind of sanity check beyond the numerical accuracy: when the model is in different states, how do the probabilities behave? This is the kind of thing your behavioral science collaborators want to know, and they want it to make sense. Luckily for us, in this setting it makes a lot of sense. When the participant is judged by our analytic model to be in a state of higher risk, and their engagement with the intervention is low, we predict the probability of lapse, the thirty-minute lapse prediction here, to be quite high; when they're in a state of low risk and engagement is low, we predict a much lower probability. If it were different from this, it would be shocking; it would mean the model isn't working at all. But in this domain it's very important that these models be interpretable, so you have to produce these commonsense kinds of checks that hopefully align with your collaborators' clinical perception and insight, and in this case we've got something that makes sense to them.
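To give a flavor of how a latent state can feed a lapse predictor, here is a toy version in which everything is synthetic: the per-hour posterior over three hypothetical risk states becomes the feature vector for a logistic regression predicting lapse within some horizon. This is one simple way to wire the pieces together, not necessarily how our model or the baselines are implemented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-hour posterior over 3 latent risk states
# (rows sum to 1), and a binary "lapse within the horizon" label made
# artificially correlated with the high-risk state.
state_post = rng.dirichlet(np.ones(3), size=2000)
lapse = (rng.random(2000) < 0.1 + 0.5 * state_post[:, 2]).astype(int)

# Latent-state probabilities as the feature vector for prediction.
clf = LogisticRegression().fit(state_post[:1500], lapse[:1500])
probs = clf.predict_proba(state_post[1500:])[:, 1]
print("held-out AUC:", roc_auc_score(lapse[1500:], probs))
```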
Are there any questions about this modeling piece? OK, so let's go forward. I want to end by talking briefly about platforms. We got another center grant recently, looking at this task of building platforms for behavioral interventions in the field. That's our key aim right here: we're creating open source software that will allow people to carry a variety of wearable sensors into the field. Think Fitbits and other kinds of heart monitoring devices, various things that measure aspects of physiology, as well as apps on the phone that produce EMAs, for example. We'll have software that fuses this data together, synchronizing it in time and organizing it on the body, and from that data derives a variety of markers that speak to different states of risk and different health-related behaviors. Then we'll demonstrate the ability to trigger behavioral interventions based on the sensor data: the device is monitoring you, it judges that you're in a state of risk, and it triggers an intervention on the phone that allows the participant to modify their behavior in some way. And we'll demonstrate that this can be done in the field, in daily life with real people, in a study out in some major city like Chicago, for example. So this is beginning to move towards a technology that could actually be feasible. We have a great team, with a lot of different universities involved in different aspects of this, and we're building the platform I mentioned: a platform for collecting data on the phone, storing it in the cloud, and doing analytics both on the phone and in the cloud. All of this is open source; you can grab it and play around with it if you're interested in this sort of thing. We're moving towards having an ecosystem of markers, and we're doing a study right now in Chicago with smokers, looking at smoking lapse. That's our first target population, in the smoking cessation context, and we should have our first results coming out pretty soon. In this case, what we're doing with smokers is looking at their stress, and then providing them with some tools on the phone, some ways to manage their stress, which we can trigger whenever it looks like the stress is getting too high. So, to wrap things up: I want to thank all my collaborators, both at Georgia Tech and at our collaborating sites, and our sponsors as well, and just mention the main points. I think there's this important dimension of health which is not really present in clinical data, not a major focus within our current health care system in some sense, but it's extremely important. It really is the driver for most of the cost of our health care system right now, and it has to be addressed through a combination of new analytics and new kinds of on-body sensing that let us understand behaviors as they're being produced. These can be applied both in development and mental health, things like autism, and also in adult populations with chronic conditions. I think there's a great opportunity to develop these technologies, push them forward, and then, with the right collaborators, get them into use and see how they might change health outcomes. We're doing that in the context of our MD2K center, producing software that supports those applications.
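To make the closed-loop trigger idea concrete, here is a minimal sketch of the kind of logic involved; the function names, threshold, and cooldown are invented placeholders, not the platform's actual API.

```python
import time

RISK_THRESHOLD = 0.7         # hypothetical: predicted lapse risk
COOLDOWN_SECONDS = 2 * 3600  # don't re-prompt the participant too often

def predict_risk(sensor_window):
    """Placeholder for a real model (e.g. the latent-state predictor
    above) running on a window of phone and wearable data."""
    return sensor_window.get("stress_score", 0.0)

def send_intervention():
    print("Prompt: try the 2-minute breathing exercise in the app.")

last_prompt = 0.0

def on_new_sensor_window(sensor_window):
    """Called by the platform each time a fresh data window arrives."""
    global last_prompt
    risk = predict_risk(sensor_window)
    if risk > RISK_THRESHOLD and time.time() - last_prompt > COOLDOWN_SECONDS:
        send_intervention()  # close the loop on the phone
        last_prompt = time.time()

on_new_sensor_window({"stress_score": 0.85})  # demo call
```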
All right, thank you for your time. Yes, please. That's a good question: where do the main challenges lie? Do we need more sensors, or more analytics? I really think this is a new area in every sense of the word; there's been very little work so far on this kind of behavioral measurement task. The most mature by far is the mobile health community, where people with accelerometers and gyros have been looking at these problems for quite some time, but even that field is relatively immature if you look at how long people have been doing it and how complex the challenges are. So we have an overall lack of maturity, which makes it an exciting field, because there are many ways to have an impact and immediately do something new. But to get back to your question, I think it really is a combination of things. There are dimensions of the sensors that are always going to be a problem; battery life, for example, is never going to be as much as you would want. But there are other dimensions I think we could fix with some more research, so we actually have a chance to improve things. And even using off-the-shelf sensors, off-the-shelf cameras, accelerometers, and gyros, the underlying data modeling and analytics still need a lot of work; we still don't have the right tools, essentially, for really getting value from this kind of data. And I think sheer quantity of data is a challenge as well. Those of us connected to studies have access to relatively large data sets, but the size is small compared to the large clinical data sets, where your N can be extremely large. Because of the nature of mobile sensing, though, a single participant in the field can easily produce a terabyte of data pretty quickly, because these sensors are always on, in some sense. So it's a very interesting kind of big data task: it tends to be smaller Ns but huge amounts of data per participant, with a lot of temporal structure that needs to be analyzed. So I don't think there's a simple answer; I think it really does need progress on all fronts, and you can decide where you want to have an impact. Do you want to work on a new sensor, or on a new machine learning algorithm? There are opportunities in all these areas. My pleasure.