But I actually got my bachelor's degree at Tech, so it's kind of a homecoming for me coming back here. I've given this talk a lot of times before, which sometimes means the talk is going to be boring, but there are two reasons I hope that won't happen today. One, I don't really have an outline, so I'm going to be on my toes trying to get through this as cogently as possible. And two, because of a recent security patch to most browsers, I for some reason have to choose between being able to show you videos or having my speaker notes, and I decided to let you see the videos. So I'm going to be flying blind through this presentation; hopefully that will result in a better presentation, we'll see.

A lot of my work is oriented around this question: how can we design systems that encourage better cybersecurity behaviors? I think that's an important question because of how foundational security is to computing today. Without it we wouldn't use things like email, we wouldn't be uploading and sharing pictures of ourselves and our loved ones online, we wouldn't be managing our finances online. The world as we know it would be a little bit less convenient. But because security is so foundational to computing, and because there are still touch points where end users have to interact with systems to ensure their own personal security, it's really no surprise that cybercrime remains a massive industry today, or rather that the exploitation of weak security behaviors remains a massive enterprise. One IBM estimate puts it at 445 billion dollars per year in damages, with everyday professional cybercrime units suiting up, going to work for their regular nine-to-five jobs, and producing upwards of 250,000 new pieces of malware that target end users every day. Malware like the Dyre Wolf, which you might have heard of, which sends unsuspecting users emails that look like they've come from their friends or loved ones, gets them to install keyloggers so the attackers can steal their bank credentials and take money out of their accounts, and then DDoSes their servers so the end users don't know what hit them.

But the kicker is that much of the malware on the market today would be totally hamstrung if end users adopted recommended security systems and behaviors that are already out there: things like using two-factor authentication for important accounts, keeping their software up to date, and using a password manager. Typically, though, when we think about security in the end-user context, we don't think of ideal behavior. We instead think about things like people sharing their passwords with each other, or propping open electronically locked doors with garbage cans. I say this not to cast judgment on those users (I actually think that oftentimes what they're doing makes sense), but it does speak to a disconnect between how security systems are designed today and how end users actually want to use those systems.
And right now, a cybersecurity breach can compromise your personal data, your finances, a lot of scary things. But in the near future it's going to be a lot more than that. There's this increasing physicalization of computing; computing pervades everything around us. Pretty soon a breach isn't just going to be access to your finances. It's going to be access to your front door, to your smart car, to medical prostheses, to the little nanomachines circulating in your bloodstream and uploading data to your doctor. So I think it's more important than ever that we start to get end-user-facing security right, because it's going to be too late ten years from now when somebody's arm prosthesis shuts down on them. We need to start creating systems that end users want to engage with, rather than begrudgingly tolerate.

To start this line of work, I began by just asking people some questions. I did some interviews. I asked questions like: what makes people decide to use a PIN on their phone? What makes them decide to change their password? Or, more generally, what is the driving force behind a security decision, and what might we be missing? I want to read you a few quotes.

"When I first had a smartphone I didn't have a code, but then I started using one because everyone around me had a code, so I kind of felt a group pressure to also use a code."

"One of my boys wanted to use my phone for something, so I gave him my passcode. Not that I have anything that I didn't care for them to see or anything, but after that I changed it again."

"My friends have a lot of different accounts, the same as me, but they didn't get into any trouble, so I think maybe it'll not be dangerous to reuse all my passwords online."

Did anybody notice a trend? A lot of it is social. In fact, security behavior, like any human behavior, is largely driven by social influence, as illustrated by this Candid Camera clip, where the people facing the back of the elevator are confederates, and the protagonist eventually turns around to face the wrong way himself because everybody else around him is doing the same thing. This is a real thing that we as people do, whether we want to or not: we look to others for cues on how to act. In fact, of the over one hundred security behaviors that came up during these interviews, approximately fifty percent were socially driven. More recently, I ran a survey of over a thousand users, asking them about recent security and privacy behavior changes they had made, and approximately thirty-nine percent of the roughly two thousand behaviors I collected through that sample were at least partially the result of a social trigger. So the social component is clearly very important. Yet traditionally, when we look at cybersecurity, we tend to think of it primarily as a technology or algorithms problem. We try to improve crypto, we try to create systems that formally verify the correctness of code, we create trustworthy hardware, all of which is of course incredibly important, because without it we'd be hosed. But it doesn't really address the social dimension.
Even usable security, which expands our frontier a little bit and considers security a problem at the user-interaction level, primarily focuses on improving interaction for the individual: making authentication faster, notifications more understandable, things like that. But if my interviews and my subsequent survey are any indication, absent knowledge of how security and social behaviors interact, we have little hope of doing much better than where we are today, which is that people begrudgingly tolerate security and try to avoid it at all costs if they can.

Fortunately, we don't have to start from ground zero. There's a vast body of social psychology literature that can guide our understanding. Social psychologists have for decades been using simple social principles of human behavior to get people to do things like reuse towels in hotels, lower their energy consumption, or look up at the sky even though there's nothing there. My work tries to bridge this gap between social psychology and cybersecurity, because currently there hasn't been much cross-pollination between those two fields. I do this in two ways. One, I draw on data-science methods to empirically model human security behaviors, especially as they interact with other people. And two, I draw on my background in HCI, general computer science, and ubiquitous computing to invent novel end-user-facing security tools. In this presentation I'm going to present three different projects that illustrate my research process within this context of social cybersecurity, and by the end of it I hope to have convinced you of three things: one, social influence strongly affects security behaviors, and the design of a security tool affects its potential for social spread; two, social influence can be used to improve the awareness and adoption of existing security systems and behaviors; and three, there's a larger opportunity to reshape the future of end-user-facing security systems by designing them to be more social. So let's start with that first one.

[Audience question, inaudible.] Yeah, I have some hints at that, but I'm going to get to it later in the talk. We'll get to that. I do have some confidence, though certainly I don't think this is the only piece that was missing before security spreads like wildfire; I think it's an important piece that we haven't considered yet. So I'm going to provide first some empirical evidence to support that hypothesis, and then I'm going to share one way we can address it.

The first step in making security more social is to understand how social influence affects security in the first place. I wanted to answer a very simple motivating question, let me get this right: what effect, if any, does social influence have on the adoption of security tools today? I was fortunate enough to partner with Facebook to answer this question. I analyzed how the adoption of three optional security tools is affected by friends' use of the same tools, for about 1.5 million Facebook users. The three tools I studied were: login notifications, which send you a notification in case you've ever logged in from a suspicious context, like maybe a country you've never been to before;
login approvals, which is Facebook's version of two-factor authentication; and finally trusted contacts. Where the first two are standard security tools you would expect Facebook to provide, trusted contacts is a little bit different in that it's more social: it allows you to specify three to five of your friends who can vouch for your identity in case you ever lose access to your account. [In response to an audience question:] I didn't, partially because there was much less data available for that; and I did this study back in 2014.

So I started by just collecting data. I randomly sampled 750,000 users who had newly adopted one of the aforementioned security tools, and I also randomly sampled 750,000 users who had never adopted any of those tools. I collected this data over twelve days, balanced across all three tools.

With that data in hand, answering the motivating question really came down to: can we distinguish between who is a user and who is not, based on the presence of social influence in their social graph? I answered that question by borrowing a method from the social network analysis literature called matched sampling, whose key feature is helping us distinguish between homophily and social influence in observational data. I'm not going to go too much into that today, since I'm trying to finish by 11:45, but if you want to talk more about it, I'm happy to do that later. Here's a rough overview of how matched sampling works. First, you pick a variable that's a proxy for social influence; in our case it's easy, it's the percentage of your friends who use that particular tool. Then you cut that continuous variable to discretize it, such that the population is evenly distributed across the space. Once you do that, you can do all sorts of neat things, like specify exposure levels. If you have a user who, for example, has three percent of her friends using login notifications, you can say she is exposed at the first exposure level, but not at levels two, three, and four, because she doesn't have at least 7.3 percent of her friends using login notifications. Similarly, if you have a user with eleven percent of her friends using login notifications, you can say she's exposed at E1, E2, and E3, but not at E4.

Once you have these exposure levels, for each level you pair people who are similar, where one is exposed at that level and the other is not. For example, you create two different sets of people, and you might put Alice in the exposed set and David in the unexposed set, because Alice has at least five percent of her friends using login notifications but David does not. Otherwise, though, Alice and David are very similar: maybe they're both around the same age, went to the same university, have the same level of activity on Facebook, have similar sorts of friends, similar political affiliations, things like that.
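To make the matching idea concrete, here is a minimal sketch, assuming a hypothetical pandas DataFrame `users` with an influence proxy, an adoption flag, and a couple of matching covariates. The column names and the greedy nearest-neighbor matching are illustrative stand-ins, not the actual analysis pipeline:

```python
import numpy as np
import pandas as pd

def exposure_thresholds(users: pd.DataFrame, n_levels: int = 4) -> list:
    """Cut the influence proxy at quantiles so the population is spread
    evenly across the discretized exposure levels."""
    qs = np.linspace(0, 1, n_levels + 2)[1:-1]  # n_levels interior quantiles
    return list(users["frac_friends_using"].quantile(qs))

def influence_effect(users: pd.DataFrame, threshold: float) -> float:
    """Pair each exposed user with the most similar unexposed user on the
    observed covariates, then compare adoption rates between the two sets."""
    exposed = users[users["frac_friends_using"] >= threshold]
    unexposed = users[users["frac_friends_using"] < threshold]
    covs = ["age", "activity"]  # stand-ins for the real matching covariates
    matched, pool = [], unexposed.copy()  # assumes the unexposed pool is larger
    for _, u in exposed.iterrows():
        # Greedy nearest-neighbor match in covariate space (illustrative only).
        dists = ((pool[covs] - u[covs]) ** 2).sum(axis=1)
        match_idx = dists.idxmin()
        matched.append(pool.loc[match_idx])
        pool = pool.drop(match_idx)  # match without replacement
    matched = pd.DataFrame(matched)
    # Exposed-minus-unexposed adoption rate: >0 suggests positive influence.
    return exposed["adopted"].mean() - matched["adopted"].mean()

# levels = exposure_thresholds(users)            # e.g. [0.03, 0.073, ...]
# effects = [influence_effect(users, t) for t in levels]
```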
So in theory, with the variables we have, the only thing that separates Alice and David is the fact that Alice has five percent of her friends using login notifications; and similarly for Bob and Eve, and so on. At the end of this stage, for each exposure level, you have two sets of people who are paired such that they are very similar individuals, but where, based on the data we have, the thing that distinguishes them is that one is exposed at a certain level to friends who use the security tool we're studying and the other is not. Then you can calculate the rough effect of social influence on the adoption of that tool at that exposure level by subtracting the adoption rate of the unexposed group from that of the exposed group.

Here's what that would look like. Imagine a graph where the x-axis is exposure, the aforementioned discrete levels of exposure to friends who use a particular security tool, and the y-axis is the social influence effect I just mentioned: exposed minus unexposed adoption. If social influence had absolutely no effect, you'd expect to see a flat line at zero, meaning the difference in adoption rate between the two groups at all levels of exposure is negligible. If social influence had an effect, but you didn't expect the number of your friends who use the tool to modulate that effect, then you might see a flat line at twenty-five percent, or whatever value, which would suggest that people who are exposed are consistently more likely to adopt that security tool than people who are not exposed, but that it doesn't matter
whether you have ten friends or a hundred friends who use it. More likely, you would expect to see something like this: social influence has an increasingly potent effect as more and more of your friends use that particular security tool. Back to the elevator example: if a hundred people are facing the other way, you'd be more likely to face the other way yourself than if only two people were. So that's what you would expect.

What do we actually see? Starting with the curve for trusted contacts, we see something very similar to what we would expect: social influence has a positive effect, and an increasingly positive effect as you are more and more exposed to friends who use that security tool. And that's great. That's some of the first empirical evidence we have that social influence does seem to affect people's security and privacy decisions; we never had that sort of empirical validation before. But when you start plotting the curves for login approvals and login notifications, you see something weirder that makes the story more nuanced. That area below zero, where the graph is colored red, is where social influence actually has a negative effect: where, if you are unexposed at a certain level to friends who use that security tool, you are more likely to use it than if you are exposed at that level. And that's really weird; that's not what you would expect at all. So what's going on there? Well, after talking to some social scientist friends of mine and some marketers, it turns out there's a term for this in marketing. It's called disaffiliation, and it happens when uncool user groups start using a product. For example, when parents started using Facebook, teenagers started flocking off the platform in droves. Now, I want you to think about the first person in your personal social network who would have used two-factor authentication on Facebook the day it came out. Got somebody in mind? Does it look like this guy? For a lot of non-tech-savvy people, it actually does.
The feeling is that the people who are particularly concerned about security, who are overzealous about it, who will use two-factor authentication the day it comes out, are people who are a little bit paranoid or nutty, or they're experts who have to do it for their job: "what do I have to hide?", things of that nature. And this could explain why, despite decades of improvements to the usability of security and privacy tools, you still never really see a wildfire effect of security sweeping through the populace. It never spreads far outside of that early-adopter set unless there's a huge marketing campaign or something behind it.

But there are two pieces of good news. The first is that there's a positive main effect of exposure: all those lines go up and to the right, which means that as you are more and more exposed, the effect of social influence becomes more and more positive. It's just that for regular security tools like login notifications and login approvals, it never flips to positive until a great many of your friends use that particular tool. The second piece of good news is that the design of a security tool seems to affect its potential for social spread, because trusted contacts, again, did not have that negative social influence effect. After talking to the designers and users of trusted contacts, we figured out there were three design dimensions that seem to differentiate it. One is observability: when you use trusted contacts, you have to specify three to five of your friends to be your trusted contacts, and they immediately get a notification that not only are you using this feature, but you've been specified as one of their trusted contacts, which might make you think, "maybe I should use this too." With regular security tools, you make a change and that's your own business. Inclusiveness is the second design dimension: you're including your friends in the process of providing security for yourself. And stewardship is the third dimension: your friends can act on your behalf. Many people may not be concerned about their own security, but if you ask them, "should your friends use two-factor authentication?", everybody says, yeah, they probably should. So there's this unharnessed social energy that
can be accounted for in this stewardship design dimension, but typically isn't. Alright: so social influence strongly affects security behaviors, and the design of a security tool affects its potential for social spread. Next I want to ask a simple question: knowing this, can we increase the awareness and adoption of security tools by making them more social? To answer that, I was able to run a randomized controlled experiment. What you saw in the previous study was observational data, and with that observational data in hand, I was fortunate enough to be able to run an actual experiment to test the hypothesis. I ran a controlled, randomized experiment with 50,000 Facebook users, piggybacked on top of Facebook's annual security awareness campaign promoting optional security tools like the ones I just mentioned.

The premise of the experiment was very simple: we showed users a little notification at the top of their news feed saying, in effect, you can use additional security tools if you want. The vanilla notification we used as the control condition, which is what Facebook always tended to use before our study, was just: "You can use security settings to protect your account and make sure it can be recovered if you ever lose access," with a call-to-action button that would take them to a modal allowing them to enable those additional security tools if they wanted. We then added some social observability to this. We tested a variety of different configurations of social observability; I'm not going to go through all of them, but they varied in how that social influence was presented in the announcement text. The raw-number condition stated the straight-up number of your friends who use those security tools, followed by "you can also protect your account," while other conditions were as vague as just, "Some of your friends are using extra security settings; you can also protect your account." We tested all of these social experimental conditions against the control condition I just described. Each person in the 50,000-person sample was randomly assigned to one of these conditions, which resulted in 6,250 participants per condition.
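The core measurement here reduces to comparing proportions between conditions. Here is a minimal sketch, with hypothetical counts rather than the study's actual data, assuming a standard two-proportion z-test:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(clicks_a: int, n_a: int, clicks_b: int, n_b: int):
    """Test whether condition A's click-through rate differs from B's."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_a, p_b, z, p_value

# Hypothetical counts (not the study's data), ~6,250 users per condition:
ctr_social, ctr_control, z, p = two_proportion_z(900, 6250, 660, 6250)
print(f"social CTR={ctr_social:.1%}  control CTR={ctr_control:.1%}  "
      f"z={z:.2f}  p={p:.4f}")
```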
The experiment ran for about three days. To measure the efficacy of our different notifications, we measured the click-through rate on the announcements, the seven-day adoptions of one of the promoted security tools after a user saw the announcement, and the five-month adoptions, to see whether there was a longer-term effect even if there wasn't an instantaneous one.

Here's how those numbers broke down in aggregate: 93 percent of our participants logged in and saw the announcement in that three-day period, about 13 percent of them clicked on one of the announcements, about 4 percent adopted one of the promoted tools within seven days, and about 10 percent within five months. And here's what those numbers look like broken down across conditions. There are two main things I want you to take away from this graph. The first is that every single social condition outperformed control in click-through rate. There was about a 36 percent improvement in click-through rate between the raw-number condition and the control condition, which is great; I mean, we really just added one sentence in front of that notification, and I don't know how many of you have done A/B tests online with large numbers of users, but you typically never get that kind of result from changing a little copy. The second important thing is that the best two conditions also significantly outperformed the control in adoptions, resulting in about 10 percent more users actually adopting one of the promoted security tools, again just from changing that little bit of copy in front of the notification. That was very promising.

So observability significantly increased click-through rate for all conditions, and for the best conditions it even significantly increased adoptions. And remember, once they clicked on the call-to-action button, the social proof was gone; nothing was different when they actually had to make the decision. It was just that one notification. These results changed how Facebook approaches the promotion of these security tools, and they also changed the types of security tools Facebook is developing for the future, which I found particularly exciting at the end of that work.

So, next: it's possibly easy to increase the observability of some security tools with notifications like those, but it's much harder to make existing security tools like login approvals more cooperative or more stewarded, because they're fundamentally not designed for those things. So next I want to pose a longer-term research question: can we redesign end-user-facing security to be more observable, inclusive, and stewarded from the ground up? I'm not going to fully answer that research question in this talk, but I'm going to give you an example of what I mean by employing some of these design dimensions in the design of new end-user-facing security tools.

[Audience question about whether convenience was measured.] So convenience is a broad measure that I do care about, but it's hard to measure. I certainly try to get at proxies for convenience, but I never ask the specific question, "is this more convenient?"
Right, so I don't control for the level of convenience of the tool, though you could certainly say that login notifications are more convenient than two-factor authentication, et cetera. So I don't actually control for it; I would want to, but it's really hard, statistically, to control for a measure like convenience. But I certainly do care about convenience, and in fact, making things more social is in many ways making things more convenient: if you had a system that, for example, assumed and accounted for the fact that you're probably going to share your password with your family, it would be more convenient to use without violating the security assumptions of the system. Which is my own way of segueing into this particular system: Thumprint.

So I want to present one system that I tried to redesign to be a little more social from the ground up. Thumprint is a local group authentication system that uses shared secret knocks. In essence, Thumprint allows groups of users to create a shared secret knock, using any knocking pattern they want within three seconds, on any item they want to knock on. As you can see from the video, users initially register a secret knock with the system, and later, if they want to access whatever resource they're protecting, they re-enter that secret knock exactly the way they entered it and get access; if they don't, they don't get access. Simple enough. But I want to emphasize that the cool part about Thumprint is not the modality of interaction, although I do like that modality as well. The cool part is that it allows a group of individuals to have simple authentication to a shared resource without requiring those individuals to have their own secrets, while still being able to uniquely identify them. That's the cool part about Thumprint, not the knocking interface. This could be a useful feature for a wide spectrum of small local groups that collectively own and share resources: families who have game consoles and smart appliances in their house that they need to collectively make security decisions for, or work teams that share things like kitchenettes and meeting rooms and don't want other teams coming in
and stealing the donuts, or classrooms that have shared storage and equipment, things like that. I mean, the whole plotline of Breaking Bad wouldn't have happened if they'd had Thumprint in that chemistry lab, because then they'd know exactly who took those beakers out. For this wide spectrum of small local groups who are just sharing locally shared resources and want some sort of authentication for them, you don't need hardcore authentication; these aren't high-stakes threat models, and in fact hardcore authentication would probably just produce the disaffiliation effect we talked about before. What you need instead is something usable that understands the social assumptions of that particular context. For those groups, device authentication that requires individual secrets is often inappropriate, because it creates unnecessary social friction in the sharing of devices that are supposed to be collectively owned. Are you supposed to give your children your password? It doesn't matter what you're supposed to do; you're probably not going to make your four-year-old daughter create her own password for the family iPad. It also creates a structure where security is only as strong as the weakest link: the security of the entire group is only as strong as whatever authentication secret the person with the weakest security decided to use.

But neither is shared authentication, like shared PINs or shared passwords, always appropriate, because it can preclude things like personalization. If you just have one password, the system can't differentiate between the individual users in the group, so you can't have tiered access control, you can't have personalization, you can't have parental controls, and the content of audit logs degrades to "somebody from the group took something out at some point." Thumprint is designed to sit in between those two things: designed to have inclusivity and identifiability.

So here's a broad overview of how it works. First, users enter their secret knocks on a sensing surface with an accelerometer and a microphone. From the sensor streams we extract time- and frequency-domain features that represent that sensor stream in an interesting way. Then we use a semi-supervised machine learning approach to learn individual knock expressions, and finally we regulate access control through some endpoint: it could be a Bluetooth-connected smart lock, or, as you saw in the video, a lockbox, something like that.

So let's break down the individual steps Thumprint takes to learn the secret knock. First, registration. In order for Thumprint to learn each group member's unique expression of the thumprint, you need some registration data, so each group member enters the secret knock up to ten times. Really it's N times; I used ten because much more than that becomes pretty unusable, and it turns out ten is more than enough. Then, from each of those registration sensor streams, we extract a variety of accelerometer and acoustic features in the time and frequency domains. I'm not going to go into them here, but I'm happy to talk about them later if you want.
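To give a flavor of what such features might look like, here is a minimal sketch assuming each knock arrives as a fixed-rate numpy array, one per accelerometer axis or microphone channel. These particular statistics are illustrative stand-ins, not Thumprint's exact feature set:

```python
import numpy as np

def extract_features(signal: np.ndarray, sample_rate: int = 1000) -> np.ndarray:
    """Summarize one sensor stream with simple time- and frequency-domain stats."""
    # Time domain: overall energy, spread, and peak structure of the knock.
    time_feats = [
        signal.mean(),
        signal.std(),
        np.abs(signal).max(),
        np.sqrt(np.mean(signal ** 2)),      # RMS energy
        np.mean(np.abs(np.diff(signal))),   # average slope magnitude
    ]
    # Frequency domain: energy in coarse spectral bands via the FFT.
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, 8)
    freq_feats = [band.sum() for band in bands]
    return np.array(time_feats + freq_feats)

# One feature vector per registration attempt, concatenated across channels.
```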
So now we have these hundreds of features representing an individual's secret knocks, but with, say, five group members and ten registration attempts each, that's only fifty training vectors, and you don't want hundreds of features for fifty training vectors, because you'll overfit to the particular nuances of how each person entered the knock at that time. So the first thing we do is unsupervised feature selection. We used a specific algorithm called correlation-based feature selection, which selects a very parsimonious subset of features that best differentiates between individual group members. After that process, say you have a parsimonious subset of roughly twenty features and, say, thirty training attempts between three different users. How do you go from that to an individually learned expression of the thumprint? We use a semi-supervised clustering approach, where Thumprint learns clusters of the data in feature space for each individual user separately; the wider space around those clusters represents the group's shared thumprint, for detecting potential new users. The interesting part is that Thumprint can also learn multiple expressions at the level of the individual, which helps in case you have a different way of knocking when you're drunk or something like that: Thumprint can figure that out if it has enough training data, or if you just happen to change how you enter your thumprint over time, it can learn and adapt.

Authentication, once you have those individually learned expressions in feature space, becomes pretty simple: you compare the unlabeled attempt in feature space to the learned cluster centroids of the individuals' expressions. If it's anywhere beyond the combined span of those clusters, it's not even the group's shared thumprint. If it's in between them, you have some uncertainty: either it's an attacker trying to replicate the group's shared thumprint, or it's a new user, something like that.

OK, so that's roughly how Thumprint works. Now I want to answer three questions. Can individual group members entering the same thumprint be distinguished? That's one of the value propositions. Can people enter their thumprints consistently across time-separated sessions? People can often knock one way one day, but can they do it the same way the next day? And finally, can casual but motivated adversaries be detected? I'm not pitching Thumprint as foolproof security, but it is an authentication system, so you don't want people to be able to tap whatever they want and get access. We answered these by running a user study. We recruited three groups of five participants for a two-day study. Participants in each group watched recordings of a group-specific thumprint and were asked to replicate it ten times each, individually, though they had all agreed together on the thumprint they were going to use.
Then, twenty-four hours later, participants came back and were asked to replicate their own group's thumprint from memory. No other member of the group was around, and they were not shown the recording; they just had to do it themselves. They were then asked to break other groups' thumprints as one of four different adversaries. The token-only adversary was given the right thing to tap on, but no other information. The sound-only adversary had an audio recording of one of the other group's members entering their thumprint, just an audio file from which to reverse-engineer what to tap and how to tap it. The video-plus-wrong-token adversary was given an over-the-shoulder video of one of the group members entering their thumprint, but was not allowed to use the correct token. And finally, the video-plus-correct-token adversary was given all of those benefits. We wanted to see how effective these four different adversaries are at cracking existing thumprints. So we trained Thumprint for each group on registration data collected on day one, and we tested on data collected on day two, both the authentic and the adversarial attempts.

This graph shows the mean feature-vector difference between an unlabeled authentication attempt and the closest learned expression Thumprint had from the training data. Lower is better here; it means the attempt is closer in feature space to one of the learned expressions. You can see the mean difference for each of the different adversaries, for the correct member, and for the case of classifying the wrong user from the same group as another user. The correct member is far lower in feature-vector difference, even using data from the following day; this is twenty-four hours later, and if I use data from the same day, it's even lower than that. And if we set an authentication threshold at approximately the upper bound of the 95 percent confidence interval for the correct member, we get an equal error rate of approximately 12 percent, with less than 5 percent misidentification. Again, not foolproof security, but about as strong as any other behavioral biometric out there, and it adds this additional dimension of being socially inclusive, which is exciting.

So let's go back to the three questions. Can individual group members entering the same thumprint be distinguished? Yes. Can people enter their thumprints consistently? Again, it seems like yes, because we were able to identify them the next day. And can casual but motivated adversaries be detected? The answer is mostly, but keep in mind I only had ten training data points per individual. With a lot more training data per individual, you can imagine the performance of Thumprint getting even better. It's already reasonable for the use case it was designed for, but it could be better with more training data.
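Here is a minimal sketch of the learn-then-threshold logic just described, assuming scikit-learn and one feature vector per knock attempt. Thumprint's actual pipeline (correlation-based feature selection, its own semi-supervised clustering) is richer; treat this as an illustration of centroid-distance authentication and an equal-error-rate sweep, not the real implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_expressions(attempts_by_user: dict, k: int = 2) -> dict:
    """Learn up to k cluster centroids per user from registration attempts
    (each value is an array of shape [n_attempts, n_features])."""
    return {
        user: KMeans(n_clusters=min(k, len(X)), n_init=10).fit(X).cluster_centers_
        for user, X in attempts_by_user.items()
    }

def nearest_user(centroids: dict, attempt: np.ndarray):
    """Distance from an unlabeled attempt to the closest learned expression."""
    best = {u: np.linalg.norm(c - attempt, axis=1).min()
            for u, c in centroids.items()}
    user = min(best, key=best.get)
    return user, best[user]  # accept if distance falls under a threshold

def equal_error_rate(genuine_dists, impostor_dists):
    """Sweep the acceptance threshold to find where FAR and FRR intersect."""
    genuine = np.asarray(genuine_dists)
    impostor = np.asarray(impostor_dists)
    best = (2.0, None)
    for t in np.unique(np.concatenate([genuine, impostor])):
        frr = (genuine > t).mean()    # authentic attempts rejected
        far = (impostor <= t).mean()  # adversarial attempts accepted
        if abs(far - frr) < best[0]:
            best = (abs(far - frr), (t, far, frr))
    return best[1]  # (threshold, FAR, FRR) at the crossover point
```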
Alright. So Thumprint is an example of the opportunity to reshape the future of end-user-facing security to be more social by design. We can keep our security goals, but enhance systems to be more socially intelligent, to understand social context. But remember, I started this talk with the question of how we can design systems that encourage better cybersecurity behaviors. Making security more socially intelligent shows a lot of promise here, but this is really just the tip of the iceberg; there is so much left to do. More generally, one direction of my research lab going forward will be to create a suite of security systems built on a better empirical understanding of human social behavior, and by that I mean emphasizing these design dimensions of observability, inclusivity, and stewardship.

By observability, I mean: how can we make it easier for people to observe and emulate good security behaviors? To use an offline analogy, in the physical world, if an attacker comes at you with a weapon of some sort, you can see that threat, and you can see what everybody else around you is doing to respond to it, so you can choose a strategy that works for you: run away, hide under a desk, run face-first into the knife, I don't know, whatever works for you. But in the cyber world we have none of those cues. We don't know what threats are potentially relevant to us, and we don't know what anybody else is doing to protect themselves against those threats. So it's not much of a surprise that people who don't think about security and privacy on a day-to-day basis don't respond to security threats all that quickly or all that vigilantly. So how can we change that?
By inclusivity, I mean answering this question: how can we design security systems that make group security a sum instead of a min function? Again, to use an offline analogy: if you're walking down a dark alleyway alone at night versus with a friend, you probably feel safer with the friend, because your strength aggregates, or at the very least your friend isn't detracting from your own strength. But think about what it's like in the cyber world: if you share a file that you want to keep secret with a friend who has poor security behaviors, you effectively declassify that information down to whatever level of security your friend has. So in the virtual world, security is a min function, while in the physical world we build entire communities and societies around the simple fact that we are safer when we are together than when we are apart. How can we incorporate some of that thinking into the virtual world as well?

And by stewardship, I mean: how can we design systems that allow people to act on their concern for the security of their loved ones? It came up multiple times, in many of my studies and in other people's studies, that people have some sense of accountability for the security and privacy of their friends and loved ones, especially when they view themselves as experts. But right now there's no way to act on that concern. There's all this unharnessed social energy in the network that we can't act on, except by calling people up and nagging them, and if you do that, you end up identifying yourself as one of those tinfoil-hat people whom nobody wants to emulate. There's no great way for somebody who cares about security to use their knowledge in a way that doesn't mark them as paranoid or nutty. So how can we change that as well?

I couldn't have done any of this work without my many collaborators, some of whom are pictured here. And with that, I'm happy to take questions.

[Audience question about whether groups pick stronger thumprints together.] Yeah, sure, I have anecdotal evidence of that, because when people were coming together to make their thumprint, if somebody suggested something simple, invariably somebody else in the group would say, "no, that's too easy, people will be able to guess that." I didn't specifically study that empirically; it's actually the subject of something ongoing that I'm doing right now, how people jointly make security decisions. But certainly, anecdotally, I saw a lot of that, though some of it, because Thumprint was so new, was maybe a little unfounded, because what somebody thought was simple, and their proposal to improve upon it, was not necessarily that much better.

[Audience question about organizational social influence and top-down security campaigns, e.g., "the governor signed up for this program."] I don't really talk about that in this work at all, because most of my research is focused on the end-user context. But I do know from some other studies that social influence works best when it feels ground-up rather than top-down.
When it feels like the social pressure is coming top-down, people tend to react poorly to it. They need to think it's their idea, their friends' idea: "it's our idea," right?

[Audience question about how "exposed" was defined and how many people were in the exposed group.] "Exposed" and "unexposed" may be a poor choice of terminology; the original terminology in the actual analysis was "treated" and "untreated," but we weren't actually treating anybody. Exposed had a very specific definition here: you had that percentage of friends who use that particular security tool, and unexposed meant you did not have that percentage of friends who use it. We weren't exposing anyone to anything; it was just a way to separate groups. Although, if you had all those friends and you still didn't adopt, that would be reflected in the adoption rate of the exposed group: what percentage of people in the exposed group did versus did not adopt the security tool is what I used as a proxy for whether the effect of social influence was positive, negative, or nonexistent.

[Audience question about Thumprint's error rates.] I have that data, but I don't remember it exactly. The equal error rate is basically where FAR and FRR intersect, so that would be a 12 percent false-accept and 12 percent false-reject rate. If you wanted, you could get as low as, say, 1 percent false accepts, but then you'd also be increasing the false rejects; there's a tradeoff there. That's why I use equal error rate, just to give a more balanced view.

[Audience question about high-risk use.] No, it is not meant for anything high-risk; it's meant for the contexts I talked about before. For behavioral biometrics like keystroke identification or gait identification, equal error rates are around 12 to 20 percent, so Thumprint is at the low end of that range, but it has a social dimension as well.

[Audience question about experts influencing non-experts.] That's possibly hard to tell from the aggregate Facebook data, because you don't get the individual stories. In smaller interview studies, I did encounter some anecdotal examples of experts who lived in a community with people who weren't experts, and they influenced each other's security in some meaningful way. But I don't have enough data to really make any claims about that, though I'm sure something like it does happen.

[Audience question, inaudible.] No, every friend, yeah.

Alright, cool. Thanks for coming, everyone.