All right, everyone, thank you so much for coming today. Welcome back to the cybersecurity lecture series here at Georgia Tech. It's my pleasure to introduce our speaker today, Josiah Dykstra from the NSA, who is here to talk to us about what he's working on. So take it away, Josiah, and thank you so much.

Thanks. Hi, everybody. It is a pleasure to be here, even if I have to compete with the Braves. Congratulations to Atlanta, congratulations to the team, whether you're a sports fan or not. That is a fine reason to have competition today. It is great to be here to talk about cybersecurity. I came to you just for the day from Fort Meade, Maryland, which is where my office is at NSA, to see this great campus and to talk to people about cybersecurity, including this lecture. I look forward to your questions. There are stickers on the table if anybody wants to endorse NSA cybersecurity; I hope you'll be a little more excited about that after this. My colleagues and I like to get out and talk to people, and we've been doing lots of virtual presentations in the last year, so it is great to be back in person to talk about the work that my colleagues do and the things that we're working on.

My background: I have a PhD in computer science and I have done a lot of cybersecurity research. I now have more of a practitioner role and more of an advisor role, and I work in an organization called the Cybersecurity Collaboration Center. Our job is to work with industry, and our mission is to prevent and eradicate cyber threats to national security systems: government computers in the Department of Defense and the defense industrial base. These are companies that provide services and contracts to the US government, the stuff we need to do our jobs. The work that I'm going to talk about today is work that I presented at RSA earlier this year and wrote a paper about last year, so you can find more information about it if you're curious. You are welcome to interrupt me as we go, and there will also be plenty of time for questions at the end.

We have been working on cybersecurity for a long time, 50 years, 60 years, and in some ways things are getting better all the time. The research on this campus and lots of campuses pushes cybersecurity forward. But some things are not getting better. This is pulled straight off the internet: the most common passwords by year. And there are unfortunate trends, right? The same common passwords are used year after year. We, as computer scientists and cybersecurity professionals, know these are not good passwords, and we really wish we were doing better at this kind of problem. There are also good campaigns educating users about cybersecurity. At NSA, I take a lot of mandatory training, including how to pick better passwords, how to respond to phishing attacks, and all kinds of things that we face as individuals and companies every day. That education is probably helping a little, but the problem is not gone. And I think that's frustrating, certainly to us in cybersecurity: why isn't this working? Why is the training that we're giving not as effective as we really want and need it to be? So common passwords are just one illustration of this; problems still exist even after a really long time.

I am really excited about the human parts of cybersecurity. This is an area I was not trained in as an academic.
I have not done original research in it from the beginning, but about seven years ago I was in an NSA cybersecurity organization and I realized there is this really neat intersection between people and computers. The reason that we have cybersecurity at all is because of humans: human attackers, human developers, human users. All of those people are the reason that I have a job. And I never got to take psychology. I never took economics, I never took sociology, things that I have since learned matter quite a bit to cybersecurity being effective. So I fell into the community of usable security for a while, and I love that they are doing tremendous work. Usable security is all about helping users make more informed, better choices in the moment when they need to. But this is kind of a niche community, with conferences like SOUPS and CHI, who are really pushing the envelope at a small scale. I love what they are doing. I think it is necessary, but it is not sufficient. They can help us decide which button to click on the screen, but it still requires a lot of human decision-making, human brain power, to assess the risk and the consequences of what happens if I push this button. That kind of internal processing raises a lot of interesting research topics that I hope people keep pursuing. But I'm going to talk to you about something that I hope will supplement and complement the work of usable security, which is not to make security go away, but to make it invisible to the user. So instead of the user having to make a conscious choice in the moment, it just happens automatically for them.

I've seen the password example that you gave, and I see that there are sort of two hypotheses. One is that people can do no better than 123456 because the capacity just isn't there; the other is that they don't understand there's a problem. Do we know which one it is, or both, before we go to invisible security? Do we have that understanding of people?

I will get to authentication in quite a bit of depth, because I think this is an area where invisible security is starting to show up in real life. To your question: yes, there is research on the human capacity for generating good passwords and remembering lots of good passwords, and it shows human brains are not very good at that. They might be good at picking one good password, and that's why password managers are so effective: one good password that can protect the things the human brain is weak at. The human brain is not good at remembering a hundred strong passwords. Does that help answer your question? We will keep talking about authentication as we go.

I would really like for people not to have to pick passwords. I think that we can protect them without having to succumb to that. Passwords were a fine choice in the beginning: very easy to implement, and moderately effective when done well and done correctly. But every password will eventually be guessed. There is no reason that cannot be true, given enough time and computing resources.
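To make the password manager point concrete, here is a minimal, purely illustrative Python sketch of the division of labor a manager provides; the key-stretching parameters and the character set are assumptions for the example, not any particular product's design. One human-chosen master password is stretched into a vault key, and every site gets a random password that nobody has to remember.

```python
# Illustrative sketch of what a password manager does; not any real product's design.
import hashlib
import secrets
import string

def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    # Slow key derivation (PBKDF2) so offline guessing of the one human-chosen
    # secret is expensive; the iteration count here is just an example value.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)

def generate_site_password(length: int = 20) -> str:
    # High-entropy random password; the vault stores it, the person never memorizes it.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

salt = secrets.token_bytes(16)
vault_key = derive_vault_key("one good master password", salt)
print(generate_site_password())  # a fresh 20-character random password for one site
```

The point of the sketch is just that split: the person remembers one strong secret, and the machine generates and stores the rest.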
But I like the general umbrella of letting people be secure with as little interaction, as little effort, as possible. As we'll talk about in a little bit, this is not the right answer for every problem. I will say that up front: cloud computing is not the answer to every problem, and invisible security is not the answer to every problem. But the more that can be done so that the user just does the right thing at the right time, that's a fine goal. Now, of course, there are limitations to that. We probably cannot solve all cybersecurity problems with invisible security, and at the end I'll talk about why there may be times when we want security to be overt and very, very visible. But I'll get to that.

So I'm going to give you three examples where I see this in practice today, and then three communities of users who benefit from this kind of approach.

One example that's been around for a decade or more is automatic software updates. As we all know, software is complex and we don't ever get it perfect. We strive to, but there are always new bugs that come out, and so users are continually burdened, or at least they were in the past, with having to install patches. For a lot of years it required user interaction to go check if there were software updates and to install them. Users had a lot of bad experiences with updates breaking their computers and creating software compatibility issues. But vendors including Apple and Google and Microsoft learned, in real validated research, that if they just did this automatically in the background, users were happier and the world was safer. Those companies have a global view: when they looked at Chrome automatic updates around the globe, the number of Chrome exploits went down when they turned on automatic updates. This is of course not appropriate for every situation; lots of big companies need very careful rollouts of patches. But for average users, for people who just wouldn't otherwise do software updates, this is something they probably get automatically without even having to click a button. It just happens for them in the background, their devices stay more secure, and the user, if they don't care, never even notices that it's happening. It is invisible to the average user. If they do care, they can go in and change it. This is a nice example where the automation is the default, but you still get control if you want control: if you don't trust your software, if you want to validate updates before they get rolled out, you can. It's not that we have taken all the control away from users; we've just defaulted to doing the right thing.

My second example is one called protective DNS, and this is one that we are experimenting with at NSA for the defense industrial base. Most users of the internet probably don't know what DNS is, or care, as long as when they type in google.com or gatech.edu, it just goes where they want to go. That's fine. That lookup is just happening in the background; it's how the internet works. But unfortunately, a lot of bad activity builds on the domain name system. Malware in particular looks up malicious domains, and that lookup is core to the bad things that happen. So what can we do about it? One option is to sit in the middle of the domain name lookup system and filter out things we know to be bad. There are commercial services who do this. There are companies who do it within their own enterprise. Some universities do it on campus. As your computer looks up a domain, that DNS server automatically filters out things known to be bad. The user sometimes doesn't even know that this is happening; some malware domain resolutions just don't work. Or in some cases, browser vendors like Firefox and Chrome will show you a page that says, we know this is a bad website, are you really, really sure you want to keep going? You've probably seen those pages.
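Here is a minimal sketch of that filtering idea, illustrative only; real protective DNS services are full recursive resolvers fed by curated threat intelligence, and the domain names and addresses below are placeholders, not real indicators. The resolver in the middle answers normally unless the queried name is on a blocklist, in which case it returns a sinkhole answer and the malicious connection simply dead-ends.

```python
# Illustrative sketch of protective DNS filtering; not any provider's implementation.
# The blocklisted names and the addresses below are placeholders.
BLOCKLIST = {"malware-command.example", "phish-login.example"}
SINKHOLE_IP = "0.0.0.0"  # blocked names resolve to nowhere, so the connection just fails

def resolve(name: str, upstream) -> str:
    """Filter one DNS lookup: known-bad names are sinkholed, everything else goes upstream."""
    if name.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE_IP   # invisible to the user; the malware's lookup dead-ends
    return upstream(name)    # normal resolution continues as usual

upstream_resolver = lambda name: "192.0.2.10"   # stand-in for a real recursive resolver
print(resolve("gatech.edu", upstream_resolver))            # normal answer
print(resolve("phish-login.example", upstream_resolver))   # 0.0.0.0
```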
So browser vendors and resolvers are sometimes sitting in the middle of this domain name lookup system, and protective DNS is something that has been shown to work in lots of environments. In the UK, for example, the British government offers this for all of their government customers and now for industry customers. They can sign up and say, we would like to use this DNS service, and the company just points its domain resolution at a protective DNS server, where the government and the commercial provider automatically block things known to be malicious. It certainly doesn't stop every attack. It doesn't stop attacks that don't use DNS at all, and it doesn't stop attacks where we don't know that a domain is bad. But it at least puts a security mechanism in the middle where, again, average casual users are just automatically protected. So that's example number two. NSA has done a pilot on this. We are rolling it out to about a hundred defense industrial base companies right now, and we are measuring how effective it is. Lots and lots of malware is being blocked, because the commercial provider and we together are able to protect our defense industrial base customers using this kind of system. Now we need to figure out how to scale it up; there are something like a hundred thousand companies in the defense industrial base. It turns out DNS blocking is a very scalable solution, and so this one looks very, very promising for us.

So, to the earlier question about authentication, one way that it is getting more and more invisible to users is with facial recognition. The way that an iPhone or an Android phone can be unlocked just by looking at it with Face ID eases the burden on the user to remember a PIN, a passcode, or anything else they have to type in and remember. They just have to look at it. Of course this is not perfect. The implementations are not always perfect, and your biometrics are very difficult to change. It's easy to change your password, but your face is your face and your thumbprint is your thumbprint. So there is this trade-off between immutability and usability. But we are lowering the burden on the user, and I really like that. Desktop computers are moving to this too. We'll talk about use cases and an example in a little bit. In particular, in environments where authentication is a very burdensome step, or where people have to authenticate many times in urgent, critical situations, it can have real, meaningful consequences. The time that it takes to unlock your computer could mean human death. There was a study in a hospital recently, before and after a data breach, and the thinking was that after the data breach there would be more cybersecurity. They measured the time in the emergency room: how long did it take to treat a patient? The numbers went up by many minutes, and the number of deaths went up by a significant amount, simply, the researchers think, because of the increased cybersecurity. What do we do about that? Continuous authentication, the ability to automatically authenticate somebody, is another way to approach it. I have seen a commercial company, for example, that has a watch. The watch reads your heart rate, and it knows that you are the one wearing the watch. So when you pair the watch to the computer and walk up to the computer, it automatically unlocks. You don't have to type anything in; it knows that you are you. And when you walk away, the screen locks again.
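As a sketch of the logic behind that kind of continuous authentication, and only a sketch, here is roughly what the background loop looks like; check_token_nearby and set_session_locked are hypothetical stand-ins for the pieces a vendor would actually build (the wearable's identity check, the radio ranging, the operating system's session control).

```python
# Toy sketch of proximity-based continuous authentication; not a real wearable's protocol.
import time

def check_token_nearby() -> bool:
    # Hypothetical: ask the paired watch whether it is in range and still worn
    # by the enrolled user (a real product verifies the wearer, not just presence).
    return True

def set_session_locked(locked: bool) -> None:
    print("screen locked" if locked else "screen unlocked")

def continuous_auth_loop(poll_seconds: float = 2.0, grace_misses: int = 3) -> None:
    misses = 0
    while True:
        if check_token_nearby():
            misses = 0
            set_session_locked(False)     # user is present: no password, no interruption
        else:
            misses += 1
            if misses >= grace_misses:    # user walked away: lock without being asked
                set_session_locked(True)
        time.sleep(poll_seconds)
```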
You can imagine, in a setting like a hospital, that could really lower the burden of authentication while still strongly tying the user to their activity. So we might get both. Again, there are trade-offs, but I like this idea as one option.

So those are three concrete examples, and I think the people who are building those technologies are doing it not under any umbrella of usable security or invisible security, but because they help the human experience and they help achieve the goals that we're looking for.

Yes, those things are really important. Security and privacy go together; I think they are inseparable. The kinds of ideas I'm talking about today are relatively general, and I think they can be done independently by commercial companies. We at NSA are not responsible for doing it for the general public; our domain is only national security systems. For instance, we get to control the authentication to our government computers, but nobody else's. So we might adopt things like the wristband to make our authentication easier; we can't make anybody else do it. I just think we should think about it. I think as a society we should think about what that means. Now, of course, privacy is an important consideration. And do you trust even what your computer is doing? Do you trust the software updates that are going to be automatically installed? We have to root our trust somewhere. That is a sort of foundational question in cybersecurity, and I think it's a really important one. Users, I think, will reject things that they are not comfortable with or that have bitten them in the past, like bad software updates. Those are real burdens, and I think that is an impediment sometimes.

So the transfer of trust is attribute number one that I wanted to talk about: you are transferring your control and your trust over something that used to be manual to an automated system that users will need to learn how to trust. Do you trust that somebody else's face can't be used to unlock your personal phone, or a photograph, or a 3D image, or anything like that? We are transferring our trust, and that really is an important thing at the core of this kind of system.

I don't want to talk about all of these in excruciating detail, but I think there are lots of benefits in addition to things like just saving us time. I think we can actually measure benefits in cybersecurity. This is a real interest of mine: how do we know that we are safer now than we were yesterday? How do we know that automatic updates are safer for the world than manual updates? I really appreciate people who study those kinds of problems, because I think we shouldn't just do it for the sake of it; I think we should do it because the outcomes are better. All of our time is very limited, certainly for practitioners, people at the front lines. A good idea isn't good enough; we actually have to show that it is beneficial to us. Most of the people that I talk to inside my organization don't want more tools. They don't need more tools. They want to be able to do their job efficiently with the tools that they have. They want to be able to find more bad things sooner and mitigate those bad things. They don't want a new authentication scheme just for the sake of it. We certainly have to figure out the right metrics and the right measurements to show that the solutions we're proposing push farther in that right direction.
On the sense of control, I think some users are actually going to expect that this works, that there's an outsized benefit. They will want the authentication, for example, to be better than what they had before. I see this in automation all the time, and I think it's a byproduct of AI and machine learning as well: users want it to be exponentially better than it was before, even if it is just mitigating risks that they already had. Self-driving cars are an example of this. We expect self-driving cars to be perfect, not just a little better or slightly fewer deaths; we don't accept any crashes. People expect this outsized benefit. Invisible security probably falls in that same realm, where we expect it to be amazing even if it is just incrementally better. That is a component we will probably have to get over.

So let me tell you about three audiences where I think this will be beneficial. The defense industrial base is one where the small experiments that we have control over are showing benefit. It is a complex supply chain, and maybe this applies to other supply chains as well. It's like a big pyramid. At the top are big-name companies; there are five or six prime defense industrial base companies. But most of the companies are relatively small, and some of them don't even do defense work all the time. Imagine a company in the middle of Georgia that makes bolts that eventually end up in airplanes for the US Air Force. Those companies probably have very slim profit margins. They may not have any security professionals. But they do have sensitive government data: sometimes they have the schematics for the airplane, or the tolerances for those bolts, things that make a real security difference. And so the government wants to protect its information when it gives it to those companies. But they don't have the time, they don't have the money, and they might not have the expertise to do as much cybersecurity as we wish they would. So what do we do about that? We're trying a lot of things. The protective DNS pilot is one, but DNS isn't the only way they get hacked, so we're trying to think about other approaches that don't take a lot of extra money or a lot of extra time but still help protect the data that we care about. Email is another one we're thinking about. Lots of threats come in by email; phishing is much more common than zero-day remote exploitation. How do we help with the burden of phishing attacks, for example, which sometimes eventually compromise US government data? There are plenty of public examples of that. So we are thinking about how to lower the burden and still provide the security that those companies need.

My second audience is health care, and I mentioned the hospital a little bit earlier. I'll point out here that for people in healthcare, cybersecurity is not their goal. Cybersecurity is there to help them do their primary job, and a doctor's job is the delivery of health care. They are required by law in the United States to follow HIPAA, but doctors don't get that training in med school. My wife is an audiologist and, in her doctoral training, never got any training about how to protect health care information, even though the law demands it. She is a small business owner and doesn't have the time, the money, or the expertise, the things I talked about. And I have studied her profession in research studies.
And that is exactly the sort of conclusion that the doctors told us: we are doing the best we can, but we don't really have the time or the money or the expertise. We don't understand the risks. We don't know why it's even important to install updates; they're just going to have a negative impact on the business. So how do we help them do the things that are important, follow the laws and the compliance regimes that they are legally bound by in the United States, in the most secure and effective manner possible? I don't know all of the ways, but I think authentication is one. You'd be surprised if you walk into many health providers: they often don't have individual logins for the computer. They all share a single account. The computer might never be locked, and that is a violation of HIPAA. They are not legally allowed to do that, but they do, because they're trying to do their primary job. How do we help them? Continuous authentication, things like wristbands, might be one way to do that. I'll be up front: I don't know all the answers, but I think invisible kinds of security would help them be more secure. It would help protect all of our health information and allow them to focus on their expertise, their primary goal, health care delivery. So I hope there's more research that we can do to deliver different kinds of solutions there as well. I will say that some people do this fine. Big university hospitals, for example, have a lot of resources. They have IT departments, they have big budgets, and they probably have good security operations centers, so they probably don't need this as much. In fact, they probably want more control; they probably want to know that software updates won't affect the ventilators on a hospital-wide scale. I think this applies a little bit more to small clinics, people without the resources otherwise.

My third example is the general public. Like health care, security is not people's primary goal. They want to share photos with their family. They want to do banking, they want to buy things on the internet. And we, as the cybersecurity profession, need to help them be able to do that as securely as possible. As we saw on an earlier slide, they can't pick good passwords, or they're not picking good passwords even if they can. So they are very vulnerable. There's a lot of sensitive data that we as humans have on the internet now: personal, financial, private stuff. How do we protect that, using the skills that the average person has and supplementing them in a way that isn't burdensome and doesn't cause them to just turn off the security because it gets in their way? I have family members who will just disable the antivirus because the computer is too slow. Not the solution, right? I cringe at that; you might cringe at that. How do we get past that limitation, where they might not understand what's happening? Most people don't want to spend money on security. They'll use it if it's turned on in the background, like automatic updates, but even getting people to buy antivirus software is kind of a lift for the average person. Again, the more invisible it can be for the average person, probably the better. This is something we should measure; we should actually make sure that it does what we intend. That transfer of trust, like we talked about, is a pretty important mechanism here.
Because if somebody gets burned, in the sense that their computer doesn't work when this is turned on, they will never accept it. So we have to be very careful about that.

Now, there are of course limitations. This is not the panacea for everything. It is not about absolving users of all their responsibility. I don't think that is the right approach to this either, the idea that you can have this magic pill and all your problems are going to go away and you never have to do anything. When you're driving your car, your automobile has a lot of safety built in. It has airbags, anti-lock brakes, and it's quite a safe thing. And you still, for the most part, have to buckle your own seat belt. That is a responsibility still on you as a user to do the right thing; the car doesn't just magically protect you from every accident. So I think we should be careful about communicating this idea as, you don't have to do anything anymore, you're just automatically protected. That isn't the right message.

The other thing I want to understand quite a bit better is mental models, and this is becoming a growing field in cybersecurity, particularly in usable security: how do people internalize what is going on? If I click this button, what do I think is going to happen? I don't know automobiles very well, so when my car starts making a weird sound, I don't have a really good sense in my head for what's going wrong, and I have to go to the mechanic and have somebody else tell me what the problem is. Similarly, most people, when something goes wrong with their computer, don't have a good sense for it. Is there something that I did that caused that? What is actually going wrong? Is there any real threat to my data or not? The understanding of risk really isn't there. We can try to educate people; it just doesn't seem to be achieving the results we want it to.

It also might be possible that people need to see, or want to see, security in order to feel safe. There are a lot of studies in the past about visible policing, for example: when people see a police officer on the street corner or see a security camera, they feel safer, even if that security camera doesn't record anything, even if it's not turned on, even if the police aren't doing anything. The presence of security does psychologically help some people feel safer. So if I make all the security invisible, are people going to feel safe on the internet? I don't actually know. The only example I have is that some antivirus, while it's running in the background, will pop up and say, you've been protected against a threat; I blocked a malicious download that you might not have even known was malicious, just FYI. Maybe that is a really useful thing. Maybe it's just good marketing for the antivirus companies, so that people know their money is worth it. But I think that's an interesting problem to think about: how much do people need to see security to feel secure? Because that does matter.

You might also be asking yourself: Josiah, you said invisible security for authentication was a good idea. How does that play into multi-factor authentication? Research shows multi-factor is noticeably safer and better protects people than a single authentication mechanism, and I violated that principle a little bit. I don't want to dismiss it; I do think that multi-factor authentication is better. I think we also need to think about how to make it less burdensome on users.
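For context on what the second factor is usually computing, here is a minimal sketch of the time-based one-time password algorithm (TOTP, RFC 6238) that common authenticator apps use. It is illustrative only; a real deployment also handles enrollment, clock skew, and rate limiting, and the secret below is just an example value.

```python
# Minimal TOTP (RFC 6238) sketch: the code an authenticator app recomputes every 30 seconds.
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // period                  # both sides derive the same counter
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints a 6-digit code that changes each period
```

Part of the burden question is exactly this step: the math is invisible, but the person still has to go find and retype the six digits.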
The reason people don't adopt multi-factor is that they think it slows them down; it gets in the way. Google has a lot of really interesting research on this that says, we really wish people would adopt it for Gmail, and here are the reasons we know people don't. I'm not a hundred percent sure how to reconcile my idea with this, but I think we need to try to achieve both. I think it would be great to be both minimally intrusive and maximally secure. Those might be conflicting goals; I don't quite know yet. That would be a really interesting study.

So what comes next? I said there are a couple of examples that exist today. Is that good enough? I would suggest no. I would suggest there is a lot more that can be done to make other things invisible. I don't know what the list is; I don't know what all the solutions are. That is a good research topic for anybody looking for one. I hope commercial vendors keep working on this. I am always thinking about it, because I want security to be as effective as possible, and I know that intrusiveness gets in the way of that. That's my real concern. I think authentication trends will continue to evolve. I'm not sure that phone Face ID is the end result forever and ever; there will probably be new things. And I think part of the consideration for what comes next should be this consideration of lowering the burden on users.

Another thing I would be really interested in is how machine learning and AI can contribute to invisible security. How can those automated systems in the background help detect threats, mitigate threats, even alert users to threats, without putting a burden on the people on the front end of those systems? I think there's good work toward that goal, and we have to think about it in the frame of: how does this help automate, or make invisible, things that can just happen automatically in the background?

Another one that I would like to see is reputation services being more automated and more hands-off in the background, particularly for users who can benefit from that kind of system. Again, for enterprises that really can pay active, hands-on attention to the threats against their network, that is great. That probably provides more granularity, more fidelity about what's going on, and the outcomes are better. But what about a small business, or a flower shop, that does not have the resources to do that? Can they partner with an ISP who can use high-quality threat intel to understand what the threats are against commercial versus financial, or banking versus hospitality, and tailor cybersecurity automatically to whatever the context is? I think that can be done more in the background. Or we can build service providers who can help do this for people who just want to buy it as a service. Cybersecurity as a service, I think, will become an even greater industry in the future. That is a little bit of how we are trying to help the defense industrial base: cybersecurity as a service for them. If they can't afford to do it, if they don't have the expertise to do it, can it be offered to them as a service?

So when I talk to practical audiences, these are the things I'm sending them home with. I wanted you to be aware of this because I think it matters to you as users, but also to you as researchers. I would love to know what you would advise as well, and I could probably come up with a similar list.
I didn't make a slide for you on what research should do in the next five or ten or twenty years. But for average users, in the talk that I gave at RSA, for example, I said you as business owners should go and review the opportunities for invisible security. Look at your environment today and see what is the most intrusive part of cybersecurity in your ecosystem. What could be automated? What could be made invisible? I want everybody to make this determination for themselves; it isn't my job or my role or my responsibility to make it for you. I think we all need to make that choice. Then I think users should start to implement these things where possible. If you agree that automatic updates are right for you or your enterprise, turn them on, see how it goes, and measure how well it works. Especially: how much more secure are we, how many fewer incidents did we have after we turned on automatic updates than before? If for some reason the number goes up, reconsider that choice or try something else. I like the idea of little science experiments, even in the real world. I wrote a book a couple of years ago called Essential Cybersecurity Science, which is really for practitioners who don't know how useful the scientific method is in real life. It's not just for universities; these kinds of small experiments are really useful for people in practice, and I think we all should encourage that when we talk to people.

And then, in the longer term, the development of new solutions is not a fast process. I would be surprised if all of us smart people in this room could just go home tonight and crank out a new one, because it's hard. I think we need to study the problems a little bit, in collaboration with academics, with innovators, with people who can think outside the box. In the beginning, automatic updates were not well adopted. People were scared of them; they did not like them. There were problems, like manipulation of downloads that weren't authenticated in the beginning. And that solution matured; it evolved over time. But now is a great time to think about this. All the innovators and the ideas you have here, I would love to brainstorm about how we could do this better, how we could fall under this umbrella of invisible security to improve the outcomes that we have.

So those are the examples that I have today. As I said, I don't know all of the answers. This is an idea I'm trying to get people to talk about. NSA is doing a little bit, and I tried to give you a little peek behind the curtain at the things that we are doing within our own realm. But we don't have any monopoly on this. I think it's just an idea that's worth talking about, and I would love to know if you agree. If you have ideas, you can certainly email me anytime if you think of things later.

We'll take the question on the bleachers first. Just one second before you ask: please use the mic for the remote audience.

So, with all these invisible security measures that you are giving trust to, in light of something like the SolarWinds breach, you have to be aware of supply chain attacks and other types of attacks. I guess my overall question is: how can users actually trust these invisible security measures without actually knowing what's going on in the back end?

Sure. That's a great, real problem.
And one I don't know a hundred percent of the answer to. I think psychologists have a lot to say about how people establish trust and root trust. Personally, for example, when I go and buy a car, I don't research the ins and outs of how the anti-lock brake system works. I just trust that somebody has developed it well, that the government maybe has done certification, and that there are consequences if something goes wrong. I don't know precisely how it works, and yet I have some trust in that system in my automobile. Now, does that generalize to cybersecurity? Maybe, maybe not. I think most users, when they buy a new computer or a new phone, do generally trust it, and I don't know why. Why should they trust that? Why don't they understand what their everyday risks are? That is one place where I think people in cybersecurity do a really poor job, to be frank with you: we can seem like fear mongers. We go out and we tell people all the horrible things that might happen to them without giving good context about the likelihood of that threat. We go out and we talk about how bad SolarWinds or ransomware or anything else is; that's a real, important, emerging threat. But we don't talk about what your likelihood of getting ransomware today actually is. To be fair, I don't think people understand the likelihood of getting hit by a bus or being in a car accident either. But users in general don't have a firm understanding of the threat. So we as cybersecurity people do a disservice when we only talk about threats without talking about mitigations and likelihood. I try to couch my presentations, especially to non-technical audiences, a little bit more that way. I have danced around your question a bit, but I think it's an important one, and I think it is a place where we need to bring in other areas of expertise, like psychology, to help us understand.

In regard to the role biometrics play in invisible security: from the organization's side, how do you assess the privacy risks regarding biometrics in authentication, for example?

This is not my area of expertise. I know there is a lot of research on that kind of trade-off, because it is an important question. As I pointed out, the immutability of biometrics is a real concern. Does the usability outweigh the likelihood of that risk? I don't know, but I think that is a core, critical question, and we shouldn't just blindly deploy biometrics everywhere without understanding its answer.

With regard to protective DNS and facial recognition, how does the NSA plan to address concerns about the dual use of these technologies? For example, I could see public concerns arising from protective DNS being used as a censorship tool, or facial authentication information being used for surveillance purposes.

Yeah, you're absolutely right. Almost every technology on the planet has dual use; it can be used for good and bad. It's all about intent, to me, to be frank. If somebody deploys a technology intending it to be malicious, it will be. We probably can build in some protections against that. We are in favor of as good security as possible. We put out a document, for example, during the pandemic about how to pick a collaboration platform, and one thing we said was that you should prefer a collaboration platform that has end-to-end encryption. Some might say that is contrary to NSA's foreign intelligence mission.
Cybersecurity dominated that conversation, and we said it is better for all of us to have end-to-end encryption than to have any other choice. Security is always our default; that is always what we are going to pick first. But yes, of course there are trade-offs, and we have very cognizant, careful conversations about that all the time. Good question. Other questions from the room?

I have a question, actually. You mentioned things like secure DNS. There are obviously industry solutions to security, things like the Quad9 DNS server, that work toward a similar goal, and there are also academic solutions: published literature, prototypes, that sort of thing. To what degree do you see NSA re-implementing these techniques that are maybe available from industry or available in academic research completely in house, versus reaching out to an industry partner or an academic partner? How does that collaboration work?

Yeah, that's a good observation, and I will be honest. Decades ago we built everything in house. We probably replicated a lot of software because, for any number of reasons, instead of buying it, the government said, we're just going to write that on our own. That pendulum has swung quite dramatically toward the adoption of commercial off-the-shelf products and collaborations with industry and academia, because it's a waste of taxpayer money and it is not a good use of our time and our talent to rewrite software. We should be spending our government resources on things that we are uniquely good at, and we should be leveraging the things that industry and academia and even other parts of government can provide. I think that partnership helps make everybody a bit better. So yes, we don't want to re-implement. For the protective DNS system, for example, we use a commercial provider. We contract that out; we had commercial companies bid to do it. We probably could have done it ourselves; we didn't even try. That wasn't a good use of our taxpayer money or our time. But we partner with people to try to make sure those systems are as effective as they can be, so that the special knowledge we have helps improve them. We want them to be better; we want to use our unique perspective to help with that. That's a great question, and that is exactly what the office that I'm in now, the Cybersecurity Collaboration Center, does. That is our job. It is our job to go find people who will sit down with us at a table and do that kind of work together, whether it's, let's develop a new analytic to detect a new threat, let's think about how to mitigate this kind of emerging situation, or how can we protect the defense industrial base together. It's not us telling them what to do. It's them actually sitting at the table with us: we say, when we look at our data, we see this, and they, as a partner, say, this is what we see, and both of us together are better as a result of the partnership. I think senior government people have been saying this for a long time; it's nice to finally see it in practice.

Thank you. I also have a question from the online group. Someone asked if you could speak a little bit more to the immutability of data and why that's an issue; for example, with biometrics, they can be reliable precisely because they refer to a unique person.

So, questions on the immutability of data, right? The trouble is that if your fingerprint gets captured or compromised, you don't get a new one.
It is very easy to pick a new password: if your password shows up in a data breach somewhere, you can just set a new one. But if the only way to log into your bank or your computer is with your face or your fingerprint, and that gets compromised, you're kind of stuck. Two-factor authentication is sometimes a way around that. It has been a limitation to the adoption of biometrics for a long, long time.

Other questions, anything from the room? Well, let's thank our speaker. Thank you so much, Josiah; that wraps us up for today. See everyone here next week, same time, same place.