Welcome everyone to another Friday here at the School of Cybersecurity and Privacy at Georgia Tech for the Cybersecurity Lecture Series. This week we have one of our in-house people, Professor Jon Lindsay from the School of Cybersecurity and Privacy. I am honored, and this is going to be a very exciting presentation. Jon, you're going to talk to us about your most recent publication in International Security, which is the flagship journal in international affairs, where Jon focuses all of his research. So please take it away.

Alright, thanks very much. It's really fantastic to have an opportunity to speak to this audience and in this forum. Clearly we have a lot of great lectures here on cybersecurity, by which we usually mean the security of cyber systems. But it's important to bear in mind that we can also talk about cyber in-security: the use of cyber systems in the broader international security world. So a little bit of what I will talk about will be more traditional cybersecurity concerns, but we're going to broaden the aperture and look at the applications of artificial intelligence for war. I'm really happy to present a paper just published there; you can find it, it's ungated. And I'm happy about this paper for many reasons. It's my first Georgia Tech byline, so that's cool. Also, this is a very interdisciplinary collaboration. My coauthor Avi Goldfarb is an economist who studies the impact of artificial intelligence in business at the University of Toronto, and this was really one of the coolest interdisciplinary collaborations that I've had. I think we've learned a tremendous amount from each other in this conversation, so it really exemplifies the spirit of interdisciplinary collaboration that we're trying to get going here at SCP.

So, no secret to the people in this room: artificial intelligence is a big deal. Tremendous technical progress; we're seeing milestones fall. People are familiar with the milestones, AlphaGo, right? So it's not too far of a leap to go from AI transforming Uber and Lyft and Google Maps, right, to the imaginaries of Hollywood. Terminator and HAL give us this idea that artificial intelligence will be substituting for human warriors and becoming a major threat in the world. Now, this is backed up by a number of news stories, actual concrete uses of artificial intelligence systems in international relations. This one refers to a ground-based system with a sniper device that had some AI to stabilize and aim the rifle at the target, who was the lead engineer in the Iranian nuclear program, because there was about a two-second delay from the operators back in Israel. So AI here is being used not to replace, but to augment and extend human abilities on the battlefield. There's also a wide belief amongst world leaders and most defense analysts and policy professionals that artificial intelligence is going to be a major factor in economic and national security competition going forward. We're seeing huge investments in China and the United States. And of course there's Vladimir Putin's famous line that whoever masters AI will become the ruler of the world. Now, Vladimir Putin's war has turned the entire world against him; this has nothing to do with AI. And I'm going to use the opportunity of this presentation to talk to you a little bit about AI in general.
But then we will tie it to this unfolding conflict and talk about some very tentative implications for thinking about artificial intelligence in war in general. Now, there's an emerging literature in international relations, and a larger base of people interested in the nexus between AI and war. There are a couple of titles out there; even Henry Kissinger has recently gotten in on the game with a new book about artificial intelligence. Here are a couple of broadly shared assumptions about what the effect of AI on war will be. Number one, the core assumption that most people have is that AI systems will fundamentally be able to out-fight and out-think human beings, right? AI forces will be able to create swarms; they will be fighting, quote, at machine speed. This means that war will be unfolding too fast for humans to control. You might want to have a human in the loop, but if your adversary has AI systems that are able to out-fight you, there will be incentives to start turning over the reins to AI. This will create all kinds of coordination problems as states and coalitions try to figure out how to manage their AI systems. And this creates some real worrisome problems when you think about strategic stability, which is a phrase we use to describe the likelihood of deterrence failing and war breaking out. If you have artificial intelligence systems that are faster and more lethal than human-operated systems, there is an inherent incentive to move first, and first-mover advantages tend to incentivize preemptive or preventive strategies. And that means there could be a rush to conflict, okay? This could be especially disastrous in the nuclear realm, perhaps making inadvertent escalation more likely. And then there are all kinds of scenarios that you can talk about in the cyber conflict realm as well.

There are also stories about who's going to care about artificial intelligence. Normally, authoritarians like Vladimir Putin have a problem: they want to have an army, and they want their army to be effective on the battlefield, but the guys with guns might also turn their guns on the leader. Authoritarians often engage in what we call coup-proofing, which is making it very difficult for their militaries to stage a coup; but in the process, they also make them ineffective on the battlefield. And I think there's a lot of evidence that we have seen some of that in this particular war, with abject failures to plan for the war that they find themselves in and to inform the troops about the kinds of missions they were being asked to do. So there are all kinds of information pathologies associated with that. If you could rely on automated machines, they might be both loyal and effective. And if authoritarian states rely on AI, democratic states are probably going to do the same thing in order to keep up. But they also might want to rely on AI so that they can keep casualties down; there's an idea that democracies are casualty-averse. And then there's this bigger story, right? When we think about not just narrow AI, AI to speed up information processing, but artificial general intelligence, which meets or exceeds all human capacities, then maybe we'd be looking at some really alarming scenarios where superintelligence will enslave or out-compete or destroy us. I'm not going to talk about that too much; if you want to, in the Q&A, we can go there.
But I do want to focus a little bit more on what we're actually doing with AI. Now, all of those possibilities are interesting and important and worth pursuing. A future of automated forces may indeed be incredibly destabilizing; it may be problematic for democracies and authoritarians alike. But I think it makes some significant assumptions about what warriors actually do. So if we put aside the question of what an automated force looks like and start asking whether you can actually automate the force, whether war is the kind of thing that will support this kind of automation, this brings us into a different set of questions. And when we look at the history of information technology and military affairs, what you start to see is not technology replacing people, but people using technology in new and creative ways that create new jobs, new military specialties. This is a picture from one of the radar stations during the Battle of Britain; actually, this is one of the experimental stations where they did a lot of the work needed to get radar up and functioning. So you had scientists working with operators doing operational research analysis in the middle of a hot war. And what I think you can see here is what a dense set of social interactions we're looking at: people that are listening to information, that are communicating it, that are participating in the systems, and that are debugging them as things break down. Okay, not much has changed. The world has become more complicated, we have more sophisticated digital technology, but we still don't see people being replaced by technology; we see societies of people interacting with it. Alright, so this is the inside of a Reaper drone control station. When this was originally built, there were six screens; there are now something like sixteen. Most of those started off as hacks that operators brought in and started to customize, becoming more and more professionalized in the process as operators started to figure out: oh, this is what the war looks like, this is what our mission actually looks like, what additional technology can we bring in to start to solve some of these problems? So when you look at information technologies in war, what you find is information technologies in the social practice of war. And finally, I think it's very important to bear in mind that warriors don't just fight wars at an operational level; they are mobilized in a strategic and political effort, and there are often a lot of stupid mistakes and bad strategies that go into those wars. And let's bear in mind the human costs and tragedies, which should always be foremost in our minds when we're thinking about technology and war.

So, this paper is essentially a marriage of two books, okay? On one side, we have a book by Avi Goldfarb and his colleagues taking an economic look at artificial intelligence. On the other side is my book on information technology and military power. One of our mutual students got us together, we started talking about this, and 2.5 years later (blazing speed for social science) this paper was the result. So let me outline the general argument by looking at both halves. There is a huge literature on the economics of technology and innovation, and at a very high level, these are some of the insights. We do see technology replacing some jobs in history, okay? These are substitutes; think about automobiles replacing horse carts.
But the deployment of technology often depends on and creates new activities. We'll call these complements. If you're going to have automobiles, you need more people that understand mechanics, you need asphalt roads, you need people to build and manage that infrastructure. So as technology tends to drive down the price of substitutes, those complements become more and more valuable. Automobiles make mobility cheaper, okay, but that makes all of that infrastructure more valuable. So the economic impact of new technologies, in study after study, really highlights the impact of complements. Complements are what are driving the world. AI is no exception, and we're going to talk about what those complements are.

Some very general insights from the literature on military innovation: it draws a lot from the economics of technology, so it shouldn't surprise you that there is some overlap. There's this shared finding that weapons depend on complements, right? The way your defense industrial base is organized, who's building those weapons, how much they cost, the organizational capacity of a military to absorb them, to plan, to actually figure out doctrine for employing those weapons: these things turn out to be really, really important. And when you look at the quality or quantity of weapons that are used in actual military campaigns, there's not a clear relationship between those technological factors and battlefield outcomes. We can all think of examples where high-tech militaries lose; we just saw the United States pull out of Afghanistan after twenty years. And there are examples of low-tech combatants winning, often because they're competing not on technology but on other things, like resolve and a willingness to suffer and to absorb and impose costs. This literature emphasizes the non-material factors: operational doctrine, organizational culture, the morale and cohesion of forces. We're seeing this big time in Ukraine right now: a huge overestimation of Russian capacity from looking at material factors, and an underestimation of Ukrainian factors. We see a demoralized and confused Russian force, and we see an incredibly motivated Ukrainian force fighting for its homeland. So the non-material factors are, study after study, incredibly important.

Okay, so let's put these two things together, and here's the general argument. AI is a substitute for prediction (I'll talk more about what I mean by that), but it requires data and judgment. That's true in the military realm as well, except that military organizations operate in a very different context; they have a different business model, as it were. Militaries need a lot of information to make AI useful, but war is an extremely uncertain and, by definition, controversial endeavor. Okay, so let's go back to basics. Here's a basic control loop. Everybody will be familiar with this; you've seen it in one way or another in your basic control theory classes or in economics. In military affairs this is called the decision cycle, or the OODA loop, a fun thing to say: Observe, Orient, Decide, and Act. And it has these four components. There's reality out there; you need to understand it and you want to shape it, so you need to get data about that reality. You need to figure out what is happening, recognize things that are in the world, make some estimates about what's going to happen.
There's something you want to do, okay? And then you're going to take action to influence the environment, and you want to then measure those effects. So we're going around and around, and technologies are useful in lots of different places here. For AI, we're going to focus on this internal prediction, or orient, phase. Now, the same decision problem is a universal cybernetic phenomenon, but there are very different decision contexts. I put these two guys up here for James, who's in the front row; he's in my geopolitics course, where we look at what old dead philosophers have to tell us about cyber warfare. On one side is John Locke, patron saint of classical liberalism, who describes the importance of property and the rule of law, legitimacy in economic transactions. And on the other side is a crusty, very bitter old Prussian, Carl von Clausewitz, who's famous for describing war as politics by other means: politics happening in a world of fog, friction, and confusion. These two guys are describing two very, very different environments. I highlight this because the examples of AI success that we have, and that strategic thinkers often appeal to (AlphaGo, Uber, Amazon, what have you) are operating in Locke's world. This is a world where institutions are providing stable transactions and common ways to measure things. There are reliable contracts, and if somebody breaks a contract, you take them to court. There are common standards and measures. This enables frequent, predictable market transactions. There are some important exceptions, right? On the Clausewitzian side, rather than those institutional factors, we're worried about surviving in a very difficult world. In international relations we call this anarchy. It means you're responsible for yourself. You can make some allies, but at the end of the day, you need to really, really be strategic in how you're thinking about power, conquest, denial, deception. Occasionally, you're going to use war for your political objectives. War is a very rare event in international affairs: low frequency, high consequence. It's chaotic, it's violent. It's very different from this Lockean world. But military organizations are institutions, okay? They've got some of this inside of them. So there are some aspects of what militaries do that look a little bit more like the commercial world, and that's going to be one of the key intuitions I want to convey to you: the aspects of the military that look like the commercial world are probably your best candidates for automation.

Okay, so this eye chart is the one chart that outlines the entire argument of the paper. In the middle, we've laid out a little model of decision-making; here's the OODA loop. Rather than a loop, we have all four of these aspects (data, judgment, prediction, action) coming together to produce decisions. So that's the same. But then we have this unique political and technological context, and this paper is about how that unique context of international relations shapes decision-making.
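To make that model concrete, here is a minimal sketch in Python, purely illustrative and with hypothetical stubs for every phase (none of this is from the paper itself): machine prediction slots into the orient step, while data, judgment, and action remain separate components that still have to come from somewhere.

```python
# Illustrative OODA-style decision cycle (hypothetical stubs, not a real API).
# The point: an ML model only fills the "orient"/prediction step; the data,
# the utility function (judgment), and the action are separate complements.

def observe(environment):
    """Data: collect raw observations from the environment."""
    return environment.sample()

def orient(model, observation):
    """Prediction: fill in missing information about the situation."""
    return model.predict(observation)

def decide(prediction, utility):
    """Judgment: pick the action with the highest expected payoff."""
    actions = ["hold", "maneuver", "engage"]
    return max(actions, key=lambda a: utility(a, prediction))

def act(environment, action):
    """Action: influence the environment; effects feed back into reality."""
    environment.apply(action)

def ooda_loop(environment, model, utility, steps=100):
    for _ in range(steps):
        observation = observe(environment)
        prediction = orient(model, observation)
        action = decide(prediction, utility)
        act(environment, action)
```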
Now, I've put robotics and the action arm in dotted lines over there. We're going to be talking about AI as machine learning, which is the aspect of AI that's really getting all the attention: improving our ability to make predictions at lower and lower cost and greater and greater scale. I'm going to talk a little bit less about drones and robotics, which use prediction and ML but are really also automating the action side. And then we'll look at some of the implications of having a different environment, which shapes the kinds of information you can have about that environment, and of your internal institutions, preferences, and organizational features, which are going to shape judgment.

Alright, so that's the overall model, and I'm going to make three arguments. Number one, the economic argument: in economics, as in war, AI is making prediction cheaper, but that is making data and judgment more valuable. Argument number two: AI can improve some decision tasks (there are features of military organizations and military activities that AI can improve), but it depends on certain conditions. In international relations, we rarely like to say the world is this way or that way; we're always asking under what conditions something is true. So this debate about whether AI matters or not, yes or no, is the wrong question to ask. The question is: under what conditions can AI provide an advantage in this particular task? Okay, so that's maybe the good news. The bad news, perhaps, is that if you get good at using AI in your military organization, it means you've gotten good at providing these data and judgment complements, and you're going to have to pay the price of increased organizational complexity. And in a strategic interaction, any source of strength becomes an attractive target for your adversary. So if data and judgment are sources of strength, they will also be objects of competition, which will also tend to increase the complexity of military contests. This is a different story than rapid, fast robotic wars.

So let's dive a little bit more into that first argument, again in the military context. People in this room are very familiar with this; there will be people in this room that know more about the technology of machine learning than I do. This is really going to be the one technical slide that we're looking at, because what I want to focus on for the rest of this talk are those complements. When we talk about AI, or at least when we're talking about AI in this paper, we're talking about machine learning: a particular set of techniques that, from an economic perspective, are providing a more efficient form of prediction. We're talking about narrow AI, not artificial general intelligence or superintelligence. We're talking about neural networks and deep learning and that suite of technologies, not good old-fashioned AI, which is more about optimization and theorem proving, a different, earlier approach to AI. And what we mean by prediction is filling in missing information, okay? That's different from statistical modeling, where you've got some specific parameters and you're trying to model the effects of those factors on whatever your dependent variable is; here, we're filling in that information inductively. These techniques have been around for decades, but it's the big hardware trends (more compute, more memory, better bandwidth, cloud data) that are making a lot of this available and commercially viable. The price of prediction is plummeting.
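Because "prediction as filling in missing information" is the load-bearing definition here, a toy sketch may help. This is made-up data with a one-nearest-neighbor rule, nothing like a production system, but it shows a label being filled in inductively from examples rather than derived from a specified statistical model.

```python
# "Prediction" in this sense just means filling in missing information
# inductively from examples. Toy data and 1-nearest-neighbor, for flavor.

examples = [
    ((1.0, 1.0), "truck"),   # (made-up feature vector, known label)
    ((1.2, 0.9), "truck"),
    ((5.0, 5.1), "decoy"),
    ((4.8, 5.3), "decoy"),
]

def predict(point):
    """Fill in the missing label using the closest labeled example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: sq_dist(ex[0], point))[1]

print(predict((1.1, 1.0)))  # -> "truck": the missing information, filled in
```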
Okay, lots and lots of applications. Lots of cool stuff happening here on campus, figuring out what you can do with prediction: everything from pattern and image recognition, malware detection, route planning and navigation, playing and winning multiple kinds of video games, targeted advertising (a blessing or a curse, depending on what industry you're in or what side of the privacy line you live on). So we've got lots and lots of different applications of the dropping price of prediction.

Okay, let's look at the first complement: data. If you're going to build an ML model (and if you've built one, you know this), you need thousands, if not millions or billions, of examples of relevant things in order to generate those predictions. We're talking about filling in information; it may be forecasting, but it may just be image recognition, and it creates the same problem. Obviously, some relevant data must exist. If no organization has ever done the thing you're trying to predict, you're just not going to have data about that thing. The existing data must be unbiased; we know this is a huge problem in machine learning. We've seen examples in facial recognition where the machines are trained on predominantly white faces, and so they have trouble classifying people of color, or they classify them in really offensive ways. And the data and data processing capacity have to be available; it doesn't even matter whether relevant data exists if you can't actually get to it. Another example of biased data: Amazon famously tested an AI that they were hoping would improve their ability to scan résumés. They fed in all of these examples of people that had successfully applied to Amazon and had been very, very good, thinking, okay, this will help us find these great candidates. And they recognized that the machine was rejecting all female candidates, because Amazon didn't have that many successful female candidates in its past. It was so bad that any time anybody mentioned something that would be stereotypically gendered, the AI was rejecting them. So Amazon made the decision not to actually deploy it. Again, you have to have access to the relevant data.

Okay, now the Clausewitzian world. This is the fog of war: tremendous uncertainty about the external situation. There may be no intelligence; the intelligence may be bad, partial, missing; or there may be too much information, and it's difficult to make sense of what's going on. Clausewitz talks a lot about different kinds of friction: people becoming tired, afraid, thinking with their heart rather than their head; things breaking down; wagons not showing up. There's political friction amongst different parts of the organization. So friction can be a result of enemy action breaking parts of the organization, or it can be a result of just accidents. I also want to highlight this category of information friction: clearly, all this information technology is supposed to reduce the incidence of uncertainty, but breakdowns and distortions in those systems become another source of uncertainty.
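Going back to the Amazon example for a moment, here is how that failure happens mechanically. This is a deliberately crude sketch with fabricated numbers: a majority-vote "model" trained on skewed historical outcomes simply reproduces the skew in its predictions.

```python
# Fabricated illustration of the biased-training-data problem described
# above: if past outcomes are skewed, a naive model learns the skew.

past_hires = (
    [("no_gendered_term", True)] * 90   # historical hires, mostly one profile
    + [("gendered_term", False)] * 8    # few counterexamples...
    + [("gendered_term", True)] * 2     # ...and fewer successes
)

def predict_hire(resume_feature):
    """Majority vote among past examples sharing the same feature."""
    outcomes = [hired for feat, hired in past_hires if feat == resume_feature]
    return sum(outcomes) > len(outcomes) / 2

print(predict_hire("no_gendered_term"))  # True: matches the historical profile
print(predict_hire("gendered_term"))     # False: the skewed history dooms it
```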
Complement number two: judgment. From an economic perspective, what we mean by judgment is the specification of the utility function. In more colloquial language, we're talking about values, goals, objectives: the things we want, the things we care about, the things we want to avoid. Values determine what it is that we want to predict, why we're going to predict it, and what we're going to do with those predictions once we have them. Okay, so maybe you have a really good weather-forecasting AI. Good; do you bring an umbrella to work? Well, that depends. Do you mind the hassle of carrying an umbrella? Do you mind getting wet? If you really hate getting wet and you're not worried about the hassle and you get a high forecast, great: 75 percent means bring an umbrella. If you really find it a hassle and you don't mind getting wet, that same forecast may lead you to not bring an umbrella. The AI is not going to tell you what your values are. There's lots of work on the infamous trolley problem: should the AI swerve to kill four kids on the side of the street, or kill the occupants of the vehicle? That depends on what your values for those different people are. And we've actually seen a couple of cases of Tesla crashes where the machine was saying, okay, there's a small probability that I'm going to crash, and they end up crashing, killing somebody, in Utah, right? Again, the machine made the decision, but somebody else had coded what that threshold was. The machine said, well, I'm below that threshold, I'm just going to keep on driving, and tragedy ensues. Maybe somebody else would have made a different value judgment.

Okay, what we call clear judgment (things that are defined in advance and can be clearly articulated) is a better, more useful form of judgment for automation: you can give it to the machine, you can build it in. And there needs to be consensus amongst the people that are using it. On the flip side, if you're in a world of tremendous fog and friction, Clausewitz famously says: yes, you can have an organization, you can have some routines, but at the end of the day you need this mythical quality of genius. Genius means seeing, through all that confusion, the right thing to do: having some really good intuitions and the determination to follow through on those ideas. It's a very, very romantic notion, but what Clausewitz is trying to get at is that the things commanders do, and do well, are really, really hard to build into standard operating procedures that can be totally gamed out in advance. This has been enshrined in a current concept called mission command in modern militaries like the US and Israeli militaries, where you come up with objectives but say very little about how the troops are actually going to get there, and you empower junior commanders to exercise a lot of initiative in getting the job done. The economic language for this is incomplete contracts: you don't know what the transaction actually looks like until people get there, and then you negotiate it at the last minute. So a lot is left up to local interpretation.
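To pin down what "judgment as a utility function" means, here is the umbrella example from a moment ago as arithmetic: a minimal sketch with made-up costs, where the same 75 percent forecast flips the decision as the values change.

```python
# The same forecast, different utility functions, different decisions.
# All costs here are made up; the point is that the prediction alone
# decides nothing until the judgment is specified.

def bring_umbrella(p_rain, cost_carry, cost_wet):
    """Carry iff the expected cost of getting wet exceeds the hassle."""
    return p_rain * cost_wet > cost_carry

print(bring_umbrella(0.75, cost_carry=1, cost_wet=10))  # True: hates rain
print(bring_umbrella(0.75, cost_carry=5, cost_wet=2))   # False: hates hassle
```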
Alright, but if there's a diffusion of leadership tasks across different parts of the organization, then it's going to be more likely that you'll have disagreements about what the organization wants to do and why. If lots of people are involved in defining the mission, if management is spread across not just units in theatre but higher headquarters, maybe even reach-back organizations in the United States or NATO headquarters, then you've got a much more complex, distributed managerial problem. And then you've got this last problem of motivating and socializing the troops. Mission command only works if you have junior soldiers that understand that this is an important objective, that they shouldn't kill civilians along the way, that they should generally try to look out for the welfare of their comrades but occasionally will have to make some sacrifices and hard choices. This only works if you spend a lot of time socializing people and helping them think about what it means to be in these kinds of situations, training them and telling them war stories. There's a big sociology literature on that.

Okay, argument number two: taking these parameters into account, AI is going to improve some decision tasks, but not all. Political scientists are very simple people; we think in two-by-two boxes. So here we have data, which can be high quality or low quality, and judgment, which can be clear or difficult, and this gives us four really basic categories. In the best case, fully automated decision-making is efficient, because you can give a clear goal and you've got lots and lots of great data available; this is the world where we'd really like to have our AI applications living and thriving. Down at the bottom, automated decision-making is not feasible: there's no data or bad data, and you can't figure out what you want to do, so humans have the advantage there. And the off-axis combinations are interesting. One case is where a lot of AI failure stories come from: you have clear judgment, you've told the machine, this is your utility function, this is your reward function, go forth and do great things, but the data is bad, and so you have automation starting to do very, very bad things. But then you've got this box down here, which is actually where a lot of real-world, usable applications live. You've got a lot of good data, but figuring out what exactly you want to do with it isn't always obvious. Here you can start to build a human-machine teaming relationship: machines are providing predictions and decision support, but ultimately human beings are making the decisions.
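Here is that two-by-two as a tiny sketch; the quadrant labels just paraphrase the four categories above, and the examples in the comments are the ones discussed in the talk.

```python
# The two-by-two: data quality crossed with judgment clarity.
# Labels paraphrase the four quadrants described above.

def decision_mode(good_data: bool, clear_judgment: bool) -> str:
    if good_data and clear_judgment:
        return "full automation"            # e.g., automated mining trucks
    if good_data and not clear_judgment:
        return "human-machine teaming"      # e.g., decision support, Maps
    if not good_data and clear_judgment:
        return "premature automation risk"  # clear goals, ambiguous data
    return "human decisions only"           # fog, friction, contested values

for good_data in (True, False):
    for clear_judgment in (True, False):
        print(good_data, clear_judgment, "->",
              decision_mode(good_data, clear_judgment))
```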
So, fully automated decision-making: this means we have good data and clear judgment. There are lots and lots of business examples. Rio Tinto has these automated mining trucks. There's no civilian traffic; it's pretty obvious what the trucks need to do: if the way is clear, go forward; if there's another truck in front of you, stop; stay on the road. A pretty basic control problem, a really well-constrained task, and AI does fantastic there. The follow-me drones that you've played with or that you've seen in sports: there's a very clear thing you're asking them to do (stay above me, thirty feet away, fly yourself around, compensate for the wind), and the robot can figure out how to do that. Playing video games: how you actually win is a complicated engineering problem, but at the end of the day (get lots of points, get to the next level) you can define what success looks like. Now, military bureaucracies are incredibly routinized things, almost to a laughable extent sometimes, and they exist to buffer out the effects of anarchy; there's a reason why you have standard operating procedures. So any tasks that are analogous to civilian organizations are probably going to be your best candidates for automation. I think that's going to be things like administration and personnel, logistics and sustainment. These are not the high-profile Terminator and HAL and C-3PO type applications; these are the more boring back-end administrative things that make organizations work on a daily basis, because there's lots and lots of information and those tasks tend to be really routinized and standardized. Big caveat: bureaucracies fight over resources, and they may not be sure how they want to use them. And when we're talking logistics, that should be painfully obvious to all of you looking at the Russian situation right now: when you have extended supply lines that are exposed to friction and enemy action, those become anything but predictable and routine.

Let's go to the opposite category: bad data, difficult judgment. We don't have any AI examples here, because AI is really not well suited for creating and leading companies or political movements, or setting new legal precedents. That's kind of a blank category; anywhere you're maximally in Clausewitz's world, AI is not going to help too much. There's Carl, looking forward a hundred years and forecasting where AI will and won't be useful: strategy, command, leadership, these classic military functions will remain the human-advantaged applications.

Now, premature automation: this is where a lot of the debate is in the AI-and-war literature. You've got machines that have been given clear goals, but they're operating on ambiguous data. We've talked about the Amazon example. You may remember the chatbot that became a vile racist really quickly because it started learning from things that were on Twitter and the internet. There were early versions of chess AIs that learned, "I'm going to play chess, I should sacrifice my queen immediately," because all of their examples had grandmasters doing that, but for a reason, and the machine didn't notice the reason. In the military AI world, the horror story is the RoboCop scenario: the machine thinks it has a valid target and is going to pull the trigger, but it has completely misread the situation. Tragedy ensues, and you end up with a machine that can make decisions quickly, but can make bad decisions very, very quickly. There are lots of military prototypes in this area: loitering munitions, unmanned fighters, drone swarms, cyber defenses.
These look a little bit like premature automation, but when we actually start looking at many of these examples, we find that they fall into this last category of human-machine teaming, where you have human beings that are engaged in that decision cycle in some way, shape, or form. So, lots of examples right now. Google Maps provides the Uber driver with the route, but the Uber driver is going to figure out whether we really want to go that way: hey, passenger, is there a way you prefer to go? Or, I know there's construction in this area. So you're getting some great predictions, and those predictions are putting taxi drivers out of business, but the business model only works because you have human beings that can provide that judgment component at the last mile. I think there's a lot of great application here in the intelligence and operations world, because these are data-rich environments, data-rich environments about things that require really difficult, nuanced, tricky decisions. So: target and threat recognition; face and emotion detection to figure out what's going on with a crowd or what your intelligence situation looks like; automated translation systems, and then figuring out what you actually want to say and what's worth doing. Similarly in the operational world: in fact, many of the applications in operational planning may be even better than in intelligence, because you can at least control how and when and in what format your own forces report; it's sometimes hard to get your intelligence targets to behave the way you want them to. So there are some great, rich data opportunities here, but you've got to have people in or on the loop figuring out what's going on. Now, the problem of decomposing tasks into things that belong in the human-advantaged box or the automation-advantaged box is also going to be part of the teaming mission. Here our unit of analysis is the decision task, and you can recursively decompose it. But figuring out where and when you want to manage the cognitive load between humans and machines: that also is going to be a matter of judgment.
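One common pattern for that kind of teaming, sketched below with a hypothetical confidence threshold rather than any fielded system, is to act automatically only on high-confidence predictions and route the ambiguous cases to a human operator. Note that where the threshold sits is itself a value judgment.

```python
# Hypothetical human-in-the-loop triage: automate the confident cases,
# refer the ambiguous ones. The threshold itself encodes a value judgment.

def triage(label: str, confidence: float, threshold: float = 0.95) -> str:
    if confidence >= threshold:
        return f"automated action: {label}"
    return f"refer to human operator: {label} ({confidence:.0%} confidence)"

print(triage("supply truck", 0.99))  # automated action
print(triage("supply truck", 0.62))  # referred to a human at 62% confidence
```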
Okay, so to summarize what we just talked about. When you have a stable and largely cooperative environment, you're more likely to have high-quality data, and when you've got lots of organizational standards and solidarity within the organization, you're more likely to have clear judgment. That's good for AI, and a lot of those applications are going to be in administration, personnel, logistics. In a really turbulent, competitive combat environment, or when you've got a lot of idiosyncratic local practices and conflict amongst different parts of the organization or in civil-military interactions, you're in the realm of purely human decisions. And then we've got the off-axis categories: places where automation could be very, very dangerous, and places where automation can be promising but requires a much more complex relationship between humans and machines.

Argument number three: those same conditions can help organizations do some of their tasks, not all of their tasks, well, but that has the potential to start actually changing the nature of the problem. So, historically (go back to our OODA loop), a lot of innovation in the 19th and 20th centuries was in the area of mechanization: enabling military forces to go further, to have better protection, to project fires at larger volume and greater range. That's the mechanization revolution. And it doesn't stop there. You don't have to look too hard at World War One and World War Two to see that headquarters staffs got really, really big, because the ability to act at larger and larger scale created a huge information burden: where should you go, how do you keep these things sustained in the field? That created a big demand for information. So the later 20th-century military revolution is all about intelligence and communications: getting better data so that you can manage the increased possibilities that mechanization gave you. And now we're on the cusp of this new revolution, which is AI. The question is: is that a different revolution, or is it part of this larger information-processing movement? That's what I want to argue, and I want to put some data up here really quickly so you can see that there has been a big historical change in the way military organizations are organized. The bumper sticker is: military organizations have substituted information for mass. What we're looking at here: the dotted lines show total enlisted and officers in the US military throughout the 20th century, and the solid lines show the ratio of officers to enlisted, a rough proxy for the balance between people involved in leadership and information processing and people actually doing the physical work. I've compared the Army to the overall numbers, and you can see that even in a manpower-intensive force like the Army, you still have the same trend: increasing information-processing labor roles in the military. Then we've got something similar if you compare the major maneuver unit in the United States military from World War One to early Iraqi Freedom: a decreasing tooth-to-tail ratio. There are fewer combat units and more support units, and a lot of the support units are headquarters and information-type units. Even where the ratio starts to go up, when you add the contractors back in, it goes back down: the military is trying to improve its tooth-to-tail ratio by putting more contractors in the force, but really you're just seeing more and more information processing.

So I would expect that AI, because it's going to be really focused on a lot of these back-end sustainment and information-processing activities, will probably result in a deepening of this already extant historical trend towards increasing information specialization. That has two implications. The first is that we're going to see more and more obsession with the role of information in military and strategic contests. This has been a big trend throughout the 20th century; AI is just going to continue to deepen that obsession with OODA loops and decision cycles that are now increasingly distributed across large, complex organizations. We can also expect staff officers to continue to be really, really obsessed with the sources, the quality, and the provenance of their data.
We've all heard the old saw that amateurs talk tactics and professionals talk logistics. Open up the black box of a military organization and you'll see that logistics also means an intensive obsession with the engineering details of C4ISR: command, control, communications, computers, intelligence, surveillance, reconnaissance. Those become really, really big issues of concern for staff officers. Data relevance, curation, and cleaning become more important. Adversaries are going to be more motivated to start interfering and messing with data, and of course that means cybersecurity will continue to be a concern; it's already large, but the more you depend on data and on the integrity of that data, the more that security is going to matter. So the bottom line here is that you can expect a lot more internal coordination issues and internal complexity as organizations deal with AI applications. That complexity means not just that people are trying to get the data, but that there are probably going to be more internal struggles about things like the goals, values, norms, and ethics of the organization: why it should do what it does. Different military services (Army, Navy, Marines) have very different ideas about how you should fight and win a war. We went through this in Iraq and Afghanistan: air forces have this very strategic, system-centric view that you just have to take out a few nodes, while the Army says, no, you need to live amongst the people and drink three cups of tea and win hearts and minds. That's a distinction about what matters and how you go about doing it, and therefore about where and how you're going to use AI. These collective action problems are going to become more acute as you have greater institutional complexity. And if you buy my story about human-machine teaming, it means that your individual operators, including very junior personnel, are going to have to be really, really smart. They're going to have to understand how their systems work, where the data is coming from, and what the second- and third-order implications are; they're going to have to constantly be debugging the implementation and alignment of their AI systems. That's a tall order: you're asking some young junior officer, right out of ROTC here at Tech, to be both a Clausewitzian genius who understands what's going on on the battlefield, and also a talented enough hacker to reconfigure these systems as needed on the fly. Civil-military relations scholars worry a lot about the politicization of the military: the idea that the military starts getting more and more involved in political affairs. At the extreme, that means having a coup, but it can also mean distorting the way that you procure and use military forces. And if you are asking all of your troops to start asking really, really critical questions about why they're making predictions, how they're being used, and how they want to integrate these things, you may start to introduce some civil-military issues. One reason for coup-proofing, after all, is to keep your forces from asking any of those hard questions.
In summary, because I definitely want to talk a little bit about Ukraine and some of the implications there: we have these two visions. There is the very popular vision, call it the substitution theory, predicated on the idea that AI is going to replace human warriors: AI will be used by everyone, war is going to be faster, more lethal, and more decisive, and you're going to have lots and lots of deterrence failures. And there's the complementarity theory of AI that I've just outlined, which says: humans are going to become really, really essential for providing those key complements. The complexity of those complements is going to limit diffusion; not all states are going to be able to pull off this trick. And as a rule, I would expect war to become more protracted, confusing, and ambiguous, precisely because data and judgment themselves become at risk, and so legitimacy, ethics, cohesion, and strategic objectives become more and more important as you rely more and more on AI. As for deterrence failure, I think this is going to remain a problem either way, because it was never about the technology anyway. Deterrence is always fundamentally about how much you care: what do you want, how badly do you want it, and how much are you willing to risk in order to get somebody to do something or to stop doing something? Again, fundamentally a judgment problem.

Okay, so all of this has been very abstract; we've been talking about future war. We have the most significant land war in Europe since 1945 ongoing right now. This is a terrifying situation, rife with all kinds of potential for going bad in any number of ways, and perhaps most urgently there is the incredible suffering of the Ukrainian population, intensifying by the day. And the question is: where is AI being used, where could it be used, and where could it not be used? Are those the applications that are driving the war, or is it good old-fashioned miscalculation and uncertainty and bad political decisions that are driving this fight? Now, we don't think about this as an AI war, but there is a lot of AI involved right now. It isn't the autonomous lethal killing machines of science fiction; it's a lot of the back-end activity enabling the supporting functions that are already influencing, or being influenced by, this fight. Most obvious would be the information conflict. One of the signal things of this war so far is that Russia, despite having a reputation as a master of information warfare, is getting its butt kicked by Ukraine. Ukraine is able to represent itself, justly so, as the victim; it's creating sympathy and cohesion around the world. NATO is more unified than it's ever been. Western nations are more resolved to impose sanctions that are hurting us, but hurting the Russians more. The information fight is helping Ukraine, and to the extent that that relies on the news and social media and Twitter and how you're finding out about it, there's a lot of AI already involved. I put this wonderful, heartbreaking picture up there. It's a staged picture, a fake picture, but it captures a lot of these themes. Here's a young girl; she's nine years old.
She's sucking on a lollipop. She should be in school, but instead she's got her father's shotgun, waiting for the Russians to come. And there are lots of images like this that have really helped Ukraine get the edge in the information conflict. Then there's the financial dimension: trying to figure out which sanctions will work, how to protect your capital from those sanctions. Tons and tons of AI applications there. The bottom line is that the AI applications already in play are in play not because they're military applications, but because they're part of the regular global civilian economy. And to the extent that you're going to mobilize a global civilian economy in support of, or in reaction to, a war, you're going to see AI: AI in the Lockean world rather than in the Clausewitzian world.

Now, there are places where I don't know, but I assume, that we're seeing AI. A couple of ideas: in what's going on in some of those seeker heads, there may be some interesting ML applications. Western intelligence agencies are certainly using a lot of AI in their analytical processes, and some of that output is being shared with Kyiv, so there may be an impact of AI there as well. Commercial cyber defense: we've seen Microsoft getting way out ahead of these threats in many cases, so there are some AI defensive applications. Interestingly, these are defensive applications; that's not the story we usually tell, of AI advantaging the offense. Also, these are AI applications that are prolonging the war. Why would they prolong the war? Because that's key to the Ukrainian strategy: the longer the Russians are in the field with extended supply lines, the more costs they're going to absorb, and the more solidarity there's going to be to impose sanctions. Right now, time is working against the Russians. So these are kind of counterintuitive things, but they make sense if we start to think about where AI is actually being mobilized.

Future applications: now we're totally in the realm of speculation, but that's never stopped a political scientist before, so I'll just go ahead and jump in. Logistics and administration: I said this is probably the easy case. But even in this easy case, look at what's happened to the Russians. Even if you really have your act together, trying to run a mobile offensive is really, really hard. I'd refer you to the United States in Iraq in 2003: lots and lots of things went wrong. We were amazing at planning logistics, still are, and forces still outran supply lines; there was lots of improvisation going on. Let alone a force that doesn't have the experience, doesn't pay attention to logistics as much, and hasn't told its soldiers what kind of war they're getting into because it assumed it would be greeted as liberators, and is suddenly up against stiff resistance. Just-in-time supply and more efficient personnel management are not going to offset those basic strategic mistakes. There are all kinds of applications we can imagine in fire and maneuver. I guarantee you that defense contractors are going to be showing us pictures of those Russian armored columns, and we're going to see lots and lots of advertisements about how swarms could have decimated them. You probably don't get people exposing those targets if they know you have AI, but we'll put that aside.
Your automated systems are still operating in the same environment, saturated with surface-to-air and surface-to-surface munitions. They're hidden in the forest, the weather is terrible, they're hard to find; things are still breaking down. You've still got a complex combat environment, and any of these machines that make mistakes and kill civilians are still going to enrage and energize the Ukrainians. What I'll say about intelligence is: yes, there are lots of ways AI probably could have improved intelligence one way or the other. Western intelligence was pretty good at predicting what was going to happen, although the West also significantly overestimated Russian performance and underestimated Ukraine. So it could have gotten marginally better, but you still would have seen the same issue even with better intelligence. This wasn't Vladimir Putin's problem: the information was available, the analysis was available. We've even got frustrated FSB intelligence officers saying, we did this work, but nobody asked us, nobody wanted it, they told us to shut up, and they still do. This is a very insular regime that was drinking its own Kool-Aid and had all of these assumptions about what Ukraine was and how the war would go. Better intelligence is not going to help if your leaders aren't interested in that intelligence. Likewise, on the cyber side, we've seen very little in the way of Russian cyber operations, and the best guess for why is that they didn't plan on it because they didn't think they needed it: they thought they'd just walk in and use Ukrainian infrastructure. Again, your AI-enabled capabilities, if you don't plan to use them or don't use them, aren't going to change the outcome. I think the bottom line here is that the hardest questions in the run-up to the war did not have to do with the technical balance of power. They were these ineffable, very difficult questions: how much are the Ukrainians willing to suffer? How much can they put up with? How many civilians is Ukraine willing to lose while it keeps taking the fight to the Russians? How much risk is the Western world willing to absorb; are we willing to run nuclear risks in order to put up a no-fly zone? These questions have less to do with technical feasibility and more with how you think about your relative goals and values. What does Mr. Putin want, how badly does he want it, what is he motivated by? At what point will the Russians be able to say enough is enough, declare victory in some unbelievable way, and go home? Again, these are questions that you can analyze, but they are ultimately about values.

So, conclusions, since this has taken so long. One: don't panic. I don't think AI is going to change the world or transform the nature of war; technology never has. And humans will continue to be important, in fact more important, not less. Because of this complementarity, we're going to see more complexity in war rather than less. But let's not be complacent about it, right? The story I just told happens because organizations care deeply about how they're using these technologies: figuring out how to do that human-machine teaming,
figuring out how to manage the complexity that results, and figuring out how to interact with constant innovation on the other side. I think there's also something important here when we're thinking about designing these systems or educating our future technologists: we want them to focus not just on the technology, but also on those more humanist aspects of how and why technology is going to be used. And I think these questions of ethics, values, morals, and curiosity become more and more relevant at lower and lower levels, as you expect people to have a stake in how and why their systems are working, not just to carry out orders. We told a very simple story about judgment here; as a future research issue, if we start to disaggregate judgment into all of its different components, I think we're going to see this same logic of complementarity come out. Alright, I know that we're at the bewitching hour, so I guess that's it, but if anybody wants to hang around and ask any questions, I'm happy to.

Let's thank our speaker. Absolutely outstanding talk. We have time for maybe just a few short questions for Jon.

You may know that Eric Schmidt and another big-name military figure whose name I can't remember started a national commission about AI in the military, and they were promoting something called AI supremacy. Can you tell me what you think that means and what it might contribute?

So, I guess my cynical answer is that that is a way to make sure that there's a lot of investment in this area. The Chinese have a very similar word, intelligentization; it's almost impossible to translate. This is kind of the American version of that. Both sides have decided that AI is the future of war, and the best way to sell that is to say it's going to be the one thing that transforms everything. I think that's really, really unlikely. When you read that report, it understands very well that human and organizational capacity, human capital, is really going to be the thing that makes any of this work, but the top line tends to emphasize the technology a little bit more. I do worry about underinvestment in the places where the United States really has an advantage over China, which is in the human capital in its military organizations, for a number of reasons. And there are reasons to be worried about squandering those strengths as we go more and more towards chasing the technological shiny object.

Excellent. Other questions?

Great talk. You talked about machine learning and what it can do for us, but we're ultimately talking about conflict, and so about adversarial machine learning in some sense. What do we know in terms of robustness, how brittle it is, how easily it is fooled, when you actually fight wars with it?

So, I know there's a big research area in cyber where you're trying to automate both sides, and they can be attacking reward functions, they can be attacking data and judgment. It's an active research topic right now, but I think those are going to be the same parameters that you ultimately want to look at. And as much as we try to proxy judgment and proxy reward,
you never quite get away from some designer needing to make a decision, whether they're doing reinforcement learning or supervised or unsupervised learning. There's still some point where somebody says: this is what matters, and this is the action that you take at this point. And if we don't get away from that, then these considerations are going to matter. If it's brittle and yet you're using it in a highly sensitive situation, then it's going to be very unlikely that organizations are actually going to trust it. That's the key thing. That's why that box of human-machine teaming is where all the action is: organizations can be very unwilling to turn over mission-critical functions to something that they don't understand or that doesn't work. And it's exactly the fear of brittleness, right, which maybe in some ways keeps that from actually being adopted. It's an open topic, though.

We had two questions from online; I'm going to try to combine them into the same topic. Game theory has largely been used for predicting military outcomes, so how does the use of AI come into this? How do you combine those two, the tool of AI with game theory? Similarly, and related to that question, how does cyber offense play into the gaming out of these scenarios within the military?

Great questions. The answer to the first one is that there is an ongoing debate about the relevance of game theory in thinking about strategy, and the fact that that debate is so fractious and unsettled should be a cautionary tale for anybody that wants to just say, forget the game theory, have the machines do it, and then we'll get to the right place. We've got game theory of nuclear deterrence, but now we're finding out through the new historiography that nuclear deterrence in the Cold War didn't work the way game theory said it did and said it should, and not necessarily because actors were irrational; there were other factors and other interactions that weren't explicitly modeled. That tells you that you could probably use game theory in an operational support mode, to look at different strategies in different kinds of strategic interactions, and maybe that would help open up options and risks that operational planners hadn't considered. But that's AI game-theory tournaments in support of strategy rather than executing it. I think there's rich potential there, but the broader issue is full of flashing red cautionary lights.

On the cyber side, I guess my answer would be a little bit the same. There's also an incredibly active debate about what the impact of cyber operations on warfare actually is, and this particular conflict has been yet another data point, in twenty years of data points, suggesting that the impact of cyber warfare may have been oversold: that what we're actually looking at are marginally supportive mechanisms, or things that matter most in peacetime or gray-zone intelligence conflict rather than in wartime. If you go high-order and you've got high-intensity ground combat, then for the war itself, cyber is kind of a sideshow. Though again, if we make the sideshow better, does it become less of a sideshow? I think that's part of the larger cyber debate, rather than a matter of optimizing the cyber warfare of today.
Answering two questions at the same time: that's why you're a professor. Jon Lindsay, thank you so much for this outstanding presentation. We're out of time, but if you have a few minutes, stick around and maybe we can ask some questions one-on-one. Thank you, everyone; see you same time, same place at the next cyber lecture series.