[00:00:05.13] DAVID GRIMM: Professor Woods, professor in the Department of Integrated Systems Engineering at the Ohio State University, with a PhD from Purdue University, has worked to improve system safety in high-risk, complex settings for over 40 years, studying human coordination with automated and intelligent systems across a variety of applications, including work from 2000 to 2003 in response to several NASA accidents. [00:00:30.12] His books include Resilience Engineering. He developed the theory of graceful extensibility and founded the SNAFU Catchers Consortium to build resilience in critical digital services. And the results of his work on how human-machine systems succeed and sometimes fail have been cited over 38,000 times, with an h-index greater than 92, across a series of books, including the two-book series Joint Cognitive Systems. [00:00:59.64] He is a past president of HFES and a past president of the Resilience Engineering Association, and has received many awards, including the Ely Best Paper Award and the Kraft Innovator Award, among several others. He provides advice to many government agencies, such as the US National Research Council on Dependable Software and the US National Research Council on Autonomy in Civil Aviation, among others, and was an advisor to the Columbia Accident Investigation Board. And finally, he is a veteran of over 40 years of battling zombie ideas. Thus, without further ado, Dr. Woods. [00:01:37.62] DAVID WOODS: So our topic today is appropriate [INAUDIBLE] for recent events in the news this week: how to kill zombie ideas. Now, the kind of zombie ideas we're talking about are myths about the relationship between people and technology. Why do people tenaciously-- next slide, please-- hold to these ideas? [00:02:05.39] So what we're going to do today is a radical reframing. I want you to start the process of learning to see differently, cut through the smoke screens and barriers that the zombies create, making you think that certain things are perfectly reasonable, when, in fact, they are empirically wrong, logically wrong, and theoretically wrong. And they are wrong in ways that are important. So there are many words we could be using, AI, autonomy; the buzzwords change with generations of technology and the way they're marketed. [00:02:45.19] We'll refer to automata or autonomous capabilities; it doesn't really matter what word we use. When people push the idea that machines can do things by themselves, and take over from people, and that will always be a better future, they make a claim. And that claimed future, actually, has already occurred many times over. It's been here a long time. And it has not now, nor ever, worked as advertised. [00:03:19.13] And the gap between these projections or envisionments and the reality, when we go study what happens when autonomous capabilities are deployed, often has deadly consequences when it occurs in risky situations. Now, right off the bat, notice I said we're going to do a radical reframing. Reframing, what's that? That's a pretty unique human capability, that we can reframe. [00:03:47.20] It's pretty hard to do. And often, we don't. Or we can get stuck in stale models or perspectives on the world and what we interact with. But we can do it. We can reframe. Machines can't. And people don't even try to build machines that could, which is a very interesting way for us to start. Next slide. [00:04:12.60] Now, we're going to start out-- and I said there was a gap. And the gap can have deadly consequences. Let's go back. I could go on for hours about these.
I'm going to do one quick, simple one. I went back to 1988. And I was asked by NASA; the FAA was there, Boeing was there, which, as we'll see, matters in a moment. And I reviewed and synthesized all the results up to that point about the interaction of people and the more autonomous capabilities that get deployed into risky systems. [00:04:45.69] And so here are some of the kinds of quotes from that report. Designers will be technology driven and build the automation that makes the most sense in terms of cost and use of the technology. The cognitive consequences are rarely considered. Brittle machine performance due to designers' overconfidence bias. Data integrity is also a major issue. Tremendous effort had to go into assuring that the automation was operating on correct data. [00:05:18.95] Obviously, there was a manual backup. But there were no provisions for supervisory control, no mechanisms to understand or track what the automatics are doing. In fact, the operator received barely any training or information about the automatics at all. No mechanisms to direct it, effectively. The only choice offered was between fully automatic and fully manual, but that choice must be avoided. We'd already seen by '88 that that architecture, either/or, was fundamentally flawed. [00:05:52.18] The greater need for training was not anticipated and required a bunch of scrambling. All these things were there. Next slide, please. Let's roll forward to 2018 and 2019. We had two crashes of the Boeing 737 Max. And they were caused by runaway automation, the MCAS system, triggered by bad sensor input. Remember that point about sensor data integrity? Well, they didn't really need to invest a lot, because they could just run one sensor signal, one sensor channel, to the automatic system. [00:06:27.84] 346 deaths in those two crashes. Everything that I said in 1988 was true of how Boeing shortcut the engineering design, in terms of safety, reliability, robustness, and resilience. They hid the MCAS automatic system from the pilots. There was no training. There was no practice on anomalies that could arise, no supervisory control mechanisms. Fully automatic or manual, a reversion to the previous way you control that axis of flight. [00:07:03.45] It was a brittle system, with a trigger from bad input. The automation misbehaved. It was strong and wrong. Actually, that warning goes back to 1950 and Norbert Wiener, who actually developed much of the computer science and control automation background that led to the wealth of automatic systems we use today. He warned us about these kinds of issues over 70 years ago. Unbelievable. [00:07:35.23] And in the aftermath of the two accidents, there was an intense effort to transfer blame to pilots. And this is a classic thing when things go wrong. Why didn't they stop the automation from misbehaving? Why couldn't they take over successfully? 346 deaths, classic findings ignored, yet again. Next slide, please. [00:08:00.18] Or we could do this in other places. We could do this all day. Runaway automation in financial trading, half a billion dollars in losses, and an effectively bankrupt company. Next slide. We could talk about it in the operating room and infusion control devices; we could talk about it with radiation treatments and five deaths in the Therac-25. We could talk about this over, and over, and over again. But that's not what we're here for today. [00:08:28.63] We are here to understand zombie beliefs that are seemingly immune to disconfirmation.
That's what makes them a zombie. Right? And following Fritz Heider, the psychologist, in the '40s, and Joseph Weizenbaum, someone who escaped the zombies: technology change is a human story. What people see reflected in technology rarely is how the technology works, or the new challenges that arise from its limits, or how other people seeking advantage will hijack their storyline. All stories of technology are human stories. Next slide, please. [00:09:12.10] Just think about it for a second with technology, and even just the news that you hear this week in hearings in Congress. What is connectivity? It's technology, how I'm connected to you. But it's new forms of interaction between people. Hearings in Congress about social media and harm to children. What is sensing? The new sensor technology that powers so many things, it's new sources of information to people. [00:09:41.39] In fact, that sensing also means some people may take advantage or get an advantage that is a disadvantage to other people, as in issues of privacy now. Technology is just another means for people to seek advantage, which ends up inadvertently, or intentionally, putting pressure on other roles that people are in. Now, yes, we have to have automation, and autonomy, and various things, because otherwise, none of this stuff would run at scale. [00:10:14.75] As we seek advantages, we extend our reach, our capability for some roles. What's the impact on other roles? And so as we push advantage, no matter how it's planned, embodied, and deployed, inevitably gaps, surprises, exceptions, new challenges arise. What drives new gaps, and surprises, and exceptions? It turns out success. You provide value, so people adapt to take advantage, at least from their perspective, and their goals. [00:10:49.19] Whatever the technology, whatever the belief about what its impact will be, messiness reappears. And people adapt to fill the shortfalls. That's why you'll see SNAFU Catchers as our slogan and logo. That is, situation normal, all f'd up: it's normal. Messiness is normal. And the human role of catching snafus matters. This should be self-evident. The technology is always in service to some people's purposes. Next one, please. [00:11:27.54] So let's summarize all this stuff. There's a single line that does it. Stories of technology change describe or envision the new forms of congestion, cascade, and conflict that arise when apparent benefits get hijacked. It's all there. When you make-- when you insert new technology into ongoing worlds of human activity, what happens that matters? Yeah, there are some things you think are good. And they'll get hijacked in the process. And because of the messiness and the gaps that inevitably arise, there will be new forms of congestion, cascade, and conflict. Next slide. [00:12:11.46] Now, think about that. Congestion: where does overload occur? How does it spread? Where do capabilities get saturated? And we're going to hit this over and over again today. Limited resources, finite resources, are a fundamental hallmark of this universe that is inescapable. It may get relaxed a little bit at times. But finite resources are inevitable throughout the universe we live in. [00:12:38.85] Now, we seem to want to live in a universe where that's not true, which would be nice. But it's not here. Cascade: interdependencies grow as we pursue services that provide value to different stakeholders. Notice, all the words are about people.
And those effects spread and compound across the extensive and hidden interdependencies. So when things go wrong in one place, they can move quickly. Effects at a distance can complicate diagnosis and bring many parties and stakeholders into the process of understanding and responding to the trouble at hand. [00:13:19.50] Conflict: different roles experience different consequences for their goals. What do we see often? Risk shifting, load shifting, risk dumping; we'll come back to this. No matter what, when you inject new technological capabilities, it's not the same system with some performance improvements on some criteria, it's a different system. It's a transformed system. Risks change, exceptions arise, resources are squeezed, change continues, surprise occurs. Next slide, please. [00:13:53.77] All right, this is the stage where we go, warning, Will Robinson. Pointing out that zombies are, in fact, zombies, that these myths are common and wrong and have been resisting reasonable efforts at disconfirmation, is very dangerous. Listening to this, much less participating in highlighting and fighting off zombies, can lead to anxiety, confusion, and even the urge to defend zombies. Next slide, please. [00:14:31.17] So that happened this week. It happened this week. I had to add this slide in. One of the leading researchers in human-autonomy and human-AI interaction, Dr. Mary Cummings, Missy Cummings at Duke, is being considered to be appointed as a safety advisor to the National Highway Traffic Safety Administration, NHTSA. And Tesla, Musk, and Tesla fanboys went crazy, with accusations of bias and poor science, and all kinds of things. Notice, I also put up the issue of the various kinds of accidents that have occurred with Tesla, such as the fact that they seem to have an ability to run into police cars on apparently more than one occasion. Next slide. [00:15:30.34] All right, so where are we? A psychological conundrum: why does this happen? And when we go through this, remember, we go back to the 18th and 19th century. This isn't a new thing. This is very old. In fact, this arose as soon as we had mechanical automata, where people started to over-attribute intelligence and capability to what were simple machines. Machines that simply worked as programmed and designed, even if we didn't understand how they would behave in a wide range of circumstances, because we could create such complex machines. [00:16:12.65] Something that we confronted in the first wave of AI, ridiculous hype, back in 1980, '81, et cetera. Next slide. So why this psychological conundrum? Well, the simplest way to understand that is certain kinds of linearization, oversimplification of complex systems. And we could go on just on these for a long time. Though you'll notice most of these are not on the list of decision biases. [00:16:47.25] Saying the AI, the AI can, the AI does, the AI will, whatever, commits the reification fallacy. Increasingly autonomous capabilities are not things. They are complex networks of multiple algorithms, control loops, sensors, and human roles that interact over different timescales and changing conditions. If there's a tangible platform, some of the network will be on board. And some will be off board that platform. [00:17:20.11] When self-driving cars have software changes driven into the vehicle overnight or whenever, we're indicating that the system in question includes those off-board people and capabilities as they update software.
Saying we can substitute an algorithm, a machine, for a human role and saying the system will not change, though it will perform better, commits the reification fallacy. [00:17:51.91] It's a different system, as we illustrated in one of a multitude of possible cases, studies, and worlds where this is happening. Deploying increasingly autonomous capabilities in ongoing worlds of human activity produces more tangles of interdependencies. Complexity grows. Essentially, as we get growth in capability, autonomy is a capability, not a solution. As we get more capability, we get offsetting complexity penalties. [00:18:22.21] Now, we could respond to those offsetting penalties, which would require a different way to think about how you would deploy, organize, and architect these larger-scale systems. But we try not to. Let's just push more machine capability. Next one, please. Next slide. [00:18:50.62] So we have the creeping complexity penalty, for example. What happens when there's a problem identified on a system that relies on autonomous capabilities? Add more sensors, more algorithms, more computations. There's no limit to the complexity of the interacting, interdependent computational components necessary to deploy and continue to maintain and improve autonomous capabilities as they contact the real world. [00:19:21.32] Now, in this process, there's a set of fallacies that arise. We can go all the way back to '49 and say, oh, it's kind of Ryle's category mistake. You look at the collection of pieces, the components and subsystems, as if they can express what emerges in terms of the whole. But there are patterns, and behaviors, and emergent properties of the whole that are not expressible in terms of the characteristics of the components. And to think that if we just know those components, we know the emergent properties of the total system, is a mistake. [00:19:59.80] And it's a mistake that we often see in accident analyses, for example, of highly autonomous systems when they get into trouble. We do the component substitution fallacy, where we say some component of the system had a weakness. Well, guess what? Finite resources mean there are always components with weaknesses. There have to be, because you don't have enough resources, time, expertise, et cetera. [00:20:23.32] There are trade-offs you have to make. So of course, you'll find component weaknesses. That is irrelevant to-- it's nice to fix, but irrelevant to understanding the emergent properties of these complex systems as a whole. And we could go on. Hindsight: go back to the earliest stuff we were doing on why complex systems fail. Overconfidence bias: you think your ability to outthink the world, and the exceptions, and anomalies, and surprises is much greater than it actually is. More of them are going to happen. And they're going to come at you from surprising directions. Next one, please. [00:21:02.20] What's a simple example? During the pandemic, the English couldn't run the tests, the standardized tests for admission to university. So what did they do? They said, well, an algorithm can predict what people would have scored on the test. And chaos ensued. Think of the people and the grades their teachers and schools predicted they would achieve, after all the enormous effort that went into preparing for these critical exams, which determine what university you might be able to get into and what field you might be able to study. Absolutely critical to those people. [00:21:44.73] The algorithm did things very differently, as the picture shows here.
Country-wide chaos. And if you aren't familiar with the last 1,000 years of English history, you might not be aware that the algorithm reflected that history of class bias. And people from nontraditional pipelines into major universities and fields of study were rejected in unexpected ways. So people from poorer backgrounds, people of color, people from nontraditional schools that fed those universities. And of course, the traditional feeder schools are all from the elite class. So it was quite the show. Next one. [00:22:28.11] So here, the linear simplifications actually come from some cognitive psychologists, Paul Feltovich and Rand Spiro. If you can, read their papers; they're classics. They're quite old. If you don't know them, you really, really should. This chart is-- or the write-up of this chart, because I have to simplify for a slide, is the only thing I ask my students to memorize. [00:22:51.27] The oversimplifications are ways that we treat dynamic, continuous, interdependent, heterogeneous, nonlinear, conditional, irregular systems as if they were static, discrete, separable, sequential, isolated, linear, routinizable. The right-hand side is a widespread set of assumptions that we can get away with. But we can only get away with them under very limited circumstances for a very limited time. [00:23:18.76] And so these are the zombies. These are the zombies that get in the way of understanding and designing systems for human purposes that take advantage of new capabilities in ways that will reasonably balance the new benefits, who gets the benefits, against which other roles get the risks and extra burdens, and how. Next one, please. [00:23:51.14] All right, so we've got zombies. They are empirically wrong. They are derived from linear simplifications. But they're not a bug. They're a feature. They are not going to go away. And they haven't gone away. We can run through decades of this isn't the way you should do it. And when you do it again the same way, we get the same kinds of findings. [00:24:22.54] We replicate them again and again in real-world deployments of new kinds of autonomous capabilities. Why are they a feature? They're persistent because they provide value to people. Again, it's a human story where the zombies come from. They assert some people's goals over other people's goals. Look at what went on with Tesla reacting to this appointment of a researcher. [00:24:54.22] One of the issues is the next one here: not actionable. The zombies can be framed in a way that gives the appearance they could lead to some improvements down the road. But in fact, they're not really actionable. They don't disturb-- don't disturb my efforts now to develop and deploy the technology for my purposes. So Tesla got mad, but not because of the science. Missy Cummings has been doing research on this for a long time and in different settings, with drones, often on the advantages of new capabilities and how to put them into an integrated system. [00:25:32.18] And what happens when she gets appointed to be a safety advisor to a government regulatory body? They go, wait a minute, they might do something to us. We'll see in a minute what the zombies howl. And of course, the zombies are valuable, because when things go awry, they provide a defense after the fact. Blame the other people in other roles. [00:25:57.98] Notice it's always they weren't careful enough. Not I'm not careful enough when I'm in the role of developing and deploying the technology.
And of course, over my 42 years, they keep replaying the same architecture for deploying these capabilities in relation to people, and stakeholders, and valued services. They replay the same line. And each time, what do they say? [00:26:24.00] A little more technology will be enough this time. Really? You know when I first wrote that in a paper? 1987. This is not new. But that's the power of the zombies. Next one, please. So what do zombies howl? Luddite! This week, Missy Cummings got attacked as a Luddite. How many times? You are slowing inevitable progress. The Luddite slander is common. And that's what we face when we try to fight off the zombies. [00:27:00.90] And the problem is systems engineering is trapped by linear assumptions. And they do that for tractability. So you can actually try to build something. They keep assuming things are mostly independent, and that you can work on them separately, and then put them back together, and check for interactions. And those won't be too difficult or too hard to find. And you can make it all work. [00:27:23.69] So the Luddite slander feeds back on engineering and actually makes the engineering of systems in complex worlds not likely to succeed. The Boeing example at the beginning was a breakdown in systems engineering. And it derived, effectively, from the zombies. The engineering of integrated systems, given the complexity penalties that come with growth in capability, is defeated, and slowed down, and sidelined by the Luddite slander. Next one, please. [00:28:05.28] Then what do zombies howl? It's human error, the system worked as designed. Can't account for erratic people. We actually tabulated all of these kinds of things in the '90s, in the reactions of organizations when their automation was a contributor to a fatal accident. In fact, what really is going on is failures due to brittle systems. And ironically, the systems tend to work, not because of the design or the technology, but because some people provide the ad hoc source of resilient performance. What we'll call resilience as extensibility, because resilience has now been used in so many different, and often contradictory, ways. [00:28:55.36] Resilient performance at the boundaries. Remember, everything is limited. There are boundaries. And there are surprises at the boundaries. How do we extend performance at the boundaries? Turns out people do, some people. Why? Well, that's another story. And how old is this one? My mentor, Jens Rasmussen, in 1981: the operator's job is to make up for the holes in the designer's work. The only difference now is we can show you-- we can prove to you that the designers, no matter how good you are, no matter how smart you are, will have holes in your work. Next one. [00:29:34.18] Failures due to brittle systems, because limits are universal. And limits are universal because finite resources and continuing change apply everywhere. Now notice, in the zombies, when they howl, they're howling that people are limited. Therefore, more automation and more automation is always better. But automation and the systems that develop it and deploy it also have limits. [00:30:00.12] And again, ironically, those limits are partially derived from the fact that when you provide valued capabilities, you trigger cycles and spirals of adaptive change, because people have to take advantage. And because when gaps appear, people adapt to close the gaps. Remember, every time we say technology, we're back on people.
[00:30:22.19] So when you deploy an autonomous capability, it has a window or an envelope where it is competent. Emilie Roth and I worked this out in the '80s. But it's going to be brittle at the boundaries. And these are, today, still uncontrolled risks for deploying autonomy anywhere and everywhere. [00:30:42.92] I brought this up at a moral algorithms meeting a few years ago with some of the leading AI researchers there. And I also brought up the reification fallacy about AI. It stopped them for about 10 minutes before they went right back to committing it. And this idea that these AI capabilities are brittle: they all paused again. But their rationalization has always been, that was the last way we did the technology, the last version of the algorithms. Not this one; this one we can show is better than the previous ones. Therefore, brittleness can't be a problem. In fact, next one, please. [00:31:24.38] In fact-- oh, I'm sorry. Go back one. Not quite ready yet. In fact, all of the systems are brittle. It's a fundamental risk. And that's why people become the ad hoc source to extend or stretch performance at the boundaries. Guess what? How does biology anywhere or everywhere deal with this constraint? Because it's actually a universal constraint. [00:31:49.13] And the answer is: build systems that are poised to adapt. No matter what scale you look at, no matter which system you look at. We can go all the way down to the cellular level, glycolysis; we can look at bone. We can look at all kinds of biological examples, including neurophysiology, how the brain works. And it's about being poised to adapt. It selects for future adaptive capacity, relative to the time and scale of the biological systems we look at. [00:32:18.29] So biology abhors competent but brittle systems. And it supplies something extra to overcome that fundamental risk of brittle collapse. And of course, accidents like the Boeing 737 Max MCAS system accidents are examples of brittle collapse. Or the runaway automation financial disaster was a brittle collapse. Or the Texas energy collapse in February of 2021 was a brittle collapse. In fact, it was a deliberately, intentionally designed system of maximum brittleness. It was really quite remarkable. Next slide. [00:32:59.05] So let's practice. Can you expose zombies? So here's a two-word phrase that is highly popular, explainable AI. And it's an oxymoron. You see that? I mean, it's popular. There are millions of dollars being spent on explainable AI. Well, first, did you know it failed completely the first time, in the '80s? Completely. Did anybody look at it? Did anybody write down that it failed? [00:33:30.46] Well, I was there. So I wrote it down. Because the zombies don't want you to know that. That would make some zombies disappear. They don't like that. Well, two, it commits the reification fallacy, treating AI as a thing, as if AI is something that is already an integrated system that can work and be deployed in the world. And you're like, no, it's not. It is a capability that can be integrated into a system, with a whole bunch of other stuff, that network of on-board and off-board capabilities and human roles. [00:34:07.68] Oh, three, it makes real-time activities impossible. Why? Because it can't keep pace with events. Things change over time. Events happen, and move, and go in different directions. And you have to change courses of action. Tempos of operation are critical. Explainable AI means you're taking a break from real time.
Let's sit back and talk about this for a while. It doesn't work that way when you're trying to control real, risky things. [00:34:34.90] And think of the pandemic. There are a whole bunch of layers of society, of jurisdictions, of hospital systems that had thought they were outside of the tempo of operations in an emergency room or an ICU. And all of a sudden, they were part of that clinical tempo. And they couldn't keep-- they struggled to keep pace with events. Explanation, by the way, isn't about an isolated agent. It's about at least two agents. Actually, it's just a tiny, tiny, tiny aspect of joint activity. And so we're mistaking the category, again, mistaking a piece as if it's the whole. [00:35:14.66] And I could go on, and on, and on. It's the same limited architecture that we have seen all the time. We're stuck on it. And it has a low performance ceiling as the rate of exceptions, anomalies, and surprises goes up. And we're constantly underestimating the rate of occurrence of anomalies and exceptions. [00:35:35.47] Why do we do that? In part because people adapt as the source of resilient performance, hiding (that's called the fluency law) the difficulties handled and the dilemmas balanced. Well, no one sees those difficulties, those gaps, those dilemmas that plague operations. Next one, please. [00:35:58.25] Another zombie in disguise would be ethical algorithms. Now this one, I mean, this is so oxymoronic it's kind of beyond belief that people can say this with a straight face. What's the whole point of the talk? All these stories of technology, what's the point of my predecessors, Joseph Weizenbaum? Go back to Fritz Heider. Go back to Norbert Wiener. Go back-- we can go back to all these people. [00:36:24.13] Technology changes are really human stories about conflicts, cascades, congestion, et cetera, et cetera, et cetera. The algorithms aren't systems. Somehow they're going to be ethical, or not? Go back to the example of the testing for university admissions in the UK during the pandemic. Pretending ethics can be in algorithms is a retreat from responsibility for considered judgment under risk, trade-offs, and uncertainty. That comes from Christopher Alexander 40 years ago. [00:37:03.79] Oh wait, back, one more. And the last one is, what's particularly galling about saying ethical algorithms is that actually it's a pretense for massive risk dodging and risk dumping on other people. That's what it's used for. Again, the zombies provide value to some people. And the value is, I can dump all the risk on you. And I'll get the benefits. Next one. [00:37:41.30] All right, breaking away from zombies. What's my experience? Well, my first decade I thought, if we just study this, people will realize-- I didn't even realize they were zombies. And then I kind of knew they had some zombie-like qualities. But I thought, science, we study these things. We observe. We extract patterns. We find patterns in how this works. [00:38:04.11] The phenomena of interest are in the world when you deploy these new technologies. So when we study it and find these regularities and patterns, then people would use them to design better. And we did designs. And we developed techniques to design, and all kinds of things to make it straightforward. We even really designed things. I actually designed a controller not using any zombies, using the opposite of the findings from the zombies. I thought they would fade away. [00:38:35.28] Boy, was I naive as a young person.
And so by the late '90s, I was starting to give talks as Woods Watching People Watch People at Work, or four W's, W to the fourth power. And what I realized was that they're not going to fade because they're wrong. They give value to people in rivalry and pursuit of advantage. They'll fade when they become irrelevant to the real issues people need to handle to make human systems work, systems that serve human purposes at increasing scale. [00:39:19.66] Now, they will also fade when we actually do the science, when we escape from the mists and fog created by the zombies and go behind them. That's what science is supposed to do: dig in and say, how do things really work? What are the fundamentals? And the fundamentals, surprisingly, rise above our standard disciplines of inquiry. They're about the adaptive universe and how it actually works. [00:39:47.96] And the adaptive universe you can think of as the full range of biological systems, including all human systems, since we're part of the biological, not the physical. It's the world of biology and the world of human systems. And that science base, which, by the way, has been emerging, and emerging more in the last five or six years, but emerging over the last 20 years at least, that science base leads to new ways to design systems and outmaneuver the complexity curve that accompanies growth in capability. Next one, please. [00:40:24.84] All right, so let's run through this. When you try to deploy autonomous capability into complex and risky worlds, and of course, notice with social media, what didn't seem risky is enormously risky now. Disrupting societies, disrupting political campaigns, massive disinformation campaigns, all kinds of things that are in the news all the time. And so that deployment process is hampered by the brittleness. [00:40:56.83] And descriptively, that's a sudden collapse in performance when events challenge system boundaries. And that's different from how well it performs far from the boundaries. So there are two regimes of performance. And it's downplayed, as I said earlier, on the grounds that the next advance in AI, or autonomy, or algorithms will lead to technology that escapes from these brittle limits. That's overconfidence, all of these oversimplifications, denial of the underlying science results, the belief that technology change can overwhelm those. [00:41:30.28] And so what do we find from the space shuttle accident, and the MCAS accidents recently, and this year's Texas energy collapse, on, and on, and on? More and more examples of the risk of brittle collapse. Next slide. All right, so complexity penalties arise from the development and deployment of technology with new capabilities. They limit performance, increasing the risk of brittleness, brittle collapse. [00:42:01.17] The current methods for developing and deploying new technology can't address these. And so there's new science on adaptive systems. And if you understand it, which is hard, because it requires you to know about things from many, many different disciplines, you can build pragmatic systems that can outmaneuver complexity. And we have examples. We have biological proofs of principle. We have human social systems as proofs of concept. We have Elinor Ostrom's Nobel Prize-winning work. [00:42:33.80] We have it where we can put it in equations. But remember what we've said all along: the human story. We are all players in this story, with limited models and bounded perspectives. Why? Because we are all in the biological sphere. We are all within that human sphere.
We don't stand outside it. We cannot be neutral, outside observers. We are participants in the process. Next one, please. [00:43:03.55] So here's the basic fundamental. You can think of this as the prime complexity penalty. This is brittle collapse. As systems grow in complexity, what starts to dominate their performance is a sudden collapse against the backdrop of continuous improvement and injection of new capabilities. As John Doyle and Jean Carlson put it 20 years ago, systems are robust to perturbations they were designed to handle, yet fragile to unexpected perturbations and design flaws. [00:43:36.20] And we really shouldn't have used the word flaws in those days. Because it's only a flaw in retrospect. And going into it, it was seen as part of the trade space of how to balance limited resources relative to potential gains. So systems are highly competent when events fall within their envelope of designed-for uncertainties. And remember, everybody misplaces that envelope, thinks it's bigger than it really is. And thinks the surprises that occur and challenges that occur are less frequent and less powerful than they really are. [00:44:10.82] And then they experience sudden, large failures when events challenge or go beyond it. This means the pursuit of optimality, notice I didn't say optimality, I said the pursuit of optimality, a human activity, increases brittleness. That's a theorem. That's a proven theorem. This isn't optional. This is the way this world works. Next one, please. [00:44:33.22] All right, so brittleness is a fundamental risk. And as I referred to earlier with biology, all adaptive systems develop a means to mitigate that risk. What that has led to in the last six years is the discovery of graceful extensibility. It's a fundamental discovery that covers biological, cognitive, and human systems. All scales, all adaptive systems, no matter what scale you look at, have to possess the capacity for graceful extensibility. What's that? [00:45:00.77] It requires the ability to extend or stretch at the boundaries when challenges occur. Why? Because of the risk of brittle collapse. In other words, viability of the system, in the longer run, is going to require stretching the boundaries no matter what. Put simply, viability requires extensibility. That's a hard universal constraint. That's like gravity for us, like gravity. That's how basic this is. Limited resources, regular experience of surprise. And we can see the surprising results in all kinds of settings. That's not this talk. But I am pointing out to you that there's a path to defeat the zombies. Next one. [00:45:48.19] All right, so the development of automata consistently ignores constraints like the ones I just pointed out, which gives us repeated demonstrations of brittle collapse. And now here's a constraint on the biological sphere, the human part of the biological sphere, that's different than physics. And it's different than straight biology, because of genetics. Designers can violate constraints on adaptive systems. But their systems can't escape the consequences of violating those constraints. [00:46:19.20] So the summary of what we've been talking about is: systems, as designed, are more brittle than stakeholders realize, but fail less often, because people in various roles, not all people, not all the time, adapt to fill shortfalls and stretch system performance in the face of the smaller and larger surprises that occur regularly. Some people in some roles are the ad hoc source of the necessary graceful extensibility.
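(A minimal sketch to make the two regimes of performance concrete, assuming a toy performance function; this is not from the talk's slides or from the Doyle-Carlson work, and all numbers, names, and margins are hypothetical.)

```python
# Toy model of "robust yet fragile": high, slowly degrading performance
# inside the designed-for envelope, sudden collapse beyond it. Trimming
# margin in pursuit of nominal optimality makes the collapse arrive at
# smaller disturbances. All numbers here are hypothetical illustrations.

def performance(load: float, design_load: float, margin: float) -> float:
    """Return a 0..1 performance score for a given load."""
    if load <= design_load + margin:
        # Inside the (stretched) envelope: graceful, gradual degradation.
        return max(0.0, 1.0 - 0.1 * max(0.0, load - design_load))
    # Beyond the envelope: brittle collapse, not graceful degradation.
    return 0.0

if __name__ == "__main__":
    design_load = 1.0
    # The "optimized" variant has trimmed its reserve margin to cut cost.
    for label, margin in [("with reserve margin", 0.5), ("margin trimmed", 0.1)]:
        for load in (0.8, 1.0, 1.2, 1.4):
            p = performance(load, design_load, margin)
            print(f"{label:>20}: load={load:.1f} -> performance={p:.2f}")
```

Under expected loads the two variants look identical, which is exactly why the trimmed-margin design gets sold as optimal; the difference only shows up when events go beyond the designed-for envelope.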
Next one. [00:46:48.97] Human story again, next one. There we go. That's why we say snafu catching. Snafu is normal. Systems are messy. One of the rationalizations for zombies is utopian. We want to create the new utopia. Which, of course, in its original coinage means a place that didn't exist, doesn't exist, can't exist. And so we love this found sign: the system was never broken. It was built this way. [00:47:26.15] And that's a reminder that no matter what you do as a designer, however smart and well-resourced, you can't outmaneuver all of the constraints of the biological world. Snafu is the natural state of systems. And biology invests in how to cope with this. Next one. [00:47:48.47] So, human stories, with roots in fallacies like oversimplification, reification, hindsight, overconfidence, framing. The desires, natural human desires, for a simpler, tractable, utopian world, to escape from time marching on, pacing, change, complexities of various types. Yet people can and have, we have the demonstrations, escaped. [00:48:21.86] They revise courses of action. They revise models as new evidence arises. And that's what science is about: cognition, studying cognition, and how and when people can change concepts, reconceptualize. We can reframe. We can coordinate and synchronize activities over roles, and levels, and scales. As, unfortunately, was demanded. It was demanded intensely in the pandemic. [00:48:48.36] And we fell surprisingly far short, despite the inherent difficulties of it. And we can and do exhibit graceful extensibility, stretching as events reveal and challenge what used to work. People, like all of biology, can develop the means to cope and thrive in the messy real world, the inherently and necessarily messy world, despite irreducible risks, irreducible uncertainty. [00:49:18.16] Despite continuing change, despite the interdependencies, and the multiple scales that interact, and the kinds of fundamental trade-offs that govern this biological world, the adaptive universe. It's hard to do it. Reconceptualization is hard. We've got great studies, Kevin Dunbar, for example. And we don't always do it. We get stuck. But we can. [00:49:41.78] Science is supposed to assist us in this process. It's supposed to help us understand, and therefore outmaneuver, the kinds of complexities in this interdependent, messy world, when we see through the veil of simplification. The struggle is uniquely human. Machines can't and don't. Machine designers don't even try or see the need to try. Though, ironically, as we understand the fundamentals, there are possibilities to design machines, and the architecture within which these capabilities are deployed, interacting with human roles, in ways that could contribute to graceful extensibility and overcome the offsetting complexity penalties that come with growth in new capabilities. Next one. [00:50:33.70] So here's the ending. We have to outmaneuver complexity in worlds of surprise. The zombies say, try to hide this. We have to understand how adaptive capacities can be developed in systems that serve human purposes. As these are built, extended, sustained, degraded, and collapsed, the story is the story of adaptive capacity in the biological and human sphere. [00:51:03.78] This defines a new target of our lines of inquiry, the adaptive universe. We all live in it. We're always in it. The pressures, capabilities, conflicts, and successes that we go through: it has rules. It has laws. It has hard constraints.
But it doesn't remotely work the way we, most of the time, think it does. The zombies are a particularly extreme example of thinking of rules that aren't the rules of the adaptive universe. [00:51:39.50] Breaking those rules has real consequences. And that's what we see over and over again with brittle collapse, where a major contributor was the way we've deployed autonomous capabilities. No one is exempt. No one can hide from it. No one can stand outside it. We're all in it. [00:52:00.47] So I hope I have triggered you on what I think is at least a two-semester course of study. There are indicators and pointers to wide-ranging lines of research going on right now. It goes considerably beyond the capability of traditional lines of inquiry on these systems, whether we do the psychological, social-psychological, organizational, whatever. There are real constraints and laws about how adaptive systems at any scale work. [00:52:36.40] It doesn't matter whether we're just discussing biological processes, neurophysiological processes, cognitive work, joint activity, organizational or societal levels. The constraints matter. And we can create good architectures that solve the constraints in ways that sustain adaptability over cycles of change. [00:52:59.02] But working this out, working this out borrows significantly from every single line of inquiry: psychological, sociological, organizational, nonlinear control theory. Everybody has something to contribute. Everybody brings something valuable to the expedition. However, much of the standard stuff each line of inquiry brings is just baggage that retards the ability to understand and design systems better. [00:53:31.14] So we end up back where we started. What is it with the appeal of the zombies of oversimplification? Zombies persist because then we don't have to struggle with facets of complexity, acknowledge fundamental trade-offs and constraints, or develop capabilities to outmaneuver complexity that we can't escape. We live in this universe regardless of the myths. [00:53:57.59] And the zombies just justify building systems that violate the laws, and regularities, and constraints of adaptive systems. And we can't escape the consequences, the new forms of congestion, cascade, and conflict. Howling about how people and human error undermined their promises is just what zombies do. Thank you very much. [00:54:22.93] I have a little time for questions. I was also planning to chat with students. So whatever people would like to bring up: myths, favorite myths, favorite fallacies that sustain zombies, other zombie stories. I see some veterans of the wars of fighting zombies out there. I see some people who are engaged in fighting new zombie fights, John and others. So anybody want to fire away? [00:55:07.64] SCOTT MOFFAT: David, I have a question for you. Scott Moffat, I'm not sure if you can see me. [00:55:11.99] DAVID WOODS: Yeah, I can see you. [00:55:12.89] SCOTT MOFFAT: Yeah, I enjoyed your talk very much. Though, I don't think I'm a zombie. I don't have myths, I don't think, to dispel. But one of the things I really liked about your talk was raising this issue of the reification fallacy, which has been-- or has sort of become-- I don't know, not maybe a pet peeve, but say an issue of concern of mine in my own field. [00:55:42.44] And I think you raising-- I would say that the reification fallacy is almost ubiquitous in science. And let me just say what I think-- how that manifests in my own field.
So I'm interested in cognitive aging, cognitive neuroscience, things like that, as are many people on this call. And so ultimately, we're interested in behavior, and the brain mechanisms of behavior, and things like that. [00:56:15.99] And so a human being does something, or thinks something, all right, this has some neural instantiation, so there are some metabolic processes and neuronal processes that occur. We measure these things pretty indirectly, like with functional MRI. I'm not sure how familiar you are with that field, but it measures blood flow and things like that. So it's a very indirect measure of neuronal activity. And then there are very sophisticated statistical analyses to which that data is subjected, including machine learning, and AI methods, and things like that. [00:56:57.72] And then the researchers will sort of examine that data, almost as if it's the thought of-- and the behavior of the person itself. So to the extent that the reification fallacy is sort of like the fallacy of the map is not the territory, I think that we can fall into that trap as well by treating the behavior of voxels and pixels on a screen as if it's the actual behavior and the thoughts of the person that we're trying to interrogate. [00:57:34.37] DAVID WOODS: Well, first off, I think you can trace the reification fallacy back to William James. I don't think he used that term for it. But I think it's there. But then again, most things in psychology are there. Certainly, I grew up-- I guess some of you don't know, I did grad school in cognitive psychology. In my course of study, a long, long time ago, there was a long-standing battle between the forces you talked about. [00:58:14.92] And I would say it is a battle. In fact, science can and does escape it. It is constantly fighting a battle against various forms of reification. The history of the study of perceptual mechanisms is an example where this went on with the indirect realists, eventually, versus Gibson and the neo-Gibsonians. You can see it in neurophysiology. I still remember my readings of Karl Lashley against the ways that we could misunderstand the function of a complex system and get trapped in trying to localize key capabilities in a particular piece of the brain. [00:59:01.70] These problems do continue today in science. Remember, the talk should have revealed to everyone that some of the people captured by the zombies are, in fact, scientists and engineers. They are completely captivated by them, because it gives them some value. Yet, at the same time, it undermines science and engineering when it has to deal with these complex wholes that can't be understood simply by concatenating our understanding of the component subpieces. And that's the switch. [00:59:38.99] Now, ultimately, I think the change in recent years, certainly in the last decade, is that we have ways to study this. And there is a unity of insight that comes from studying at different levels and scales. Studying some in biology; studying some, as you were doing, with neurophysiology interacting with cognitive aging; studying it at the scale of societies responding to global disruptions. [01:00:09.50] That helps us see through the veils, as is traditional in science, and start to realize what's underneath about how this world works. And I think, I mean, I think it's really interesting. I mean, I just did a talk this summer taking the advances in biology on how genetics can select for future adaptive capacity. How can you, on contingencies now, select for future adaptive value?
[01:00:39.05] Yet we have an existence proof of it in our everyday life with the virus, which keeps adapting very well to incoherent human behavior that is inadvertently maximizing the evolutionary potential of a class of viruses and plaguing all of our lives. So to me, the hope comes back to how do we escape. And we escape, despite the prevalence of the zombies, not by fighting them directly. [01:01:09.66] But by going behind the dust cloud they create and understanding better the fundamentals of this world. And interestingly, I think those fundamentals all take us back to different senses of how people adapt, or how biological systems or parts of human systems adapt. If we understand these cycles and spirals of adaptive processes, I think we'll make great progress. And I think we are. It's still a minor voice in the howling of the zombies, sometimes hard to get heard, because those zombies can really screech and drown you out. [01:01:48.33] SCOTT MOFFAT: Thanks for the discussion. [01:01:54.29] DAVID WOODS: Oh, John Lee, you're there. Ben Shneiderman, you're there. John, what do you think about the current controversy? Oh, maybe he's not there. Only his avatar's there. Any other comments, questions? Erin, how are you doing? You're trying to teach people some of these things, I understand. [01:02:36.65] ERIN CHIOU: Yeah, thanks for the talk. I shared the link with the students in my human-automation class. And some of them are here as well. [01:02:45.31] DAVID WOODS: Yeah, I mean, it's funny when we start out with the mechanical automata fads, and then we do Fritz Heider's demonstration from the '40s that we can't help but ascribe intentions to mechanical things. And it's always fun to have people make up stories about the triangle and circle moving around on a screen. But that's how we get them going to start seeing differently. So they aren't trapped by the zombies, even though the zombies can swarm around us. [01:03:23.81] And I think at your school, I mean, at this school, Georgia Tech, hosting this, recently a professor there commented about her biggest challenge being that the engineers walk into her class thinking, you're going to teach me how to get rid of those troublesome people, so that my automation will just work well. And she's like, uh oh, I've got a cloud of zombies in the class that you have to extract and pull out. And re-release these people and reopen their eyes to the way these systems work. [01:04:02.07] And ultimately, that is, as Ben and I have talked about, it comes from showing what you can do. You have to show what you can do. And for example, with graceful extensibility, we've been showing different ways you can use it. So we've done it for classic aviation risk. And because sometimes people like to yell and say, you've got to show me equations. [01:04:24.63] Well, here are the equations. Then their reaction is always, I didn't mean those equations. And I certainly didn't mean equations that would contradict what I believe. You're supposed to show that I was always right and that I don't have to do anything else. Or anything else I have to do is small and actually ratifies how brilliant I am. That happened the first time when I walked into a room full of engineers who were supposed to design new alarm systems, where alarm overload was the root problem. [01:04:54.39] And we went, oh, I understand how to do that. And we showed them the math, Sorkin and Woods, 1985.
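(As an aside, and not necessarily the Sorkin-Woods formulation itself: the familiar base-rate arithmetic, with hypothetical numbers, is enough to make alarm overload concrete.)

\[
P(\text{event}\mid\text{alarm})
  = \frac{P(\text{alarm}\mid\text{event})\,P(\text{event})}
         {P(\text{alarm}\mid\text{event})\,P(\text{event}) + P(\text{alarm}\mid\text{no event})\,P(\text{no event})}
  = \frac{0.99 \times 0.001}{0.99 \times 0.001 + 0.05 \times 0.999}
  \approx 0.02
\]

So an alarm channel with a 99% hit rate and a 5% false-alarm rate, monitoring a condition present 0.1% of the time, produces alarms that are genuine only about 2% of the time; tuning the detector alone cannot rescue the operator from the flood of false alarms.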
And the design implications: we actually really did design one that avoided alarm overload back then. But they were very upset, because it was no, no, no, not that kind of math. I don't understand that math very well. And it gives me the wrong answer, because I have to do something different. And that creates-- it takes time, it takes money. And it ends up in a very different system that I'm not familiar with. [01:05:34.50] ERIN CHIOU: Yeah, and I think another way of battling zombies is not just talking about the equations, but also the assumptions. So how are you defining your terms? And maybe one additional way of battling those zombies is through education, really dialogue with students and engaging them in things that might be very sexy at first, but then hopefully having them think critically as a result of exposure to different ways of thinking. [01:06:02.60] DAVID WOODS: Well, and we do that classically. And we will be doing that again in a new project we just won. But we do it in class projects, too, which is we do a convoy. And so take a convoy and add autonomous capabilities to the convoy. And this is a classic in AI; it goes back 30 years to a classic paper. [01:06:27.26] And so you give them a thing. And you give them a mission. The convoy and its resources and activity are in a place, a physical environment with constraints. And then you-- and then you have a mission. And you have disruptions that can arise trying to accomplish that mission. And it's wild, because what happens is they have to design an integrated system. And as I like to say, if you're designing an integrated system to support a mission today, you will certainly make use of some autonomous capabilities. [01:07:03.62] They're capabilities. But if you try to maximize autonomous capabilities, you'll never generate an integrated system that really accomplishes the mission. And that's what they learn: that the autonomy ends up being a minor part of the story. And the major part is how they deal with limited resources that come to the fore when they've lost some assets, there's a disruption, there's uncertainty, there are risks to trade off. And all of a sudden, all the things that they have to wrestle with have almost nothing to do with the automation. Ben, you turned on your camera. You're in your car. [01:07:39.81] BEN SHNEIDERMAN: Hi, David. That's why I couldn't talk till now. I pulled over. So thank you for your entertaining, and relevant, and spirited talk, which I think is right on. You really educated me about zombies before. But this solidified the notions in a very good way. I like that the simplifications and the reification fallacies are both important components. [01:08:09.60] What my question is, or my comment is: one of the things that I learned from you, I thought, was that you actually can't kill the zombies. That people will continue to do these things. And that I picked up from you the idea that, actually, what you need to do is let the zombies fester in the dark valley of bad ideas. And that you have to rise up to the sunny hillsides and provide a better alternative. What do you-- is that something you can get behind? Do you see ways that succeeds? [01:08:47.39] DAVID WOODS: Absolutely. Remember, right off the bat, what was the-- what was our discussion? We quickly went back to William James. We went back to Karl Lashley in the '30s. These issues aren't new. And they don't just exist in the field of autonomous technologies.
So yeah, I mean, we all thought indirect realism as an understanding of perceptual systems was crazy when we were in grad school. How could people think this? This is obviously not a way to understand this complex mechanism. [01:09:26.50] And yet, it persists. And it's re-emerged recently yet again in studies of perceptual mechanisms. I was shocked to see it again. So you're absolutely right that the key is to get to that sunny hillside, that sunny city on a hill where we can-- and that comes from more-- and that's what I was trying to get across. The idea that there is fundamental science, progress, and possibilities. [01:09:53.17] But they don't come from any single traditional line of inquiry. They come from integrating across those diverse lines. And that's really hard to do. It's really post-disciplinary. And I think that's one of the impacts of the scale, and effects, and what the capabilities have done, and what they've revealed in new kinds of challenges. That we have to find new ways to synchronize and integrate across diverse perspectives. [01:10:20.83] But I'm feeling good. And the reason I'm feeling good, Ben, is very-- what did I say? It's all about people pursuing advantage. We just won a big contract to show how to use graceful extensibility to build real systems. So I'm feeling pretty optimistic. Talk to me in six months. And it may be completely different. [01:10:40.01] BEN SHNEIDERMAN: I think that's the right idea. I mean, I think you could point out the zombies. But every time you mention them, you're giving them new life. And even when you say reification fallacy, and you point out all those negative words, they just get more life. And graceful extensibility, resilience, yes, give me the good words. Give me the good words. Tell me the stories that work. And show me the way forward. [01:11:07.81] I mean, that's what I've learned from you. And so I want to hear more of that. The zombies are a great way of labeling the problem. And I think you label them, let them fester in the dark valley. And come on up, and let's talk about the new directions. [01:11:25.08] DAVID WOODS: And that, let me give you-- first off, I want to say this is why I never gave the zombie talk till now. Because I've circulated versions of the zombies for years privately. But it wasn't a talk. This is why I wrote to you the first time when we were talking about your new book. So you're exactly right. [01:11:45.85] But let's take the immediate one, with self-driving modes in cars. It is an example of really a relatively simple concept. It goes back to work Emilie Roth and I did in the mid '80s. It's very relevant today, and it's simply that there is a competence envelope for automata. It is competent, but that competence envelope is limited. And so now you make that a dynamic parameter. [01:12:18.64] You shrink it and expand it, as close to real time as you can, based on as much information as you can gather about the engineered capabilities built in, about the context in which it's trying to operate, and the consequences of misoperation or breakdown. And based on that, you can expand or shrink the competence envelope. And you get away from this howling of zombies from Tesla fanboys attacking a distinguished scientist. I mean harassing-- [01:12:56.93] BEN SHNEIDERMAN: Right on, tell me more about the language. [01:12:59.50] DAVID WOODS: This will work. Now, all I got to do is get somebody to fund us. John Lee, let's go do this somewhere.
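(A minimal sketch of that dynamic competence-envelope idea in code; the inputs, weights, thresholds, and mode names below are hypothetical illustrations, not the actual Roth-Woods work or any deployed design.)

```python
# Toy sketch of a dynamic competence envelope for an autonomous capability.
# All inputs, weights, thresholds, and mode names are hypothetical.

from dataclasses import dataclass

@dataclass
class EnvelopeInputs:
    capability_confidence: float  # 0..1: how well demonstrated the engineered capability is for this task
    context_familiarity: float    # 0..1: how close current conditions are to conditions it was tested in
    consequence_severity: float   # 0..1: how bad misoperation or breakdown would be right now

def competence_envelope(x: EnvelopeInputs) -> float:
    """Size of the envelope right now (0..1): shrinks as context drifts from
    what was tested or as the consequences of misoperation grow."""
    base = x.capability_confidence * x.context_familiarity
    return base * (1.0 - 0.6 * x.consequence_severity)

def mode_for(envelope: float) -> str:
    # Degrade through intermediate modes instead of a brittle all-or-nothing handoff.
    if envelope > 0.6:
        return "autonomous operation with human monitoring"
    if envelope > 0.35:
        return "constrained autonomy / shared control"
    return "human control with machine assistance"

if __name__ == "__main__":
    scenarios = {
        "clear highway, low stakes": EnvelopeInputs(0.9, 0.9, 0.3),
        "roadside emergency scene": EnvelopeInputs(0.9, 0.3, 0.9),
    }
    for name, x in scenarios.items():
        e = competence_envelope(x)
        print(f"{name}: envelope={e:.2f} -> {mode_for(e)}")
```

The point of the sketch is only the shape of the architecture: the envelope is recomputed as conditions change, and control degrades through intermediate modes rather than flipping between fully automatic and fully manual.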
The problem I've had over the last 10 years of trying to do the-- develop a path to the sunny hilltop is that the zombies have misdirected the funding flows, so that we can't tackle some of these and do the demonstrations. We're slowly building up a repertoire of success cases, where you can show tangibly, in a specific setting, that these ideas work. [01:13:35.18] But where we have shifted the bulk of investment to pursue these things, in critical digital services, software-as-a-service enterprise systems, we've got a tremendous success going. We've got new companies starting. We've got widespread adoption. We've got companies hiring whole departments of resilience engineering. I just heard about a guy being named the head of the Department of Resilience Engineering for a software company. I mean, it's damn slow. But we're making progress. [01:14:06.01] BEN SHNEIDERMAN: Yeah, great. I think the point I would take further is to change the language. When you say reframing, to me, it was changing the metaphors and the language. So instead of intelligent agents, and partners, teammates, collaborators, and social robots, and assured autonomy, what I talk about is AI-infused supertools, and active appliances, and telebots, and control centers. And I think, I believe, anyway, I'm trying to change the language and the metaphors that we use. So that's-- does that work for you? [01:14:47.01] DAVID WOODS: We have all tried that. And let me show you a scar. So yeah, I think it's a quite reasonable thing to do to change the language. I'm not sure-- and some people are going to have to tell me how you do that better than what I've ever tried. And I've tried it too. And sometimes I make too much compromise to the prevailing words and metaphors. Sometimes I've tried a more radical change from the conventional words. [01:15:25.89] I mean, we started Joint Cognitive Systems 40 years ago. And we're still around. It's more relevant than ever, in more places than ever. But it's still a minor, minor, minor, minor thing that most people have never heard of. And so it's certainly worthwhile. I think it probably needs something else. I think it goes back to your previous point about needing the tangible successes. [01:15:55.70] And we need them in a way that goes beyond custom design: we've built something that other people can take advantage of and run with without being sophisticated experts in the things we're talking about. It's an old line, and I forget where I got it from. In my multiple attempts to start new fields, which haven't all gone as well as I would have hoped, the old line I found way, way back was: you have to find a way for the average practitioner-- the average competent practitioner-- to do valuable stuff for clients, for customers, for stakeholders. [01:16:37.37] And in many of our developments, it takes sophisticates with a great deal of expertise to deliver the goods. And how do we get it to the point where the average practitioner can? I think some of that has happened in the stuff you and I have both done with visualization and representation. Still, trying to get it to be scalable by competent, but not expert, people has been a bottleneck. [01:17:04.58] BEN SHNEIDERMAN: Great, thank you. [01:17:12.83] DAVID WOODS: All right, folks, well, this was fun. I appreciate you guys giving me the opportunity to roll out the zombie talk. And the fact that it ended up being remarkably timely, given current events. I hope people got a little bit out of it.
And John, if you're there, John Lee, we need to talk about things. [01:17:40.88] We can touch base, take this opportunity to say, hey, let's respond to the current crises going on. We should probably do something. Anyway, thank you all. Thank you, David and others, and Georgia Tech, for putting this on and recording this, and putting up with the fact that I am not set up to use BlueJeans properly. [01:18:09.53] DAVID GRIMM: Yeah, thank you once again for giving the talk, obviously, and persisting through the initial difficulties. [01:18:17.35] DAVID WOODS: All right, everybody take care.