PHOUCHG: 75,000 generations ago, our ancestors set this program in motion.
LOONQUAWL: An awesome prospect-- Deep Thought prepares to speak.
DEEP THOUGHT: Good evening.
PHOUCHG: Good evening, oh Deep Thought. Do you have--
DEEP THOUGHT: An answer for you?
LOONQUAWL: Yes.
DEEP THOUGHT: Yes, I have.
LOONQUAWL: There really is one.
DEEP THOUGHT: There really is one.
PHOUCHG: To everything? To the great question of life, the universe, and everything?
DEEP THOUGHT: Yes.
LOONQUAWL: And are you ready to give it to us?
DEEP THOUGHT: I am.
LOONQUAWL: Now?
DEEP THOUGHT: Now--
LOONQUAWL: Wow.
DEEP THOUGHT: Though I don't think you're going to like it.
[MUSIC PLAYING]
CHARLIE BENNETT: You are listening to WREK Atlanta, and this is Lost in the Stacks, the Research Library Rock'n'Roll Radio Show. I'm Charlie Bennett in the virtual studio with Marlee Givens, Wendy Hagenmaier, and Ameet Doshi. Each week on Lost in the Stacks, we pick a theme and then use it to create a mix of music and library talk. Whichever you are here for, we hope you dig it.
AMEET DOSHI: That's right, Charlie. Today's show is called AI on the Bias.
MARLEE GIVENS: That AI would be artificial intelligence.
WENDY HAGENMAIER: Is this another show about how Skynet is going to take over and destroy us all?
AMEET DOSHI: Not this time-- well, maybe a little bit. There are forces for good working in AI to make sure that the best parts of humanity are in our AI systems.
MARLEE GIVENS: One of those forces is Georgia Tech's own Ayanna Howard. And we'll be speaking with her about how bias creeps into AI, and how to get it out. We may also learn a little bit about her podcast.
AMEET DOSHI: Our songs today are about the human conflicts reflected in our systems, the good of humanity showing through, and the voices of all humans in AI research. It's got great potential, but in the hands of an elite few designers, even the slickest code would result in a system failure. We don't want another Skynet. There it is. So let's start with "Total System Failure" by Juliana Hatfield right here on Lost in the Stacks.
[MUSIC - JULIANA HATFIELD, "TOTAL SYSTEM FAILURE"]
OK, I gotta go-- This is Lost in the Stacks, and joining us online is Dr. Ayanna Howard. She is a roboticist. She is chair of the School of Interactive Computing in the College of Computing here at Georgia Tech. She is the director of the Human-Automation Systems Lab, which is abbreviated as HumAnS, and her most recent publication is the audio book Sex, Race, and Robots-- How to Be Human in the Age of AI. Dr. Howard, welcome to Lost in the Stacks.
AYANNA HOWARD: Thank you. I'm excited about this.
FRED RASCOE: We are excited to have you here. Artificial intelligence, one of your areas of expertise-- it's rapidly disseminating into lots of areas of our lives. And that's going to include libraries in the near future, I believe. So it's great that we get to speak to you about it, because as you are an AI scholar, an artificial intelligence scholar, you are very deliberate in your work in focusing on the human aspect of artificial intelligence. And AI is kind of a scary concept. You think of robots and terminators taking over the world. So it's nice that there's a scholar thinking about the human side of artificial intelligence.
AYANNA HOWARD: Yeah. The scary AI, that is the human side as well. As humans, there's always the good and there's always the bad. And I think, a lot of times, in science fiction and in movies, you show the bad part of people, like the villain. And so the AI villain is really just bad people. But I believe in AI. I think it can have great impact for us in a positive way, so I look at the positive side of people.
FRED RASCOE: When you started on your journey to becoming a roboticist, did you always have that in mind? And I mean the human along with the artificial, like there's human, and good and bad, and it works alongside the technology. Or did you start out just being interested in the technology and then realize later, bringing in the human element?
AYANNA HOWARD: No, so actually, my first love and why I was interested in robotics back in the day, when I was young and innocent, was motivated by the Bionic Woman. And so think about it: the Bionic Woman was this intersection of human and technology, and she was saving the world. It was things happen-- bad people-- and you use this blend of human and machine to basically save the world. That's what I wanted to do. That's why I got into robotics in the first place.
FRED RASCOE: Tell me you were running around your neighborhood pretending to be the Bionic Woman.
AYANNA HOWARD: Well, so no, no, no. I didn't want to be the Bionic Woman. I wanted to build her. So there's a difference.
FRED RASCOE: Oh, right. That montage at the beginning, when they say, we can build it, we can save her--
AYANNA HOWARD: Yes.
FRED RASCOE: You wanted to be one of those people around the table.
AYANNA HOWARD: Yes. That's what it was.
FRED RASCOE: So as you went through your academic career with that goal of being one of those scientists around the table building the robotic future, what was your experience in libraries, either in school or in your academic post-primary school career?
AYANNA HOWARD: Yeah, so my experience with libraries, I would say, started as a high schooler. So back then you had a lot of books in libraries, and you had things called typewriters. And so what happened was any time you wanted to-- and I didn't have a typewriter at home. So any time you wanted to do a nice essay and do research, you would go to the library, and you would look things up, and you could borrow the typewriter that was there and type out your term paper, as an example. So the library was part of my natural understanding of what it was to be a student, honestly. And then, when I went to undergrad, that was where I found out that you can find a lot of information that the professors weren't teaching you in class in the library. They would talk about these famous people and all these algorithms, and you're like, I don't understand what this faculty member's saying. And you would go to the library, and you would find whoever they were talking about, and guess what-- half the time, the answers from the exam were in the book. The library was my friend in undergrad. It was probably my best friend for a couple of classes, I will tell you.
FRED RASCOE: So in your work now, you write a lot about bias and how bias makes its way into these AI systems that are developed by scientists and researchers. And so I wonder if you could talk about bias that you encountered in your academic career. And I'm not necessarily talking person-to-person bias, but the systems that you dealt with in your educational environments, processes, bureaucracies, even tests, or maybe even in libraries. Can you talk a little bit about the biases that informed your experience?
AYANNA HOWARD: That's actually a good question. So I will tell you one bias that has to do with technology and how it's built that impacted me. So my first career was working at NASA, and one of the things about when you work at NASA-- it's space, right? And I actually wanted and thought about going into the astronaut program. And I remember investigating this, and looking at the application, and starting to fill it out, and discovering that there's a bias in the technology. I'm on the short spectrum. I'm 5 feet tall. And the suits, all of the technologies, all of the human subject testing were not done for someone who's my height. It also means that you can be too tall. So if you're over 6 feet, you, at the time, could also not be an astronaut. And so think about that. That's bias in-- someone decided that we want astronauts between this range, and so then we're going to test the technologies that we're designing around this range. And everyone else, yeah, we think they exist, but we really don't care about them. And so that's an example of a bias and not providing an opportunity, because I'm a brilliant roboticist. Imagine if my robots were up in space. We'd be at Mars now. And so think about what opportunities you've excluded from doing those kinds of things. And so that's an example of a decision that was made that impacted our technology, and it was a human decision.
CHARLIE BENNETT: We'll be back with more about AI and bias with Dr. Ayanna Howard after a music set.
AMEET DOSHI: File this set under BD450.C73.
[MUSIC - OH-OK, "PERSON"]
(SINGING) I am a person. I speak to you.
[MUSIC - BECK, "SCARECROW"]
WENDY HAGENMAIER: That was "Scarecrow" by Beck, and before that, "Person" by Oh-OK. Those were songs about your human side showing through, no matter how hidden you think it might be.
[MUSIC PLAYING]
MARLEE GIVENS: This is Lost in the Stacks, and we're talking about artificial intelligence and bias with Dr. Ayanna Howard, chair of the School of Interactive Computing here at Georgia Tech. After talking about how human AI can be in the first part of the interview, we started the next segment by asking how those human biases can creep into AI systems.
FRED RASCOE: In your decision to study bias in your academic work, are you thinking about mitigating those types of things?
AYANNA HOWARD: Yeah. So in my research, we not only study bias-- and so that's really to understand the depth of it. Where does it come from? Not necessarily why does it exist, because it's because of people, but how does it impact our behaviors? But then the aspect of mitigating is, how do we design technology so that it enhances our ability to recognize our own biases? And I would say fix ourselves, but also mitigate our own aspects, because one of the things about bias is a lot of times, we don't even know it exists until someone brings it out. And yet, as people, a lot-- most of us, if someone says, you know, you've never done X, or you've never had a female student in your group, a lot of us will look back and be like, oh, you know what-- I never even noticed. And that's the thing about bias. And so I think part of mitigating is identifying it, bringing it out, and then coming back and feeding the algorithm, saying, OK, here it is-- this exists-- and having us as people also change ourselves based on that.
MARLEE GIVENS: We deal with a lot of human-created systems in the library world, including the library catalog, the subject terms that enable you to search for the research that you described. I loved your description of-- first of all, I think you're the first person who ever mentioned going to the library for the typewriter, which I think we forget. But going to do that additional research, the things that you found, having to maneuver the library catalog-- all of that description and classification was done by people. A lot of it is based on a, frankly, white supremacist past. I was wondering if you were aware of that, first of all, and second of all, if there are possibilities for mitigating that through applying artificial intelligence.
AYANNA HOWARD: Yeah. So I'm going to give you a modern-day example of this. So the libraries I think about as information. It's really the collector of information that's provided to people, and that information traditionally, in the past, was in books. Now it's primarily online. But it should be curated by people who understand that information is power. And so an equivalent of that would be the way we do search. If you think about the, back in the day, Dewey Decimal System, if you think about how information was organized, if you were looking up something-- say I wanted to look up a scientist. If I designed it such that all scientists-- the male scientists were found first and female scientists were found last, as a human, I'm not optimal. I'm going to go to the first three, just like we do when we do search, and I'm going to choose that. And so what that means is you've now limited the type of information that is accessible to everyone. And you might not even realize that you're ordering it in some form or fashion, because you're human and this is what you're used to. I think, in that regard, libraries have a responsibility to think about bias and how information is organized so that those kinds of limitations aren't there, because it's how we form our opinions about others.
MARLEE GIVENS: Taking this thought experiment further, if you were designing, or helping us to design, a new way of cataloging our collections, would we need to address the bias in our own rules and processes first, and then apply the technology, or does technology work alongside that?
FRED RASCOE: This is a free consulting session for the library, by the way.
AYANNA HOWARD: So I think it's a combination of working together, because there's no answer. We don't know, so we have to work together in order to find the solution. As an example, in AI-- machine learning-- there's a methodology called simulated annealing. And what that means is that you're searching for an answer-- so thinking about information-- and every so often, you introduce a random function. So you mix things up a little bit. And so you start searching in a different area. What that means is, as a human, if you do a randomization and you go to someplace that you-- it's absolutely like, no, there's no way-- it takes human expertise to be like, yeah, the algorithm is wrong. And so how do you incorporate the human understanding of this, but also the randomization that the machine learning algorithm can do, so that you are exploring in spaces that remove some of the biases? So that's an example of working together.
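[For readers who want to see the idea in code: below is a minimal, illustrative Python sketch of the simulated annealing approach Dr. Howard describes-- a search that usually keeps improvements but occasionally accepts a worse move, which is the "random function" that pushes the search out of familiar territory. The toy objective function, step count, and temperature schedule are invented for illustration and are not from the show or from Dr. Howard's lab.]

```python
import math
import random

def bumpy_score(x):
    # Stand-in for "how good is this candidate" -- deliberately full of
    # local peaks, so a purely greedy search would tend to get stuck.
    return math.sin(x / 5.0) + 0.3 * math.sin(x / 1.3) - abs(x - 60) / 100.0

def simulated_annealing(steps=5000, start_temp=2.0):
    current = random.uniform(0, 100)
    best = current
    for step in range(steps):
        # Temperature shrinks over time, so random jumps get rarer.
        temperature = start_temp * (1.0 - step / steps) + 1e-6
        # Propose a random nudge away from the current position.
        proposal = min(100.0, max(0.0, current + random.gauss(0, 5)))
        delta = bumpy_score(proposal) - bumpy_score(current)
        # Keep improvements; occasionally accept a worse move, which is
        # the randomization that lets the search leave familiar regions.
        if delta >= 0 or random.random() < math.exp(delta / temperature):
            current = proposal
        if bumpy_score(current) > bumpy_score(best):
            best = current
    return best

if __name__ == "__main__":
    x = simulated_annealing()
    print(f"best x found: {x:.2f}, score: {bumpy_score(x):.3f}")
```

[The human expertise she mentions sits on top of a loop like this one: a person reviews where the randomized search has wandered and judges whether the algorithm has found something genuinely useful or is simply wrong.]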
FRED RASCOE: Yeah. Another big area where bias has a really evident impact up front-- and I'll put this just in a little bit of current context, because we're recording this interview on the afternoon of January 6. And right now there is a mass of people at the US Capitol having a riot based on faulty, incorrect information. And one thing that librarians like to emphasize in our mission is information literacy in our patrons, guiding them to the best information, knowing what to discard and ignore. But also, we see how algorithms play into the inaccuracies of information that gets disseminated. So how do we incorporate AI? Because we know we're going to have to. It's coming. AI is going to be incorporated into lots of systems. How do we make sure that it is incorporated in a way that we're not making this information literacy problem worse?
AYANNA HOWARD: I think one of the areas that AI can help is in, I would say, tagging and highlighting when you do have misinformation, which, again, it's-- I say that like, oh, yeah, this is how you do it. It's actually a little bit hard, because if the-- I would say, the number of people who believe in the misinformation increases, then it's hard for the AI to identify that this is misinformation or this is true. And we see this in various communities, not just in the US, but even worldwide. We have communities that all believe something that the rest of the world thinks is not true, and it's because everyone in that community believes it. And so you have this confirmation bias, basically. But I think what AI can do is help us break our confirmation bias. The problem is that it makes us uncomfortable as people. And so they've done some research, in terms of the neuroscience and what the brain does when you have these pieces of information that don't align with your own values and beliefs. It really messes you up in your head, honestly-- makes you uncomfortable. And so breaking people out of their confirmation bubble-- I think we could do that with AI. I think we can learn gentler ways to do it, though, so that you don't totally mess up that aspect of your confirmation bubble-- this is your comfort zone, this is your truth, and now you're telling me that everything I believe my entire life is wrong? It's hard.
FRED RASCOE: Does that mean that humans are going to have to be constantly maintaining the AI systems? And what I mean by that is it seems like the goal of AI is you can get it advanced-- I know we're not there yet, but you get it advanced enough, and it can just do it on its own without human intervention. Is that actually the goal, or is there actually a maintenance-- a real human maintenance?
AYANNA HOWARD: There's real human maintenance. And actually, I won't call it maintenance. I would call it adaptation. So one of the beautiful things about people is that we adapt. The norms now are different than 40 years ago, 100 years ago. The norms, I would tell you, 100 years from now are going to be different than the norms we have today. And so I think, when we think about AI and maintenance, it really is about AI and adaptation, but it requires us as people to feed our change in norms, our change in behaviors into the system so that it can adapt. Otherwise, you'll have these AI systems that will hold us in a historical past that is inaccurate, because we do evolve as people. That's the beautiful thing about us.
FRED RASCOE: You're listening to Lost in the Stacks. We'll be back with more from Dr. Howard on the left side of the hour.
[MUSIC PLAYING]
LEE VINSEL: Do you want me to say my full name?
CHARLIE BENNETT: However you want to be immortalized over, probably, the Melvins--
LEE VINSEL: OK.
CHARLIE BENNETT: --is how you should introduce yourself.
LEE VINSEL: Hey, this is Lee Vinsel. I'm one of the co-directors of the Maintainers. And you are listening to Lost in the Stacks on WREK Atlanta.
CHARLIE BENNETT: Today's show is called AI on the Bias. We're fortunate to have Dr. Ayanna Howard as a voice pointing to potential harm that bias in AI can bring. She is, however, by no means the only voice. So let's hear a quote from one of her fellow researchers, who is also a woman of color researching AI.
[MUSIC PLAYING]
"For me, the hardest thing to change is the cultural attitude of scientists. Scientists are some of the most dangerous people in the world, because we have this illusion of objectivity. There is this illusion of meritocracy, and there is this illusion of searching for objective truth. Science has to be situated in trying to understand the social dynamics of the world, because most of the radical change happens at the social level. We need to change the way we educate people about science and technology. Science currently is taught as some objective view from nowhere, from no one's point of view, but there needs to be a lot more interdisciplinary work, and there needs to be a rethinking of how people are taught things."
Those were the words of Timnit Gebru, then of Google, quoted in the New York Times in 2019. Less than a year later, she was forced to resign her position, and some critics say it was due to some unpleasant realities of AI bias that she uncovered in her place of work. Clearly, we have a long way to go yet. File this set under TA166.H84, and do not think about Skynet.
[MUSIC - MICHAEL CRONIN, "I'VE GOT A REASON"]
[MUSIC - ULTRAVOX, "SOME OF THEM"]
MARLEE GIVENS: You just heard "Some of Them" by Ultravox, and before that, "I've Got a Reason" by Michael Cronin-- songs about the human conflicts that emerge out of human-created systems.
[MUSIC PLAYING]
CHARLIE BENNETT: Welcome back to Lost in the Stacks. Today we're speaking with Dr. Ayanna Howard. And in addition to being a roboticist and an expert in bias in artificial intelligence, she also hosts a podcast here at Georgia Tech-- she's the competition-- where she discusses all facets of those issues. The Interaction Hour investigates the impacts of computation on life's big issues. Here's a sample from a recent episode.
AYANNA HOWARD: Think about the most recent news headlines you might have read. Was it completely objective, void of any suggestions or languages that might lead readers down one particular path of understanding or another? Or did it more likely contain subtle cues about how the message was being framed, casting doubt on its veracity or reliability? Every day we are inundated with these types of texts that, on the surface, proclaim to be arbiters of truth, but due to simple word choice and message framing, can bias their consumers.
CHARLIE BENNETT: As academics who host a podcast ourselves, we feel an instant kinship with anyone who can communicate scholarly ideas in an audio format. We asked Dr. Howard how she feels about podcasts as a way of communicating research, and how she manages to fit that in among her many other responsibilities as a faculty member and school chair.
AYANNA HOWARD: So I think the podcast, as an academic, is very, very important as a communication mechanism. And it's primarily because I think, as academics, we sometimes-- and this goes with the confirmation bubble-- we know our own language, which is typically claimed. It's typically, at least as an engineer, computer scientist, algorithms. It's math. It's derivations. The rest of the world doesn't necessarily read the world in that same way, whereas the podcast-- it's verbal. And that's something that every person on this Earth understands, verbal communication. Now, ours is in English, and there's other languages, but that form of language and communication is just-- it's nature to us. It's natural to us. And so I think, when we have these podcasts, what we're doing is we're expanding the ability for people to understand what we are doing as academics. And in that way, we are expanding their information source in a way that is digestible.
FRED RASCOE: You're right. I think it is nature to us, that desire to communicate. And it just makes me think, with artificial intelligence, we're trying to create the natural in something that's not natural. So is AI eventually going to be able to do this podcast?
AYANNA HOWARD: Um, yes.
[LAUGHTER]
FRED RASCOE: Oh, man.
AYANNA HOWARD: Yes. That's actually not that far off. We already see that there's AI that is helping journalists write articles, doing some of the basic data that's out there, in terms of names, places, events. And then the journalist adds their creativity to that. That already happens. That already exists. It hasn't quite evolved into the podcast, but it has evolved into chat bots, and the Alexas, and the Siris. And so it's not that far removed.
FRED RASCOE: How do you feel about that, as a podcaster and as a communicator? Scientific journals-- those articles could be generated by AI as well.
AYANNA HOWARD: Correct-- so as a podcaster, I do know that some of the elements that make us creative will still exist. As an example, we don't really have the intonation or the humor yet in the podcast, and so it might be that the AI creates the script and the AI is more of a prompt. I'm here, and instead of me trying to think of my questions, it'll just say, here's a great question, and find something that the person just wrote about or said in another podcast, and be like, ask them about this. And so that enhances the podcast, because now, as a podcaster, it's giving me information in real time for me to expand that experience for the listeners. And so it's this blended human and machine to do the podcast.
MARLEE GIVENS: More like AI as a colleague than an overlord.
AYANNA HOWARD: Yes, yes!
FRED RASCOE: So since you're now one of the leading roboticists in the world, you think you could hook me up with one of those bionic eyes?
AYANNA HOWARD: Why would you want one, though?
FRED RASCOE: Oh. We have been speaking today with Ayanna Howard. She is chair of the School of Interactive Computing in the College of Computing here at Georgia Tech, and her most recent publication is the audio book Sex, Race, and Robots-- How to Be Human in the Age of AI. Dr. Howard, thank you so much for joining us.
AYANNA HOWARD: Thank you. This was fun.
[MUSIC PLAYING]
WENDY HAGENMAIER: File this set under Q175.5.W453.
[MUSIC - NEIL YOUNG, "MY NEW ROBOT"]
[MUSIC - LOS MICROWAVES, "LA VOIX HUMAINE"]
[MUSIC - CURTIS MAYFIELD, "CAN'T SAY NOTHING"]
(SINGING) Can't say nothing.
AMEET DOSHI: That was "Can't Say Nothing" by Curtis Mayfield-- before that, "La Voix Humaine" by Los Microwaves. And we kicked the set off with "My New Robot" by Neil Young-- songs about bias and the voices of communication.
[MUSIC PLAYING]
CHARLIE BENNETT: Today's show is called AI on the Bias. Our guest, Dr. Howard, suggested an AI-human collaboration that could be a way to make podcasts in the future. Can any of you imagine a future AI-human collaboration that you'd like to see? Wendy?
WENDY HAGENMAIER: Well, I know there's definitely cool stuff going on with archives and AI-- some research on how we might be able to collaborate with an AI on doing things like description of really, really large data sets or digital collections. But I would also love to see anything that helps improve the freeways in Atlanta. How about you, Marlee?
MARLEE GIVENS: Those are so good. I was just thinking about someone to mediate when my husband and I are saying, what do you want to eat? I don't know. What do you want to eat? Ameet?
AMEET DOSHI: That's a tough one, because I am thinking about Skynet, but-- it does seem like there's been some nice advances with breast cancer in particular and the use of machine learning to identify cancers that a radiologist may be more challenged to see. So it seems like, in some parts of medicine, it might be a good thing, but a lot of caveats there.
CHARLIE BENNETT: Well, Ameet, I wish you'd gone first, so I didn't have to go right after yours, because the one I'm thinking of is an AI-human collaboration to figure out how best to arrange my records to maximize autobiographical history and effective browsing. But the breast cancer thing-- that sounds good too. OK, roll the credits.
[MUSIC PLAYING]
Lost in the Stacks is a collaboration between WREK Atlanta and the Georgia Tech Library-- written and produced by Ameet Doshi, Amanda Pellerin, Charlie Bennett, Fred Rascoe, Marlee Givens, and Wendy Hagenmaier.
MARLEE GIVENS: Today's show was edited and assembled by Charlie, and brought to you in part by the Library Collective and their social and professional network, League of Awesome Librarians. You can find out more at thelibrarycollective.org.
WENDY HAGENMAIER: Legal counsel and the code for a legal eagle AI were provided by the Burris Intellectual Property Law Group in Atlanta, Georgia.
MARLEE GIVENS: Special thanks to Ayanna for being on the show, to the School of Interactive Computing here at Georgia Tech, and thanks, as always, to each and every one of you for listening--
WENDY HAGENMAIER: Find us online at lostinthestacks.org, and you can subscribe to our podcast pretty much anywhere you get your audio fix.
CHARLIE BENNETT: Next week, we continue our COVID restriction schedule with a rerun, and there'll be a new show the week after that.
AMEET DOSHI: It's time for our last song today. And to close out this show about highly advanced, cutting-edge technology, a song about a fondly remembered, nearly obsolete technology-- Dr. Howard mentioned using the typewriters in the library when she was an undergrad, and I'm for sure old enough to remember the room full of typewriters in the University of Tennessee Library, full of people late at night cramming to get that essay done, sometimes while smoking.
CHARLIE BENNETT: I'm sorry, Ameet. That was a really long time ago.
AMEET DOSHI: It was indeed. So this is "Typewriter" by Louis Rankin, right here on Lost in the Stacks. Have a great weekend, everybody.
[MUSIC - LOUIS RANKIN, "TYPEWRITER"]
(SINGING) Drum pre art talk, meh typewriter start brawl. No '45 no bawl--