Okay, we're going to get going then. I want to welcome all of you here to our event on AI, Art, and Afrofuturism: STEAM Learning with Dr. Nettrice Gaskins. I'm your host for today, Dr. Lisa Yaszek, Regents Professor of Science Fiction Studies. And we're really thrilled to have you all here today. As many of you know, Georgia Tech has had a longstanding commitment to the speculative arts in all their forms for well over 50 years now, and we're excited to continue this tradition by bringing back one of our very own alums, Dr. Gaskins. Really quickly, I just want to thank everyone at the library for making this event possible, and particularly Katherine Mansey, Matt Frizell, and Allison Reynolds for your specific help as the brain trust behind all this. And of course, I want to thank all of you for being here today. Special shout-out to my students from my Women, Science Fiction, Fantasy, and Horror class and from my Global Science Fiction class for being here today. Yeah, give yourselves a round of applause. Better yet, now you can applaud because I'm going to stop talking and allow my community partner, Dedrn Sneed, owner and operator of the speculative arts incubator Subsume Studios, to say a few words before we get to the main event. Dedrn, come on up. All right. Good day, y'all. Hope everyone's doing well. It's a privilege to be in this space and be able to imagine and dream, not only through the works of Dr. Gaskins, but in this fantastic space that we have here during Black History Month, or, as I like to cite it, Black Futures Month, where we're still in a space of innovation and opportunity and advocacy for all. And just who am I, why am I up here? I'm Sneed, owner-operator of Subsume Studios, right up the street in Underground Atlanta.
So we work in the space of creativity, technology, and community, where we look to be the infrastructure of education and advocacy for all, using Afrofuturism as a tool and a resource for social and civic service. And so, to my right here, I would love to more formally introduce Dr. Nettrice Gaskins. I had to go ahead and get the requisite notes out, but then I have just a little something to say about that, if I may. Dr. Nettrice Gaskins is an African American traditional and digital artist, academic, cultural critic, and advocate in STEM fields. She has her Bachelor of Fine Arts from Pratt Institute, her Master of Fine Arts from the School of the Art Institute of Chicago, and a Ph.D. from the Georgia Institute of Technology. You can give a round of applause for that, being in this space. Now, to go off script: the way Dr. Gaskins and I first connected was as a fan. Much like you all, I was enthralled by her work, not least her book, and I'm going to get it right: Techno-Vernacular Creativity and Innovation: Culturally Relevant Making Inside and Outside of the Classroom. And so, as with all good things and all good meetings and relationships in life, it started with a house party somewhere at MIT, all right? After years of trying to meet her, here in Georgia and in other spaces, it just so happened that one fateful night I was working for a place that she had worked for and got invited to the owner's family home. And lo and behold, when the music started, who would come in? It was almost by fate that Dr. Gaskins and I would meet, and I would go from a fanboy to a collaborator and, I would just say, a cherished connection and friend, and really an inspiration for the work that we try to do, not only at Subsume, but in the space of Afrofuturism. And I will close by saying that I feel Dr. Nettrice Gaskins is the nexus of the Afro-fantastic.
She is dynamic, she is diverse, and she is dedicated to creating speculative futures where art, AI, advocacy, and learning are always accessible. And with that, I just want to thank you all for your time and attention, as the next voice that you will hear is Dr. Nettrice Gaskins. Thank you so much. Thanks, Dedrn. Can people hear me all right? I did turn my mic on, I think I did. And I have a script, which is new; this is my new thing, scripts. Okay, so: title slide, and I'll get started. Alondra Nelson, at the White House. She's considered to be a first-wave Afrofuturist; she was doing a lot of academic writing when the term was coined. She says that Afrofuturism is a term to describe analysis, criticism, and cultural production: a critical perspective that addresses many overlaps, including race and technology. And she says that Afrofuturism is not bound by a singular concept, definition, or movement; it is, in its advancement, open-ended and ever-evolving. So, I met Alondra when I was in Atlanta; we ran in different circles, but we all met at Princeton University in 2015, at an event I believe was called Ferguson Is the Future, with Ruha Benjamin. I remember Alondra was the keynote, and we were all in the audience with some pretty well-known people, writers, Tananarive Due, you know, everybody was in there. And she goes, yeah, so Afrofuturism 1.0 is me, and blah, blah, blah, and these people. And then she goes, Afrofuturism 3.0: Dr. Gaskins. And everybody goes, 3.0? People are still in the 2.0! That was ten years ago, almost nine years ago. So I'm going to talk a little bit about that general arc and how it relates to where we are today. This is a timeline. I'm going to go through some of the main points on the top part of the timeline; the bottom part is what we know in IT and communications in general. But along the way we have these milestones in Afrofuturism.
In terms of research, you may want to take note, from the early 1900s all the way through the 2020s; I'm going to touch on these things now, so just keep that in mind. It starts in 1905 with W. E. B. Du Bois's megascope, which was envisioned by Stacey Robinson on the left. Du Bois wrote a science fiction story about a Black sociologist called The Princess Steel. It was only recently discovered; it was never really officially published. In it he describes a wearable technology called the megascope that could see across time and space. The story was written 45-plus years before terms like cybernetics, artificial intelligence, and virtual reality were coined. Forty-five years. So just in terms of the speculative, Du Bois was already there in the early 1900s. And then along the way, in 1966, we have Shakey the robot; so we have AI starting to come in through the '50s and '60s, and of course cyborgs and things like that into the present. But Afrofuturism and speculative fiction began thinking about these things, addressing these issues, long before these things happened. This is a little part of the text, where he says: Carefully he adjusted it, and this is the megascope, when he raised the silken cords, what I now saw were head and eye and ear and handpieces. The things I touched seemed tremulous, alive, pulsing. Now, said his hollow voice, the experiment begins. Look, feel, see. So I took parts of that, put it in Midjourney, and it came up with that larger image. But then here comes Apple Vision. So imagine that: the thing Du Bois was writing about in 1905, never published, and they think it's 1905, it could have been earlier, is now the Apple Vision Pro. How many years? A whole century. Now we finally have it, but he was writing about it in 1905, so I thought that was really cool. This is Sun Ra. Sun Ra is considered to be a proto-Afrofuturist; he's the father, the grandfather, of Afrofuturism, a jazz maverick.
I have some connection to Sun Ra through this project, the Outer Space Visual Communicator, or OVC, from the 1980s. It's a giant machine that you play with hands and feet, and it allows artists to create and finger-paint with light, the way musicians create and explore sound with their instruments. The instrument came from a guy named Bill Sebastian, who's a friend of mine now, who collaborated with Sun Ra and built the thing in the late 1970s and early 1980s. With computers at that time, the system was so big it had to be moved by trucks, so they did not tour with it, but they did perform with it in the early '80s. And this is some of the footage on the Visual Music Systems site where you can see them interacting with the OVC. There's no sound; I just quickly grabbed it, and you can see the website where I grabbed the video from. Then it's me recording and stopping. I wanted to tell this little story because it is kind of amazing. I was writing about the OVC when I discovered it; I think it was a book on Afrofuturism that piqued my interest. So I did a little research, then I started writing about it, and his daughter said, hey, Dad, someone is writing about you. And he emailed me, and I was like, what? It turned out he lived in Boston. His studio was in Boston, because this is where this footage was shot: Massachusetts College of Art in 1980. He's from Boston, he was nearby, and at some point I worked a block from the studio, so I was there a lot. And he moved the OVC from this situation to using an Oculus Rift, and called it OVC 3.0, or OVC 3D. I watched Bill develop that. And then it goes deeper, because Bill was a musician, he is a musician, he's still alive, and he was working with people like Arthur Baker, who's known for doing Planet Rock with Afrika Bambaataa, doing electro-funk in the '80s, and working with the Jonzun Crew.
And if you don't know who they are, Michael Jonzun is the inventor of, not Boyz II Men, New Edition and New Kids on the Block. So we have a connection from Sun Ra to New Kids on the Block, the same people behind both. And he was only with them a short time, until they found someone else. But the history is just amazing; I didn't realize Boston's role in Afrofuturism until I met Bill. And Bill had really worked closely with the Sun Ra Arkestra, so it was really cool to connect there. This is also around the same time. This is Greg Tate, the late Greg Tate. He interviewed another proto-Afrofuturist named Rammellzee, an early hip-hop practitioner, whose studio was called the Battle Station. It's described as a whirlwind two-hour tour of the artist's capacity to conjure complex theorems that blur the lines between subway art, race and culture wars, the language and illuminated calligraphy of 14th-century monks, and sculptural high-tech weaponry. When I was a student here at Georgia Tech, they decided to take the Battle Station, he had just died in 2010, when I came here, and they moved it to LA for a big show. So I went from Atlanta to LA to see the show, to see the Battle Station. He coined the term Gothic Futurism; he claimed at the time that he was not an Afrofuturist. He had his own theory, his own concepts, and his own framework that he worked from. Then the Museum of Fine Arts, Boston, fairly recently, during the lockdown, so around 2021, had a show, and this is a piece from it: the Gasholeer, an avatar suit that he wore. He had lots of things in the studio, but he had this really amazing futuristic, even though he doesn't say Afrofuturistic, but very Afrofuturistic, origin myth and body of work. So this is Rammellzee. All of that is before the term Afrofuturism was coined. Then it was coined in 1993 in this book, Flame Wars: The Discourse of Cyberculture, edited by Mark Dery; the essay was called Black to the Future.
And it was a conversation between Dery and Black science fiction writers and cultural critics like Samuel Delany, Greg Tate, and Tricia Rose. Dery wrote that Afrofuturism treats Black themes and concerns in the context of where we are at specific moments in history, and more generally as signification that appropriates images of science, technology, engineering, art, and mathematics in the future. That's in 1993. And he talked about Sun Ra, he talked about Rammellzee, he talked about all that stuff. I read that essay for the first time when I was a student at Georgia Tech, and then found myself sitting across from Mark Dery in the Bronx at a show for Black speculative fiction, talking with him about being a white guy coining Afrofuturism. And I learned from him that he did not want to be known as an Afrofuturist; he was just trying to come up with a name to describe what he was seeing in Black cyberculture at the time, in the 1990s. He's a very nice guy: I'm not trying to be the leader, I don't want that, I'm a writer, I'm a journalist. So it was good to have lunch with the guy and talk about that. That happened in 1993. And then Ytasha Womack's book came out, also when I was a student at Georgia Tech, and she lists multiple subgenres around Afrofuturism, including science fiction, historical fiction, speculative fiction, fantasy, Afrocentricity, and magical realism with non-Western beliefs and rituals. Then we have John Akomfrah, a British filmmaker and writer. His fictional film, The Last Angel of History, follows the journey of a character called the Data Thief, who travels across time and space in search of the crossroads, where he makes archaeological digs looking for fragments of history and technology, in search of the code that holds the key to his future. That film is from 1996, three years after the term was coined, and it came out of the UK. So we talk about Afrofuturism within the United States, but it had spread by this time.
In just three years, people in the UK and even in Africa were already thinking about this stuff too. That's Akomfrah's work, and then there are connections to other things. There's a new futuristic film out of the UK this year called The Kitchen; if you haven't seen it, it's on Netflix. It is definitely a must-see in terms of Afrofuturism, again from the UK, from Daniel Kaluuya, co-directed with one of his collaborators. Du Bois's megascope signified the Second Industrial Revolution, which saw rapid scientific discovery, standardization, mass production, and industrialization. Akomfrah's Data Thief saw a shift from mechanical and analog electronic technologies to the adoption and proliferation of digital computers. And now we have AI, or artificial intelligence, the technology behind the Fourth Industrial Revolution, which has brought with it opportunities and challenges. That's where we are today, the Fourth Industrial Revolution. Think about that: we're at the cusp of it. It began in the 2010s, when modern society reached a tipping point pushed forward by AI. Around that time, a Kenyan director, Wanuri Kahiu, created Pumzi, a film that takes place 35 years after World War Three and shows the main character testing a soil sample against data in a database. When she inhales the smell of the soil, she's plunged into a dream. One manifestation of AI is a Google tool based on dreaming, which is going to come up again; it dates back to early in the history of neural networks and is used to synthesize visual data. So we have Pumzi and data, and then we've got AI and data all happening around the same time. DeepDream, which I'll talk about, came out of Google in 2015. Dreams: Inception, the film, came out around that time. I was a student here; everybody was dreaming. So dreaming became a big deal in terms of Afrofuturism. Before the test of the AI waters, I want to talk about this book first: Afrofuturism 2.0 was published on the cusp of the AI revolution, in 2015.
In this book, I predicted the technological change that was to come. My chapter in that book, Afrofuturism on Web 3.0, was about how the Semantic Web supports dynamic graphical tools and networks for huge spaces of data, what we would now refer to as large language models. So I was thinking about that in 2015. And I was also collaborating with IBM on an Afrofuturist virtual 3D space in Second Life in 2010; that's a snapshot of my Sun Ra. They gave me virtual land to make an installation based on Afrofuturism. That happened in 2010, as I was coming into Georgia Tech as a student. And then a few years later came the chapter: realizing something was happening, and that Afrofuturistic cultural production could have a role in that development. That was happening from 2010 through 2015. Then we come to Stephanie Dinkins testing the AI waters, a couple of years after 2015, in 2017. Stephanie Dinkins had been engaging in conversations with a humanoid robot named Bina48 that is supposed to be capable of independent thought and emotion. She also created a multigenerational memoir of a Black American family told from the perspective of a custom deep-learning artificial intelligence. So on the left you have Bina48 and Stephanie having a conversation; on the right is Stephanie's own AI, Not the Only One. And then here's me, my artwork, at the Smithsonian in 2021: Octavia Butler's typewriter and Stephanie's AI in the same space. We all came together, the next wave after Octavia, wave number three really, at the Smithsonian as part of the FUTURES exhibition. I had 11 portraits, or 11 images, in that exhibition. So it was really cool to touch base with Stephanie; I had met her and already knew her by that time. If we look at AI in general, we're really talking about a small circle, the yellow circle: generative AI, or deep learning, specifically. We're not talking about AI in general; we're talking about something very specific.
It's the application of algorithms to create new content using training models. We're not talking about other types of AI that are out there; we're being very specific: deep learning, where I've been doing most of my research and work. It's a subset of AI, if you don't know, that uses complex algorithms and deep neural networks to train models. More specifically, I've been using generative AI to create unique images that signify Black culture and life. This is a timeline, and it's also the timeline of my interactions with AI as an artist, from 2014 to the present. The very first images were created in 2014 with generative adversarial networks. If you don't know what GANs are, they generate random images: as an example, a GAN trained on images of cats can generate random images of cats as a result. Next after that came DeepDream, in 2015, out of Google. It uses AI to find and enhance patterns in images through algorithms, thus creating a dream-like appearance reminiscent of a psychedelic experience. Then there's neural style transfer, which deliberately processes images, and that's where, of the nine years I've been working with AI, I've spent most of my time; until recently, prompt-based AI didn't exist yet. Neural style transfer takes two images, a content image like a photo and a style reference image like a graphic or fabric pattern, and blends them together so the output image looks like it's painted in the style of the style reference image. Then we have the artificial intelligence creative adversarial network, or CAN, AICAN. It's an autonomous artist bot that learns from existing styles and aesthetics and can generate innovative images of its own; that's 2017. And then text-to-image tools such as DALL-E and Midjourney use AI to understand words and images and then convert them into unique images. And that's where, on this other side, all the trouble came in.
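For readers following along, the style-blending idea described above is commonly implemented by matching feature correlations: the "style" of an image is captured with a Gram matrix of feature activations, and the output is pushed to match the style image's Gram matrix. This was not shown in the talk; it is a toy sketch in plain Python, with hand-made feature lists standing in for the feature maps a real convolutional network would produce:

```python
# Toy sketch of the style-matching idea behind neural style transfer.
# In a real system, `features` would be feature maps from a convolutional
# network; here they are small hand-made lists, purely for illustration.

def gram_matrix(features):
    """Correlations between feature channels; this is what encodes 'style'."""
    n = len(features)
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(n)] for i in range(n)]

def style_loss(output_feats, style_feats):
    """Sum of squared differences between the two Gram matrices."""
    g_out, g_sty = gram_matrix(output_feats), gram_matrix(style_feats)
    return sum((o - s) ** 2
               for row_o, row_s in zip(g_out, g_sty)
               for o, s in zip(row_o, row_s))

# Identical features give zero style loss; different features give a
# positive loss that an optimizer would then drive down.
same = [[1.0, 2.0], [3.0, 4.0]]
print(style_loss(same, same))                           # 0.0
print(style_loss(same, [[0.0, 0.0], [0.0, 0.0]]) > 0)   # True
```

In the full algorithm, this style loss is combined with a content loss (how far the output drifts from the photo) and minimized by gradient descent over the output pixels, which is what makes the photo come out "painted" in the reference style.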
ChatGPT: nobody cared up until 2021. Now we've got SAG-AFTRA and the writers' strike; everybody's up in arms about AI because of that particular development. So this is my toolbox; this is what I use. From 2016 to 2021 I used a generative AI method called deep style, or image style transfer, which allows me to apply different styles to images I upload to create a diverse range of artistic representations. In this mode I can play around with various settings to generate unique and interesting images. Today I combine deep style with generative text-to-image AI, especially Midjourney, my favorite tool, which generates images from natural language descriptions called prompts. And I'm right on the cusp of getting funding to create my own model, or customize a model, on a different but related project. So this is kind of what I use: DeepDream is where I started, then DALL-E, Midjourney, and Runway; in Runway ML, the ML stands for machine learning. So Runway ML, text and image prompts. Photoshop and Illustrator, all the Adobe Creative Suite tools, now have generative AI built in, so you really can't escape it. And then ChatGPT, of course, and things like it. Programs like Canva, which is kind of K-12 for teachers, now have an AI generator built in; Canva is trying to offer an option so you don't have to use ChatGPT, because it's used by students who obviously can't just use these other tools, for really important reasons. So, in 2021 Greg Tate, the artist, writer, and cultural critic I mentioned earlier, who helped usher in the first wave of Afrofuturism, passed away. In honor of Greg, I used deep style and image editing to create his portrait, on the left, and put it on social media. Shortly after that, the Museum of Contemporary African Diasporan Arts, or MoCADA, asked for permission to scale up the image and display it in Brooklyn as an outdoor mural. So what you're seeing is the original image and then how it looks in Brooklyn on the wall. It's on vinyl.
It took about six hours for them to put it up, and now we're talking about putting it up in Dayton, Ohio, where Greg is from, and in Harlem, so it may actually be in two or three places besides that. It was supposed to be up for a couple of months, then a year, and then people were like, I saw it a year later. I'm like, it's still up? I thought it was down. And then famous people would go by, and if I didn't know who they were, I got tagged, you know, some famous person standing in front of it. This guy, who I don't know, is holding a new Ytasha Womack book and standing in front of the Greg Tate mural. A lot of people see it every day and don't realize, even though my name is on it, they don't read the placard: oh, you did that? I see it every day. I have former students: I took my girl on a date, I drove her by it, my teacher did that. Did it help your date? Like, what is going on? So this has been really amazing, when it goes out into the community, to see how it stretches out into the world. Greg Tate's family, his grandson, went to the school right next door to this building, so his daughter was tagging me in photos of the grandson. I met his sister, and also his brother, the family who lived in the neighborhood, so the family was involved, and is still involved. I got invited to the big celebration at Lincoln Center with his band, Burnt Sugar, and they projected this image on the back at some point, and the band got together for a reunion in front of it. And I came; I was also in town for the Afrofuturism exhibition at Carnegie Hall, but I came to Brooklyn to be part of a historic moment, with Greg Tate's band getting together for the first time since his death. Very emotional, a lot of tears, but all standing in front of the mural. So that's AI and what it can do in terms of impact. And this is the mural.
By the time NFTs became popular, in the last three years, I had amassed hundreds of AI-generated images, even before prompt-based tools had entered the mainstream. So for two years I minted several of these images, such as these, while seeing my earlier prediction about Web 3.0 become a reality; Bitcoin and all that stuff was happening around the same time I had all these images. A new aesthetic emerged in this innovative digital art form, from machines learning visual concepts and from artists using different generative AI tools to combine different styles in one image. What I call generative AI is driving new understandings, expansions, and amplifications of modern art forms. The text-to-image process has evolved from using text prompts to calling up data from training models to remix art movements, such as Cubism, to create unique work. This slide shows the process, from inputting prompts to calling up AI-generated poetry or texts inspired by Afrofuturism. I had this vision of funky Afro diva androids, and I wanted ChatGPT to write a story about it, a poem about it: write a poem about a funky Afro diva android. And it did it really well. So then I took parts of the poem and turned them into prompts in Midjourney, and that's what Midjourney came up with; I created these images, which are very popular on social media. This is the anatomy of a prompt. The generators run on prompts, of course, and the more layered a prompt is, the more unique the results will be. Artists can use their art knowledge to develop, refine, and optimize AI text prompts to ensure they are engaging to viewers. Being unique, for me, is important: I'm not just trying to press a button, I'm trying to create a new aesthetic. My prompts often include the subject, which is the "what" in the image.
The style refers to characteristics that describe the artwork, such as color, or even movements such as Cubism; here I have "in the style of Afrofuturism, cyborgs and robots." Composition refers to how the elements of the image should be combined or arranged, and it's all part of the prompt. Boosters, or enhancers, are words that improve the quality or relevance of the prompt; they help users create better prompts, making it easier to get the specific responses or outputs they're looking for. That's really important in terms of making unique images and getting away from some of the dangers and some of the problems, rather than just letting the machine do it. Now, I'm calling this visual storytelling 3.0; everything is 3.0. Visual storytelling in general helps people make sense of complex data; it elicits emotion and increases information retention. Using prompts, modes, and options in ChatGPT and Midjourney version 6, because version 6 is bringing it, now allows me to generate stories with corresponding visuals. In 2023 I had an idea to create a story about Afrofuturism 2.0 and AI from the point of view of a girl time traveler. My prompt to ChatGPT was a 150-word abstract about Afrofuturism 2.0 and AI from the perspective of a girl time traveler from the future. And it gave me a cool story. Just like before, I took pieces of the story and inserted them as prompts into Midjourney. And it gave me Aya; Aya stands for Artificial Youth Algorithm. I was able to create basically a comic book about Aya and Aya's travels from the story, using Midjourney. In the sprawling metropolis of Neo-Accra, where neon lights flicker and hovercars zoom past skyscrapers, lived a remarkable young woman named Aya. Aya was a dancer who effortlessly merged hip-hop moves with futuristic twists, captivating audiences with her electrifying performances. But there was more to Aya than just her dance skills: she was a time traveler from the future.
A rebel on a mission to rewrite history. Born into a world ravaged by corporate greed and social inequality, Aya belonged to a resistance movement known as Solync. Their objective was to restore justice and equality to a society consumed by oppressive systems. Remember, this is ChatGPT writing the story. One fateful night, as Aya immersed herself in the pulsating beats of a hidden underground hip-hop club, she encountered a mysterious man named DJ C-Phase. C-Phase revealed to Aya that he possessed the ability to create temporal ripples through his beats. Together they realized the potential of their collaboration: a fusion of Aya's mesmerizing dance moves and C-Phase's time-bending soundscapes. Imagine that. Visual storytelling 3.0. I really like that instead of a DJ on the records it's a DeeJane; I love that. And so it's kind of a call-and-response between ChatGPT and Midjourney: I take what ChatGPT produces, put it into Midjourney, and reinterpret it somewhat. It was initially Neo-Tokyo; now it's Neo-Accra, because I'm like, Tokyo? So, Neo-Accra, you know. I went in: no, no, you know, that's not exactly it. But most of it was really ChatGPT, and it was great. I had a teenager who was very anti-AI read the story, and it was like, oh, I would like to see that movie. So I have a little feeling that they really got into the story. So yeah, this is what it looks like under the hood in the Midjourney model, because this stuff isn't free anymore. This is version 6, the most recent version. What I love about it: remixing is one of the main chapters, the main sections, of my book, and there's a remix mode here, so there's language, maybe not even on purpose, behind Midjourney that I can turn on when I'm generating my images, which I think is really cool. Okay, so this is from the last couple of weeks, for Black History Month. I started out the month of February with sci-fi writers Octavia Butler and Gloria Naylor.
I hadn't done a portrait of Gloria Naylor, and Mama Day was one of my favorite books as a young person. What I loved is that it produced a portrait that looked like her. I did have to sort of mold it: the nose got molded, the cheekbones got changed, I added a dimple because she had dimples. So I did Photoshop it a bit, but overall it looked like Gloria Naylor, which I was very happy about. And then I decided to use the pan option in Midjourney to take a piece of a quote from the book and place Gloria Naylor in her story, Mama Day. So I took what's in bold here: She could walk through a lightning storm without being touched; grab a bolt of lightning in the palm of her hand. I put it in there with the pan feature, and the lightning, which started here, got added using text from the book; that's the final image. It reflects the capability of generative AI and Afrofuturism 2.0, as well as the knowledge of the human using it. I imagined the author of Mama Day in a scene from her own book, and the AI tool gave me the ability to experiment, iterate, and explore different options. I learn something new almost every day. This is visual storytelling 3.0 because it taps into the Semantic Web, a web of data that's processed by machines, such as generative AI, to visually manifest some of the magical realism from Mama Day. Thank you, Nettrice. That was amazing. I had questions, and you actually answered a lot of them, so that means you all get to ask questions, and I'm not going to hog this time. So let me ask: would people be comfortable coming up to the microphone here, or would you prefer that I came and brought it to you? Sort of an open question. If anyone would be comfortable coming up to the microphone here? No? No one wants to. All right, let me come and bring it around. Does anyone have any questions for Dr. Gaskins? Okay. Oh, yep. You're willing to come up to the microphone? Excellent. Thank you. All right.
I just want to say hi, Dr. Gaskins, I'm a big fan; I really enjoy your work. I'm actually a professor here, teaching within architecture, and we're trying to work on using AI tools to envision more architectural futures, so I definitely want to connect with you after; I just want to throw that out there. But a question I have: obviously AI has seen different tools come to the surface, and we're focusing more now on Midjourney, DALL-E, and ChatGPT. What do you think about the future of those tools? Where do you think we're headed with that? What future tools outside of those do you think will start to come into play? Yeah. So I've found out that OpenAI and the rest are willing, for a fee, for money, to customize their models. And I love that idea, so I put in for a competition; they're willing to give $100,000 to people to customize the database. This is a question that was put to me as a Ph.D. student here at Georgia Tech by one of the professors: what happens when you build it from the ground up, from people who aren't normally the ones developing the technology? So that's what I want to explore: how do we develop not just our own tools, but the large language models that the tools are using to generate the images and the text? What happens when we develop that first? Because it gets very dangerous; everyone talks about bias and stereotyping, but it even hits things like DALL-E. A paper came out recently, in the last year I think, where they used the prompt "portrait of a lawyer," and it was all white men, except for Midjourney; Midjourney was all men, all men. And DALL-E had a little variety. But I tested; I did the same test, I went and tried them all. Then, when you do "portrait of a felon," DALL-E did something very interesting: three apes and a black man. Three apes and a black man. In Midjourney, when I typed in Robert Johnson, I was doing it for Black History Month last year, I was so disturbed I deleted it right away.
I should have probably saved it and actually reported it, but it gave me three versions of a blues musician and one ape blues musician. What is this association between Black men and apes? That's the danger when you don't have control of the model, because the model holds just as much bias as the humans who developed it. Until we develop our own models, those biases and problems are going to keep popping up. So a future project is not just using existing models, but creating ones of our own.

Thank you so much. Thank you for the really exciting talk. I'm a professor here in mechanical engineering, and as someone trained in mechanical engineering, something missing from our curriculum is how to bring in art and society. Engineering is not separate from society, but we like to pretend it's frictionless, in a vacuum. As someone who's an educator, someone who's thought deeply about the connections between technology and society, I was curious, and I'm asking you to do my job a little bit: what are ways that I can bring these concepts into, you know, a class about physics or dynamics? I am trying, by asking students to use AI to generate art about engineering concepts, or to find a historical figure and write about them, like a fake interview. But I was curious how you've done this, how you've tied technology and education together.

So the week before Covid hit, I was in Long Beach, in a middle school, doing a workshop on biomechanical engineering and Afrofuturism with middle school students. First it was a teacher professional development day, and then the teachers watched me do it with the middle school students. In the classroom, the kids got different models of both speculative and real examples of cyborgs: people who have some sort of biomechanical invention as part of their body.
And then they had to prototype their own using found objects, so they went through a whole process. Underneath all of that, I was using something that I pulled from hip hop culture in terms of design thinking, but from a hip hop cipher point of view, so they formed ciphers. They would do design thinking and mapping in those ciphers, and they would do peer reviews in the ciphers. One of the things I noticed in this particular group of middle school students was that the girls were just on fire. I have pictures of them. The teacher had to tell the boys, "You've got to get up and do stuff," to keep up with the girls. On the trip back to the airport I asked why, and she said, because of the way you were teaching it. I just saw them light up, and I never see that. I've done making and electronics projects with middle school and high school students for a long time, and usually it takes extra work to get the girls more involved. But somehow this emboldened them. The idea of the cipher is that the cipher is democratic. There's call and response. Everybody is equal. The teacher gets in the cipher, the students get in the cipher, everybody has a voice. We did the cipher with kids in an art and AI robotics class at Lesley University, where I work. We did it with high school students. There were kids in their masks and you couldn't hear them, because there were 20 or 30 people in the room, and I didn't force it. But by the end of those couple of weeks, you could always hear everybody. They got more and more confident, more and more comfortable, more and more feeling like they were seen and that they had a voice and a place in the circle. Using these kinds of things is a way to bring them into spaces that are usually foreign to them, where they're usually not getting any representation, which is important.
So now, how do we do that with AI? We're exploring that through this idea of building a large language model using some of those techniques. But that's my answer to your question. Thank you so much.

Hello, I'm Jason. I work here at the library. Mine is from a more personal perspective, and I was wondering if I could get your thoughts. It's really interesting to me the way that you talk about this so musically. You mentioned Sun Ra, and you talk about it being a call and response between the models. What you were just talking about with the cipher is a great intro to what I was going to ask. What strikes me about all this is that the process looks very similar to building tracks, building hip hop: you go through existing culture, you piece samples together. Exactly, yeah. And you mentioned Afrika Bambaataa and the Kraftwerk sample and stuff like that. So what are your thoughts on that? Is it just a natural thing that we do, or are we so influenced by that, that this is how we're doing this?

It's fascinating. I think the same culture that created collage created the sample and remixing. It came from when artists started to bring African art into collage, going through newspapers and posters, grabbing pieces and creating new compositions that are in museums all around the world. That's the remix. That's when people started chopping up stuff. Then Romare Bearden came in and did collages around Black life. He had a big show at Emory, on remixing the Odyssey in collage, when I was a student here at Georgia Tech, so I went to see that show. Bearden's work has been heavily influencing my work since I was a teenager, since I was a high school student. I've really been tuned in to what Romare Bearden was doing. So that chopping up is subversive.
It's about throwing out the playbook: disruptive fragments, taking all these fragments of a broken time, which is the time folks were in. Some of that, just like I showed with Midjourney's remix mode and the variations, is being built into the tools, but not directly and not in a very intentional way; remixing just comes out of remix culture. I doubt the developers working on these tools are thinking about remix culture when they create them, but it shows up in some ways. I'm being very specific about it: this is where remixing is. We can look at Grandmaster Flash, we can look at all the things in hip hop around the remix, and we can also look at improvisation, which comes out of jazz. There's research where they did MRIs on the brains of jazz musicians and rappers, and the same part of the brain lit up, the part for language, for conversation. We talk about call and response; that's a language thing, a communication thing. How does that play a role in how we develop technology? How does it play a role in how I interact with the technology? I saw it as call and response, to the point where sometimes I get a little suspicious: I put in a prompt and it gives me what I want almost too readily, like it knows I'm coming, because I do this every day. So maybe I'm helping to train the model that I'm using, because I can evaluate that as well. The ideas I talk about come out of a culture that's been modern all along. We talk about these industrial periods; now we're at the cusp of the fourth industrial revolution, but it's a continuation, a continuum. It's not something that starts and stops; it builds on what came next. I think that's why we see some of these technical things. And it's not just music sampling. Right before the pandemic, I said, well, Google came up with Deep Dream, and that's for visual images.
But I bet they did something for music. And they did; it was called NSynth. If you had the wherewithal and the resources, it gave you the recipe for building your own NSynth box. It was a way of building an AI such that you could bring different instruments into the studio and mix them, just like you would mix a visual image. The piano and violin would merge and become a new instrument, or it would give you notes between the notes. I got to the point where I built it, and that was right before lockdown, when I was ready to program it. I didn't have that particular equipment at home, so it's still in a box, a tote box, waiting for me. But that was Google's entry point for exploring this from the music side. So it's very much music. It's a longer conversation about all these overlaps, but yeah, it's definitely there.

So first of all, this has been really interesting. You've taken AI art in ways that I never really thought were possible. It's interesting that your process and how you go about things is different from a lot of the concerns I've heard about AI art and the things we have to deal with with generative images and prompts and text. There's so much concern about plagiarism, about art theft, all that kind of stuff. How do you anticipate things will change to address that moving forward, and how do you address it in your own work?

So when collage happened, you were chopping up other people's work, right? Other photographers' work, put together to create a composition. But with collage, there's no attribution. Nobody lists the artists or photographers whose work they took to make a collage, and Romare Bearden's and Picasso's collages are in major museums around the world. What the US government did is create fair use to protect artists who were remixing.
Instead of one magazine or ten for a collage, we're using thousands of magazines. But is there a difference? Now, if I want to copy your work, that's different; copying has been around long before AI and will be around long after. People will copy. What I'm talking about is a process, and that process is very similar to collage in terms of creating using data from different places. The more unique the prompt, the more unique the output. And that's a different thing: most people aren't artists. When I did a portrait for a project and shared it, because they wanted me to share it for a Geneva summit, somebody said, oh, I'm not an artist, but I can make that. I said, good luck. Half a day passed, and that person said, look, here's what I came up with, but it's not what you did. He was making a claim that he could do it without being an artist, and I was saying, good luck, maybe you can, I don't know. But he couldn't. And the fact that he came back and admitted, here's what I did, but it's not what you did, that was the moment. So I think there are two things happening. There are artists, trained or otherwise, who are using generative AI, and then there are non-artists who can't do it but use the tool to try to figure it out. Now, I troll people. I'll go to something that's hyped, where someone says, oh, this looks just like a photo. And I'm like, nope: the eyes are dimmed down here; look at the fingers, the fingers are splayed out; the two reflections in there are not the same. If you want to make that claim, then I'm going to come at it; I work with this stuff every day. So there's an AI literacy piece. Just as it's important to be an artist, it's also important to be literate in this stuff.
Now, to get to Atlanta, for the first time ever at TSA, you've got to put your face in; there's AI in the TSA. They didn't need my ID or my passport or my boarding pass; I had to look in and verify my identity. And we know there have been situations where the AI has failed on Black women's faces. So what happens? Can I get on a plane? If that's the only thing they'll take and it fails, then I'm stuck. So we know the dangers; we've seen all this. This is a reality, not just on the art side; it's everywhere. So where's the literacy? Where's the knowledge of how to do this in an ethical way? I use my own artwork, I use my sketches. It's fun; I throw it in there and then I share it with people, and then Chaka Khan shows up in the comments saying it's really good, and I freak out. And when Vernon Reid decided to step onstage with me after my keynote, I was freaking out; my 19-year-old self was like, oh my God, it's Vernon Reid. But he said, I could tell she was an artist, because all of her portraits have five fingers. She was doing something else on top of the tool, because she never had misshapen fingers. That's how you could tell there's an artist behind the images. So we are at a critical point for all these issues: copyright, bias, and stereotypes. I talked about the apes in Midjourney; when I did the test, that's the reality we have to confront. That's why I'm saying we need to build our own large language models, or even smaller language models, first, instead of just taking in the tools, because we still have the issues from previous eras built into the system, and we haven't fixed that yet. There are so many new things we have to look at when developing this stuff.
We can make the same mistakes we made before with all the other technologies, or we can use what we learned to develop something that is actually helpful to humans and not hurtful.

Yeah, thank you so much for your perspective on ethics and everything else. I really appreciate it.

Hi, I'm a student here. I'm a freshman and I was recommended to come here; we're taking a class on Octavia Butler this semester. I'm fascinated by the way you've discussed your methodology and ethics around this, and you discussed SAG-AFTRA and all of that. But I am a little worried about the young girls you meet who will learn to write stories through AI instead of learning to write stories themselves. I was wondering what you would say to that.

The weird thing is how I came into this at all. I taught AP Computer Science Principles to high school students in Boston, and they were all art students, because it was a high school for the visual and performing arts; every student had an arts major. I had pitched to the College Board a curriculum for an arts-based computer science course that I hoped kids wouldn't drop. It's an elective, a year-long course, with an exam at the end because it's AP. Last week I got invited because right now AP Art and Design at the College Board is blocking AI. But they're putting together an advisory board, and they said, Nettrice, we want you to be on the advisory board. I'm like, full circle: I taught it, and now I'm going to be on the advisory board to actually make policy decisions about how they're going to introduce AI. It's real. They're acknowledging it by recruiting people who work with AI, who are artists, who can help them see all the issues that come in, including copyright and IP, for AP Art and Design. And I'm sure it's going to hit the writing side, the humanities side, too. All the federal agencies are now funding projects with AI involved.
I was a panelist for the National Endowment for the Humanities and looked at a bunch of applications, and I know people who are applying again for the new round. AI is everywhere, but they're trying to do it in a smart way. The US Copyright Office has had a bunch of meetings I was part of; I was one of the presenters, talking about my issues. Creative Commons had an open letter to Congress to fight for the rights of artists who use AI. I said, why is my name at the top of the list? I signed on, but I wasn't the first; I signed at the end, but there I am, right at the top. That went to Congress. And there have been media companies, like Adobe, paying people like me to come and meet and talk to them about their work. Microsoft; tomorrow I'm going to Meta. Everyone's trying to figure it all out. I think right now we're at the point where, if we get hold of it, we can deal with some of those issues of IP and copyright. We also have to not hurt the artists who are actually using it in creative ways. The non-artists, I don't really care about them; I challenge them. A researcher asked me, what happens if someone takes your work? Well, good luck, like that guy who tried it. I have very specific things I'm doing in my prompts. Prompt engineering is a job; it's a skill. There are things that I'm doing. That's why I talk about the anatomy of a prompt. What's the anatomy of a story in this new space? I'm posing the same question to students: what's the anatomy of a story? What are the important points where the creativity is unique, so your story is different from the next, and the next, and the next? That's the conversation. We have tools, they're everywhere, but how we use them is the same as how we used all the tools before. It's a new literacy, but we're not even teaching people the old literacy anymore. Thank you.

So, it's good to see you.
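[Editor's note: the "anatomy of a prompt" idea above can be sketched as a small template, where each slot (subject, setting, style, medium, tool parameters) carries a deliberate creative choice. This is an illustrative sketch only; the slot names and example values are assumptions, not Dr. Gaskins's actual prompts.]

```python
# A minimal sketch of "the anatomy of a prompt": each named slot is a
# deliberate creative choice, which is why two people's prompts (and
# outputs) rarely match. Slot names and values are illustrative.

def build_prompt(subject, setting, style, medium, params=None):
    """Assemble a text-to-image prompt from named parts, skipping empty slots."""
    parts = [subject, setting, style, medium]
    prompt = ", ".join(p for p in parts if p)
    if params:  # tool-specific flags, e.g. a Midjourney-style aspect ratio
        prompt += " " + " ".join(params)
    return prompt

prompt = build_prompt(
    subject="portrait of a blues musician",
    setting="a crossroads at dusk",
    style="Afrofuturist collage",
    medium="mixed media",
    params=["--ar 3:4"],
)
print(prompt)
# portrait of a blues musician, a crossroads at dusk, Afrofuturist collage, mixed media --ar 3:4
```

The point of structuring it this way is the one made above: the more unique the combination of choices, the more unique the output.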
I'm intrigued by a couple of things that you brought up. My area is hip hop studies, and we've been doing a lot around the integration of AI for lyrical analysis. So I'm encouraged by the technique you're using, taking information from a poem, dumping it into Midjourney, and seeing what you come up with. Thank you for that. I wanted to know more about your thoughts on hip-hop-inspired design thinking. I've been writing a lot about it and published a chapter in Remix Studies on ways we can be informed by the affordances of hip hop in design thinking. You mentioned earlier how this goes back to collaging and remixing and sampling, and I'm interested to hear more about how those affordances can be located within African traditions, and how that African past informs those African futures in the way we're utilizing AI, or the way we're using AI to manipulate that. This intersection of AI and hip hop culture and music: I'd be interested in what you have to say about it. To be honest, in hip hop, at some point the only AI we acknowledged was Allen Iverson, right? And now there are ways these AI tools are being used to produce music, but there's that intellectual property and copyright issue of using folks' voices and creating songs that aren't really them. So I just want to know your thoughts on the way hip hop is inspiring design thinking, but also the way those affordances can be located even earlier, within those African traditions. I know I'm asking a lot, but you're here and I'm fascinated with your work, so I figured I'd put all that out there.

Yeah. You know, right here in this library I found an essay that was so helpful to me. I'm going to talk about James Snead, not Dedren's relative, but the same last name.
Right here in this library I found an essay about Black repetition in cultural production by James Snead, a Yale professor who died many years ago. In it, he spells out one of the first funk songs, "Cold Sweat," and he writes it as an algorithm: ABA, ABA. And I'm like, wait a minute, there's computer science here. It was a moment: it's an algorithm. So then I started plugging in "Cold Sweat." I focused on "Cold Sweat," played it in a visualizer, and it produced African patterns. What are those patterns like? Kuba cloth, and Gee's Bend, Alabama quilts. Wait a minute: there's a polyrhythmic aspect that shows up. And this is a real thing; just so you know, there's been research on African American quilts linking the patterns, the polyrhythmic patterns, to jazz, to hip hop, and so on. So there's one aspect of it, and then there's the call and response in "Cold Sweat" that was new when that song came out. They would lay down the bass, the melody, the groove, and James Brown wouldn't even be there. James Brown would show up and insert, based on improvisation, whatever he wanted: "give the drummer some" and the like. It was call and response, pulling from gospel, pulling from all of that, in that space, to produce that song. The thing we were talking about, being "on the one," came at that same time; then Prince picked it up. And then it got sampled, and sampled, and sampled, over and over and over again. We talk about voices: we were sampling other people's songs for years, until it became something you had to pay for. Actually, if you think about it, people who had no resources, in poor communities, were creating songs and creating careers for themselves in hip hop and rap music, but they were not musicians, right? And they could sample from music from all over the world. Well, once you had to pay for that, you couldn't afford it.
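[Editor's note: Snead's observation above, that the song's structure is an algorithm, can be illustrated with a few lines of code: a loop that, instead of progressing toward an ending, keeps "cutting" back to the start of the pattern. This is a hypothetical sketch, not a transcription of Snead's essay; the section labels and cycle count are assumptions.]

```python
# Illustrative sketch of repetition-as-algorithm: an ABA section pattern
# that loops ("cuts" back to the beginning) rather than resolving.
# Section labels and the number of cycles are assumptions.

def aba_form(cycles):
    """Return the ABA section pattern repeated for the given number of cycles."""
    sections = []
    for _ in range(cycles):
        sections.extend(["A", "B", "A"])  # groove, contrast, cut back to the groove
    return sections

print("".join(aba_form(2)))  # ABAABA
```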
But they weren't musicians, so then they were relying on machines to produce sounds that are actually not very creative. So that's where we are today. What I'm saying is there's a comparison between how people use AI tools to generate images now and what we were doing with samples in the '80s and '90s, the so-called golden age of hip hop. It still feels very familiar. And the legal issues for both are very familiar. Someone on the legal side of things actually explained to me how the sampling comparison holds up. It's not exactly the same, because latent space is a whole other thing; we didn't have access to that in the music. Working with the machines is a different process, but it's still very similar to sampling. So, how does call and response come in? I've already figured out how I do call and response with machines and with other people; I gave a little talk yesterday about that. And I'm on this new thing, Cipher AI, about building from the ground up, bringing our cultural practices into the process of designing these models. That part comes from my experiences in hip hop, and call and response in the church, and all those things James Brown brought when he came in. Listen to the groove that was laid down; it was considered innovative. That's Black technological creativity, innovation. Eventually machines were used, in addition to the musicians, to join in, and that's where Prince came in with the LinnDrum and all that. Yeah. Thank you, I appreciate it.

Um, I just wanted to ask: there's this perception of AI-based art that it's low value because it's "easy," quote unquote.
So I wanted to ask if you would mind talking about your process, the amount of time you put into your work, and how you approach a project when you want to get something done. Because I'm deeply fascinated by what you're doing, and I'm curious how that comes about.

One of the things I'm really interested in is just seeing what it does right now, every week. Just so people know, I've been producing images using AI since 2019. When I went to Johannesburg this time last year, every day there was at least one image I generated based on my experiences in South Africa. When I went to Brazil, there was an image every day; in the favelas, it was something I experienced, someone I learned about, a picture I took. So that's a really cool thing. But at the same time, I will sometimes go and draw, and then give the tool that drawing as a prompt, because I had a vision for how I wanted the output to happen. There's something very collaborative about that process, and every image that I post is a different process. Sometimes it's one tool; sometimes I'm using three to produce that one image. Sometimes I use Midjourney and then bring it into Deep Dream Generator to do something, and then bring it into Photoshop to extend the border, or do something else, or make the face more realistic. It depends on what the image calls for. But then there are some serious issues in terms of accessibility of these tools: accessibility of Adobe Creative Suite, accessibility of Midjourney, which is not free, accessibility of even Deep Dream Generator, which was free when I started using it and is no longer free. And then there's a question for people who are students, who are entrepreneurs, who are just getting started, about affording these things. When the pandemic started in 2020,
I had 1,200 followers on Instagram; now I have 100,000 followers, just from people spreading the word and from putting out the work. Some of those people are just regular folks; some are famous and freak me out when they appear out of nowhere. But the point is, they're all looking at and consuming the same information, and in some cases fighting battles in the comment sections about the originality of the work. I'm always challenging people to go further than what the tool does, because I feel like that's where the real good stuff comes from. Every time I make an image, it's with a different idea that I want to try out. I'll run with that for a week or two days, and then I move on to something else. But the idea is to really see what this tool is doing. So what I'm doing now is using the same prompt I used in version 5.2 and seeing what it does in 6.0. And when I did that, a new feature popped up that merges what I was doing over in Deep Dream Generator, so that now I can bring those styles into Midjourney. So what happens if I bring styles from Deep Dream Generator, the old-school AI I was using, into Midjourney? I may create something amazing I've never even thought of. It keeps going, it keeps evolving; that's the excitement of it. I'm not trying to stand still. I have a traditional arts background: I went to art colleges, I have studio fine arts degrees, a BFA and an MFA, and visual arts was my major in high school. I know that part of it, but this is a new day. I was also a computer graphics major when I was in those schools, so I was already playing around with early Photoshop, trying to make it do what I do with Midjourney now. What I was trying to do in 1990, we finally can do. I have slides, and I can see the work, and I know what I was trying to do then. Collage was very much a part of the work I was doing then on the traditional side of things.
And I tried to do it digitally, and I hit a wall; the technology was limited. It's not limited anymore. There was a moment a few months ago when I was in an IMAX theater and saw my work appear in an IMAX film, and I'm like, wow. And then I wrote an encyclopedia entry for AI art. I used to read encyclopedias all the time as a kid, and now I'm in an encyclopedia. It's all about the next thing. Once you get into a practice, just like any artist with a practice and their tools, you start to explore new things every once in a while. If you don't, you stagnate. And maybe you keep doing it because you're making money, but for me it's not as fun, not as engaging, unless I can turn and try the style reference feature, or Pan mode, and add the Gloria Naylor book quote. It's all new exploration, and I think that's what keeps driving me and keeps me interested after eight years of doing this work.

That's fascinating. Thank you. So, just so everyone knows, we're going to take one last question. We have one last person up here, that's perfect, and we'll wrap up at 12:15.

Hi. I'm super inspired by Afrofuturism. I feel like in Latin America there are now movements being born, like pre-Columbian futurism and others, and I'm hoping these types of movements do affect the future we're making. I wanted to know if you think movements like Afrofuturism have an impact on the creation of culture and where people see themselves. Does it change behavior?

That's a good question. I think if you look at practitioners of Afrofuturism, for Sun Ra, it was his life. But that's extreme: he was obsessed with his concepts, he wrapped himself in them and spoke in a language that almost nobody understood; a few people understood. That's different.
But there are a lot of kids, including kids I've worked with in the past, who didn't have futures. I remember one, when I was doing Upward Bound at Roxbury Community College and had to do intake interviews. One kid came in and said, I'm going to be a basketball player. I said, well, what happens if you break your leg or your knee? That's a very real thing for an athlete, right? And very few people make it. But when I asked if he had anything else in his plans: no, not at all. And then there was a moment where you could see it: I don't have a future. So what Afrofuturism does is give people an idea, a way to think about the future, as opposed to letting other people, or the wind, move them wherever. Giving people an opportunity to speculate about who they are, what they will be, what they'll do, gives them something. Just like when I have a new idea I want to try out with a generative AI tool: hey, maybe I can try this, instead of not having any future at all, which for a lot of young people is still the case. I think it does change the way you think about your life. When opportunities come, some people shut down: I can't do that. Other people jump in. And that's what speculative fiction, speculative ideas, do. I'm jumping in; I want to see if we can come up with some answers. Not enough of that is happening where it needs to happen for you to see a lot of change. But it needs to happen, and hopefully some of these things happening in spaces that shock the establishment will actually lead to something real, so that young man comes in with other ideas instead of just basketball player.

Thank you. Thank you again so much, Nettrice. This was really wonderful, and thanks to all of you for being here. Let's give Dr. Gaskins a huge hand.