Hi everyone, welcome to this week's Digital Media brown bag. I'm Anne Sullivan, an assistant professor in Digital Media, and I'll be moderating today. If you have questions during the talk, please put them into the chat, but for now I just want to get the talk going. We're happy to welcome Noura Howell. She is an assistant professor in Digital Media here at Georgia Tech, and she just joined us this fall, so we're super, super excited about that. She completed her PhD at the School of Information at the University of California, Berkeley, where she was a member of the BioSENSE lab. Before grad school, she worked as a human-centered designer and engineer in Singapore, Morocco, and China, as well as at the MIT Media Lab, Intel Labs, Microsoft, and the Echo Nest. So now you can see why we were so excited to have her join us, with all of her awesome experience and all the awesome projects that she does, which hopefully she will be talking about during this talk. So I am going to stop talking and let Noura start.

Hi everyone, I'm very excited to be here. I'll go ahead and start sharing my screen. Anne will be moderating the chat, so apologies if I miss your comments as we go along; we'll try to make sure we get to them at the end. I'm Noura Howell, an assistant professor in Digital Media. Today I'd like to talk with you all about the promise and peril of emotion AI: designing for emotional meaning-making with data, and imagining an affirmative biopolitics with data. That's kind of a lot. Since these video events can feel so impersonal, I wanted to share a little bit about myself and my background. I did my undergrad at Olin in engineering, then worked as a software developer at the Echo Nest, which later got bought by Spotify. Then I was at Intel Labs and the MIT Media Lab, and that's where I got really into research. Having worked as a software engineer and as a designer, I got an interdisciplinary perspective on both the technical aspects of building technology and the social aspects of how technology influences our social interactions.

Here's our roadmap for today. First, I'll introduce the context of emotion AI and some applications. I'm going to argue that we need to imagine alternatives with emotion AI and the biodata it's built on, meaning data about human bodies and behaviors. My design research imagines alternatives to drawing emotional inferences from biodata: I explore how people make emotional meaning with biodata, and what happens when biodata supports affirmation over insight. Finally, I'll end with some questions about imagining and moving toward an affirmative biopolitics with data.

So what is emotion AI? Emotion AI predicts psychological characteristics from biodata. By biodata, I mean data about human bodies and behaviors. There are many biosensing technologies out there: some are wearable, like wristbands that monitor heart rate or skin conductance, and some sense people remotely, such as cameras or microphones. Emotion AI analyzes biodata to try to infer psychological characteristics, especially categories of emotion. There's a lot of research behind all this, for example from Roz Picard's Affective Computing group at MIT, and many different kinds of sensors and analyses go into trying to infer emotions and other psychological characteristics from physiological signals.
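[Editor's note: to make concrete what that pipeline typically looks like, here is a minimal sketch: extract a couple of simple features from a physiological signal and map them onto a discrete emotion label. This is an illustrative toy, not any real product's method; the features, labels, and thresholds are all assumptions, and its reductiveness is exactly the point the talk critiques.]

```python
import numpy as np

# Toy sketch of the emotion AI pipeline discussed in the talk:
# physiological signal -> features -> discrete emotion label.
# Everything here (features, labels, thresholds) is illustrative,
# not any real product's method.

EMOTION_LABELS = ["calm", "excited"]  # reductive discrete categories

def extract_features(eda_signal: np.ndarray, fs: float) -> dict:
    """Compute simple features from an electrodermal activity (EDA) trace."""
    rise_rates = np.diff(eda_signal) * fs             # rate of change in uS/s
    return {
        "mean_level": float(np.mean(eda_signal)),
        "fast_rise_fraction": float(np.mean(rise_rates > 0.05)),
    }

def classify(features: dict) -> str:
    """Threshold rule standing in for a trained classifier (assumed cutoff)."""
    return EMOTION_LABELS[1] if features["fast_rise_fraction"] > 0.1 else EMOTION_LABELS[0]

# Example: 60 s of synthetic skin conductance sampled at 4 Hz
fs = 4.0
t = np.arange(0, 60, 1 / fs)
signal = 2.0 + 0.1 * np.sin(t / 10) + np.random.default_rng(0).normal(0, 0.005, t.size)
print(classify(extract_features(signal, fs)))  # collapses rich experience to one label
```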
I'm particularly interested in how emotion AI technologies get deployed, how they can reshape interactions, how, like any technology, they can help or harm people, and how they embody politics and values. Let's quickly look at some examples of real-world emotion AI. For example, the Feel wristband claims to be "the world's first emotion sensor and well-being advisor," "decoding mental health and bringing objective data and measurement to the way we diagnose, manage, and care for mental health," according to their homepage. I had the privilege of meeting the co-founders, and they seem to have really good intentions of wanting to destigmatize mental illness and make therapy more accessible and affordable. Yet I'm also suspicious of the company's framing. Feel layers in this idea that we should be working to improve mental health because it will improve productivity and reduce healthcare costs by partially automating therapy. Also, as with a lot of emotion AI, the underlying technology claims to detect discrete, universal categories of emotion, absent context and language. It's a pretty reductive way to represent the rich diversity of human emotional experiences. A few years ago, Feel was claiming they could automate cognitive behavioral therapy. By now, at least, they've backed down a little, and they include some video sessions with a human therapist who can help you interpret your data. I think this speaks to the importance of human interpretation of emotion. And this isn't an isolated example; there are many other products that use emotion AI for mental health.

HireVue, as another example, is an online hiring platform. Job candidates complete a series of online assessments and record video responses to interview questions. Hiring managers watch these videos later and see algorithmic judgments, like the job candidate's "willingness to learn" and "personal stability." These judgments are based on data about the candidate's facial movements and the audio of their voice. The Electronic Privacy Information Center filed a complaint against HireVue with the FTC for unfair and deceptive practices, and I think since then HireVue has backed down and is no longer using algorithmic inferences on the video interviews.

As another example of emotion AI, there are some troubling project proposals from the Department of Homeland Security around detecting "malintent," especially at airports. Malintent is defined as the intent of an individual to cause harm to our citizens or infrastructure, as if that were a discrete state that can be detected using physiological signals. I think the risk of false positives in this context is extremely upsetting: it could wrongfully label someone as a security risk or a terrorist.

So let's look at what some other people are saying about emotion AI. Arvind Narayanan, an Associate Professor of Computer Science at Princeton, recently described many emotion AI applications as snake oil, writing: "Much of what's being sold as 'AI' today is snake oil. It does not and cannot work. Why is this happening? How can we recognize flawed AI claims and push back?" He describes trying to use AI to predict social outcomes, like trying to predict the future, as fundamentally dubious. This includes some of the applications we just looked at, like predicting future job performance or predicting terrorist risk.
Emotion AI relies on techniques from affective computing, which uses sensors, biodata, and algorithms to predict emotional categories. Roz Picard, a professor at the MIT Media Lab, literally wrote the book called Affective Computing. But she's also gotten very wary. She says the way that some of this technology is being used in places like China now worries her so deeply that it's causing her to pull back on a lot of the things that the community could be doing, and to try to get the community to think a little bit more about, if we're going to move forward with this, how do we do it in a way that puts forward safeguards to protect people. So even Roz Picard is very concerned about emotion AI for surveillance.

Back when I started my dissertation, I feel like emotion AI was often about individual mental health applications, but it's spread beyond that. It's used in education and policing, for monitoring student attention, rating teacher effectiveness, rating hiring candidates, and other applications. I really recommend the AI Now Institute's report for summarizing many existing applications, and also the report from Shazeda Ahmed on emotion AI in China and how, when it's used for surveillance, it typically senses people without their knowledge, in ways they can't control. In these cases, rather than representing emotion as a sort of general-purpose category like happy or sad, the surveillance focus is typically on ranking people as better or worse: ranking teachers based on automated estimates of whether their students are paying attention, ranking job candidates based on these automatic categories, or judging whether an airline traveler is safe or has "malintent" based on their physiological data. These are much more overt value judgments. So people's biodata can be collected, analyzed, and interpreted without their knowledge or consent, and used to make unfair, harmful judgments about them.

Experts with the Coalition for Critical Technology also point out that trying to infer psychological characteristics from physiology extends the legacy of phrenology, a racist pseudoscience that was used to justify claims of inherent racial superiority in some people, and to "detect" criminality or inferiority in others, based on physical characteristics. And, as I'm sure a lot of this audience knows, there are a lot of ongoing efforts to reduce bias and improve fairness and transparency. Super important work. But I'm still suspicious: time and again, surveillance in particular tends to reinforce oppressive patterns.

That's why I think we need to be imagining alternatives with biodata and AI, and especially with emotion AI. My work is inspired by Simone Browne's call for a critical biometric consciousness. Browne, Associate Professor of African and African Diaspora Studies at UT Austin and a research director of Critical Surveillance Inquiry, connects biometric surveillance with patterns of historical and ongoing oppression, especially along axes of race. She explains: "There's this notion that these technologies are infallible and objective and have a mathematical precision without error, without bias on the part of the computer programmers." Okay, I think anyone who's had to write or debug code knows computer programmers make mistakes sometimes. But not everyone's a coder, and society still seems to invest a great deal of trust and authority in technologies, especially AI. So Browne critiques over-confident claims to objectivity.
A critical biometric consciousness should inform public debate around these technologies and their capabilities, with accountability by the state and by companies. Importantly, a critical biometric consciousness must acknowledge the connections between contemporary biometric information technologies and their historical antecedents. Problems with biometric surveillance and biased algorithms are not isolated technical flaws; as people have been uncovering and articulating, they tend to extend patterns of historical and ongoing systemic oppression.

In recent years, this critical biometric consciousness has started to grow as biases have become a lot more well-known, for example around facial recognition. There's been a lot of public debate and critique of technology companies' data collection. Some cities have banned facial recognition technologies by the government, while in other cities, police departments have started using Clearview AI facial recognition. Weaker accuracy of facial recognition for people with darker skin has led to wrongful arrests of Black men. So there's growing awareness here, and there are also growing problems.

What I'm wondering is: how can computing and design researchers engage a critical biometric consciousness directly? I don't think my work directly answers Browne's call for a critical biometric consciousness — I don't want to overclaim — but I do think it's a really important source of inspiration for me, and I think it can have a broader influence in HCI. How can computing and design researchers engage a critical biometric consciousness? In HCI, we're rarely directly involved in building out the next giant biometric surveillance infrastructure that's obviously intended to oppress people; we're often working at a much smaller scale. We have a lot more control over the situation. We make sure our participants in studies are treated well and that the technology works equally well for everyone involved. We try to make sure our lab group interactions are fair and supportive. That's really important; that's a good baseline. But how can we do more? HCI research influences broader practice by tech companies and the government. What techniques and values are we promoting in our HCI research, and how can those techniques and values get adopted, adapted, or uplifted by others who are working at larger scales? Of course, this touches on HCI privacy research, participatory design, and community-oriented HCI work. There's a lot of work already happening to resist and dismantle oppressive systems, some of which overlaps with HCI — there was great work at CHI last year. In addition to that, I also see more opportunities for design researchers in particular to rethink their engagement with biodata, and especially emotion AI, by imagining alternative futures with biodata.

I've been talking a lot about imagination and technology, so what is that? Sociotechnical imaginaries can help us get a handle on the role of imagination. A sociotechnical imaginary is a socially and institutionally shared vision of a desirable future that includes not only technology but also the shared ways of living that go along with it. How our society imagines the future of technology and ways of living with it influences that future. It influences what kinds of technology get built, what startups get venture capital funding...
...what questions researchers ask, what laws policymakers enact, and, you know, what proposals get funded by the NSF, potentially. For example, there's this idea that technology will make us more efficient, and that being more efficient is assumed to be a good thing. Another example: maybe this idea that the Internet will make us connected as part of a global community. This is one that I feel like I grew up with, all this optimism and excitement about the Internet-enabled era of connection. But when I tried to run that one past my students last fall, they were pretty cynical about it. I'm not sure that sociotechnical imaginary aged very well.

I'd argue that maybe this esteem for emotion AI, the belief that it can make reliable predictions, is in part bolstered by a sociotechnical imaginary about the power of data. I might be stretching the term here, but I think it's helpful to consider the weight of that societal narrative: that data is this really authoritative way of knowing and making decisions. Sociotechnical imaginaries operate at a societal scale, but individuals have many different interpretations and might even disagree. My students didn't believe in the narrative of a positive global online community. Personally, I'm not always on board with the idea that being more efficient is always good. And yet we still operate within a societal discourse where prevalent sociotechnical imaginaries influence what research gets funded, how mainstream media reports on technology, what counts as the shiny new innovation, what happy promises or doomsday predictions people make about technology, and what stories get told about how we might live with technology in the future. And this leads to important questions, such as: do you feel included in those stories of where technology is headed? Whose stories have influence, and whose are left out? Because they operate at a societal level, sociotechnical imaginaries can be really hard to shift.

So, in light of some of the issues we've examined with emotional biosensing, surveillance, and emotion AI: how can we, as design researchers in particular, resist, rework, and reimagine these imaginaries? Ruha Benjamin, professor of African American Studies at Princeton and founding director of the Ida B. Wells Just Data Lab, and also a keynote speaker at CHI last year, writes that technology not only captivates our imagination but is also part of discriminatory design and mass incarceration in the US. She asks how we might craft a justice-oriented approach to technology. For advice, Benjamin turns to Angela Davis's advice on mass incarceration and abolition. Davis says that dangerous limits have been placed on the very possibility of imagining alternatives; these ideological limits have to be contested; we have to begin to think in different ways; our future is at stake. Imagination can be really powerful. Actually, Benjamin and Sheila Jasanoff, who helped define sociotechnical imaginaries, both frame this with the same quote from Arjun Appadurai: the imagination is "no longer mere fantasy (opium for the masses whose real work is elsewhere), no longer simple escape (from a world defined principally by more concrete purposes and structures), no longer elite pastime (thus not relevant to the lives of ordinary people), and no longer mere contemplation (irrelevant for new forms of desire and subjectivity)... The imagination is now central to all forms of agency, is itself a social fact, and is the key component of the new global order." Benjamin calls for a liberatory imagination, imagining and fostering alternatives.
Again, I don't want to overclaim what I'm going to offer here. I'm a design researcher, and one of the approaches design researchers use is design futuring and critically oriented design research. Design futuring offers ways to imagine alternatives with technology. Design futuring approaches such as speculative design, design fiction, and others seek to re-envision futures and explore alternatives. I coauthored work on this with Sandjar Kozubaev, who did his PhD here at Georgia Tech. Design futuring offers a way to invite discussion and reflection about alternative futures. Critically oriented design research approaches, such as speculative design, critical design, discursive design, and so on, are intended to help people critically reflect on technology and societal institutions. I think these approaches invite discussion and reflection about alternative futures, and in so doing, maybe they can help raise a critical biometric consciousness.

So that's my really, really long journey of motivation — I'm very passionate about this. Now let's get into my design research that tries to imagine alternatives with biodata. Before diving into projects in more detail, here's a quick overview of the kind of design futuring work I do. Much of my work looks at fostering open-ended emotional interpretation with highly ambiguous biodata displays. Along with collaborators, I've worked on fabric that gradually shifts colors in response to biodata in real time, and on detecting laughter from conversation and representing it as white chocolate keepsake "soundbites," among other projects. Across these projects, I try to get a handle on how people have tangible, embodied emotional experiences with biodata. In a sense, these are cutting-edge real-time data displays, but people aren't looking at a time-series graph trying to find patterns. They're touching soft handwoven textiles between their fingers, remembering a conversation with their dad on the phone last week when they had a good laugh, or noticing their friend's shirt and asking them how they're feeling.

So first we'll look at some of my prior work on how people make emotional meaning with biodata. With collaborators, I used color-changing fabric to create real-time data displays. This is different from typical screen-based displays: instead of emitting light, the fabric gradually changes color. It's slow, subtle, low resolution, and it looks and feels just like regular fabric. This video is sped up many times; it's a very slow color change. Using color-changing fabric as a data display can potentially provide a very different experience with data, and foster very different interpretations of data, compared to, say, screens.

Building on that, I designed Hint, a t-shirt that changes color in response to the wearer's skin conductance. When the wearer's skin conductance spikes, small white rectangles gradually appear. These micro-fluctuations in skin conductance are associated with various kinds of arousal, whether you're feeling stressed or happily excited. For example, if you're giving a talk and feeling nervous, you might get sweaty palms, and that shows up as increased skin conductance. Or if you're just having a great conversation with a friend and feeling happily engaged, you might also get an increase in skin conductance. So this display is inherently ambiguous.
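[Editor's note: as an illustration of the kind of signal processing a display like Hint might rely on — a hedged sketch, not the project's actual implementation — here is a minimal skin conductance response detector that flags rapid rises in the signal. The sampling rate and threshold are assumptions.]

```python
import numpy as np

# Hedged sketch: detect skin conductance responses (SCRs) as rapid rises in an
# electrodermal activity (EDA) trace. A generic textbook-style approach, not
# Hint's actual implementation; the 0.05 uS/s threshold is an assumption.

def detect_scr_onsets(eda: np.ndarray, fs: float, rise_threshold: float = 0.05) -> np.ndarray:
    """Return sample indices where the EDA rise rate (uS/s) crosses the threshold."""
    rise_rate = np.gradient(eda) * fs  # derivative in microsiemens per second
    crossings = (rise_rate[1:] >= rise_threshold) & (rise_rate[:-1] < rise_threshold)
    return np.flatnonzero(crossings) + 1

# Each detected onset could trigger one white rectangle to gradually appear.
fs = 8.0                                         # 8 Hz sampling, an assumption
t = np.arange(0, 30, 1 / fs)
eda = 2.0 + 0.3 * np.exp(-((t - 12) ** 2) / 2)   # synthetic spike around t = 12 s
for onset in detect_scr_onsets(eda, fs):
    print(f"SCR onset at t = {onset / fs:.1f} s -> fade in one rectangle")
```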
This kind of biodata can indicate both positive and negative kinds of emotional excitement. I leveraged this ambiguity as a resource for design, to invite open-ended interpretation. Hint's color-changing pattern is intentionally ambiguous: it responds to skin conductance by gradually changing colors, so it indicates that a moment of emotional excitement just happened, but it leaves the rest up for interpretation. Pairs of friends wore the shirts while having a conversation and socially interpreted the biodata display in a variety of ways, such as feeling anxious, embarrassed, or passionate while debating. As we saw before, a lot of work with biosensing tries to extract authoritative insights from these signals; for example, Feel provides a few discrete categories of emotion. In contrast to that, I think Hint provides something more like a biocue, or a social cue: its meaning is emergent in context, it can have multiple meanings, and above all, that meaning is interpreted and created by the humans paying attention. So there's interesting potential for ambiguous displays.

Building on that, I made the displays more robust and studied them with pairs of friends wearing the shirts as they went about their daily lives. This one is called Ripple, because the color-changing threads make a sort of ripple effect as they change color one by one. A major design challenge with Ripple was that we wanted to study emotional interpretation in the varied contexts of daily life, so we made the technology more robust and revised it with a printed circuit board. That's why participants could wear it while biking, doing the dishes, or petting the cat, encountering the display throughout many varied contexts of their daily lives.

As expected, sometimes the display prompted pairs to reflect on their feelings together. For example, one of the spouses participating in the study was moving in and out of the country, and the couple noticed her display's colors had changed and brought it into conversation. She attributed the change in her t-shirt display to stress about the upcoming move, and it prompted them to talk about how her partner could best respond to support her during that stressful time. In contrast to individually focused emotional biosensing technologies, Ripple prompted social interpretation and social support.

Yet there were also tensions arising in the study. Participants compared their displays and wanted them to change about the same amount. Over lunch, one couple noticed that his shirt kept changing and hers did not change color; he complained that his display had been changing colors non-stop throughout lunch. I think this suggests a desire for their displays to change in similar amounts, perhaps to signal a shared emotional experience over lunch. That raised a series of questions for me about what was going on here. As a designer, I tried really hard to move away from any identifiable display of emotion — Ripple's changing colors are highly ambiguous — yet people were still making comparisons about amounts of emotion. So at this point in the study, I started to get concerned. A tension that emerged was that participants seemed to map a lack of display response to a lack of emotion. Some participants expressed concern that their shirt not responding might indicate that they weren't emotional as a person, or that they weren't feeling anything.
In one case, a participant worried that she was broken and unfeeling. These participants' assumptions — worrying about why the display did not change colors and interpreting this as meaning they were not having feelings at that moment — opened up insecurities, where they worried about how they should be feeling, believing that the display was really revealing their true feelings or lack thereof. So why didn't they just dismiss those displays as flat-out wrong? Other participants did: they came back and told me the data display was totally unreliable, that it didn't mean anything — what was this study really about? Honestly, I was glad that they felt able to disagree with and dismiss the data display and trust their own felt feelings and lived experience. But these other participants seemed to believe that the data display was showing the truth of their emotions, even when it felt wrong to them. I think this might be related to sociotechnical imaginaries about data as an authoritative way of knowing: data as somehow more trustworthy than our own feelings and our own bodies. The authority invested in technology and the data it produces makes it all the more important to carefully consider how the design of these technologies influences emotional interpretation. This is when I really started thinking about biopower, or who or what has the authority to produce knowledge about feelings, bodies, health, and life, and biopolitics as the contestations, the disagreements, about who or what should have that authority. In this case, a lot of authority was granted to my biosensing technology and its data.

Okay, let's take a step back and reflect on some of the ways that these designs reimagine emotion AI. Instead of presenting biodata on a screen, biodata is presented as subtle, aesthetic color change in fabric and clothing. Instead of precision, it's highly ambiguous, to invite open-ended interpretation. Instead of an individual display, it invites social interpretation. Instead of treating data as digital or abstract, data is represented in a very material way, entangled with embodied and social contexts. And instead of seeking to detect categories of emotion, these displays mediate perception, and participants decide the meaning of the display.

Now, coming out of that study, I was honestly pretty concerned about my own ethics as a design researcher, because I felt like I'd built this thing and it made people feel really insecure. So I took a step back and did some soul-searching. In my next project, I asked: what if biodata supported affirmation over insight? I wanted it to have a more explicitly affirmative message. And with this project, I shifted away from thinking about individually worn wearable biosensors and toward thinking about biosensing in the smart city, to engage some of the concerns with emotion AI for surveillance that we were talking about earlier. By "smart city," I mean the collection of narratives, proposals, and ideas around using embedded sensors and data in a cityscape. This shared vision for the future can be thought of as a sociotechnical imaginary. The vision, as it's usually presented: with more sensors, we get more data and more insights; more and more aspects of human behavior and daily life can be studied; and by understanding people more clearly and transparently through these computational systems, cities can be made safer and we can help people be more productive. And maybe that sounds good at first glance, right?
Yet, as we saw before, when biodata is used as a tool for surveillance, things can easily go awry. Surveillance is not safety. Surveillance is not applied to everyone equally. There are many, many examples of seemingly objective, data-driven insights that in fact reinforce societal bias against marginalized groups. For example, as I mentioned earlier, Clearview AI facial recognition is used by some police departments to try to find suspects. It's well established that facial recognition technologies are less accurate for people with darker skin, and Clearview's use has contributed to wrongful arrests of Black people. And their motto, "computer vision for a safer world"? Apparently a safer world for some people and not for others. Really not the smart city I was hoping for.

Reflecting on these smart city visions, what I'm arguing is that we need to reimagine what smart urbanism means, and offer counter-narratives that open up space for alternative values, designs, and models. This project joins others in taking up that call and reimagining the role of sensors and data in the city. One everyday example I like: a bench by the sidewalk. You can just sit for a while, for free, without paying any money or giving up any data. You don't have to exercise or be productive if you don't want to; you can take a breather, slow down. This was the starting point for me in thinking about affirmation as an alternative design goal. I think affirmation does happen in public space sometimes. Benches affirm a need for rest. A stranger on the street provides directions. Maybe a bus driver stops when they see someone running. Of course, these daily affirmations of one's existence and needs are unequally experienced depending on privilege. But I draw design inspiration from these mundane moments of affirmation, where one's needs and social condition are met with support rather than judgment. The key reworkings of smart city ideas here: instead of surveillance, or sensing from above, people can see and sense each other at street level; and instead of emphasizing efficiency or exercise or activity, benches emphasize rest and slowing down.

Recently I learned about a project that frames this nicely: Green Chairs Not Green Lights. Project Green Light is a surveillance program run with the Detroit Police Department, streaming camera feeds to their headquarters. Green Chairs Not Green Lights is a counter-campaign that challenges that conflation between surveillance and safety. It's helping people resist this security-through-surveillance mindset, which tends to make marginalized community members less safe. The idea is to have community members out on the street, and I think we all have a sense that if your neighbors are hanging out on the street and recognize each other, it might be a little bit safer.

With all that in mind, to rework the role of sensors and data in smart cities toward an experience of affirmation, I designed the heart sounds bench, which amplifies the heart sounds of bench sitters and invites a peaceful moment of rest and listening. I really want to show you a demo, and we figured out that the best way to do this is for me to stop sharing my slides and start sharing a video; Anne will give me a heads-up if anything goes wrong. Thank you. It's a pretty unscripted demo video: I just invited a couple of people to try it out, and this is what they did. The actual setup is super simple: a traditional medical-grade stethoscope hooked up to a microphone, an amplifier, and then two speakers.
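[Editor's note: to make that chain concrete in software terms, here is a minimal sketch of a live audio passthrough — microphone in, amplified sound out, nothing recorded. This is an illustrative sketch, not the bench's actual build; the gain, sample rate, and use of the sounddevice library are assumptions.]

```python
import numpy as np
import sounddevice as sd

# Hedged sketch of the bench's audio chain: stethoscope mic -> amp -> speakers.
# No samples are stored or analyzed; audio passes straight through and is gone.

GAIN = 4.0  # software stand-in for the hardware amplifier (assumed value)

def passthrough(indata, outdata, frames, time, status):
    if status:
        print(status)  # report glitches like buffer under/overruns
    # Amplify, clip to the valid range, and play back immediately; keep no buffer.
    outdata[:] = np.clip(indata * GAIN, -1.0, 1.0)

# One full-duplex stream from the default input device to the default output.
with sd.Stream(samplerate=44100, channels=1, callback=passthrough):
    sd.sleep(60_000)  # listen for a minute, then stop; no data is saved
```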
The data's not saved, and it's displayed sonically, not visually. I'm happy to talk shop offline, but the technology is really nothing new. The focus of the project is the new embodied experiences with data that this design evokes, and how those can help reconfigure the role of data in smart cities. I invited pairs of strangers to try it together, for social experiences of hearing one's own heart sounds and another's. Many participants found listening to their own heart sounds to be a unique and compelling experience, and they found it a bit odd, but also nice, to hear another person's heart sounds. They often chatted while listening together, or just sat quietly listening to the heart sounds. One of the most surprising findings was that many participants described feeling a sense of shared life energy, encompassing themselves, their study partner, and people across the world. One participant described it as "a nice reminder of what's pulsing through everybody. It's nice to be able to hear somebody else's... it just makes me that much more aware that somebody else is living and breathing. And it's really sweet to hear somebody else's life force through them." She talks about appreciating hearing her study partner's heart sounds, and takes that as a reminder of what's pulsing through everybody. Other participants said similar things: they talked about hearts beating together across the world, or feeling more connected to the shared experience of being alive.

Let's be clear: it's factually obvious that she and her study partner are both alive. We're not getting any data-driven insights here. It's more about taking a moment to appreciate that shared living and breathing, and that seemed to be a positive experience for many.

As for criticism, some participants reflected on the lack of insight from the heart sounds. Heart sounds are a kind of data that are pretty opaque for many people, so there's some opacity at work here. A doctor or medical student can get medical insights with a stethoscope, but the audio of the heart sounds from the bench wasn't really suited for that. For most people, the heart sounds were opaque: they didn't feel that they gained any specific insight about the other person by hearing their heart. And yet there was this sense of connection, and maybe people seemed to accept and appreciate one another just a little bit more.

This is quite different from the role of sensing and data we saw earlier, where sensors and data were used to get clear insight. Instead of that clarity or transparency, we have opacity. Instead of using sensors and data to know humans more and more transparently — to understand, categorize, and analyze every part of human behavior — the heart sounds bench offers a fairly opaque stream of data that doesn't yield insight or clarity. That opacity pushes back against transparency, and it can help reimagine the role of sensors and data in smart cities. Opacity can be a way to challenge and rework ways of knowing: what we can, or should, claim to be able to know about people through data. To be clear, there's work on the opacity of algorithms, and I think that's great; there's a lot of value in critiquing the opacity of algorithms. But here I'm talking about something else: the opacity of humans.
In thinking about opacity, I'm drawing from Édouard Glissant, a postcolonial philosopher who advocated for Creole languages from his position in Martinique, then a French colony. Glissant analyzed language as a way of knowing and the limits of translation, but I think his argument is also useful for thinking about how we try to understand people via data. Paraphrasing him here: if we examine the process of "understanding" people and ideas from the perspective of Western thought, we discover that at its basis is the requirement for transparency. "In order to understand and thus accept you, I have to measure your solidity with the ideal scale providing me with grounds to make comparisons and, perhaps, judgments. I have to reduce." I relate this to emotion AI: data-driven categories and models often try to understand people by measuring them against an ideal scale embedded in a computational model, to make comparisons and judgments. Data-driven categories have a tendency to reduce people to categorical norms, and something is lost in that translation. After outlining the problems of this transparency, Glissant clamors "for the right to opacity for everyone." Opacity pushes back against the transparency of this reductive way of knowing. It's much more respectful to acknowledge that we can't entirely understand others, right? Smart city visions, instead of pushing for this transparency, of trying to know and model everyone perfectly, could instead lean into this acknowledgment of difference, and appreciation across difference.

We've covered a lot of ground, so let's take a step back and synthesize how the heart sounds bench helps reimagine smart city visions. Instead of an emphasis on efficiency, the heart sounds bench invites people to sit still and rest. Instead of surveillance, or sensing from above, there's this person-to-person sense of respectful intimacy. Instead of data analysis, it's about listening to the live sounds of your own and another's body. And instead of some clear-cut actionable insight, what occurs is an experience of affirmation, a sort of emotional insight into something we already logically know — that we're alive — but this time with feeling, and tempered with an acknowledgment of opacity, the limitations of what we can really know about another person from their biodata. It's about appreciating someone without needing to understand them perfectly. To reimagine the role of sensors and data in smart cities, I'm trying to rework things at every level of the design, from what counts as data, to what counts as ways of knowing, to how much confidence and faith we place in it.

What I've been dancing around this whole time is that these are issues of inequality and power — of biopower and biopolitics. Biopower: who or what has the authority to produce knowledge about feelings, bodies, health, and life. Biopolitics: contestations about who or what should have that power. Critics of algorithmic systems argue that algorithmic systems have been granted too much power, that they're treated with too much authority as ways of knowing the world. They suggest giving them less power through more regulation, or by designing human-in-the-loop systems with room for human interpretation. With my work on color-changing fabric displays, I was unpleasantly surprised to find out how much authority some participants invested in the biodata display. They seemed to trust the data display even when it was aggravating their insecurity.
And with emotional biosensing for surveillance, we saw that surveillance is about seeing from an unequal position of power within an oppressive system. Emotion AI for surveillance is being granted too much power, too much authority, as a way of yielding insight about people's interior psychology and emotional states. Emotion AI and biometric surveillance disproportionately harm marginalized people and can extend patterns of historical and ongoing systemic oppression. Throughout this talk, I've been trying, as a design researcher, to find ways that emotion AI and emotional biosensing, instead of being oppressive, can affirm and support the diversity, richness, and irreducible complexity of human experiences.

So: how might we imagine an affirmative biopolitics with data? My work imagines alternatives. What if data affirmed people's lived, embodied emotional experiences? What if data-driven insights were contestable, easier to interrogate and correct? There's enriching work on the importance of contestability in algorithmic systems. What if data-driven insights were more humble and respected the complexity of human experiences and social differences? There's also enriching work calling for humility in algorithmic insights. I believe design futuring and critically oriented design research can help in imagining alternative futures with emotional biosensing and emotion AI. How can design researchers working with biodata help resist biometric surveillance, contribute to elevating a critical biometric consciousness, and foster liberatory imagination of alternative futures with biodata? That's about it. I'm just ending on some really big questions for discussion, and thank you so much for coming. I'm going to stop sharing the screen so I can better see you all.

Thank you, that was wonderful. There's a lot of discussion in the chat, so you obviously inspired a lot of people to think through things. I don't know if there's a way to save the chat, but there's some really interesting stuff in there. We did get a couple of questions. If anybody else has questions, please feel free to either post them in the chat or use the Q&A. The first question is: is there any human override to Hint? If you're talking to someone and the shirt shows fear or anxiety, but you're thinking about an unrelated event, do you just have to explain that the conversation isn't making you feel that way? Or could it be set back to a neutral color?

Great question. Hint and Ripple are kind of two iterations of the same thing. For Hint, there was no override, and people wore it for about 45 minutes in one lab-style setting. For Ripple, we actually did build in an override: there was a way participants could heat the collar and force it to change, so they could fake an emotional response to things. But nobody used it, across all of the studies. When I asked them about it, they said they wanted to see what the real data was, which I thought was interesting. And yeah, I wanted people to be able to just deny that the data meant anything. So I explained that the display also responds to heat, so they can say, "oh, it's warm in here," or "oh, the sun is shining on my shirt." Something I kind of glossed over this time is that the color changes come from thermochromic pigments that change in response to temperature. I triggered them to change using conductive threads that generate heat, but the pigments also change in response to ambient temperature itself, so there could be any number of reasons why the display was changing.
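[Editor's note: to give a rough sense of how that triggering works, here is a minimal sketch of driving one heating thread when a skin conductance response is detected. It's purely illustrative: the duty cycles, timings, and ~31 C activation point are assumptions, and the driver function is a stub, not Hint or Ripple's actual firmware.]

```python
import time

# Hedged sketch: when a skin conductance response is detected, briefly run
# current through one conductive heating thread so the thermochromic pigment
# above it shifts color, then let it cool back down.

def set_heater_duty(thread_id: int, duty: float) -> None:
    """Stub driver: on real hardware this would set a PWM pin's duty cycle."""
    print(f"thread {thread_id}: duty {duty:.0%}")

def trigger_color_change(thread_id: int, heat_seconds: float = 8.0) -> None:
    set_heater_duty(thread_id, 0.6)   # gentle heat; many pigments clear near ~31 C
    time.sleep(heat_seconds)          # hold until the color shift is visible
    set_heater_duty(thread_id, 0.0)   # power off and let it cool; ambient warmth
                                      # can trigger the same change, which is one
                                      # source of the display's ambiguity

# e.g., called once per detected skin conductance response:
trigger_color_change(thread_id=3)
```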
So yeah, it was an incredibly ambiguous display. Reviewers on that paper were like, "you're displaying garbage; it's too ambiguous, it doesn't mean anything." And I was like, yeah, I agree it's super ambiguous, but people still thought it was really authoritative. Like, what's up with that? I don't know if that quite gets to your question.

Right, awesome. All right, another question, and they say they don't mean this pejoratively: how is this different from a toy? I asked for a little bit of clarification, and they're talking specifically about the heartbeat bench, which seems like a fun thing to do together, for joy.

Yeah, I think it could be a great toy. I mean, toy design is its own discipline, and I wouldn't knock that. I've also deployed something similar in a museum context, so people could just listen to their own heart sounds, and it was kind of broadcast to the broader museum. It's definitely a tiny and very playful, joyful intervention — partly an emotional response to what I saw as the surveillance and privatization of public space trying to impose their values, when I wanted public space to feel more vibrant and joyful, maybe even playful. That said, I definitely don't think cities should go installing a lot of heart sounds benches; I don't think that would really fix anything. But it's a toy and it's a concept: hey, what if we thought of this as data? What if this were the smart city technology we cared about, instead of surveillance, criminalization, and privatization of space? I think sometimes play and joy are good things to work with.

All right, I'm going through the chat and finding questions. Like I said, there's a lot of discussion, so if I missed something that you meant as a question, please feel free to repeat it. Let's see. This one says: I wonder how good the ground truth data these systems are trained on is — this was back when you were talking about products that use emotion AI — I can imagine the individual differences in how emotions are expressed, in ways sensors can detect, might make these really susceptible to false identifications of emotion.

Yeah, huge issue. Most AI is only as good as its training data, so with a lot of these systems you have to be very careful and critical about what they were trained on. For example, on the facial recognition issue, Joy Buolamwini's groundbreaking work showed there's intersectional race and gender bias in facial recognition, and a lot of that is due to the fact that the training datasets were not representative of different skin tones, et cetera. As for these other companies, I don't specifically know. Clearview AI claims they scraped their data from all across the Internet, and they've been called out; they weren't really allowed to use those photos that people had posted on Facebook. So yeah, totally, training data is a really big part of the issue for these systems.
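[Editor's note: to illustrate the kind of audit Buolamwini's work points to — a minimal sketch in that spirit, not her methodology, with purely hypothetical records and group labels — one can disaggregate a system's accuracy by subgroup rather than reporting a single overall number.]

```python
from collections import defaultdict

# Hedged sketch of a disaggregated accuracy audit: overall accuracy can hide
# large gaps between subgroups. All records and labels below are hypothetical.

records = [  # (subgroup, true_label, predicted_label)
    ("darker_female", "match", "match"), ("darker_female", "match", "no_match"),
    ("darker_male", "match", "match"), ("darker_male", "no_match", "no_match"),
    ("lighter_female", "match", "match"), ("lighter_male", "match", "match"),
]

hits, totals = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    hits[group] += (truth == pred)  # count correct predictions per subgroup

for group in sorted(totals):
    acc = hits[group] / totals[group]
    print(f"{group:15s} accuracy {acc:.0%} (n={totals[group]})")
# A large spread across groups is the red flag, even if overall accuracy looks fine.
```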
I could also look more into how they're training them. Sometimes I wonder if they keep training as they go along, with more user data coming in.

There's a follow-up question, actually, to the one about the heart sounds bench: do you think the average user will understand the conceptual aspects of the bench?

Yeah, I think that raises a really good point about the methods used in design research — actually, a really good critique of the methods used in design research. What do our participants get out of trying out these weird things that we build? Why did they choose to participate? What do they get out of it? Is that engagement equitable? To what extent are they coming along for the ride of the conceptual things we're exploring, and to what extent are they just showing up to have a fun half-hour where they try out something new? I think in that study it probably varied. A lot of people wanted to talk more about data and public space. But yeah, when I ran the study, I didn't go in there saying, "hey, this is really a critical surveillance studies project"; I was more interested in probing the experiences these people had. Beyond that individual study, I think there are really exciting new directions — or maybe things HCI should have been doing already and is just starting to figure out — around bringing participants back into more of the overall concept: bringing research results back to the field and sharing them with participants, or doing more participatory work where the concepts are co-generated.

That's awesome. So, we're running out of time, but there is one more question, and then another question just came in, so we'll do this one, and then maybe if people stay later we'll ask the other. What is the overarching goal, according to you, of emotion AI? What, according to you, would be the ideal impact of such tech on human lives?

For me, it would be about using data and AI as a form of appreciation and respectful engagement. And it can also be very creatively generative. If I understand it right, most engagement with AI is trying to get accurate insights about the truth. But for some of these questions, like human emotion and what people are going to do in the future, I don't think it's possible to perfectly predict that. I don't know; I'm kind of in a weird space of critiquing emotion AI fairly heavily but then also working with it. I should have a more overarching, inspiring goal at the end next time I do this. Thanks for the question.

It's after 1:20, so I know some people need to go, but I'm going to ask this last question, since there's only one more. More than a toy, would you consider the heartbeat bench to be an instrument, like a musical instrument?

I have played it as a musical instrument, and I have made recordings through it. It's essentially a feedback instrument: there's a felt blanket over the arm that keeps it from feeding back sometimes. So yeah, if you want to talk experimental music, yes. But it's also a measurement instrument, which raises questions about validity of information, validity of measurement, what counts as data — that's another angle we could go down if you want to talk more about that later, too.

Awesome. Well, thank you so much for your time, and thanks everyone for the great questions.
And if you have a chance, it's definitely worth checking out the chat, because people had some really interesting insights in there. So thank you everyone, and I will see you next week. Thanks everyone.