Hey everyone, welcome to the GVU Brown Bag. We're joined today by possibly up to 80 admitted MS-HCI students, either joining the brown bag live or watching the recording later, and we want to extend a big welcome to them. Georgia Tech folks, if you are so inclined, put a welcome message in the chat here on BlueJeans. Just a little background for those who are joining us for the first time: the brown bag is a seminar series that's been around since the GVU Center was started back in the early 1990s, and it's a seminar that casts a wide net, featuring speakers both from campus and from outside Georgia Tech who work in the area of how people interact with technology. Today we have a special treat, because it's our admitted student day for the MS-HCI program. We have eight current MS-HCI students who are going to share their research, and in the five projects presented you'll see a wide variety of domains and project types, and a wide variety of student backgrounds, but these all fall under the umbrella of HCI. Briefly, the ground rules: each presentation is going to be around seven or eight minutes, and we'll go from one right into the next. If you have any questions about a particular presentation, just put them in the chat, and at the end, assuming we have a little bit of time left, we'll use that time to answer questions. With that, I'm going to turn things over to Gabriel Britain, who is going to talk about his master's project on captioning group conversations on smart glasses for people who are deaf or hard of hearing. So take it away, Gabriel. Let's see, first time with new technology; let me do the classic thing and ask: everybody can see my screen, right? Cool. So like Dr. Henneman said, my name is Gabriel Britain. I'm a second-year MS-HCI student, and I come from Fort Worth, Texas.
I graduated from Texas A&M University in May 2020, the year of COVID, with a bachelor's in computer science. Prior to coming to this program, I worked as a software engineering intern at State Farm and Google. The reason I came to the MS-HCI program in the first place was that I wanted to build technology I thought was important and to understand how people interact with technology. Specifically, I was interested in mobile and ubiquitous computing, which was actually the first class I enrolled in while I was here. I'm going to present a very, very high-level overview of my master's project work today, so let's dive right in. Over the course of the past year, I've worked with Dr. Thad Starner, who is a faculty member here and an employee at Google, to understand how to present real-time captions on smart glasses to people who are deaf or hard of hearing. My project aims to answer questions like: do people who are deaf or hard of hearing prefer seeing captions anchored to people in the real world, or do they prefer to see those captions directly in front of them, always in their vision? How can we help people who are deaf or hard of hearing realistically identify which person is speaking in a conversation? And how much do people who are deaf or hard of hearing get out of a conversation captioned this way? Full disclosure: I'm not the first person to ever come up with the idea of putting captions on smart glasses. There's a lot of research out there that does super cool work with advanced AR technology to show captions to people who are deaf or hard of hearing. The problem is that this research focuses on presenting captions that are really complex, which requires really complex devices, usually extremely heavy headsets like the one I'm holding in my hand. Not sure if you can see both at the same time, but this is the HoloLens 2; it weighs about two pounds.
When you put it on your head, you can probably wear it for about 30 to 45 minutes before your neck starts hurting. It gets really hot and heavy, and it's pretty noticeable, right? One thing that's been made abundantly clear in previous research, as well as my own, is that people who use these kinds of assistive technologies don't want to attract unwanted attention to themselves for using those technologies in the first place. So how can we show people captions on devices that are more discreet and lightweight, but inherently less powerful? There's another problem with that. Most of these captioning experiences rely on artificial intelligence in some capacity, but using artificial intelligence is a really complicated task. From a technical standpoint, the speech-to-text systems that create these captions are pretty good, but they're not perfect. So, as a one-man team, how am I going to distinguish between people's frustrations with an AI that's not perfect and people's frustrations with my design? The easy solution is: just don't use the AI. Instead, in my work, I pretty much script everything and make it feel real. A lot of my day-to-day is figuring out how to make these simulations feel as real as possible, and that requires scripting pretty much everything in the group conversations: body language, caption timing, it's all predetermined, and all of it is organized and orchestrated so that it feels real. This is a picture of our prototype. I basically recorded and captioned a verbal conversation between four people, built a prototype on four different monitors that could roughly identify where a person was looking, and then matched everything together to create a simulated group conversation that I had complete control over. I can change how the captions look, how often they appear, what size they are, what color they are. I have total and complete control.
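Because the whole conversation is scripted rather than recognized live, the caption pipeline can be as simple as replaying a timestamped transcript. Here is a minimal, hypothetical sketch of that Wizard-of-Oz idea; the script contents, timings, and function names are all invented for illustration and are not the project's actual code:

```python
import time

# Hypothetical pre-scripted conversation: (seconds_from_start, speaker, caption).
# In the study described above, everything (body language, caption timing)
# is predetermined like this, so no speech recognition is needed at all.
SCRIPT = [
    (0.0, "A", "Hey, did everyone catch the game last night?"),
    (2.5, "B", "I missed it, what happened?"),
    (4.0, "C", "We won in overtime!"),
]

def captions_due(script, elapsed_seconds):
    """Return (speaker, caption) for every line whose scripted time has passed."""
    return [(who, text) for t, who, text in script if t <= elapsed_seconds]

def play(script, render=print):
    """Replay the scripted conversation in real time, rendering each caption."""
    start = time.time()
    for t, who, text in script:
        remaining = t - (time.time() - start)
        if remaining > 0:
            time.sleep(remaining)  # wait until this caption's scripted moment
        render(f"[{who}] {text}")
```

Because playback is deterministic, the experimenter can vary caption size, color, or placement between conditions without any speech-recognition errors leaking into the comparison.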
So this is roughly, holistically, what it looks like. This is the full setup, with a very generous model who decided to spend some of his time modeling for us, observing these four people in their group conversation. For a little more detail: these are the smart glasses we're using. They're a modified pair of Google Glass with a small microchip on the side that powers our head-tracking software. Notice that it's a little smaller than the device I held up earlier, the big heavy HoloLens 2. And this is a screenshot of what the captions look like when you're wearing the glasses. One thing to note, which doesn't translate very well to a PowerPoint, is that everything that's black in this image is actually transparent on the glasses' lens, so all you'll really see are these green letters in the bottom right-hand corner of your vision. Once we built this whole prototype and got everything ready, we tested our work with 12 people who are deaf or hard of hearing, showing them four different ways of captioning a group conversation, and then we evaluated how hard they thought the task was and gathered their feedback on how to improve both the study and the appearance of the captions. We immediately got a lot of feedback on how things worked and what could be improved, so based on that feedback we've updated our experiment design and the prototype, and we're currently testing the prototype with more participants. I just wanted to give a brief glimpse of one part of our new prototype, which is that the glasses look a lot better. This is another very generous model; we call them Head. Head is modeling our second iteration of the Google Glass, which is a way more discreet frame. Gone are the wires and microchips; here are some regular-looking glasses frames with a small, strange electronic device on the front.
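On the head-tracking software mentioned above: a rough, hedged sketch of what it has to do is map a yaw reading from the glasses to whichever of the four monitors (and therefore which simulated speaker) the wearer is facing. The angles, names, and tolerance below are invented for illustration, not the actual Glass firmware:

```python
# Hypothetical layout: four monitors spread across the wearer's field of view,
# each at a known yaw angle in degrees relative to straight ahead.
MONITOR_YAWS = {
    "speaker_1": -45.0,
    "speaker_2": -15.0,
    "speaker_3": 15.0,
    "speaker_4": 45.0,
}

def facing_speaker(yaw_degrees, monitors=MONITOR_YAWS, tolerance=15.0):
    """Return the speaker whose monitor is closest to the current head yaw,
    or None if the wearer is not looking near any monitor."""
    name, center = min(monitors.items(), key=lambda kv: abs(kv[1] - yaw_degrees))
    return name if abs(center - yaw_degrees) <= tolerance else None
```

Even this crude nearest-angle rule is enough to drive a study condition where captions are anchored to whichever person the participant looks at.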
It's still not as discreet as I'd like it to be, but I'm satisfied enough with it for now. We are currently running tests with people who are deaf or hard of hearing over the next week, and we'll be doing data analysis over the rest of the semester. So, just some high-level takeaways. I've been working on this project for a year; you could argue that I started the proto version of this project in spring 2021, and I've learned a lot. I had never done any form of accessibility work before. I'm very passionate about augmented reality technology; that's pretty much part of the reason I came here, and combining the two areas has been a very challenging experience, but also extremely satisfying. I've worked closely with a population I was completely unfamiliar with before. Like I said, I'd never done accessibility research, so interacting with people who have completely different needs than my own, in settings I would never have anticipated being difficult for them, was a very enlightening experience. I'm also fairly certain this work helped me land my full-time job, though I don't have any proof of that. Other things I learned: I came to this program from a pretty technical background. I got my BS in computer science, so I came in with a lot of programming knowledge and not a whole lot of UX knowledge. But in order to build this prototype, I had to learn technical things outside of what I already knew, while also running the iterative design process and doing user studies, which was a lot to juggle at once. I also learned that hardware-based prototypes are very, very slow to iterate on, whereas software-based prototypes are a little quicker to change; hardware prototypes require an investment of time, whereas with software you can move things around a lot more easily.
I also learned how to collaborate with people in industry who are very, very well-versed in their own domains but completely unfamiliar with yours. The last two things, which I think are the coolest from working on this project: one was hearing somebody say out loud that if they could buy the prototype they had been experiencing, they would. They said, "I don't know how much this could cost, but I would buy it here and now if I could." The other thing I learned is that messing up is really part of the process; it's a lot bigger a part than you'd like it to be. But through those failures, you learn, you iterate, and your design improves. And with that, I'll hand it back. Thanks so much, Gabriel, that was great. Let's go on to our next presentation, facilitating self-management practices in type 2 diabetes patients, with Jason and his teammate. Can everyone hear me okay? We can start. Everyone, welcome to our GVU Brown Bag. I'm Jason. I'm a second year here at Georgia Tech in the HCI program, in the psychology track, specializing in UX research. I'm originally from the Bay Area in California. For my undergrad I stayed in California, at UC Santa Barbara, and graduated in 2020. While I was there, I majored in biopsychology, so I had the opportunity to work in a variety of different psychology labs. But I wanted something more hands-on, more creative, and that's how I transitioned to HCI and into the UX industry, and a little bit of why I'm here. Hi, I'm also a second-year MS-HCI student. I'm in the interactive computing track, specializing in UX design. I'm also from the Bay Area in California, and for my undergrad I went to UC Berkeley. I came straight to Georgia Tech after I graduated in 2020. I majored in cognitive science, so basically I became really interested in how people think and how we can design technology to align with those needs.
So basically it was my passion for the intersection of people and technology, and I really wanted to apply it to a design space. We're going to go over our project, but before that, we just wanted to tell you about the four themes we focus on while designing and researching: empathy, innovation, teamwork, and dedication. As you'll see, we really employed these in the research and design of our master's project. We started it last March, in 2021, and will be continuing it until next month. It's a two-person project: Jason is the primary UX researcher and I'm the primary UX designer. We're working under Dr. Rosario, a professor here at Georgia Tech who is very involved in the HCI community and the health space. So our project, which Jason will go into in more depth, is on self-management practices in low-SES diabetes patients. Basically, what we're trying to do is design a technological intervention for diabetes patients who come from underserved communities. We're doing this in partnership with Emory and Grady Hospitals, both local hospitals that have served as the primary source of our research. Yeah, so I'm going to go into a little background on type 2 diabetes. About 10% of the US population suffers from diabetes, and of that 10%, about 90% suffer from type 2 diabetes specifically. Self-management practices, such as having a healthy diet and getting physical exercise, are really important for managing this condition. With regard to health literacy in diabetes patients, about 43% of the US is functionally illiterate, meaning they have trouble grasping complex health information, and this shows a lack of inclusiveness in medical resources for the general population of type 2 diabetes patients. Furthermore, there are shortcomings in the current diabetes apps that are out there.
The most popular self-management apps require fairly advanced health literacy skills, whether that's because of complex wording or complex calculations. And this leads into the problem statement for our master's project: how might we use technology to facilitate self-management practices in type 2 diabetes patients who come from low socioeconomic backgrounds? We had two primary user and stakeholder groups in this project. The first was healthcare professionals, who served as the source of our research insights, giving us information on what the patients needed in terms of their self-management skills. These came from Emory and Grady Hospital, and they span different professions, such as physicians, nutritionists, and diabetes educators. Our second user group was type 2 diabetes patients, of course, and they would be the primary users of our eventual design. We recruited them through Emory and Grady Hospital, via those same healthcare professionals. Our process was pretty user-centered design; it was the very conventional HCI process we've learned since the beginning of the program. We began with research: we conducted a literature review of existing papers on diabetes and self-management in communities with low health literacy, we then did a competitive analysis of existing applications that aim to support users in self-managing diabetes, and we also conducted semi-structured interviews with our patients. We then went on to ideation, coming up with concepts and designs that we tested with healthcare professionals, which took us to the actual design. That involved not only wireframing and prototyping but also creating a design system that would prove accessible for our vulnerable population. And finally, what we're doing now is evaluating our mid-fidelity prototype. So I'm just going to go over a few of the research findings we had.
The first concerns healthcare professionals and self-management: we found that continuous follow-up and positive reinforcement are very important for patients who come from these low-SES communities. Our second finding had to do with patient challenges: we found that patients don't fully grasp the full impact diabetes has on their lives, or the consequences of not self-managing their diabetes. And our last finding concerned current technology shortcomings: we looked at the three or four most popular diabetes apps on the App Store and found that they are too information-heavy and require fairly in-depth knowledge of technology and health information. As I mentioned, after this research phase we did go on to ideation before design, but for time's sake we'll just take you through a few of the screens we've designed based on our research insights, to give you context for what we're designing. Keep in mind that these screens have been designed for a very specific population, requiring only basic knowledge, with very simple screens that surface just the most important features of the application. So here are a few screens. What we really emphasized here is accessibility, so accessible colors, and the design principle of simple reinforcement. In terms of visual design, we emphasized large fonts and icons and simple wording: instead of saying something more complicated than "medicine," we just say "medicine" or "exercise." We also adhere to conventional standards in applications. Other things are visual representations, and giving help and documentation to users throughout their entire journey through the application. Also, as Jason mentioned, a lot of existing applications are very complex; they have a lot of features that not many people even use at the end of the day.
So we abstracted just the main, important ones and based the application on those. Yes, so we had a bunch of takeaways from this project. In the interest of time, I'm only going to go over one that we thought was important: creating a project timeline with clear goals is very important. Especially since this project was very big and very long, it was crucial to have goals each week or each month in order to get to a place at the end of the project where you're proud of the product you created. The other learnings, which I won't go in depth on, are that recruitment is hard, and to seek continuous feedback throughout the entire project cycle. You can probably imagine that we gained a lot from this project, working with such a vulnerable population and unique user group, but I think the biggest gain was that we were able to finally fulfill our passion. Going into the program, we both knew that health was the field we wanted to focus on, and by starting general and going more specific into our chosen field, we were able to really conduct research and design in a field that was very rewarding for us. It kind of shows how well you can cater the HCI program to your needs and desires. Again, I won't go into the rest of them, just for time's sake, but you can look at them later if you want. So what's next for us? We are both seeking full-time positions: I'm seeking UX research positions, and my teammate is seeking full-time product design positions. We both hope to work in a healthy work environment with room to grow as we start out on our UX journeys. In terms of non-work life, we're both hoping to travel more, since this is post-COVID, to pick up new hobbies as we start full-time life, and of course to give back to the HCI community. Thank you so much for attending today.
We really appreciate you coming out to support us, and feel free to reach out to either Jason or me at any time for any sort of advice. Thank you. Thank you both so much, that was great, and I especially appreciated your last bullet about giving back to the HCI community; that's something we try to emphasize here, and it's good to hear. Okay, so these first two projects were master's projects. Our next project, Her Heart, with Ayana and Samira, comes from the project course Carrie Bruce and I are doing this semester, so it's a shorter-term project than the first two projects you saw and the last two you'll see. So I'll let you take it away. Okay, I'm going to go ahead and share my screen really quickly. All right, so hello everyone. I'm Ayana, and this is my partner. We're currently in a course, the HCI Project Studio, where we work with industry partners on a project. For this one we're working with Georgia CTSA and a conglomerate of other Georgia universities in the area on a project called Her Heart. All right, so, our team. Again, I'm Ayana, but everyone calls me Yana. I'm a first-year MS-HCI student in the IC track. I did my undergrad at Clemson, where I got a BS in Computer Engineering, and I came to Georgia Tech because I wanted to be more in the design room than the coding room, and to really focus on making design and technology more feasible and friendly for everyone, making it more accessible. And my name is Samira. I'm a first-year MS-HCI student as well, on the industrial design track. My background is industrial design, and I came to this program because I wanted to learn more about the technology side of things and figure out how I could merge technology with more physical products and create engaging experiences, particularly in the edtech space and things of that nature. These are the industry partners we're working with.
So we work most closely with Georgia CTSA and Emory. As for the problem space we're addressing with the Her Heart tool: what it basically is is a translation of the Healthy Heart Score that was originally created at Harvard. It calculates the risk of cardiovascular disease, specifically aimed at African-American female teens, and it also gives lifestyle recommendations so that as they grow into adulthood, they reduce their risk of cardiovascular disease. This problem space is really important because cardiovascular disease can really sneak up on you if you're not very cognizant of everything going on with it. It is something that can be prevented during the transition from adolescence to adulthood, so it's really good to catch it early on and create healthy habits. And the reason we're specifically aiming at Black female teens is that Black women are the ones most affected by cardiovascular disease. In one study done with over 300 young women, only 10% of them knew that it was the number one leading cause of death for women, and they also weren't sure how to figure out whether they were at risk. So we want to help them understand and calculate that risk. Yeah, like we were saying, this is a picture of what the Harvard Healthy Heart Score looks like and what it might give you. It gives a really good, calculated rundown of everything that goes into cardiovascular disease risk; we just want to make it a little more friendly, feasible, and accessible for young Black teenagers. So when we started this design process, we tackled it in the very straightforward manner that's been taught to us since we got to the program, which is discover, ideate, create, and test.
In the discover phase, we started off by just getting acclimated with the project, because it was already an existing project with Georgia CTSA and Emory before we hopped onto it. So we got some background information and did some research and competitive analysis before we jumped into brainstorming with the rest of the team and doing some participatory design activities with teenagers in the area. Like we were saying, some background research had already been done before we joined, to understand how teens view the Healthy Heart Score done by Harvard and to learn their motivations for carrying out a healthy lifestyle. So we looked at the existing interview notes that had already been recorded and synthesized, went through and analyzed them, and did some quick affinity maps so we could point out a couple of things we wanted to focus on. We found that the existing Healthy Heart Score was a little too tricky: the questions weren't laid out in comfortable language that teens could understand, so they weren't as confident in their answers. But they were still interested in learning their score, because this applies to their health and they'd like to be as on top of it as possible. So once we understood that we wanted to make a quiz that's a little more comfortable for teenagers, something engaging and easier for them to understand, we started looking at a lot of the trendy quizzes that are really popular among teenagers, like the Myers-Briggs, the Enneagram, and BuzzFeed quizzes. We decided to really focus on BuzzFeed quizzes because they're really fun, engaging, and pretty addictive; you can go down a rabbit hole of hours of the most random BuzzFeed quizzes. But we liked the aspects of them being very visual and focusing on one question at a time.
And they're in a very casual, comfortable language that's friendly and understandable for teenagers. After we had done this competitive analysis, we presented our findings to our industry partners, and we had a group brainstorming session with the CTSA partners. As part of this session, we used three different methods, spending five minutes on each: idea generation and idea picking, where the different colored sticky notes you can see are individual people's ideas, and we used stickers placed on the board to indicate which ideas we liked best and which ones we wanted to pursue; and an idea-stealing phase, which was taking someone else's idea that you didn't come up with yourself and quickly mocking up a design using pen, paper, and pictures off the internet within ten minutes or so. And then we shared our ideas. The goal of this was to come up with ideas to integrate into the current quiz that would make it more teen-friendly and BuzzFeed-style. In the rightmost images you can see an example of using images of food as the answer options to represent the different serving sizes we were asking about. We also thought the answer options were often overwhelming, so we could split them up into a yes/no sort of framing: "I love fruit" versus "I don't like fruit." You can see here how this directly impacted the quiz design. We realized pretty quickly, as we tried to implement these, that certain food groups can't be represented with just pictures, especially fiber: high-fiber items versus low-fiber items. So we decided to use teen-friendly answer options, and you can see here how we've changed the format of the quiz to be friendlier. We have photo examples of which foods are or aren't in the group, and teen-friendly language that we also use to reinforce good habits.
If they say they like these foods and eat them frequently, they get positive reinforcement, and if they say they don't, there's encouragement to try them or to improve in that area. We also got feedback through peer evaluations from our classmates in the sessions of the class this project is part of, and through a participatory design session with teens, where we presented them the original tool and these designs, got their feedback, and led them in a brainstorming session in which they presented us with their own ideas for what they thought would be great to see in the tool. Some of these ideas directly impacted the summary page in particular, which reports their score and the factors that contributed to it. We changed it from the image of something dangerous that we had initially to a heart, which communicates your heart health in a much more direct way: your future risk is slightly elevated because your diet isn't super great, and maybe your activity levels could be a little better, so improve those. This image was a better representation of that. I'm just going to quickly talk a little bit about our user evaluations. We're doing these evaluations in a hospital clinic: we go in, ask patients if they're open to testing our tool, have them go through it, and then do semi-structured interviews afterwards to get their feedback and see what they like and don't like. That's where we're at currently. And then we have a few reflections. We think we've learned that user feedback is really important for validating design decisions across iterations, because as we keep iterating on the design, it can get overwhelming and confusing to remember how we got there.
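As an aside on the quiz mechanics, the summary-page idea (teen-friendly answers turned into a score plus the factors that contributed to it) could be sketched like this. Everything here is invented for illustration: the answer wording, point values, and percentage scale are placeholders, not the Healthy Heart Score's actual weighting:

```python
# Hypothetical teen-friendly answers mapped to hypothetical point values
# (0-2, higher = healthier). The real tool's scoring comes from the
# Healthy Heart Score; this only illustrates the summary-page structure.
ANSWER_POINTS = {
    "love it, have it all the time": 2,
    "sometimes": 1,
    "not really my thing": 0,
}

def summarize(answers):
    """answers: {factor: teen_friendly_answer}.
    Returns (percent_score, factors that most need improvement)."""
    points = {factor: ANSWER_POINTS[a] for factor, a in answers.items()}
    percent = round(100 * sum(points.values()) / (2 * len(points)))
    needs_work = sorted(f for f, p in points.items() if p == 0)
    return percent, needs_work
```

A summary page can then show the overall percentage alongside the factors in `needs_work`, with encouraging language for each.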
So making sure we're documenting that feedback, and getting a good amount of feedback at each step to really validate the decisions we're making, is super important. Teens also require more prompting than adults to give feedback, so we have to ask more probing questions, make sure they're really open-ended, and be okay with asking them to elaborate a bit more on what they're saying. And the hospital setting requires brief and efficient evaluation methods: we're not allowed to interrupt the clinic's flow, so we have to wait for patients to do their paperwork, and we also have to finish our evaluation before they're called back. And with that, thank you. All right, thanks so much, that's a really interesting project. We're going to move on now to quite a different project, where a group of three students are working with Netflix on their master's project. It's all yours. Sounds great. Can you see my screen? Okay, great, so we'll get started. Hi everyone, welcome to the GVU Brown Bag. We're team Netflix, and we're here to talk about our master's project, which is focused on creating engagement around entertainment content beyond streaming. Before we dive into our project, let's start off with a few quick introductions. I'm a second-year MS-HCI student in the psychology track. I have a background in psychology and neuroscience, and I applied to this program because of the opportunity it presented to use rigorous research to influence technological products. After graduating from the program in May, I'll be working as a UX researcher at Vanguard. And I'll pass it on to Nihao for her introduction. Thanks. Hi, I'm Nihao; I'm also a second year in the program, and I'm in the IC track. Before coming to the program, I did my undergrad in biochemistry and computer science.
Then I worked as an engineer for a couple of years, but I really wanted to be more people-centric in my work, so here I am in the program. In this project I've also been focusing on the UX research side of things, and I'll be a full-time UX researcher after graduation. Thanks, Nihao. Hi everyone, I'm also a second-year student, in the IC track. Before coming to Georgia Tech, I did my bachelor's in electronics engineering in India, and then I came to Georgia Tech to pursue design, because I wanted to learn a lot more about how products are made and the thinking that goes into them, and to design things from a user-centric perspective. That's why I'm here. And once I graduate in May, I'll be in Boston working as a UX designer. Over to you. Awesome, thanks. A quick note on our advisory panel: we're being advised by our program director, Dr. Henneman, as well as a senior UX researcher and a product designer from Netflix. The key components of our problem statement are creating engagement around entertainment content in a way that allows people to connect with the content itself, as well as with like-minded friends and family. We chose this problem specifically because, during our short preliminary research phase, we didn't find any competitors in the space, and the problem hadn't been well researched, so we thought there would be great scope for research and innovation here. Moving on to our timeline: here, we've placed our timeline against the classic double diamond to illustrate our process. We completed the discover and define phases of our project through our research in the first semester, the fall semester of 2021, and this semester we're mainly focused on the development and delivery of our prototype. And with that, let's dive into the research phase.
First, we conducted a literature review comprising journal articles, blogs, and web articles, with the goal of understanding prior research in the shared entertainment space. Some quick takeaways from the literature review were that people wanted to experience shows in a social setting, and they also wanted customizability over when they view something with others versus when they view something by themselves. We took insights from the literature review and came up with our interview protocol. We conducted interviews with 13 Netflix users, and among the insights from those interviews, we found that people were very interested in learning about the creative decisions that were made in the creation of a show or movie. Once again, the social interaction component was very important in elevating a shared viewing experience. And with that, I'll pass it on.

Awesome, thank you. So once we did all this research and gathered a lot of insights about the users, we converted those insights into design ideas, which we then put into an impact-versus-feasibility matrix, as you can see here. This matrix helped us organize our thoughts and focus on one particular direction to take forward. From that matrix, we arrived at an end-to-end engagement journey map, which helped us look at what users can do before, during, and after a premiere phase. So whenever Netflix comes up with a new show and wants to premiere it, we wanted to know how people are engaging with it. Based on that, we came up with nine ideas in total among the three of us. Then we went back to our research, looked at what we could work with and what the core needs of the users were, and narrowed those nine ideas down to the top three or four. Further, we had a conversation with our Netflix stakeholders to get an industry perspective on what types of concepts or products would be feasible in a real marketplace.
And that's how we came down to our final ideas and moved into concept testing. The concept testing phase is when we try to get a lot of testing on a concept before the actual product is made, before we put effort into actually designing something. At this stage, we used storyboards to give users some context on what the final solution could look like, so that they could visualize it. Through this phase, we identified a few key elements that we wanted to keep in mind when we came up with our final solution. The first is that users wanted a unique, on-demand interaction, a more immersive experience than just scrolling through articles or blogs. The second is that users wanted to be able to socialize with others within this solution. Users also wanted to maintain the tangibility of an in-person experience, but in a more virtual world. And I'll hand it off to talk a bit more about this.

Yeah, so after synthesizing the insights from our concept tests, we translated those into specific design directions to take us forward in the project. We really wanted to make an immersive and interactive space that users can use to engage with trending show content as they get excited for a new season premiere. For our project, as a proof of concept, we decided to build our prototype around the show Bridgerton, basically because it's a huge show for Netflix and also had an upcoming premiere that we could source a lot of great content around. To build this immersive and interactive modality, we explored Mozilla Hubs, which is basically open-source 3D rendering software that allows us to create interactive rooms and spaces that users can maneuver through.
As we designed this space, we consulted a Hubs expert as well as an architect to help inform a floor plan for the interactive gallery we planned to create, and that's what you see here on the right. Once we finalized the floor plan, we embedded specific elements, such as trailers, cast and crew interviews, interactive trivia, behind-the-scenes videos, and other relevant articles, across the Hubs room. Then we let users navigate that space and showed them content in a more interactive way.

With that, we then conducted some feedback sessions. The goal of these feedback sessions, at this mid-fidelity stage of our project, was to understand the ease of navigation for users within the Hubs space, identify any elements within the exhibit that drive the greatest engagement, and understand the efficacy of the prototype in helping users get excited about the new season of Bridgerton. With this phase, we also wanted to understand whether the interface helps users get into that more social interaction setting that we talked about in our insights. Synthesizing all the feedback from the sessions we've been conducting, we are now looking toward next steps in the project. Next slide.

Yeah, so for our next steps, we've already synthesized all of these insights into specific action items to create our high-fidelity mock-up, and that's where we're at right now. But I just wanted to take a moment to quickly reflect on what the three of us have learned in this project. One of the biggest things I think all of us have learned is how to really work with an industry partner that has a very specific focus area. A lot of us have created or worked on apps that are more utility focused, but this is one of the first times we've done research in the entertainment space, and that was a very unique and novel challenge for our team.
So we really learned how to balance our team goals and our individual goals with the business goals that Netflix had for this project. A lot of stakeholder management, and just figuring out how to create a product that is very innovative but is also feasible. I'll stop there. Thank you all so much for your time.

Great, thank you so much, nice job. And we have one more presentation, about otters, with Josh Terry. Again, a very different context and design process. Take it away, Josh.

Hi, I'm Josh. This is SOFT, or Sea Otter Foraging Tech; this is my master's project. I'm an MS-HCI student on the LMC track. I graduated from the Computational Media program in undergrad here. Most of my professional experience is in the games industry: I worked with Adult Swim and Akupara, and I'm currently an associate producer with Akupara. In undergrad I competed in Lettuce Club, and I have a dog.

First off, I want to start with some early research on my project, which mostly consists of a literature review, interviews, lo-fi prototyping, and design constraints based on all of my research activities. Before we get started, it's important to understand our users. For this project, rather than traditional human-computer interaction, I am working in a field called animal-computer interaction. While I do have a human component here as well, the focus of this project has been on designing for users that don't have thumbs or access to language. Southern sea otters are cognitively on par with dogs; they need enrichment just like dogs do, meaning cognitive challenges to stay in good health. Previous research suggests that floating enrichment isn't as good as sinking enrichment, and as marine mammals they've also got a high metabolism, so they're super food motivated.
Some previous research from Ceara Byrne involves instrumented dog toys: putting a little barometer inside a silicone ball so that when the dog chews it, you can see how the dog interacts with the toy. In designing that, she recognized that dogs have evolved certain skills to interact with their environment, things like biting, tugging, chewing, and nose-touching. These are all behaviors we can leverage to let these animals interact with computational interfaces. So there was an existing background for dogs, but not for sea otters. And I'm totally obsessed with otters, so I figured, hey, how can we leverage similar technology to measure the health of these animals too?

I went ahead and interviewed a bunch of trainers at Georgia Aquarium. I found that they were interested in health tracking, health care, and training, and they perceived a need for passive health tracking, enrichment, and exhibit design in their habitats at Georgia Aquarium. With that data, I decided that an instrumented toy, or some sort of computer-driven enrichment, would be a great way to improve the health of their otters. An early design involved this football-sized nylon toy with a bunch of computer bits inside of it, a bunch of moving parts, super complicated. This was totally spitballed; I didn't really have any real data driving the design. So I decided, okay, before I go super hard making a really in-depth project that may or may not work, maybe I should do some lo-fi prototyping first. I started with some real simple toys: a little bit of PVC, a jolly ball with some felt, a little dog toy that I cut a hole into. I hid food in those different toys that I made for the otters. They were all somewhat familiar to the otters, but they all had really unique modes of interaction.
So that orange toy might look like, oh, well, there's food sticking out of it, so I should reach in there. But really, to open the toy up, the animal would have to press it open. Similarly, with the PVC toy they'd have to unscrew it, or with the jolly ball they would have to untie a knot to get the food. The naturalistic behavior is to just smash stuff open or bite it, but that wouldn't work with any of these designs. So I was curious: what other sorts of interactions can we design for? I made some graphs that would require a real long-winded explanation to fully describe, but ultimately we found that the otters were more interested in the kelp toy and spent more time interacting with it. So yes, they are capable of interacting with toys in a way that isn't just smashing or biting stuff open. They were able to untie a knot; some of them were even able to unscrew the lid. That was pretty insightful and really helped inform my design constraints for later in the project.

More recent research this semester has involved a dashboard prototype for the trainers, UX evaluations of those dashboards, and some functional prototypes of this toy, which is hopefully going to be in the water with otters Wednesday next week. Wish me luck. For the dashboard prototype, I wanted to convey some important information to the trainers, and I spent lots of time mocking it up: pen and paper, back to the drawing board, then in Figma, back and forth until I honed in on a design I really liked. I decided to let the trainers track health markers using this dashboard by uploading data they got from the instrumented toy. And if the toy detects any changes in behavior, the trainers can submit a vet report, so it's all in one ecosystem.
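As a rough illustration of the kind of processing an instrumented toy could feed into a dashboard like this, here's a minimal sketch. This is not the actual SOFT implementation; the function names, baseline, and thresholds are all hypothetical. The idea is simply to count pressure spikes from the in-toy barometer as discrete interaction events, and to flag a sudden drop in daily activity that might warrant a vet report.

```python
def count_bite_events(pressure, baseline=101.3, threshold=5.0):
    """Count pressure spikes above baseline as discrete bite/press events.

    pressure: list of barometer readings (kPa) sampled at a fixed rate.
    A new event starts each time the signal crosses the threshold upward.
    """
    events = 0
    in_event = False
    for reading in pressure:
        if reading > baseline + threshold:
            if not in_event:          # rising edge -> new event
                events += 1
                in_event = True
        else:
            in_event = False          # signal fell back below threshold
    return events


def flag_behavior_change(daily_counts, window=7, drop_ratio=0.5):
    """Flag the latest day if its event count falls below `drop_ratio`
    times the average of the preceding `window` days -- a crude change
    detector a trainer dashboard could surface."""
    if len(daily_counts) <= window:
        return False                  # not enough history yet
    recent = daily_counts[-(window + 1):-1]
    avg = sum(recent) / len(recent)
    return daily_counts[-1] < drop_ratio * avg
```

In practice a real system would need per-animal baselines and something more robust than a fixed threshold, but even this simple event count is enough to drive the "toy detects a change, trainer submits a vet report" flow described above.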
The design constraints for this were that it should be usable back-of-house, so it needs to be waterproof, and the trainers should be able to remove it from the wall and carry it around; they even have an iPad right on the wall, like the one over here on my wall. And that was a whole process: this dashboard alone, beyond just the toy and interacting with the animals. I interviewed the trainers on it and got lots of really good insights into how it could be better and how I might change it.

Based on the earlier design constraints I got for the toy design itself, I made a first 3D print of the toy out of some PLA. This is just 3D printed, and it would not at all survive being bitten by an animal with an 80-pounds-per-square-inch bite force. So I went back to the drawing board. I realized from this design, okay, I'm not going to be able to make this whole thing waterproof; how can I make it work? I decided, with the help of Noah Posner (thank you!), that it'd be a good idea to make a waterproof electronics package to slot into another toy. So I went back to the drawing board once again, designed another toy, and I'm waiting to machine some parts out of either UHMW or LDPE; those are fun names for certain plastics. I'm going to make some FDA-compliant components machined out of different types of plastic for the outside shell of this toy, so it'll be soft enough to not damage the habitat or the otters' teeth, but also hard enough to survive being messed with. The funny "I have no idea what I'm doing" dog is here because this is all new to me; I had very little experience with hardware before this. But I have been able to inform all these design decisions and constraints with what I've learned from the HCI program, while also taking a crash course in materials science, computer engineering, and mechanical engineering. I've done a little bit of everything in this project.
It's been a really great way for me to explore and learn all the things I've always been interested in. Some next steps on this project: I already have the electronics, and I've already 3D printed this little electronics package. Next I'm going to machine an outside shell for the toy, and I'm going to cut a fancy little gasket for the toy itself to keep it waterproof. Beyond that, I have some more future work: after machining the shell, testing the toy in the water, iteratively testing and improving the toy depending on how many more testing sessions I have left this semester, and hopefully writing a paper for the ACI conference this year.

And here's a funny little image. I asked a manufacturing machine shop nearby to machine some of those parts for me, and their only response was, "We are an aerospace design company and we don't do projects like this. I'm sorry." That said, I've since found some shops to handle the parts for me, so that's no big deal. Anyway, that's my project. I'm really hoping to turn it into a business or go off to some startup incubator, and I'm thinking about the whole PhD thing; we will see. Thanks so much. Feel free to reach out; my email's there, it's Josh Terry at gatech.edu, and my portfolio is Josh Terry dot tech. So that's it from me. Thanks.

Okay, great. Thanks, Josh. Fantastic project. We've really covered a lot of distance in the five projects presented today, and I think we are officially out of time, so we won't have time for questions. But I want to thank all of our presenters, and I also want to acknowledge all of our MS-HCI students. We look forward to seeing all of you on our Gatherly platform after this. As for our presenters, if you want to reach out to them with questions, I know they'll be more than happy to answer those. So thank you all very much, and enjoy the rest of your day. Take care. Thanks.