Alright. Hello, hello. Can you all hear me okay? Welcome, welcome, welcome. Alright, so we're going to get started with our speaker today. Great turnout. Hopefully everyone got some delicious food in their bellies. It's kind of cold and rainy, so, you know, here we are all together in a safe place to learn some really cool things today about intelligent textiles. First, I want to do some quick admin announcements. Can we get a show of hands for for-credit students first? If you can raise your hands. Okay. Excellent. And now put your hands down. Other students, if you can raise your hands. Excellent. And then everyone else, interested guests. That would be me. Very good. And Yamin. Excellent. Oh, the interested guest table is over there. Got the memo. So, for-credit students, you must sign in by QR code or manually. And everyone, just, you know, please turn off your phones. Otherwise, I'll find you, obviously. And then toss your trash before leaving so we can keep this beautifully, newly renovated space clean and awesome.

So it's my pleasure to introduce Yiyue Luo, assistant professor at University of Washington, ECE. Recently started, like four months ago? September. In September. She got her PhD degree in EECS from MIT in 2024 and her Bachelor's in Materials Science and Engineering from UIUC in 2017. She has a lot of really interesting work around digital fabrication, human-computer and human-robot interaction, and AI, and we'll learn a lot about that today. She has a very impressive set of publications, including in places like Nature Communications and Nature Electronics, which I can tell you from personal, painful experience are very hard to get a paper into; a number of best paper and honorable mention awards; she's been featured in the media and was listed as a 30 Under 30 for North America in 2024. So I'm really excited to welcome her here and to learn a lot about her work. So join me in welcoming her.

Thank you for the kind introduction, and thank you so much for joining me today. I'm super excited to be here to share my work on intelligent textiles for physical interactions. Humans engage in a wide variety of daily activities by constantly interacting with the environment physically, and such physical interactions embed and convey tons of information we might have taken for granted. Let's imagine: at this moment, most of you are sitting on a chair. How would you adjust your body posture and balance while you are sitting on a chair? The tactile sensory system throughout our body captures the real-time contact between your body and the furniture and feeds the data to your brain, which processes the information and actuates the corresponding muscles, and therefore adjusts your body posture and balance. My research goal is to extend such capabilities beyond our body to a wearable intelligent assistant, which will be able to record, model, and augment physical interactions with sensors, AI agents, and embedded actuators.

Here's a more illustrative example of my research vision. This is a 96-year-old lady who wants to maintain a safe, happy, and healthy life. Imagine she is dressed in such a wearable intelligent assistant, which enables real-time recording of her physical interactions, including vibrations, pressure distributions, muscle activities, and so on. Such data enables monitoring of her health condition and enables analysis for disease diagnosis and so on.
And more importantly, such data would enable the generation of external feedback back to the user to augment her capabilities. This feedback includes haptic feedback to guide her through her daily exercise routine, or motorized actuation to directly assist her with complex tasks, for example, reaching for the very tall cabinet here. And beyond such contexts, recording, modeling, and augmentation of physical interactions are extremely important for diverse areas, including healthcare, education, AR/VR experiences, robotics, and so on.

But this is challenging, because physical interactions are pervasive and diverse. They happen throughout our human body, over extended durations, under diverse usage scenarios, and they are also subjectively perceived by each individual. How do we efficiently scale up the interfaces so that they can capture information spanning our whole body, from our hands to our torso? How do we seamlessly integrate such functionality into our daily life so that it can serve us for extended durations without any obstruction to our daily activities? How do we ensure the robust performance of the sensing and actuation interfaces so that they are not affected by the diverse daily activities in our life? How do we make sure such systems can adapt to different individuals to ensure performance, despite different people having very different sensations of and reactions to the same physical interactions?

We propose to address these research questions by using intelligent textiles for physical interactions, leveraging the combined power of digital textile manufacturing techniques and skyrocketing AI technology. First of all, why intelligent textiles? Seamless integration of technology and scalability in terms of coverage area are two contradictory design parameters for such physical interaction interfaces. Compared with other localized smart wearables, such as smart watches, epidermal electronics, and other digitally fabricated soft systems, intelligent textiles have two main advantages. One is that they are one of the very few platforms that have direct contact with our human body, spanning a large area for extended durations. We wear garments every day, and even when we are sleeping, we constantly interact with our garments, bed sheets, and so on. And thanks to the long history of textile manufacturing, they are very compatible with mass fabrication, saving a lot of manual labor and making them ready for mass production in manufacturing contexts.

Back in the 90s, researchers already presented intelligent textiles by putting off-the-shelf electronics on top of fabric. In that case, they used the textile as a purely passive substrate. Later, researchers realized the unique properties of textiles and started to seamlessly integrate functionalities into textiles through manual fabrication and design, which however still limits fabrication efficiency, sensor density, and the design space. Only recently, thanks to advances in computational design and fabrication, materials science, and so on, have new opportunities arisen for such seamless, scalable, and customized integration of functionalities into textiles. Our research has been leveraging such opportunities to develop intelligent textile systems to record, model, and augment physical interactions. It has been driven by three different directions. First, we integrate functionalities into textiles through digital design and fabrication.
This overcomes the restrictions of traditional manual fabrication, enabling the integration of thousands of functional units in a low-cost and customized manner, and it addresses the pervasive nature of physical interactions. Leveraging such scalable sensing and actuation interfaces, we can capture tons of data featuring diverse physical human-environment interactions. Such data is super important for the data-hungry but powerful machine learning and data processing techniques, and this addresses the diverse nature of physical interactions. And by combining advances in both hardware design and such data and modeling processes, we can develop new technology, especially for applications featuring humans, such as healthcare, as well as for robotics applications. In the past, we have done projects featuring each of these components, as well as the combination of the three. But in today's talk, I will focus on three individual projects featuring the recording, modeling, and augmentation of tactile interactions, which is a very representative modality of physical human-environment interactions.

I will start by talking about recording and modeling of tactile interactions using digitally machine-knitted, full-sized tactile sensing garments. These garments embed more than a thousand sensors, and they capture continuous pressure distributions in a pixelized format. Such high sensor density and coverage area were not feasible with traditional manual fabrication, and such high density also enables unprecedentedly large datasets featuring physical human-environment interactions, which opens up opportunities for leveraging machine learning techniques for classification and regression tasks.

The tactile sensing garment is enabled by a customized piezoresistive fiber, where we coat conductive stainless steel thread with piezoresistive nanocomposites. When we align two such coaxial piezoresistive fibers orthogonally, they form a sensor at the intersection, which has the structure of a piezoresistive layer sandwiched between two conductive electrodes. When you apply pressure, the structure of the piezoresistive layer changes, the conductive nanoparticles inside interact more closely, and therefore the resistance drops. This allows us to convert pressure information into an electrical signal. Our customized setup also enables the generation of thousands of meters of such functional fiber automatically, so we can produce spools of functional fiber without manual labor. After that, we leverage the power of digital machine knitting. Digital machine knitting is an industrial-scale textile manufacturing technique that enables users to computationally design and automatically fabricate textiles, generating fabrics with complex shapes, geometries, and colorwork in one machine run. We can feed such functional fibers into this machine so that they are integrated into fabrics and textiles automatically. The generated fabric is soft, stretchable, and flexible, just like our regular garments, and we form a large pressure sensing matrix by overlaying two digitally machine-knitted fabrics, one with horizontally integrated fibers and another with vertically integrated functional fibers.
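To make the readout concrete, here is a minimal sketch, under assumed parameters, of how such a row-column piezoresistive matrix could be scanned and converted to pressure. The matrix size, ADC model, and calibration constants are illustrative placeholders rather than the actual firmware.

```python
import numpy as np

# Minimal sketch: scanning an R x C piezoresistive sensor matrix formed by
# orthogonal functional fibers. All constants are hypothetical placeholders.

ROWS, COLS = 32, 32          # assumed matrix size

def read_adc(row, col):
    """Placeholder for the real ADC read of one row/column intersection."""
    return np.random.randint(0, 1024)

def scan_frame():
    """Drive one row at a time and read every column to build a tactile frame."""
    frame = np.zeros((ROWS, COLS))
    for r in range(ROWS):
        # in hardware: energize one row electrode line here
        for c in range(COLS):
            frame[r, c] = read_adc(r, c)
    return frame

def adc_to_pressure(adc, r_series=10e3, r0=50e3, v_ref=5.0, k=2.0e4):
    """Convert an ADC reading to pressure via a voltage-divider model and a
    single-sensor calibration curve (resistance drops as pressure rises).
    The divider topology and calibration constants are illustrative."""
    v = adc / 1023.0 * v_ref
    r_sensor = max(r_series * (v_ref - v) / max(v, 1e-6), 1.0)  # avoid divide-by-zero
    return k * max(r0 / r_sensor - 1.0, 0.0)                    # monotonic R -> P mapping

pressure_frame = np.vectorize(adc_to_pressure)(scan_frame())
```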
Compared with other tactile sensing interfaces, our full-body tactile sensing garments offer advantages in terms of scalability, customization, and sensor density. Here we demonstrate a full-sized tactile sensing glove, sock, and vest, and a fully conformal robot arm sleeve, conformally fitting its large curvature and weird geometry. However, as you would imagine, as we increase the sensor density and coverage area, challenges arise from the variations among individual sensors. Typically, with only one single sensor in a lab testing scenario, we can easily extract a characterization curve, which gives us a one-to-one mapping between the electrical readout and the quantitative pressure information. But in our case, because we have such a high density of sensors and the textiles are so flexible and moving around all the time, the characterization of a sensor in real-world deployment might be very different from its characterization in the lab. Also, there might be malfunctioning sensors after a certain amount of usage. It's just impractical to calibrate more than 1,000 sensors one by one using the traditional characterization approach.

With that, we would like to take inspiration from how the human body works. When humans experience a local loss of sensation, our brain gradually learns and adapts to such loss of information and learns to leverage the surrounding spatial information to compensate for the lost tactile information. Given such inspiration, we propose a learning-based correction pipeline for our scalable sensing interface (sketched below). More specifically, we ask participants to wear the glove and perform diverse presses on a digital scale. Intuitively, the sum of the tactile signal should always have a linear correlation with the digital scale reading. Beyond that, when we look at one single tactile frame with a missing sensing unit or shorted sensors, we can identify such problems by looking at its spatial information as well as its temporal information. By combining all of this information, we develop a weakly and self-supervised sensing correction pipeline, which takes in a sequence of tactile frames and outputs a single tactile frame representing the corrected tactile signal. The model is weakly supervised by the ground-truth forces captured by the digital scale, and self-supervised by the spatial and temporal consistency of the tactile signal. Our pipeline learns to remove artifacts and generate more uniform and continuous responses, enabling robust sensing capability without perfect fabrication or perfect usage scenarios.

With that, we can capture large datasets featuring diverse physical human-environment interactions, including performing different activities on furniture, doing different kinds of daily movements, and so on. How do we extract information from such tactile datasets? Again, here is some of our intuition. When we look at one single tactile frame, it embeds a continuous pressure distribution in a pixelized format, just like how images capture visual information. A time series of such tactile images is very similar to a video. Based on such observations, we leverage convolutional neural networks to deal with the high dimensionality and complexity of our tactile datasets. This enables a diverse set of downstream applications.
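Backing up for a moment to the correction pipeline: here is a minimal sketch of what the combined objective could look like, assuming a PyTorch setup. The network itself, the loss weights, and the exact consistency terms are assumptions; only the overall structure (weak supervision from the digital scale plus spatial and temporal self-supervision) follows the talk.

```python
import torch
import torch.nn.functional as F

# Sketch of a weakly/self-supervised correction objective for tactile frames.
# Architecture, weights, and exact consistency terms are assumptions.

def correction_loss(corrected, raw_seq, scale_force, w_spatial=0.1, w_temporal=0.1):
    # corrected:   (B, H, W)    corrected tactile frame from the network
    # raw_seq:     (B, T, H, W) input sequence of raw tactile frames
    # scale_force: (B,)         digital-scale reading for the center frame

    # 1) Weak supervision: total predicted pressure should track the scale reading.
    pred_force = corrected.sum(dim=(1, 2))
    loss_force = F.mse_loss(pred_force, scale_force)

    # 2) Spatial self-supervision: neighboring sensors should respond smoothly.
    dx = (corrected[:, :, 1:] - corrected[:, :, :-1]).abs().mean()
    dy = (corrected[:, 1:, :] - corrected[:, :-1, :]).abs().mean()
    loss_spatial = dx + dy

    # 3) Temporal self-supervision: the corrected frame should stay close to the
    #    temporal average of the raw sequence (signals change slowly over time).
    loss_temporal = F.mse_loss(corrected, raw_seq.mean(dim=1))

    return loss_force + w_spatial * loss_spatial + w_temporal * loss_temporal
```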
As a first example of these downstream applications, we can perform pose estimation by just looking at the tactile sequences captured by a pair of socks. We developed a convolutional neural network that takes in tactile sequences from both socks and outputs the 23 joint angles representing the full-body pose. The model is trained as a regression problem by minimizing the mean squared error between the ground-truth and predicted joint angles (a small sketch follows below). It learns to make predictions that are both smooth and consistent over time, and all the predictions are made without any visual information, only looking at the tactile sequences from a pair of socks. Therefore, it enables our system to be robust against visual occlusion or a limited field of view.

As another example — sorry, the video won't play, I don't know why — we can capture diverse tactile imprints when the user takes different kinds of sitting poses on furniture. When we project the tactile sequences into a lower-dimensional space through t-SNE, we find that the tactile sequences for different sitting poses form distinctive clusters in the 2D space. We also achieve high classification accuracy, indicating the discriminative power of our tactile datasets. Our textile-based interface also enables the integration of hundreds of sensors in a fully conformal robot arm sleeve with this weird geometry. It provides real-time quantitative information on the pressure distributions and forces experienced by the robot. So it offers an additional sensing modality beyond vision, which is the more popular one in robotics, enabling the robot to make fine adjustments for dexterous manipulation and human-robot interaction, and making the system robust against visual occlusion and so on. It has been further leveraged for dexterous manipulation on a customized robot gripper, as well as a full-sized robot glove for human-robot interaction.

Going back to my talk outline, I presented recording and modeling of tactile interactions using digitally machine-knitted tactile sensing garments. But this only works for the single user who is wearing the garment. Can we further extend this to ambient sensing scenarios using environmentally integrated intelligent systems? To that end, I would like to present an intelligent carpet, which is seamlessly integrated into the environment in the form of a carpet and works for whoever walks on it, not just someone wearing a device with localized measurements. Again, here is a sequence of human-ground tactile interactions, generated by two individuals doing different kinds of exercises on the carpet. In this project, we estimate full-body pose not only in a single-person scenario but also in a multi-person scenario, by just looking at such tactile sequences. This video should play, by the way. Sorry about that. Our tactile sensing carpet is made with a very similar mechanism to the previous ones, where a piezoresistive layer is sandwiched between two sets of conductive electrodes aligned orthogonally. It forms more than 9,000 intersections, which means more than 9,000 sensors, and it captures quantitative pressure information in a pixelized format. Let me see. It's not playing again. So, we were able to collect millions of synchronized visual and high-resolution human-ground tactile frames from ten individuals doing different kinds of exercises on it.
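Both the sock-based pose estimation above and the carpet-based pose estimation that follows use essentially the same recipe: a CNN regressing joint angles from a short window of tactile frames. Here is a minimal sketch of that recipe; the layer sizes, sequence length, and input resolution are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

# Sketch of tactile-to-pose regression: a CNN takes a short sequence of tactile
# frames from both socks and regresses 23 joint angles. Sizes are assumptions.

class TactilePoseNet(nn.Module):
    def __init__(self, seq_len=10, n_joints=23):
        super().__init__()
        # treat the T stacked frames from both socks as input channels
        self.encoder = nn.Sequential(
            nn.Conv2d(2 * seq_len, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_joints)

    def forward(self, x):            # x: (B, 2*T, H, W)
        return self.head(self.encoder(x).flatten(1))

model = TactilePoseNet()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(tactile_seq, gt_angles):
    # regression: minimize mean squared error against mocap joint angles
    pred = model(tactile_seq)
    loss = nn.functional.mse_loss(pred, gt_angles)
    optim.zero_grad(); loss.backward(); optim.step()
    return loss.item()
```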
When we look at the tactile images from the carpet, they're actually very similar to what a camera would see looking up from below as you do different kinds of exercises on the floor. But instead of visual information, they capture continuous pressure distributions in a pixelized form. Similar to the previous project, we leverage a convolutional neural network, supervised by the visual information, to generate smooth and consistent full-body pose estimates in both single-person and multi-person scenarios. The predictions do not rely on any visual information, and they cover a diverse set of daily activities, not limited to the ones with direct contact between our feet and the floor, but also ones with direct contact between other body parts and the floor, such as doing push-ups, sit-ups, lying down, rolling over, and so on. We observe much smaller localization error for keypoints on our feet and lower torso compared with those on our upper torso and head. This agrees with our observation that if we are standing still on the carpet and only waving our hands or turning our head, the pressure distribution on the carpet barely changes, and therefore there is not enough information for the model to infer those movements. We further enabled a scalable and customized fabrication process for such systems, which can be extended beyond the tactile sensing carpet to many other arbitrary coverings of objects in our daily life. They unlock new opportunities for AR/VR experiences and biomechanics investigation, because they capture quantitative pressure distributions in real time.

Again, I have demonstrated the recording and modeling of tactile interactions using both a wearable system and an ambient sensing system. This is great, but can we further extend the system to close the feedback loop: not only sensing, but also generating feedback back to the users, and thereby realizing the augmentation of their interactions and behaviors? To this end, I would like to present the transfer of tactile interactions using a digitally embroidered tactile sensing and haptic glove. The tactile sensors, which are very similar to the ones I showcased earlier, serve as the input module. They capture the real-time pressure distributions and send the data to an agent, and this agent generates a signal to drive an output module, the vibrotactile haptics, producing vibrations of different amplitudes and frequencies and providing feedback back to the users. Such a combination of sensing and haptics enables the transfer of tactile interactions across users. For example, in a piano learning scenario, we can transfer the sensation of touch from a teacher to a student, promoting a more efficient learning experience and encouraging the student to perform more optimal finger presses with an actual physical feeling on their hands. We can also realize such tactile interaction transfer between humans and robots. During teleoperation, we can apply the tactile sensors on a robot gripper, which captures the real-time pressure distributions that are then transmitted to the user who is teleoperating the system. With such tactile sensation, the user can adjust the gripper more efficiently and perform the teleoperation faster and with higher-quality data.
Both the tactile sensors and the vibrotactile haptics are integrated in a full-sized glove through digital embroidery. Different from previous approaches, such digital design and fabrication allows automatic, modular, and customized fabrication of the smart glove, catering to individual needs. This includes adjusting the size of the glove, using different tactile sensor and vibrotactile haptic layouts, as well as adding markers for visual tracking systems. As I mentioned earlier, the tactile sensors share the same mechanism as before, where a piezoresistive layer is sandwiched between two conductive electrodes; but in this case, the electrodes are digitally embroidered. Through the piezoresistive layer, we convert pressure stimuli into electrical signals via the drop in resistance. The vibrotactile haptics rely on digitally embroidered, insulated copper wire formed into coils, which generate a fluctuating magnetic flux when we apply alternating voltages. When we place a magnet with a fixed magnetic field on top of it, it generates vibrations of different amplitudes and frequencies, as visualized by the reflective layer on top of the magnets (a small sketch of the drive signal appears below). I would like to mention that there are also many other haptic mechanisms out there, including pneumatic, electrostatic, muscle stimulation, and so on. The reason we picked vibrotactile haptics is that they generate a linear output, which enables a one-to-one mapping with our tactile sensors. They also offer advantages in durability and a relatively simple design that is compatible with the digital fabrication process.

With that, we evaluated it in a teleoperation system. This is how a user teleoperates a robot without any feedback, visual or haptic. The user just grasps as hard as they can, because they have no idea how hard they are gripping. So when you are dealing with deformable objects, the object gets squeezed and undergoes a huge deformation, which is not ideal. In the other scenario, even without any visual information, when the user feels the tactile sensation through the haptic feedback, they can adjust their grasp accordingly in real time, so as to perform more optimal grasping of deformable objects.

In other scenarios, we would like to realize such tactile interaction transfer across users. This is a very key step toward the transfer of skills and experiences across people, space, time, and so on. But this is a challenging task, because every individual has a very subjective sensation of the haptic feedback. I would like to talk more about this by zooming in on how different individuals perceive haptic feedback. Usually, when we receive haptic feedback, we process the information and react to it with a specific action, and such an action can be quantified by our tactile sensors. But this reaction process is highly dependent on the individual's perception and decision-making process. Therefore, when we are trying to guide an unknown user toward a predefined target action, we need to make sure that we are actually providing optimal and effective haptic feedback to guide them through the process. Traditionally, people achieve such adaptive feedback generation by performing calibrations on individual users in individual scenarios, and obviously this is not an efficient process.
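As a brief aside on the vibrotactile hardware described above, here is a minimal sketch of how one embroidered coil might be driven: an alternating signal at a chosen frequency, whose amplitude is mapped one-to-one from the remote tactile reading. The frequency, update rate, and pressure-to-amplitude mapping are illustrative assumptions, not the actual firmware.

```python
import numpy as np

# Sketch of driving one embroidered vibrotactile coil. All constants and the
# linear pressure-to-amplitude mapping are hypothetical.

DRIVE_FREQ_HZ = 200        # assumed vibration frequency
SAMPLE_RATE_HZ = 8000      # assumed update rate of the H-bridge drive

def pressure_to_amplitude(pressure, p_max=50.0):
    """Linear one-to-one mapping from sensed pressure (kPa) to drive amplitude [0, 1]."""
    return float(np.clip(pressure / p_max, 0.0, 1.0))

def drive_waveform(pressure, duration_s=0.05):
    """Return the alternating voltage command sent to the coil via the H-bridge."""
    t = np.arange(0, duration_s, 1.0 / SAMPLE_RATE_HZ)
    amp = pressure_to_amplitude(pressure)
    return amp * np.sin(2 * np.pi * DRIVE_FREQ_HZ * t)   # bipolar drive signal

samples = drive_waveform(pressure=20.0)   # e.g. 20 kPa sensed at the remote fingertip
```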
We would like to improve on this kind of adaptive feedback generation with a learning-based optimization pipeline. The goal of the pipeline is to generate adaptive, user-specific haptic feedback that will guide a specific user to perform an action matching a predefined target action from an expert user. The pipeline consists of two steps. First, we develop a forward dynamics model, which serves as a simulator, simulating a specific user's behavior when given specific haptic feedback. Then, having such a complete forward pipeline, we perform an inverse optimization by minimizing the difference between the user's behavior and the predefined target behavior.

I'm skipping a lot of technical details here, but here is a proof of concept of our approach. We would like to use haptic feedback to guide a specific user to play a car-racing game by pressing only a single key on the keyboard, so this is a very simple task. We recruited an expert player, who was able to achieve very high scores in the game, to play different levels, and we recorded their finger-pressing sequence with the tactile sensors at their fingertips. With that, we can directly extract what we call unoptimized haptic feedback by simply thresholding, and generate optimized haptic feedback through our pipeline. We then recruited new users who were just getting familiar with the game to play the same levels while being provided with either the unoptimized or the optimized haptic feedback. Again, the new users' finger-pressing sequences are also captured by tactile sensors on their fingertips. Overall, we observe much better alignment, in terms of performance and the reproduced tactile sequences, when the optimized haptic feedback is provided. Such improvement mostly comes from shifting in time: in the optimized haptic feedback, you can notice that the whole feedback sequence actually shifts forward, and this is how the pipeline compensates for a user's reaction time.

Here are some quantitative measurements of users' performance on the game when they are provided with no haptic feedback, unoptimized haptic feedback, and optimized haptic feedback. Interestingly, we observe gradually increasing user performance when optimized haptic feedback is provided, especially as the levels get harder. Harder levels mean people have less intuitive information about what the correct action is, and that's where the external hint shines. Also interestingly, we observe a slight drop in performance when unoptimized haptic feedback is provided, and this is explained by many users reporting that the unoptimized haptic feedback actually distracted them during the game instead of helping them. This reiterates the importance of having adaptive and effective haptic feedback when we are trying to augment people's behavior. As you can see, that was a super simple proof of concept. In the future, if we tackle more complex activities, I would imagine that this inverse optimization pipeline should be replaced by AI agents that constantly take in information on the user's current action and generate feedback to guide the user's next movement. This is also very similar to the reinforcement learning pipeline in robotics, which would allow us to add a reward function and remove the predefined target action, increasing the design space of such systems.
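Here is a minimal sketch of the two-step idea: a learned, per-user forward dynamics model maps a haptic cue sequence to a predicted finger-pressing sequence, and the cue sequence is then optimized so that the predicted response matches the expert's target. The architecture, sequence length, and optimizer settings are assumptions; only the forward-model-plus-inverse-optimization structure follows the talk.

```python
import torch
import torch.nn as nn

# Sketch of adaptive haptic feedback generation: (1) a frozen forward dynamics
# model simulates the user; (2) the haptic sequence is optimized against the
# expert's target presses. Sizes and settings are assumptions.

T = 200  # assumed sequence length (time steps)

class ForwardDynamics(nn.Module):
    """Simulates a specific user: haptic cue sequence -> predicted press sequence."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(T, 256), nn.ReLU(), nn.Linear(256, T))
    def forward(self, haptic):
        return torch.sigmoid(self.net(haptic))

forward_model = ForwardDynamics()          # assume already trained on this user's data
for p in forward_model.parameters():
    p.requires_grad_(False)                # freeze it during inverse optimization

def optimize_haptic(target_press, init_haptic, steps=500, lr=0.05):
    """Inverse optimization: adjust the haptic cue sequence (e.g. shift or reshape
    it) so the simulated user's response matches the expert's target action."""
    haptic = init_haptic.clone().requires_grad_(True)
    opt = torch.optim.Adam([haptic], lr=lr)
    for _ in range(steps):
        pred = forward_model(haptic)
        loss = nn.functional.mse_loss(pred, target_press)
        opt.zero_grad(); loss.backward(); opt.step()
    return haptic.detach().clamp(0.0, 1.0)
```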
Going back to the talk outline: I presented the recording, modeling, and augmentation of tactile interactions as examples of our approach to record, model, and augment physical interactions. Overall, we are trying to develop scalable and customized sensing and actuation interfaces that enable seamless and comprehensive observation of human behaviors. With such scalable and diverse data, we can leverage data-hungry machine learning techniques to perform in-depth analysis and also generate the most effective feedback back to the users. With that, we can develop innovative strategies to augment both humans and robots.

I would like to talk a little bit about my future and ongoing work. This is my future vision. I imagine that in the future, everyone will have a personalized wearable intelligent assistant with integrated sensing and actuation capabilities, so that during our daily activities the sensors capture information on our activities and feed the data to AI agents, which process the information and drive the feedback interfaces, augmenting our interactions and behaviors. We have a fancier name for this: we call it an embodied copilot, because eventually it should be like a copilot system, just like the digital copilot systems such as ChatGPT, or all the Copilot systems in PowerPoint and other Microsoft products. It will facilitate and collaborate with users in real time, but it will focus on physical information, capturing physical data and providing physical feedback. This requires joint effort on the hardware side, the software side, and the evaluation and application side.

I am aiming to expand the design space of intelligent textiles to multi-function, multi-modality systems through hybrid fabrication. I would like to extend functionalities beyond sensing and actuation to energy harvesting, sustainability, and even computation, and to extend the sensing capabilities beyond tactile sensing to proprioception, muscle activity, and so on. Such a process would be enabled by combining digital textile manufacturing techniques with other rapid prototyping tools. For example, we can perform inkjet printing on digitally machine-knitted fabric to further reinforce its mechanical properties, and we can perform digital deposition on a single fiber to integrate multifunctional materials for multimodal sensing capabilities. We can also further combine this with silicon-based technology, integrating complex yet small chips into textiles using digital embroidery and fiber drawing technology. Something we are thinking about is sustainable intelligent textiles, where we can recycle the electronic components mounted on textiles and then reuse them, plugging them back into the textiles in an easier manner. We are also thinking about repurposing the textile: say you have a smart textile for a baby, and the textile grows with the baby over time, improving the sustainability of the intelligent textile system. The other thing we are very interested in is democratizing such sensing, fabrication, and design. One example would be tactile sensing, because I have talked a lot about tactile sensing.
During my previous research, I realized that people from diverse domains and communities all want to use tactile sensors for specific applications, but they have unique needs. For example, for a glove, people have different needs for sizes and for the placement of sensors, and in robotics, they have different gripper geometries, and so on. It has been really hard for non-experts to make designs catering to their needs. So we are thinking about developing a design and fabrication tool with a UI where users can import an arbitrary shape and geometry, select the sensing area, and then have it generate a low-level fabrication file for immediate fabrication using digital fabrication processes. Similarly, from the firmware perspective, we can develop a toolkit allowing users to define the data serialization parameters very quickly. These also have huge implications for open-sourcing the hardware for broader impact in educational and outreach events.

From the data and modeling perspective, as foundation models have been skyrocketing, we are thinking about how such physical information can contribute to foundation models, and how foundation models can help the development and application of such physical interaction interfaces. The first step we are thinking about is to collect a multimodal dataset. We developed such a system in a kitchen environment, capturing human activities in the kitchen, including chopping, plating, and so on. We are able to capture tactile interactions, muscle activities, environmental and first-person camera views, and so on. Such multimodal datasets allow us to think about how to translate traditionally elusive physical interactions into a more interpretable form. For example, if we can translate a sequence of tactile information into a sentence that is understandable by a lot of people, it will dramatically improve the accessibility of the information. Another level of information transfer could be helpful for learning and educational scenarios. For example, when learning delicate sensorimotor skills, if we can transfer information from a trainer or teacher to a student not only in visual, text, or audio form, but also in a more physical setup, it would feel like someone is actually holding your hands and guiding you through the motion, and that would dramatically improve such delicate sensorimotor skill learning.

Lastly, I envision such an embodied copilot system being dramatically useful in healthcare and robotics. Our systems are scalable and customizable, so they enable customized interfaces for each individual, catering to their needs, which will unlock personalized data modeling as well as healthcare delivery. One application scenario is rehabilitation, where different patients have different needs, requiring different customized hardware setups, modeling, and delivery. And in robotics, we can leverage such systems to equip robots with human-like sensing capabilities, so they can perceive a lot of the sensations that they traditionally cannot capture using cameras. Such sensory information is super important for natural human-robot interaction, as well as dexterous robot manipulation.
This could be achieved either by imitation learning, or by simulating such a sensory system in a simulation environment, dealing with the sim-to-real gap, and then using traditional reinforcement learning for robotics. For example, we are trying to equip different robotic systems, a robot dog and a humanoid robot, with full-body tactile sensing. The reason is that humans actually have a sensory system throughout the whole body, but when you look at a lot of robotics competitions, many of the robots fall because they don't have real-time feedback from their lower limbs or their arms, which causes a lot of problems for their control. So we are hoping to add in that layer of information to help them with more dexterous and robust manipulation. The other direction is delicate tasks. Here is a teleoperation system where a person manipulates chopsticks to pick up a cherry, which is super hard even for humans. You could imagine it's even harder for the robot, because it doesn't have any sensory feedback on whether it has grasped hard enough. So we are also trying to integrate the system for such delicate tasks. With that, I would like to thank my advisors, collaborators, mentors, and mentees; none of this work would be possible without them. And thank you again for joining me today, and thank you for listening.

Alright. Thanks so much. So we'll have a little bit of time for questions, and I'm going to kick one off, if that's okay, while everyone else is thinking. I was really struck by the breadth of the talk; really interesting and exciting work. Can you tell me a little bit about your thoughts on the reuse aspect? In my mind, I was thinking of two things: reuse like patchwork quilts — I think there are also different traditions that use this kind of patchwork textile — that seemed exciting. But you also mentioned this kind of "it grows with you." Can you give a little more detail on that?

Yeah. So I think the reuse one is kind of coupled with the recycle one. The main idea is that a lot of smart textiles nowadays have very expensive components on them. So either you want to recycle the very expensive components by detaching them and reattaching them at the desired location, and it also has implications for repurposing textiles. For example, now I want the sensors on my wrist, but then I change my behavior and I want the sensors near my heart. In the traditional scenario, you would need to refabricate the textile, but that's actually unnecessary if you can take the component off and just place it somewhere else. Also, I imagine you could change the sensing modality by doing different kinds of recycle-and-reuse procedures. For example, for EMG sensing you usually need an array of electrodes, but for a lot of proprioception sensing, instead of an array of electrodes you actually want something bigger that can capture rotation. So if you can rearrange the electrodes, you can achieve a change of functionality as well. Yeah.

Sorry — power. So the first question is, how do you power the sensors, particularly the embroidered gloves that you have? The second is, does that inhibit people or participants from using the glove efficiently or effectively? That's a really good question. So there are two parts to the glove: one is the sensors, one is the haptics. The sensors are passive sensors.
They're read out through, like, a very common Arduino Nano controller and a multiplexer, multiplexing through rows and columns and then doing data serialization for each individual sensor. The vibrotactile haptics are powered through an array of H-bridges, so we can multiplex through them in the same manner as we do for the sensors. So it's powered by a battery right now, and it's quite power-consuming compared with the sensors. That has been a challenge, I believe, for a lot of other haptic devices as well — people need to carry the battery around. At this point we don't have that many haptic units on the hand, so a very small battery pack is good enough, but as we scale up, I think power will be a huge problem. In terms of sensing, though, there are actually many things we can do to reduce power. For example, we can do intermittent sensing, right? Especially for tactile sensation: when I'm standing still here, the tactile imprints barely change, and if the signal doesn't change, it doesn't embed information. So we can lower the sampling rate when there's no change and only increase it when there are actual meaningful changes in the signal. But for now, I would say everything is packaged on a PCB board mounted on the wrist, so it's not stopping users from using it. For actual commercialization, though, I think more considerations need to be taken into account.

Great talk. You mentioned using haptic feedback as a method of knowledge transfer from human to human. Have you at all considered using it for, like, rehabilitative knowledge transfer? So, for example, a stroke patient suffering from hemiplegia needs to re-learn how to use, for example, their hands. Is that an application that you think this would be helpful for? Yes, that's one of the most ideal applications, in my opinion. Rehab is a very special scenario where we expect patients to do rehab at home during their daily activities, so that they can do it constantly for the best outcome. But imagine the patient doing it at home: they only have a piece of paper showing what the correct action is, and they have intermittent visits to the hospital to meet with the PT, who tells them what the correct action is. That information might not be enough for the patient to realize what the most optimal way of doing it is. So in my opinion, they need some real-time physical hint or physical guidance, at least to alert them when they are doing something wrong that is actually hurting them. So I think that's something very useful for more convenient and continuous rehab activity. Yes.

Thank you for the wonderful talk. I have a question about the first two projects you introduced, which utilize the tactile sensor data and fabrication to perform pose estimation for a single user or multiple users. I noticed you use ground-truth data in the machine learning algorithms, such as the CNN that conducts the sensor correction. I'm just wondering, where did that ground-truth data come from, and how was it collected? Because initially, I feel like if it's collected through sensors, there must be some errors or biases introduced in the datasets. Yeah. So for the correction pipeline, the ground truth comes from the digital scale reading, like when you're pressing on it.
But as you can imagine, I have thousands of sensors but only one digital scale reading, so it's a very weak supervision. That's why we also leverage the spatial and temporal information. The spatial part is kind of embedded in the convolutional neural network itself, because it's doing convolutions, and the temporal information comes from inputting a sequence of tactile images, so the network has information on what the previous and following frames are. That temporal information also helps. So that's where the ground truth comes from. For the other classification and regression problems: for the pose estimation, the ground truth comes from a mocap system that the person wears while collecting the tactile signal. And in the carpet scenario, the ground truth comes from a vision system: we have cameras, and then we use off-the-shelf vision-based pose estimation, OpenPose, to extract the ground truth. Wonderful, thank you.

Thank you, Professor, that was a great talk. I had a question regarding the way the sensor works. Right now you're talking only about the orthogonal threads and such, and I noticed that the textile is also stretching. So are you also able to get information from the shear side of things, in terms of, let's say, joint angles and so on, and the applications of the same? Yeah, so the sensor itself only captures normal pressure; it doesn't capture stretch or shear forces. But all that bending and stretching does have an impact on the sensor. For example, when we wear the glove and bend our hands, the fabric actually overlaps a lot, and you can imagine that in the glove this induces some pressure signal. People say that's noise, but I don't think it is noise; it's actually information, because you can try to decouple a person's hand pose from the actual interaction with objects by differentiating those signals. Yeah.

Hey, thanks for the wonderful talk, really interesting work. I was wondering about the piezoresistive nanoparticle material — how well does it hold up in a washing machine? Okay, you're asking the hard question. So it doesn't handle the washing machine. It handles hand washing, because it handles water, but it doesn't handle sharp objects. It handles all the bending and everything, but once a needle or some hard, sharp metal thing goes in, it will break. So that is not ideal; the washing machine is not ideal in this case.

Hi, yes, thank you so much for the talk. I was curious: do you envision the articles of clothing as being very form-fitting, or are you able to account and approximate for looser-fitting clothes, as we saw in the video with the elderly woman? It was a very loose sweater. So can you take that into account from the sensing side? I know from the haptic side you'd probably need things to be quite form-fitting. Yeah, I think that is a really good question. All of the examples I showed are kind of in between tight-fitting and loose, but I think loose-fitting is the way we should go eventually. And that makes things much harder, because your clothes are folding around. But I think there is a lot of research on that, trying to make the whole system more adaptive — not only more adaptive to different people's sizes, but also more adaptive in terms of the signal.
And then as you move, or as different people wear the same piece of clothing, the sensors might fall in different locations. So how do you make that more adaptive to normal daily wear? People have mostly been trying to address it from the algorithm perspective nowadays, but I think that is a very key component for making this research work in the wild. Thank you.