Welcome to the IRIM seminar for the 2023 spring semester. Today we're glad to have Dr. Veronica Santos, who will give a talk on getting in touch: tactile perception for human-robot systems. I want to give a brief introduction to Dr. Santos. She is a professor in the Department of Mechanical and Aerospace Engineering at UCLA and the director of the UCLA Biomechatronics Lab. She has served as Associate Dean for Equity, Diversity and Inclusion, and also faculty affairs, for UCLA's school of engineering. She earned her bachelor's degree from UC Berkeley with a minor in music, which is very unique, did her PhD at Cornell, and then took postdoctoral training at the University of Southern California. She then joined Arizona State University as an assistant professor. She has received many awards, including the NSF CAREER Award, selection to the US Defense Science Study Group and the US National Academy of Engineering Frontiers of Engineering Education symposium, and many teaching awards from Arizona State and also UCLA. She has served on several committees, as an editor for the IEEE Haptics Symposium, and as an editor of several journals. So now let's welcome Dr. Santos.

Thank you so much for having me. The weather is great compared to the flooding that's going on in California, so it's nice to be able to walk around here without an umbrella. It's a pleasure to speak with you. Feel free to interrupt me at any time with questions, and there will be time for questions at the end as well.

I direct the Biomechatronics Lab at UCLA, and our mission is to improve quality of life and quality of work by enhancing the functionality of artificial grippers and human-machine systems. We do that primarily by thinking about touch: touch for prosthetic hands, touch for artificial hands. Before I dig into my talk, I just want to put in a plug for the fact that all departments in our School of Engineering are hiring, if you're at that stage of your educational career, and also tell you about this 10,000-square-foot collaborative robotics space that we're building, which will have six robotics faculty from three different departments, with a lot of complementary types of robotics. A third of that space is for communal use by anyone in our School of Engineering who is interested in robotics. So this is really exciting; we're just getting started on the construction of this space.

This is just a little teaser of things that our lab works on. There's a human side that I won't get to talk about today, but maybe over lunch or after this talk we can connect. We have studied responses to unexpected perturbations, looked at multi-digit coordination, and looked at things that are seamless for humans, like object handover, but still difficult for human-robot handovers. Recently we've done some eye tracking to infer human intent during activities of daily living, using the eyes as a guide for where we think the hands are going to go, to prepare for robots that will assist with activities of daily living. We've also done some work on human search and retrieval as a precursor to some of the robotic work on tactile search and retrieval that I'll tell you about a little later. But I wanted to begin and end this talk with some videos that you might find interesting, and that reflect where I began my career and how I've circled back to it.
I started out in neural prosthetics, specifically upper limb neural prosthetics and artificial hands, had to move away from that for various funding reasons, and have now found a way to get back to it. You've seen a lot of hands that can do very cool things, fast things, dexterous things; they don't necessarily have to look like human hands. But we're interested in tasks that humans are interested in, and I would argue that movement without touch is limiting, in that you're missing out on all those rich finger-object interactions, and on the improved planning and control that you could do if you had a complementary mode of sensing such as touch. So I'm going to start with this video, a PBS NewsHour segment from 2015, and then we'll come back to it. I'm currently collaborating with Dustin Tyler's lab at Case Western Reserve University, and he's featured here.

When I think about the importance and potential use of touch in human-robot systems, I think about human-robot systems as a continuum. I'm going to have little slides like this with very pithy thoughts about how we think about touch. The first one is that touch is useful. If you think about the continuum of human-robot systems, you can have robots inside the body, robots on the body, and robots working in close proximity to the body, whether for activities of daily living or collaborative manufacturing. But there are going to be scenarios where you don't want to put humans in harm's way, so you start moving that robot that used to be connected to or inside the body farther and farther away: extreme environments, search and rescue, handling of dangerous materials, maybe buried objects that might explode, maybe nuclear decommissioning, or working in very extreme underwater environments where human scuba diving is not really a possibility, or preparing for living in space. So there's this whole continuum, and each of these problem spaces has different needs. That's why, as you'll see a little later, I don't believe there is any one tactile sensor to rule them all. It really begins with your application, the environment, and the other constraints that go along with that.

I like to say that vision is great; we use vision when we can for planning. But there are some things that vision cannot tell you, which can only be gleaned through proprioceptive means or through direct contact, things that are not easily visible. When you don't have a direct line of sight, or your hand occludes the object, or you're working in the dark, behind obstacles, or inside your pocket: you can find the power button on your cell phone without looking. In extreme environments, in smoke or in turbid water, there are going to be times when you don't have vision, and having a complementary sense of touch would be very useful for maintaining system stability. I also threw in here some very challenging types of objects: deformable linear objects like rope, deformable shells or sheets like bills or the pages of a book, and deformable bulk materials, for which we do not currently have analytical models of how they might deform in response to contact. We can see the visual outcome of that, but that's where also having tactile sensation when you don't have a line of sight would be useful. Then I throw in: what if you're working with something that is not only bulky and deformable, but also animate,
and it's doing things that you are not going to predict? And then I had to throw in in-hand manipulation, because this still remains a grand challenge for robotics, and I believe that touch is important for that as well. All of these things, many of which happen in our day-to-day lives, we take for granted.

The second little remark is that touch is multifaceted. It's very complex; it has lots of different features that you can leverage if you find them useful. There are debates about this, but if you want to take a classical, textbook-based approach to the biological mechanoreceptors in the fingertip, there are four types. You would split them into the superficial ones, very close to the surface of the skin, which are the type I, and the type II, which are a little deeper down in the tissue. Within each of those two spatial locations you have the fast-adapting FA1 and FA2; these are the mechanoreceptors that respond to changes, to transients, in contrast to the SA1 and SA2, which are the slow-adapting, more static ones. We get a lot of bang for our buck in our robot experiments by focusing on those SA-type signals, because a lot of the tasks that we're interested in rely on information about local finger pad deformation, and it's the SA1 and SA2 that tell you about that. But there are obviously other tasks; maybe you want to haptically explore a texture, and that's where the vibration-sensing FA2 could be really useful. So again, it goes back to your application and your environment. I would argue that everything we've done, aside from trying to grow new human fingertips, is the engineering kludge or workaround to recreate things that biology has already made beautiful for us. I'm not saying that everything needs to be biomimetic, but it sure is inspirational to see that this already exists and to ask whether there are principles that we can try to recreate as engineers. That's the fun part.

So here's the part where I say we are agnostic to the type of tactile sensor that we use. We will use the sensor that encodes the actionable information we're interested in, so that either a human operator or the robot itself can make a decision. As long as the sensor encodes the information and we can decode it to make our decision, that's what we care about. We've worked with all sorts of different types of tactile sensors. I'm starting off here with the low-cost ones that are good entry points. Actually, Charlie Kemp, sitting up here, helped us: this paper would not have been possible without Charlie, because at the time no one had this piezoresistive fabric. But Charlie and his doctoral student at the time, Dr. Tapomayukh Bhattacharjee, had it. They sent us some scraps, we put together the sensor, and a collaborative paper came out of it, so thank you. A lot of connections here. The point is that if you want to do something low-cost, maybe not super high resolution, but covering large areas, which is something I think our field needs to think more about, you can embed piezoresistive fabrics in these flexible, low-cost builds. Another approach is to take barometers, leveraging all the advances that have come about from the scale and worldwide availability of cell phones: take a barometer from one of those cell phones and embed it in elastomer. That idea came from Rob Wood's lab. No, I take that back: Rob Howe's lab.
I'm sorry, Rob: Rob Howe at Harvard. And Matei Ciocarlie, who is at Columbia, has put a spin on it where, instead of having the barometers lying flat and embedded in elastomer, you can tilt some out of the plane. Now you can put them in whatever form factor you care about, and using machine learning for pattern recognition you can, in theory, back out a 3D force vector. So these are very low-cost things that you could do without specialized equipment in your lab. They're better than nothing; they're not super high resolution, but depending again on your research question, they could be just right for what you need, and budget-wise just right as well.

Going a little more advanced, I have a collaboration of more than a decade now with Jonathan Posner. We started off as next-door office neighbors at Arizona State; he went off to the University of Washington, I went to UCLA, and we've continued this collaboration. This is the very first tactile sensor we ever made, with micro-scale channels in PDMS, a very soft, stretchable elastomer. You fill the microfluidic channels with a liquid metal and use that liquid metal channel as a fluidic wire: you can stretch it, twist it, bend it, and it won't fatigue. The dirty little secret that all microfluidics people know about is that the connection from the liquid wire to the rigid wire is the most vulnerable part, and no one has figured that out yet. If this thing has a problem, it's because the wires pull out at that transition point. This is a capacitive-based approach. It looks like these wires are intersecting with one another, but if you look at this cross-sectional view they're actually passing over one another, and where they pass over one another they create a little capacitive taxel, which is the tactile analog of a pixel. So these little units form a five-by-five grid of taxels that only sense normal force, because, being capacitors, they only sense the force that brings the two wires together.

We then transitioned to a resistive-based approach. Now imagine that you have strain gauges, but they are fluidic and they are tiny. This serpentine design is just a liquid metal strain gauge that we have put on different sides of the nail bed here. The reason we did that is that if you press your own finger on the table and shift it left and right, and think about where the strain is that you're feeling, it's where your skin attaches to your fingernail. So you can put these types of strain gauges near the nail bed to measure shear, while leaving all of this other area open for your normal-force taxels. That idea came about through some work with a sensor I'll show you in a minute, called the BioTac, where we realized that you can also leverage deformation information from the whole finger pad, even in areas that are not directly contacting the object of interest. So we put a pair of these on either side to infer shear in this direction, and another pair running the other way for the x-direction; now you have 2D shear. Then, in this open area where you expect direct contact, we transitioned to these isotropically designed spiral strain gauges for normal force sensing. Now you have 3D force. But the coolest thing we found was that by transitioning from the capacitive to the resistive approach, we didn't have that capacitor-like recharge delay, so we can actually measure vibrations up to 800 Hz with this resistive approach.
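To make that 3D force idea concrete, here is a minimal sketch of how readings from a layout like this might be combined into a force estimate: differential pairs of shear gauges for x and y, a spiral gauge for normal force, mapped through a calibration matrix fit from known loads. The channel names, the least-squares calibration, and the numbers are illustrative assumptions, not our actual calibration code.

```python
import numpy as np

# Fractional resistance changes dR/R0 for five liquid-metal gauges
# (hypothetical ordering): [shear_x_left, shear_x_right,
#                           shear_y_front, shear_y_back, spiral_normal]

def fit_calibration(dR, F_true):
    """Least-squares fit of a 3x5 matrix C such that F ~= C @ dR,
    using trials with known applied forces (e.g., from a load cell)."""
    # dR: (n_trials, 5), F_true: (n_trials, 3)
    X, *_ = np.linalg.lstsq(dR, F_true, rcond=None)
    return X.T  # shape (3, 5)

def estimate_force(C, dR_sample):
    """Map one sample of gauge readings to an (Fx, Fy, Fz) estimate."""
    return C @ dR_sample

# Example with made-up data: 200 calibration trials.
rng = np.random.default_rng(0)
F_true = rng.uniform(-1.0, 1.0, size=(200, 3))            # newtons
mixing = rng.normal(size=(5, 3)) * 0.02                    # unknown sensor response
dR = F_true @ mixing.T + rng.normal(scale=1e-4, size=(200, 5))

C = fit_calibration(dR, F_true)
print(estimate_force(C, dR[0]), "vs true", F_true[0])
```

In practice the mapping is not perfectly linear, which is exactly where the machine-learning pattern recognition mentioned above comes in.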
Most recently we have taken that design and modernized it for use underwater. We had to create a whole new setup that we never had before to test the sensors underwater, under pressure. Imagine going hundreds of meters underwater and making sure that not only will this not leak, but that it will still be sensitive over very high pressure ranges. I think we have a lot of work to go; we have another two years coming to make this more robust. But this is, to my knowledge, one of the first types of multimodal tactile sensors that can be used underwater, under pressure, for things that the Navy cares about. For example, instead of sending a human scuba diver to blindly feel along the hulls of ships for mines, or around pier pilings, you could have a remotely operated robot with touch, either making some semi-autonomous decisions or sending some of that information back to the warfighter to make their own high-level decision. So underwater tactile sensing is pretty exciting. We're just getting started on making the devices, but there's still a lot of research to be done, because if you think about how tactile sensors work, unless you're thinking about thermal or chemical stimuli, they require a mechanical deformation. If you don't have the right frictional properties, then that deformation energy does not go into your sensor and you don't sense anything. So testing these underwater, figuring out the threshold grip forces at which you still maintain a grasp in oily conditions or whatever: all of that work remains to be done.

Many of you are probably familiar with camera-based approaches to tactile sensing. They're fairly new, but they have really allowed for an explosion in the tactile sensing area, because computer vision people can take all of their algorithms and directly apply them to a different type of data, which opens up all of these other applications. We have a collaboration with Ted Adelson at MIT, whose team created the GelSight, and we've got a variety of GelSight evolutions. What you're seeing here is one of the older versions that's a little bulkier than what you can get now, but the key is to have an elastomeric finger pad. Their secret sauce is some highly reflective material on the inside, and you put a camera on the inside; imagine you're inside the finger looking out at the world. You can even enhance your tactile images by putting dots on the gel, and some dots are shown here. By applying known stimuli, 3D forces and torques, and measuring the visual change in that array, you can map geometry, local shape, forces, and torques. This little video shows what it looks like when you press this GelSight against cables that are partially buried in granular media. Building on that idea, other people, including Facebook, now Meta, have created the DIGIT. It started out as an open-source recipe, so if you want to build your own, the recipe is out there, and they also now have a partnership with GelSight; you can buy GelSight Minis and they're very affordable. So if you're interested, that is a nice pathway into this computer-vision-based tactile sensing approach.
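As a rough illustration of what "measuring the visual change" can mean, here is a minimal sketch that tracks printed marker dots between a reference frame and a contact frame with sparse optical flow, giving a displacement field that could then be regressed to shear and indentation. The file names and this particular OpenCV pipeline are illustrative assumptions, not the GelSight software.

```python
import cv2
import numpy as np

# Reference frame (no contact) and a frame during contact, from the internal
# camera of a GelSight-style sensor. File names are hypothetical.
ref = cv2.imread("gel_reference.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("gel_contact.png", cv2.IMREAD_GRAYSCALE)

# Detect the printed marker dots in the reference image.
dots = cv2.goodFeaturesToTrack(ref, maxCorners=200, qualityLevel=0.01,
                               minDistance=8)

# Track each dot into the contact frame with pyramidal Lucas-Kanade flow.
tracked, status, _ = cv2.calcOpticalFlowPyrLK(ref, cur, dots, None)

# Displacement field: how far each marker moved under contact.
ok = status.ravel() == 1
disp = (tracked - dots)[ok].reshape(-1, 2)

mean_shear = disp.mean(axis=0)                  # crude proxy for net tangential load
spread = np.linalg.norm(disp, axis=1).std()     # crude proxy for indentation pattern
print("mean marker displacement (px):", mean_shear, "spread:", spread)
```

From there, a learned mapping from the marker displacement field to forces and local geometry is what gives these sensors their effective resolution.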
The sensor you're going to see the most in this talk is the BioTac, and that's only because I worked on it when I was a postdoc, way back when it was funded by DARPA for their Revolutionizing Prosthetics program. It was meant to be put on prosthetic hands, which is why it's bio-inspired. It has a rigid core, like the bone in your finger, and an elastomeric skin that's inflated away from the core by a conductive fluid. We primarily use this array of electrodes; I'll show you the sensor without the skin a little bit later. If you look at the spatiotemporal changes in electrode impedance, you can infer the overall finger pad deformation. If you tap into the fluid volume and put in an off-the-shelf pressure sensor, now you have a hydrophone: you can sample it slowly to get a very crude measurement of overall fingertip force, or at up to 2,200 Hz to infer vibrations and estimate the texture of an object that you're interacting with, because the skin actually has a fingerprint on it to amplify the vibrations. If you use it too much, the fingerprint wears out; it's meant to be a consumable. So you've got vibration, finger pad deformation, and overall fingertip force, and there's also a thermistor in here: if you measure the rate of heat conduction away from the finger, you can infer material properties. What my lab primarily does is look at the finger pad deformation. We collect all of the data, we do ablation studies on the back end, and we find that most of the things we have cared about recently trace back to local finger pad deformation. I should disclose that I am still involved with SynTouch, on their board of advisors.

Alright, so I've shown you this smorgasbord of different tactile sensor types. These are just devices that take mechanical stimuli and turn them into digital signals that you can measure, but that really doesn't take you all the way to the features of interest that humans care about. So the next point is that touch is abstraction. Haptic perception, which I distinguish from sensing: to me, sensing is just having a device collecting some stream of numbers, while perception is abstracting those streaming numbers into something meaningful like texture, hardness, temperature, or weight. If I asked you what the roughness of the chair you're sitting on is, imagine what you would do. You would probably rub your finger across the chair to infer the texture properties. That's exactly what Lederman and Klatzky found in their classic 1987 paper, where they had participants come in, gave them a bunch of objects, asked them to describe the objects, and then observed the exploratory procedures, or exploratory movements, that the participants made. There are very specific, classical movements that you make when you're trying to estimate a specific property: if it's hardness, you squeeze the object; for temperature you usually have static contact; rubbing for texture; using the whole hand for global shape; but actually following the contour with a finger for local shape. All we did was replicate those exploratory movements on a robot with BioTac sensors. This was done about eight years ago, but we did work on edge orientation, with edges of different thicknesses and edges at different orientations relative to the body, because we thought that would be the precursor to the next level of decision-making for contour following. We also did bumps and pits of different sizes and shapes, some flat, pyramidal, or hemispherical, with different shapes and sizes relative to the finger.
What you see in this little inset is how the electrodes are changing, what the impedance is in these different electrode regions on the fingertip, as the exploratory movement happens. Back then we were using support vector machines for classification and support vector regression; we use convolutional neural networks these days, but that's the approach we took back then.

More recently, we've done tactile perception of directionality. If you're holding an object like your water bottle in front of you and you set it down, there's relative motion of the object within your grasp as the object contacts the table. We want to be able to measure that directionality, because you can do that with your eyes closed. What we did was flip the problem when we collected data: instead of perturbing the object relative to the fingertip, we perturbed the fingertip relative to the object, and just collected thousands of trials. You can visually see the finger pad deforming here, but we're collecting all the information, especially the electrode impedance data from the BioTac. Then, and here's the sensor without the skin, we took these 19 electrodes, mapped them onto this manifold, and interpolated between them, assuming that they're all bathed in the same continuous bladder of fluid. When there's a positive change in electrode impedance relative to the baseline contact, that tells us the skin is getting compressed toward the core of the finger; if we see a negative change relative to that baseline, the skin is bulging away from the core of the finger. Once you have this manifold, you can map it to grayscale or color to get this quote-unquote tactile image, and then apply traditional convolutional neural network approaches. I remember talking to Ted Adelson when we were doing some of this work, and he said that if you as a human can visually see the differences and categorize them in your head, he could pretty much guarantee a neural network can do that for you. So we used convolutional neural networks to distinguish between these. We actually did it for thousands of random angles within the full 360 degrees, but I'm just showing you the key orthogonal versions so you can compare them.

Once you've learned that, then in real time you can use the trained model to infer tactile direction. What you're going to see in this upper left image is a little video with a line, and that line shows, on this 360-degree pie, the direction the robot thinks the object is moving relative to the fingertips. The first example is a handover, and Dr. Gutierrez programmed it so that the robot would only release the object when it felt the object being pulled away from the palm, which is out in this direction. If you have this ability, then you can design context-appropriate responses for the robot when it's interacting with an object or with the environment; in this case, when you get that relative upward motion, the object you're grasping is probably coming into contact with the support surface, so release the grasp.
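Here is a minimal sketch of that pipeline: scattering the 19 electrode impedance changes onto a 2D grid, interpolating them into a "tactile image," and classifying the perturbation direction with a small CNN. The electrode coordinates, grid size, number of direction bins, and network are illustrative assumptions, not the lab's actual model.

```python
import numpy as np
from scipy.interpolate import griddata
import torch
import torch.nn as nn

N_ELECTRODES, GRID, N_DIRS = 19, 32, 8   # assumed sizes

# Hypothetical 2D layout of the 19 electrodes on the unrolled finger pad.
electrode_xy = np.random.default_rng(0).uniform(0, 1, size=(N_ELECTRODES, 2))
gx, gy = np.mgrid[0:1:complex(GRID), 0:1:complex(GRID)]

def tactile_image(impedance_delta):
    """Interpolate per-electrode impedance changes into a GRIDxGRID image."""
    img = griddata(electrode_xy, impedance_delta, (gx, gy),
                   method="cubic", fill_value=0.0)
    return torch.tensor(img, dtype=torch.float32).unsqueeze(0)  # (1, H, W)

class DirectionNet(nn.Module):
    """Tiny CNN that maps a tactile image to one of N_DIRS direction bins."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(16 * (GRID // 4) ** 2, N_DIRS)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One fake sample: random impedance changes mapped to a predicted direction bin.
net = DirectionNet()
img = tactile_image(np.random.randn(N_ELECTRODES)).unsqueeze(0)  # (1, 1, H, W)
print("predicted direction bin:", net(img).argmax(dim=1).item())
```

A regression head over a continuous angle would work just as well; the point is that once the electrode data look like an image, standard vision tooling applies.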
This is one of the more exciting things that we've done recently, because typically tactile sensing is done in an open-air environment, where it's very easy to tell whether you're in contact or not and what to care about in your stream of data. But when you plunge your hand into sand or snow or rice, any type of granular media, now you're stimulating everything. The perception problem becomes how to find the feature of interest that is now buried in all this other distracting tactile noise generated by the granular media. What you'll see in the upper right-hand corner are two different views of what's going on at the fingertip: the top one is the torque measured at the base of the finger, and the bottom is the estimated force on the BioTac finger. What you're going to see is that as the finger passes through the granular media, it flows freely through, and there's some baseline amount of force on the finger. At some point there will be what's called granular media jamming. Daniel Goldman, in your physics department here on campus, is an expert in granular media; he could probably tell you more about granular media jamming. But my understanding is that these granular media particles can act like a solid or a fluid or something in between, depending on how densely packed they are, the frictional characteristics between the particles, and so on. Imagine that you're able to just freely move through the granular media: as your finger, or intruder, pushes through, some particles get out of the way. But if you have something buried back here that is stopping the particles from getting out of the way, they start to bump up against one another, forming what's called a force chain. So there will be a set of particles such that, when you're way out here, you feel like you're touching the object, but you're actually feeling forces through this force chain.

We thought this was a liability, because when you're trying to do occupancy mapping and create a map of something buried that you're trying to find, how do you know whether the object is that big and you're directly contacting it, or whether the object is much smaller, has a different shape, is in a different orientation, and you're just getting tricked by the granular media jamming? What we discovered is that you can think of it more like a proximity sensor. You have a tactile sensor, you put it in granular media, and all of a sudden it becomes like a proximity sensor. If you understand the characteristics of the granular media that you're in, then you actually have a more conservative, safe threshold where you can stop the finger back here, because you understand the jamming, and then you can use your understanding of the granular media characteristics to refine the map of what's buried. You'll see little white particles that we believe are part of this quote-unquote soil failure zone that moves ahead of the finger; think of it like a lidar, but in sand. When that failure zone makes contact with the object, you'll see a spike, or a steady increase, in the force on the fingertip. Here's that soil failure zone; it's going to make contact with that buried block well before the fingertip does. We actually stopped the fingertip before there's contact, but you can already see a reflection of that granular media jamming. So we are using this to our advantage, leveraging the fact that granular media turns our tactile sensor into a proximity sensor. I'll just play this little video here: we're moving freely through the sand.
You have the soil failure zone, but as soon as you come upon something buried, you're going to get jamming. This is our first foray into this, so the classifier is pretty simple: it's basically using all the data we have from the BioTac and asking, are we freely moving through sand, or are we actually in contact, through jamming, with something that's buried? Then you can replay this and combine it with Bayesian Hilbert maps and occupancy maps to create a probabilistic map of where you think something is buried. I'm just showing you two exploratory movements, or EMs, here, but Dr. Zhao also extended this to select the next exploratory movement. You can imagine that touch is very expensive, and especially if you're potentially interacting with something dangerous, you want to minimize touches and minimize large forces. So based on the current map, you select the next exploratory movement that would most efficiently add information and reduce uncertainty. This is what it looks like after seven exploratory movements, and of course it's up to the designer to set the accuracy threshold that determines when the robot stops. Here the robot is doing this semi-autonomously; maybe you want to send that information to a remote human operator, or maybe you want someone to be monitoring multiple robots doing this at the same time and just check in at a high level. But the point is that, even though this is not directly working with individuals with limb loss, at least it's an application that might prevent new cases of limb loss. So that's one of the ways we think about this work.
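To illustrate the flavor of that loop, here is a minimal sketch under heavy assumptions: a placeholder contact-through-jamming classifier over BioTac features, a simple log-odds occupancy grid standing in for the Bayesian Hilbert map used in the actual work, and a next-move selector that greedily picks the sweep with the most remaining uncertainty. Every function, threshold, and number here is illustrative.

```python
import numpy as np

GRID = 20                        # coarse occupancy grid over the sandbox
log_odds = np.zeros((GRID, GRID))

def jamming_contact(features, threshold=0.6):
    """Placeholder classifier: freely moving, or feeling a buried object
    through a force chain? In practice this is learned from BioTac
    electrode/pressure data; here it is just a thresholded score."""
    return float(np.clip(features.mean(), 0, 1)) > threshold

def update_cell(ix, iy, contact, l_occ=1.2, l_free=-0.4):
    """Log-odds occupancy update for the cell the fingertip is sweeping."""
    log_odds[ix, iy] += l_occ if contact else l_free

def occupancy():
    return 1.0 / (1.0 + np.exp(-log_odds))

def entropy(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def next_sweep(candidate_rows):
    """Greedy choice: sweep the row whose cells have the highest total
    uncertainty (a crude stand-in for expected information gain)."""
    H = entropy(occupancy())
    return max(candidate_rows, key=lambda r: H[r, :].sum())

# One mock exploratory movement across row 5 of the grid.
for iy in range(GRID):
    feats = np.random.rand(19) * (0.9 if 8 <= iy <= 11 else 0.3)
    update_cell(5, iy, jamming_contact(feats))
print("next row to sweep:", next_sweep(range(GRID)))
```

The design choice to treat jamming as "proximity" simply shifts where the occupancy update fires: the classifier triggers before true contact, so the map can be refined while keeping forces low.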
Alright, so touch is expensive. You can't go online and just download millions of tactile samples the way you can download images from vision repositories. I think there was a push to create an open-source, shared repository of touch information, but I think that's a very ambitious approach, because even with my own robot setup, my own tactile sensor, and my own experimental testbed with some granular media thrown on, every trial is different. And even if a colleague tries to recreate that in their lab, subtle differences in fingertip movement, maybe a slightly deeper raking motion, maybe a slightly faster motion, drastically change the type of information that your tactile sensor encodes; I'll show that a little bit later. The point is that touch is expensive, and we don't yet have good analytical models, especially for the BioTac. You might be able to model how the finger pad elastomer deforms and the fluid inside, but bridging that finger pad shape to the 19 different electrode readings, I don't know how to do that. I believe NVIDIA has done this and published it, but they modeled the mechanical deformation and then bridged that last gap with neural networks. So how do we learn to use tactile sensor data when it's so expensive? It costs student time, it causes wear on the robot; it's just very expensive. So we started entering into this reinforcement learning branch of work. This was our first foray into it, so it's a pretty simplistic reinforcement learning approach: we used contextual multi-armed bandits, and in this paper we compared against Q-learning as a benchmark. The point is that we were doing a contour-following task with a deformable object, a Ziploc bag. We did not have a model of the tactile sensor, and we did not have a model of the bag.

By learning through experience, we were able to learn a policy for closing the bag. What you have here is an arm with BioTacs. We set up a camera, and the camera is basically meant to automate the reward and learning process, but once the policy is learned, you don't actually need vision, you just need touch. You might wonder, why use touch anyway? Isn't it sufficient to just know the direction of the zipper and follow it? Well, here's an example: you could easily distinguish the line of the zipper visually, but it's very easy for the finger to slip off the zipper. So you're going to need some step-by-step adjustment of the finger relative to this deformable object, and that's where we believe touch is very important. We set up a very simple state-action-reward space. We said, okay, very crudely, let's split the finger pad into different states: low, center, high, these regions on the fingertip. If you want to make it all the way across this five-inch-long zipper, you probably want to keep the zipper close to the center of the finger pad; otherwise it's going to pop up and fall out of the grasp, or the bag itself will fall out of the grasp. So we reward keeping the zipper near the center of the finger pad. This set of actions came about after some preliminary study; we tried more, but this set was most efficient, so these are our five actions. Then we have the robot try each action. What you'll see up in the upper left-hand corner is what's going on with the impedance electrodes, and then we have a classifier that tells us what region of the finger we're in relative to the zipper, and we apply a reward or not. The dots on the finger are just there to automate this state and reward system. You can see there's a lot going on here with the finger pad deformation that we don't currently have the ability to model. So touch is expensive, but the whole point of this paper is that learning while doing is a more efficient way of using tactile sensor data.

Once we learned this policy, we could apply it to other novel bags and loading conditions. You'll notice this motion planning delay; at the time we just cared about showing the feasibility of the approach and had no interest in making it go faster, so we just have to deal with that delay in these videos, which is why they're all sped up. The previous bag was a McMaster-Carr bag, which I'm sure you're all familiar with; this is a more flexible kitchen-type bag, and we loaded it in different ways. We also tried to apply the same learned policy to wires and cables, but it wasn't very successful, so that part did not generalize.
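Here is a minimal sketch of what a contextual multi-armed bandit for that kind of contour following could look like: the context is the coarse zipper-position class on the finger pad (low, center, high), the arms are a handful of finger adjustments, and the reward is 1 when the zipper stays near the center. The action names, epsilon-greedy rule, and reward values are illustrative assumptions, not the paper's exact formulation.

```python
import random
from collections import defaultdict

CONTEXTS = ["low", "center", "high"]          # zipper position on the finger pad
ACTIONS = ["slide", "slide_up", "slide_down", "pitch_up", "pitch_down"]

# Per-(context, action) running estimate of expected reward.
values = defaultdict(float)
counts = defaultdict(int)

def choose_action(context, epsilon=0.1):
    """Epsilon-greedy contextual bandit: mostly exploit, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: values[(context, a)])

def update(context, action, reward):
    """Incremental mean update of the action-value estimate."""
    key = (context, action)
    counts[key] += 1
    values[key] += (reward - values[key]) / counts[key]

def reward_from_tactile(next_context):
    """Reward keeping the zipper near the center of the finger pad."""
    return 1.0 if next_context == "center" else 0.0

def mock_classifier():
    """Stand-in for the tactile classifier that labels the current region."""
    return random.choice(CONTEXTS)

# Mock training loop: classify the tactile state, act, observe, update.
context = mock_classifier()
for _ in range(1000):
    action = choose_action(context)
    next_context = mock_classifier()          # would come from the BioTac
    update(context, action, reward_from_tactile(next_context))
    context = next_context
```

Because each decision only needs the current tactile context, the learned table is cheap to query at run time, with no vision in the loop once training is done.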
More recently, we have a paper on flipping a notebook page. Again, we were looking for a task where touch, and especially fine features of touch, is particularly important. One of my students led this paper; we used a Kinova arm, two BioTac sensors, and a binder, and the motion capture markers are just there to help automate the rewards. Our reward function is based on the fluid pressure, that is, the fingertip force; the electrode impedance, that is, what's going on with the finger pad shape, and you wouldn't think there's much going on when you're just flipping a page, but I'll show you on the next slide that there is; and finally binder movement, where we penalize moving the binder too much.

Because we were trying to demonstrate one of the first uses of tactile-driven reinforcement learning, we had to come up with some different states, or behaviors, and pick one that was ideal. Our ideal case is a semicircular trajectory. Imagine that you know the radius from the grasp point on the page to the binding of the book; if you want the smoothest semicircular trajectory, you can in theory do that right off the bat. But what if you do not know the particulars of this page and you just start learning? If you use too small a radius, you're actually holding the page too close to the binding, you get some page warping, and then you'll see some snapping at the end. If you attempt a radius that is too large, now you're pulling on the binder and lifting it up off the table. So we call those the undesirable behaviors, and we had to create this artificial ideal case to demonstrate the approach. I'll play these videos. I think the most interesting behavior is the warping trajectory: you can visually see the warping, and then you see that snap of the page as the page flips over.

For each of these behaviors you can collect rich tactile sensor data, and this is just showing you four different electrodes, telling you about finger pad deformation. You don't have to look at all the details, but I wanted to show you that the top finger in this row experiences something different from the bottom finger, and I'm sure that if you turned the wrist a little bit, it would be different again. This goes back to my point that tactile sensor data are hard to share across groups, because every experiment is so unique. Then, within each finger, different regions of the finger experience different things. And if you pick one finger and one electrode, depending on the behavior, you get three different signatures. That was the key; that's what he spent most of his time on: finding something in the tactile sensor data that distinguishes these three states we care about. And once he did, you can see in these grey regions here, this is where a lot of the snapping happens, and you can see that in these little spikes in blue. His reward function was crafted to try to approximate these smooth green trajectories that we call ideal. Once he learned a policy, he then showed in that paper that he could adapt it to different contexts, and we used page size as the context. On the top row you have small pages, on the bottom row you have large pages; the first column is the very first policy update, and the last column is where we ended. Basically, if you don't know the size of the page, you're going to learn, through touch only, what the appropriate movement is for that particular context.
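As a sketch of how a reward like that might be assembled from the BioTac channels, here is one hedged example: penalties for pressure spikes (snapping), for large electrode-impedance excursions (page warping against the finger pad), and for binder motion, plus a penalty for drifting from a nominal semicircular trajectory. The weights, thresholds, and signal names are illustrative assumptions, not the paper's reward.

```python
import numpy as np

def page_flip_reward(pressure, electrodes, binder_motion, traj_err,
                     w=(1.0, 0.5, 2.0, 1.0)):
    """Composite reward for one page-flip attempt.

    pressure:      array of BioTac fluid-pressure samples (a snap shows up
                   as a sharp spike)
    electrodes:    (T, 19) array of impedance changes (warping shows up as
                   sustained excursions)
    binder_motion: total binder displacement in meters (should stay near 0)
    traj_err:      RMS deviation of the fingertip from the nominal semicircle
    """
    w_snap, w_warp, w_binder, w_traj = w
    snap_penalty = np.max(np.abs(np.diff(pressure)))     # sudden release
    warp_penalty = np.mean(np.abs(electrodes))           # sustained deformation
    return (- w_snap * snap_penalty
            - w_warp * warp_penalty
            - w_binder * binder_motion
            - w_traj * traj_err)

# Example with made-up numbers: a smooth flip vs. one that warped and snapped.
t = np.linspace(0, 1, 200)
smooth = page_flip_reward(0.02 * np.sin(t),
                          0.01 * np.random.randn(200, 19), 0.0, 0.01)
snappy = page_flip_reward(np.r_[0.02 * np.sin(t[:-1]), 0.5],
                          0.05 * np.random.randn(200, 19), 0.02, 0.05)
print(f"smooth flip reward {smooth:.3f} > snapping flip reward {snappy:.3f}")
```

The point of shaping the reward this way is that all of the terms come from touch and proprioception, so the learned policy needs no camera once training is finished.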
Alright, I've got two more points about touch. The very brief one is that touch is shareable. Everything I've shown you so far is really about developing capabilities for robots to do semi-autonomous things, but you can also send the tactile data to a remote operator. This is me in 2022 trying out this tactile telerobot that made it to the Avatar XPRIZE finals, and I'm wearing these HaptX gloves that use little pneumatic bladders to press against my fingers when the BioTacs over here are pressing against an object. It's really difficult to do; you really wish you had vision for this type of task. But the point is that you can take the tactile sensor data and have the robot do some things semi-autonomously, or you can send it somewhere else. So now we come back, with a follow-up six years later, in Dustin Tyler's lab.

That brings me to the last comment I have to make about touch, which is that touch is connection. I've spent most of my professorial career thinking about touch from a very task-oriented perspective, about what type of touch information you need to put the peg in the hole or close this bag or whatever, but not from the social perspective. Through a confluence of factors, the pandemic among them, with no hugs and no handshakes for a long time, there is a real void created without touch as a means of connection with other people. So that's a direction my lab is heading now. To be clear, affective touch research has existed for a long time; it's not brand new. It's just that my lab is only now beginning to get into it: this idea of touch for connection with new little ones, your loved ones, or grandchildren. We had the honor of being highlighted, among many other labs, in this National Geographic cover story on the power of touch. Cynthia Gorney wrote the article and Lynn Johnson took the photographs. They made this the cover image because of that critical period of bonding between the mother and the newborn: if you haven't heard of it, that direct skin-to-skin contact as soon as the baby is born is believed to be very important for building these connections. So that's why they picked that for the cover. But the point is that there are all these other aspects of touch that are not just task-oriented.

As part of this, UCLA and some other institutions are part of the Human Fusions Institute that was initially created at Case Western Reserve University, and if anyone is interested in joining, feel free to reach out to me and I'll connect you with Dustin. The point is that we're directly connecting human and robot experiences using what this team calls NeuroReality, to enable humans to be physically in one place and experientially in another. Again, this is not a new idea; the idea of telexistence has existed for decades, coming out of Japan. But I think what people are thinking about more now is not just using a remote robot to perform a task, but actually using a remote robot to have a social connection with someone: to really have the person feel like they're not using a tool, they are using their own body in this other location. The thing about NeuroReality is that it doesn't mean you have to make everything realistic, because you can imagine taking lidar or other proximity information and sending that to someone's nervous system. That could be their new reality through which they learn to operate, even though, as far as I know, we don't have biological proximity sensors.

So here's our robot system. Charlie came to the rescue again: we already had one of these Stretch robots in our lab as part of a different collaboration, so it was readily available for this Avatar XPRIZE that we entered. And then in Cleveland we have the operator system. Again, there are many different approaches you can use. For the robotic system, we purposely went for something that is human-safe, low-profile, and easily scalable, because we have visions beyond the Avatar XPRIZE competition. Similarly for Dustin Tyler's group; we're both very interested in health care applications.
While he does do experiments with peripheral nerve stimulation, you realize that not everyone has the capability or desire to have an invasive device like that, so he's also developing wearables. This particular one uses electrotactile stimulation. People always ask, does it feel natural? The answer is no. But my response, with my practical engineering hat on, is: is it useful information? If you combine visual, audio, and tactile data, even if the tactile data are not as natural as your own, as long as those data are synchronized and useful for performing a handshake or whatever it is you're doing, that's good enough for me.

So we connected this Stretch system in an attempt to create an immersive teleoperation environment. The goal is something seamless, full-body control. We could have gone with exoskeletons and lots of third-person cameras, things like that; instead we're leveraging the headset's built-in hand-tracking capabilities, which is why the operator is wearing a white glove to help with the visual contrast, and we're going after multimodal feedback. So this was our entry. We made it to the semifinals in Florida, which is fine by me: we got in, we bonded, we created a collaboration, and now we're continuing it after the competition is over. This is Dustin interacting with a judge. I think one of the most unique and scary things from our perspective as a competitor is that you do not operate your own robot in the competition. Judges who are experts in the field, recruited for this, operate your robot. You have an hour to train them. If it's not intuitive, or they don't learn what you've trained them on in that hour, that's it; you have to watch for the next hour as things go wrong, and you can't do anything about it. So this is Dustin during the training period. There's a judge here who will take his place, observing how he's interacting, and there's a judge in a different room who is embodying this robot and learning how to operate it as Dustin presents different things and shows them: look down at your hand, I'm going to shake your hand, can you grab this object, and so on. You only have an hour for that.

With this very slim robot, we focused on vision and touch. We actually strapped on a binocular camera and skipped the Intel RealSense, because we weren't trying to develop any semi-autonomous behaviors; we wanted binocular stereovision data to go directly into the Oculus of the operator. And it's pretty compelling: when you're the person interacting with the robot, they look like eyes. One of my favorite moments is when my student says, okay, we're about to connect, and then all of a sudden the robot comes to life. We don't filter out any movements, and it really feels like someone is there. You actually have to think about things like peripersonal space. We had the National Geographic writer in Cleveland, and she was visiting me through the robot, and I popped in close and said, hey Cynthia, and she got startled because she didn't have a wide field of view and didn't know I was going to do that. So there are actual social things that we're going to have to think about when people begin embodying these robots in social situations. We also had this prosthetic hand. The sensors on it are FSRs, probably the lowest-spatial-resolution, cheapest type of sensor you could put on, but that's what we had to work with.
The ones in the fingertips were built in. I would love to have more sensors for the palm of the hand, because not everything we do is with the fingertips; even just to do a handshake, we needed more sensing. So we put additional FSRs on the proximal palmar aspects of the digits, and there are some tactile sensors already built into the palm of this particular hand. Then we send that to this operator, who has been trained for an hour and then has to perform these different tasks. There was a jigsaw puzzle task, there was feeling a vase and reporting on its texture, and there was picking up a mug and doing a toast. For the actual finals they went more aerospace- and NASA-directed: you go up and learn a mission from someone, so there's that interaction, and then once you understand your mission, you drive over and do things like using a drill, finding the roughest rock that's supposed to be for fuel, or picking up different fuel cells and putting them in different slots. The point is that vision and touch are both very important for situational awareness, and we're looking forward to extending this. We've got some ONR funding coming for the next two years to take our current capabilities and put them onto Navy-relevant hardware, with Navy-relevant tasks and so on. So that's pretty exciting.

For the last slide, I'll just leave you with some areas that I think are exciting; if you're looking for a new PhD topic, these are unsolved problems. I think that having proximity sensing local to the hand is very useful. This is from Jacob Segil's lab; they actually use the same unit to do proximity and tactile sensing. A computer-vision-based approach has also been recently published: imagine the skin, whether on the fingertip or as a whole skin-like sleeve on the forearm. If you control the opacity of the skin and look through it with a camera, it's a proximity sensor; if you make the skin opaque and look at the deformation of the skin, now you have a tactile sensor. So I think that's an exciting new direction. Sim-to-real: we'd love to have more ways of modeling tactile sensors and finger-object interactions, and that would open up capabilities for model predictive control, which I know other labs do based on kinematics and computer vision and things like that. We've kind of been shut out from that so far, because we don't have a way to predict what would happen in the tactile sensor regime under different contact conditions.

I also think there's a new subfield of robotics coming up now, which is social physical human-robot interaction. There's social human-robot interaction; you can think of robots for autism, where there's no contact, but the robot interacts with individuals on that spectrum. There's also physical human-robot interaction; maybe you're collaboratively lifting and reorienting a table. But then there's social physical interaction. Dr. Alexis Block is a postdoc in my lab, and for her PhD with Dr. Katherine Kuchenbecker she built HuggieBot; in fact she built four versions of HuggieBot. This robot is meant to give hugs, and it seems trivial, but if you read her dissertation and her papers, there are so many subtle things: the timing, the force, the responsiveness, whether it's at an appropriate height. All of that is social physical human-robot interaction.
She's very interested in extending that; her research vision involves looking at wellness and mental health through robotics. And finally, I think that as robot avatars become more common, and are not just used for things like nuclear decommissioning, they could enable inclusion in our society. There's a cafe in Tokyo where individuals with impairments can log in from home, embody a robot, be the waiter or waitress in the cafe, and interact with people. I believe this is also a way to enable fuller participation in society for a lot of our vulnerable populations.

What I don't show here is large-area tactile sensing. One of our next projects is to develop tactile sensors for the palm, and I would love to have whole-body tactile sensing. This is an area people are interested in, but once you start scaling up the devices for collecting the information, you have to think about how you do the signal processing. Federal agencies, it seems, are actually really excited right now about event-based, neuromorphic tactile sensing: you're not monitoring your tactile sensors all the time; you monitor them when something interesting happens, when something spikes. So that's neuromorphic tactile sensing. Alright, I would not have been able to give this talk without my collaborators, my funding agencies, and all the students who did all the really hard work. Thank you for your attention.

Great, so we'll have a few minutes for Q&A.

I think you mentioned it briefly: how many sensors would you have, say per square centimeter, on a human hand?

On a human hand? In the literature it says you have about 2,000 mechanotransducers in one fingertip. So imagine that. That's why no one is trying to create these 10,000-taxel artificial sensors. I actually get that question a lot, about how many taxels it would take to be like a human fingertip, and I like to turn the question around: what is the minimum number of taxels that encodes the information you need to make your decision? Even in the page-flipping task we had 19 electrodes, but because my student had to do all this processing and then put it through the RL algorithm, 19 was too much, so he cut the finger pad into six regions and used PCA, because he had too much data. So again, I think it's about the type of data and making sure you get just enough information; you don't need to go overboard and have hundreds of thousands.

That's what my question was getting at. I imagine for the whole body you have so many thousands, and from a processing perspective that's a lot. Thank you.

Any other questions? I'm familiar with vision-based tactile sensors like GelSight and have been looking into research on them, but you also mentioned a lot of different types of tactile sensors. In your opinion, what can the shortcomings of vision-based tactile sensors be, and which other sensors can complement them?

Yes. There's no one perfect tactile sensor. You have an array of different types to choose from, and all of them have their pros and cons. For example, you can get really beautiful, high spatial resolution from a vision-based tactile sensor, but right now you're limited to about 30 frames per second; if you care about vibration, maybe that's not the best way to go.
There are other complementary sensor types; you could throw on an acoustic sensor or an IMU or something like that. It doesn't just have to be the types of tactile sensors that I showed; you can have a suite of sensors. But the point is that there are things beyond the traditional third-person camera perspective that can give you information and provide situational awareness to the robot or to a remote operator.

Just one follow-up question: what if you want highly sensitive sensing that can also detect high forces? What kind of sensors would you use?

Great question. If you think about light touch, the literature suggests that light touch happens on the order of 0.1 newtons, and a lot of our work uses tactile sensors that can detect that, like the BioTac or the microfluidic skin. But if you want to do something underwater, like the underwater robot I showed, they have to grip with up to 200 newtons. So you have to ask yourself, what is the application? Are you really doing light-touch haptic exploration deep undersea, or are you grabbing objects, pulling on a rope, and retrieving things? There are going to have to be sacrifices; you're not going to be able to check all of the boxes. But if you can check the boxes needed to complete the task, that's what you should shoot for, with my practical engineer hat on. Of course, there's also the scientific hat that I wear, where I would love to better understand how we can create something artificial that is as beautiful, multimodal, and intricate as what's in biology. But I think those have to be parallel things that our community explores at the same time.

Any other questions? Hi, my question was regarding the granular media. I was wondering if there could be any way to characterize granular media based on tactile sensory inputs, and whether that could be extended to dual-phase media as well. Could there be a general model that does that in the future?

Let me show you this. This 2021 paper looked at a variety of different granular media types, with different particle diameters simulating coarse sand all the way up to coarse gravel, so that paper is out. I have another student working on a paper looking at the jamming properties within them. So I think, again going back to your application, what are the types of media you expect? If you're building a search-and-rescue robot, you might want to do a bunch of experiments with snow packed to different densities, for example.

Would it be possible, just as a follow-up question, to use synthetic data generation techniques?

In this case, we did go down that path, and then we got lost. There are ways to model granular media, and we actually looked at what was in the literature for modeling in a manufacturing context, where granular media are being processed and, as they come off the conveyor belt, they start to pile up and so on. And then we realized, no, we don't want to go that way. Just modeling rope, for example: a couple of years ago I think we were limited to ball-and-stick models, and now my collaborator and colleague has discrete elastic rod models, and discrete thin shell and plate models.
So that's some of the work that students are beginning on now. I used to do modeling when I was a PhD student; my whole PhD was modeling the human thumb from a robotics perspective. So it's not that I don't like models. I did them for five and a half years and then came to the conclusion that all models are wrong. When I tried to transfer my skills from the biomechanics area that I was trained in into robotics, the robotics reviewers said, where are your experiments? That's when I realized, okay, got it: stick to doing these actual experiments. Model what you can when you can, but a lot of the time you need tire-meets-the-road data to really convince the community. Thank you.

Following up a little on that line of thought: a lot of the materials in robots are really simpler than the human finger, and you have solid mechanics that can relate the pressures inside that material to the deformation on the outside. Is there any interest in, or work on, using that as a way to recognize what you're grasping?

You mean using that to... can you describe that again?

That would be the pressures inside the solid materials.

Yes. I don't know that people have measured internal strain like that, but I do know that with the newfound interest in soft robotics, a lot of soft robots are pneumatically driven, and researchers are beginning to use that pneumatic line not only for actuation but also for sensing. Again, it's a little bit crude: you can tell you're in contact with something, but the change in pressure in the line could have happened anywhere along the line. So it's similar to your idea, but not measuring it through a bulk material from the inside.

Thank you. Thank you.