So let's get started. Thank you for attending the IRIM seminar today. Today we have Dr. Allison Okamura from Stanford as our guest speaker. Dr. Okamura received her bachelor's degree from UC Berkeley in 1994 and her master's and PhD degrees from Stanford. She is the Richard W. Weiland Professor of Engineering at Stanford University in the Mechanical Engineering Department, with a courtesy appointment in Computer Science. Previously she was Professor and Vice Chair of Mechanical Engineering at Johns Hopkins. Dr. Okamura has made significant contributions to the general robotics community, including serving as co-chair of IROS and editor-in-chief of RA-L, and many more than I can remember. She has received many prestigious awards, including the IEEE Engineering in Medicine and Biology Society Technical Achievement Award and the IEEE Robotics and Automation Society Distinguished Service Award. Again, many, many awards. And she is one of the pioneers in many research fields, including medical robotics, soft robots and haptics, rehabilitation and prosthetics, and also engineering education. Today she will give a talk on soft robots for humanity. So please join me in welcoming Dr. Okamura. Awesome. Thanks so much for hosting me, Yue, and thanks everyone for coming. Usually when I come and give a talk, I only spend one day, but here I planned a day and a half, a little over that, because that's how much you need. Probably I need five days to have a reasonable visit to Georgia Tech, because there's so much amazing stuff going on here. So I really appreciate all the labs, students, and faculty that gave me tours over the last day and a half. I asked what I should talk about today, because there are these different topics that my lab works on, and it's heart-wrenching, because there are amazing biomechanics and rehab folks here and I could talk about that stuff. I could talk about soft robots or haptics.
But I decided to focus on soft robotics, because this is an area that I'm especially passionate about right now, in terms of making soft robots that are not only proofs of concept for new techniques and new material methods, but also things that I think will get out into the world and help people. I'm not making a big claim that I've done that yet in terms of helping people, but that's really where we'd like to go. And so hopefully here you'll see a bit of a roadmap in a series of projects that we've worked on that are taking us in that direction. Since I know this is a pretty broad robotics audience, I thought I'd give you my definition of what a soft robot is, because there is quite a spectrum. This is a relatively old figure from a Science Robotics paper by Laschi and collaborators, where they characterized soft robotics as ranging from things that might have some flexible elements but are mostly stiff, all the way to robots where even the computation is soft and maybe uses fluidic logic and chemical reactions in order to make the robot smart and do things. So soft robots can be anywhere along this spectrum. And one of the things to keep in mind when you see the examples of the types of soft robots we work on is that they can be soft by material, but that doesn't necessarily mean you have to be using materials with great elasticity; it doesn't all have to be elastomer-based robots. You can also have things that are soft by structure because they have locally compliant elements. And then even when you have materials that are soft, you can think of them as soft because they're flexible or because they're stretchable, or both. Most of the things that we work on are actually not that stretchy. We see that there are some real challenges with working with elastomer-based materials. If you want to have impact on the real world, you have to be able to apply forces; that's the point of it being a physical robot.
And so you'll see a lot of robots that are based on materials that are flexible, but purposefully not stretchable. And while people often say that these robots are safe, that's not inherently true; we need to work to make that happen. They're also not inherently adaptable; we need to work to make that happen too. And the same thing with cost. So these benefits can include these things, but I feel like we should never assume, as some people in the soft robotics community do claim, that all of these things are true. They have to be made true by design. So I thought I'd review three different types of projects today. And I have to say this is a very eye-candy talk; I don't think I have a single equation in here. But hopefully it'll give everyone a good intuition for the work that we do, ranging from a little bit of an intro on patient-specific medical robotics, which is where we started when working with flexible devices, toward soft robots that grow by tip eversion, and then toward soft wearable haptic devices. I just want to kick this off by saying that there is a long history of continuum robots. I don't think I've done a great job in this talk of having references before every topic, so let me just say overall that there's been an amazing amount of work in this field using different types of materials, really amazing controllers and planners, all sorts of contributions. And yet all of these challenges remain, so for anybody thinking about getting into this field, there is plenty left to do. But we're also standing on the shoulders of giants here. Another thing I want to note is that a lot of these soft robots are based on biological inspiration. In particular, the soft growing robots that I'll talk about for a good portion today are based on biological inspiration.
And folks here at Georgia Tech, ranging from Dan Goldman's lab to David Hu's lab, are really inspiring mechanical engineers and roboticists to build on these concepts. In particular, one of our robots is based on the idea of vines, something that grows at the tip. We don't use the exact same mechanism, so this is not biomimetic, but rather bio-inspired. To start with the medical robotics application, I would say one of the first soft, flexible robots that we worked on was this concentric tube robot. And this is something that Yue here has made great contributions to, especially in terms of control and planning. Seeing Yue as my academic grandchild makes me feel awfully old. But this is the work of Bob Webster when he was my PhD student. Let me get that replaying. The idea here is that you'd like to have a robot that has a really thin profile. If you're going to do this for surgery, you'd like to have the thinnest, least invasive possible structure for your robot. And so in order to do this, we use the inherent material properties of nitinol tubes, which are superelastic. A lot of people, when they think nitinol, think shape memory alloy, which it is. But we're using it for its superelastic properties, which means it can bend a lot without undergoing plastic deformation. With these types of robots, what you can do is take concentric tubes that are pre-bent, and as you insert and spin these tubes with respect to each other, you can get this kind of tentacle-like behavior. Or it depends on who you are: some people see an elephant's trunk, some people see a snake; this one's a little more tentacle-like. But the idea is, how do we cleverly use materials and the inherent mechanical properties of the components in order to get these interesting multi-degree-of-freedom behaviors? So this is something we worked on for quite a while. And of course, the goal is medical robotics.
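As an aside, the concentric tube behavior described above can be sketched with the classic torsionally-rigid model from the continuum robot literature: each pre-curved tube contributes its curvature weighted by its bending stiffness, and the stack takes on the stiffness-weighted average. This is a minimal illustration, not the talk's actual model; the function name and numbers are mine.

```python
import math

def resultant_curvature(tubes):
    """Stiffness-weighted average of pre-curvatures for concentric tubes.

    tubes: list of (EI, kappa, theta), where EI is bending stiffness,
    kappa is the tube's pre-curvature (1/m), and theta is its axial
    rotation (rad). Assumes the simple torsionally-rigid model: all
    tubes share one resultant curvature.
    """
    num_x = sum(EI * k * math.cos(th) for EI, k, th in tubes)
    num_y = sum(EI * k * math.sin(th) for EI, k, th in tubes)
    denom = sum(EI for EI, _, _ in tubes)
    return math.hypot(num_x / denom, num_y / denom)

# Two identical tubes with curvatures aligned: curvature is unchanged.
aligned = resultant_curvature([(1.0, 10.0, 0.0), (1.0, 10.0, 0.0)])
# Rotated 180 degrees apart: the curvatures cancel and the pair goes straight.
opposed = resultant_curvature([(1.0, 10.0, 0.0), (1.0, 10.0, math.pi)])
```

Spinning the tubes relative to each other sweeps the pair continuously between those two extremes, which is where the tentacle-like motion comes from.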
But in this particular case, let me describe how we're making it more human-centered, and I'll start with an example. I'm a mechanical engineer, but as cars become more complex, more computer-controlled, and less accessible to me, I don't know how to fix my car anymore. Maybe it depends on what kind of car you have, but it's not the kind of cars we have. Similarly, in the medical field, there's this history of clinicians, ranging from surgeons to nurses, who mechanically went in and designed their own medical devices to meet the needs at hand. But as we abstract our technology away from this low-level, hands-on making capability, we lose the potential for these experts to come in and help in the design process, at least in a tight loop with the actual procedures that are done. So here's an example of how we're trying to make this particular soft robot be for humanity. We start out with 3D models of a particular patient and then have a virtual environment. This can be a patient-specific planning process where the clinician or other medical professional can interact with that virtual environment and say, this is the path that I want to follow in order to hit a particular target, say, for example, to get to a kidney stone in a pediatric patient, where the organs are really small and tight and close together. Then in real time we can fabricate these concentric tubes. Before, we used nitinol, but lately we've been 3D printing them out of soft or semi-solid materials. And then we can add them onto conventional base robot structures and drive them. Usually we have a human in the loop teleoperating these, as in most medical procedures, although like many labs we are also interested in making autonomous surgical robots. This is an interesting example where, of course, you have the medical application, which is inherently human-centered.
But how do we think about humans being involved in the design process? I won't really touch on this later, but even with haptic devices as well as soft growing robots, we've been looking at these human-in-the-loop design processes, and how we rapidly fabricate devices that someone can design for whatever their own purposes are. And I should note that I'm going to highlight some of the work of former students who have gone on to faculty positions. This was the PhD work of Tania Morimoto, who is now on the faculty at UC San Diego. So I mentioned that these materials are 3D printed. Thinking about new fabrication techniques is paramount to making progress in soft robotics, and 3D printing, of course, has revolutionized what we can do in robotics in many, many different ways. I wanted to point out that it's not just the tubes themselves that are 3D printed; we can even make novel drive mechanisms, for example using a waffle-gear-type pattern to make really compact, lightweight drive mechanisms. In this case, you have a six-degree-of-freedom robot run by fairly small motors, and they all nest into a really compact package that you can either hold in your hand or attach to another base robot if you're using a large surgical robot. As with many pneumatic devices, we're often going to gloss over where the power comes from, but this is an example where we really put the drive design up front, in terms of how we drive such a robot. That was just a brief bit of intro on medical robotics, where we started with these tentacle-like devices. But I'm going to spend most of my talk on these soft growing vine robots. This is a project that was really spearheaded by Elliot Hawkes when he was a postdoc in my lab; he's now on the faculty at UC Santa Barbara, doing all kinds of amazing things like jumping robots that jump five stories high.
But I'm not going to talk about that kind of thing today; I'm going to talk about these soft growing robots. Some of this is very historical, like five years old or so, but then some of the latest things haven't even been published yet. In this project, what we do is take a soft, thin-walled material. In this case, it's plastic with the consistency of a Ziploc bag, usually LDPE, low-density polyethylene. If we put it in a tube shape inside of itself, and then we apply air pressure or some other fluid pressure, it extends out of the tip. This gives you this vine-like behavior, kind of like the trailing growth habit these plants have. We've done a lot of fun things with these. In order to get them outside the lab, you do need to package them and have a way of actually delivering power to this thing that can grow, in this case, about the length of a football field. So we take this thin-walled tube, and when it doesn't have air in it, it compresses very small, so you can spool it up and put it into a very compact base. This is the base right here; let me see if I can get this thing running. So this is the entire base that has grown this robot. I'm not sure if you'd call it a robot yet; we'll get back to that later. It grows the length of a football field. In this case, it's using its embodied intelligence, as we in soft robotics like to say: it's just bouncing off the walls and growing wherever it needs to grow in order to get somewhere. So it's not yet a very smart robot, but nonetheless, it can grow a very long distance and deliver something somewhere. Then it has all of these other capabilities, which I'm going to get to in future slides. So what would make it a robot? It's probably not just growing; that's just a plastic bag filled with air. If we want to make it actually into a robot, we need a way of steering it, and we need to use sensor feedback in order to control its position.
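The eversion mechanism described above has some simple, well-known kinematics worth making concrete: the everted body is an air-filled tube, the tail material feeds through the core at twice the tip speed, and the supply pressure acting on the tip cross-section bounds the growth force. A back-of-envelope sketch, with illustrative numbers that are mine rather than the talk's:

```python
import math

def growth_kinematics(radius_m, flow_m3_s, pressure_pa):
    """Back-of-envelope eversion kinematics for a vine robot.

    Advancing the tip by dx adds enclosed volume pi*r^2*dx, so a
    volumetric supply Q gives tip speed v = Q / (pi r^2). The
    un-everted tail material must feed through the core at 2v, and
    the pressure acting on the tip cross-section supplies at most
    F = p * pi * r^2 of growth force.
    """
    area = math.pi * radius_m ** 2
    tip_speed = flow_m3_s / area          # m/s
    tail_feed_speed = 2.0 * tip_speed     # m/s
    tip_force = pressure_pa * area        # N
    return tip_speed, tail_feed_speed, tip_force

# A 2 cm radius vine at 10 kPa, fed 0.5 liters of air per second:
v, v_tail, f = growth_kinematics(0.02, 0.5e-3, 10e3)
```

The factor of two on the tail feed is why the base spool has to pay out material faster than the tip appears to move, which matters when you size the spool motor.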
And my lab has looked at a lot of different steering techniques. Many of these were driven by my former PhD student, Laura Blumenschein, who's now at Purdue. What she and other students in the lab have done is look at all the different ways you can steer. Sometimes, to make the robots as compact as possible, it's useful to have a set direction change, like up at the top, where we had these sort of hooks that, when they become unhooked, give a permanent direction change. You can't go back, but it's a very compact way of steering. And then there are the ones that we're showing here in videos, which are not looping the way I was hoping they would. This is one where we have a tendon, and we've figured out the geometry and the kinematics for how you route these tendons in order to get it to go into particular shapes and configurations. Those are set in advance, but then we have other actuators that have to be flexible enough to survive that eversion process, while still able to create enough bending force to create movement. In these cases, the actuators allow you a reversible direction change. So this one is actually autonomously steering toward this light, with a very simple control algorithm that just tries to keep it centered on the light. You have four of these pneumatic tendons, and when you fill them with air, they decrease their length, and then it bends in that direction. So we can get 2D steering plus the growth degree of freedom using that type of actuator. More recently, we have been trying to really understand what the optimal actuator is for these types of robots. It turns out that with this whole class of robots, which I would call inflated beam robots, you have this battle between applying air in the main body, which gives it structure and keeps it stiff, and the bending actuator, which is trying to bend against that stiff center.
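The two ideas in that passage, contraction-based bending and the simple keep-the-light-centered controller, can both be sketched in a few lines. This is a toy illustration under a constant-curvature assumption; the function names, gain, and four-actuator layout are mine, not the actual implementation.

```python
def bend_angle(delta_len_m, offset_m):
    """Constant-curvature approximation: when a tendon or pouch sitting
    a distance `offset` from the beam centerline shortens by delta_len,
    the segment bends through roughly theta = delta_len / offset rad."""
    return delta_len_m / offset_m

def light_centering_command(light_x, light_y, gain=0.5):
    """Toy proportional controller for four antagonistic contracting
    actuators (+x, -x, +y, -y). If the light has drifted to one side of
    the tip camera image, inflate the actuator on that side to bend the
    tip back toward it. Returns activation levels clipped to [0, 1]."""
    clip = lambda u: min(max(u, 0.0), 1.0)
    return {
        "+x": clip(gain * light_x), "-x": clip(-gain * light_x),
        "+y": clip(gain * light_y), "-y": clip(-gain * light_y),
    }

# Light appears up and to the right in the image: pull right and down... 
# i.e. activate the +x and -y actuators only.
cmd = light_centering_command(0.8, -0.4)
theta = bend_angle(0.01, 0.02)  # 1 cm contraction at 2 cm offset
```

Because each actuator only pulls, steering in a plane needs an antagonistic pair, which is why four actuators give 2D steering plus the growth degree of freedom.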
So figuring out how to balance those pressures, and how to structure various pneumatic actuators to make this work right, has been, I think, a big challenge for all robotics researchers trying to do pneumatic control. We've recently moved beyond my inherent skill set in robotics toward working closely with folks who can model complex interactions between different materials, figuring out, for example, that in this case a particular structure of these pouch motors might work better than a different type of structure, and also which materials we can optimize. So I think we're at the starting point in this field where we can begin to optimize the materials, actuator choice, and pressure selections for a particular application and for what we want that robot to do. Another way in which we've recently been thinking about making these robots a little more agile is to actually exploit dynamics. This is work that's under review, in collaboration with former Georgia Tech person Karen Liu, where we're trying to take into consideration that sometimes we don't have enough strength from these actuators to do the kinds of movements that we want. If we want to get around a curve, or hit a target where the robot has to swing up and touch something way up here, we may just never be able to get there with the types of pneumatic actuators and pneumatic power that we'll have on board. So how do we integrate the dynamics of movement of the base of the robot, for example if it's hanging from a drone, along with those soft robot actuators, in order to get it to hit a particular target? Along with Karen Liu's group, my graduate student has done some really nice work in trying to figure out, first of all, how you model these systems in a dynamic way. Most soft robots have static models, not dynamic ones. But also, how do you actually learn a control policy?
And in this case, they used reinforcement learning, with lots of iterations done in simulation based on the learned model. And then you do learn a policy that can achieve various tasks. Our goal, although we haven't done it yet, is eventually to be able to put this on a drone. The idea is that there are places where even drones can't go. Can you get close enough, swing your vine robot in there, and then grow and drive down some desired path to do inspection, or deliver something, maybe in a search-and-rescue scenario or other applications? One of the big challenges that we have with these robots, like I said, is that in order to make them stiffer, you need to apply more pressure. So with this inflated beam concept, you apply more pressure, you get more stiffness, and if you make the robot bigger, that will also make it stiffer. All the mechanical engineers in here know: you take a cylinder, you make it bigger, it gets stiffer. And then, this is not our work, this is others' work just for reference, even things that came out before we did our soft growing robot, showing some really big inflated beam robots. This one is probably about this big around, or maybe even bigger. This is a really cool robot; I think it has helium inside of it to help it go up, for applications like putting out fires, for example. And this is one type of application for these inflated beam robots. They're often driven, in these cases, by tendons in order to create bends. We're really interested in how you actually make these types of robots stiffer at the smaller scale without having to scale up in size. One way to get increased resistance to cantilevered loads is to use a technique we call layer jamming. And I apologize, I thought I had a slide in here about it, but since I don't, I'll describe it. You might also be familiar with particle jamming, and I'll have an example of that later.
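The "more pressure, bigger beam" scaling mentioned above follows from classic inflated-beam theory: a pressurized thin-walled tube starts to wrinkle when the compressive bending stress at the wall cancels the pressure-induced axial tension, which gives a load moment linear in pressure but cubic in radius. A minimal sketch with illustrative numbers of my own choosing:

```python
import math

def wrinkle_moment(pressure_pa, radius_m):
    """Bending moment at which an inflated thin-walled beam begins to
    wrinkle. Wall bending stress M/(pi r^2 t) equals the axial tension
    from pressure, p*r/(2t), at M = (pi/2) * p * r^3 (full collapse
    comes at roughly twice this). Linear in pressure, cubic in radius:
    that is why small vine robots are floppy unless you raise pressure,
    scale up, or add a stiffening mechanism like jamming."""
    return math.pi / 2.0 * pressure_pa * radius_m ** 3

# Doubling the radius buys 8x the load moment; doubling pressure only 2x.
m_small = wrinkle_moment(10e3, 0.02)  # 2 cm radius at 10 kPa
m_big = wrinkle_moment(10e3, 0.04)    # 4 cm radius at 10 kPa
```

This is exactly the battle described earlier: the body pressure that resists wrinkling is the same pressure the bending actuators have to fight against.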
But the idea with layer jamming is that if you have materials that are close together and they have a lot of friction in between them, they resist bending, right? So if you effectively vacuum out the air in between them, then they don't want to bend and they're stiff, but if you inject air, then they're very loose and they can bend easily. We're using this layer jamming approach to increase the stiffness of the soft growing robots. When they're spooled up, the positive or negative pressure is taken away, such that the robot can easily curve and have this very small stowed configuration in the base. And this is a nice stiffness-modulating property: when you haven't made it stiff, it can still be flexible enough to undergo the eversion process and grow, as shown here. And once again, we're using FEM models and are able to show experimentally how to optimize the shapes of these stiffened regions in order to get certain desired stiffness properties at certain locations on the robot. And it's important for us not to stiffen the whole robot, because by stiffening only selectively, you can enable an entirely different way of steering. So let's imagine that you have these different sections of the robot, and you start... well, that's animating; I didn't expect that to happen. Okay. So let's say you start by growing from the tip, and then you want to aim it in a particular direction. You could rotate the base, or what you could do is make a particular region less stiff. And if you use one of these tendon pull cables, or really any of the steering methods, it is going to bend, theoretically, at a joint. So you can think of these as virtual joints: by stiffening at particular locations along the robot, you get a place where it's more likely to bend, because the minimum energy state is to bend at that location. So this is a kind of low-actuation-count method of steering.
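The layer jamming effect described above has a clean idealized scaling that's worth writing down: n loose layers slide freely and each bends on its own, while vacuum-jammed layers are friction-locked into a single beam of n times the thickness, so the bending stiffness ratio goes as n squared. A minimal sketch (the idealization ignores friction limits and partial slip):

```python
def jamming_stiffness_ratio(n_layers):
    """Idealized layer jamming. Unjammed, n layers of thickness t bend
    independently, so total stiffness scales as n * t^3. Jammed, they
    act as one beam of thickness n*t, scaling as (n*t)^3. The
    jammed-to-unjammed bending-stiffness ratio is therefore n^2."""
    unjammed = n_layers       # n * t^3, with t^3 normalized out
    jammed = n_layers ** 3    # (n*t)^3 / t^3
    return jammed / unjammed

# Ten layers: vacuum jamming buys roughly 100x bending stiffness.
ratio = jamming_stiffness_ratio(10)
```

That quadratic payoff is why a thin stack of film layers can toggle between spoolable and load-bearing with nothing but a vacuum line.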
And in fact, my grad student Brian, whose slide I think it skipped over... let's see if I can get this shown here. It really didn't want to show that. We'll try one more time. There we go. There's Brian. He calls these activators rather than actuators, and other people in mechanical engineering might be familiar with this concept: you have a component like an actuator that doesn't actually do any work, so it becomes an activator. So this stiffening and unstiffening behavior is really an activator, which then enables you to really lower the number of powered actuators that are required, and therefore the overall power required by the system. So these virtual joint techniques, scooting back forward, let you create these different robot shapes with a real minimum of actuators. And with Karen Liu's group, they do the planning; we're not really planning people. They figure out, if you want to hit a certain target... what? I hope this doesn't keep doing this the whole presentation; I might have to step out and see. But when we, go back one more time, when we have these different targets, we figure out not only the optimal series of actuations and activations, but also what range of designs, like what lengths of the stiffening elements, will actually make that happen. Sorry, my slides are doing this. Alright, it wants me to move on, so I'll move on from stiffening robots. This video will play for a while, so let me talk. As I mentioned at the beginning, one of our goals is to really get these robots out of the lab and into the field. One of the first things we did was really spearheaded by my former student Margaret Coad, who's now at Notre Dame. She was very adventurous, and she took her vine robot on an archaeological expedition in Chavín de Huántar, Peru, and actually put it into these underground tunnels, which have been buried for a long time and have actually never been seen by human eyes.
They've tried to send in other wheeled robots, and their best technique so far is really a GoPro on the end of a long stick, which can't turn a 90-degree corner and has all kinds of limitations. So we were able to send in her vine robot, which can go up to about 10 m deep into these environments. And this is a system where, as you could see a minute ago, she's carrying it in a wheelbarrow; the whole robot has to go down there. It's really hard to imagine making a portable system with that kind of reach using any other method. You do need power, and they do have power at the archaeological site, but all of the equipment has to fit into a wheelbarrow in order for you to get it to the site and be able to deploy it. So that was super exciting, to be able to go in with a tip mount on the robot, I don't know if you saw it in any of the pictures, with a camera on it that gets pushed forward. In this case, our tip mount was not very sophisticated; it just sits at the tip of the robot and gets pushed forward as the robot grows, and then you just make sure to reel it back in when you're done. Alright, so that was one real-world application. The one that we're working on now is actually really fun. Near the main campus of Stanford, really within the main campus of Stanford, there's an area which is home to the endangered California tiger salamander, which is the cutest little salamander. There it is, up there. This is also one of the dirtiest projects my lab has ever gotten into; especially this year, it's very wet and muddy out in California. What we've been doing is taking our vine robots out to this area of campus and putting them underground. I don't know if you can see it, but right on the upper right is a hole in the ground, and we're growing our vine robot into there. And once again, this whole system is now actually much smaller; it fits in a large backpack rather than a wheelbarrow.
And it's something that we can take around. The reason this is important is that the California tiger salamander, as I mentioned, is endangered, and there isn't a good way to characterize what their environment is like: what the temperature and humidity in their burrows are, and how changing weather and climate are affecting their ability to reproduce. One of the things we're really excited about is that this year we had a ton of rain in California, and we think we're going to have a boom in the population of California tiger salamanders. That means we're going to increase the likelihood of actually observing them in their habitat when we go into these burrows, and that means we're going to be able to start correlating their preferences in terms of temperature and humidity with their occupancy. We are not the biologists, but we have amazing ecology and conservation biology collaborators who are just so hungry for this data so they can understand what's happening to this and other subterranean species. So how are we accomplishing this? You might think, on one hand, that it's really easy to grow vine robots through pipes. You don't really need to steer them overall, because if you're going through a pipe or a tunnel, the tunnel walls kind of force it to go one direction or another. But you do have to make decisions at some branches. At least that means you don't need a steering technique that runs the whole body of the vine robot, like I showed in some previous examples. We really just need steering at the tip, and we are using some tendon steering within a flexible joint at the tip. Part of the reason this tip mount device has multiple sections is that we need a camera, a temperature sensor, a humidity sensor, and an IMU sensor; there's no GPS underground.
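Since there's no GPS underground, position has to come from what the robot itself knows: how much it has grown (from the base) and which way the tip is pointing (from the IMU). One simple way to combine them is dead reckoning over piecewise-straight grown segments. This is a sketch of that idea only; the piecewise-straight assumption and all names are mine, not the actual field pipeline.

```python
import math

def dead_reckon(segments):
    """Dead-reckon a tip path underground. Each entry is
    (grown_length_m, heading_rad, pitch_rad): length paid out from the
    base spool plus heading/pitch from the tip IMU for that stretch.
    Integrates each segment as a straight line to give (x, y, z)
    estimates along the path."""
    x = y = z = 0.0
    path = []
    for length, heading, pitch in segments:
        x += length * math.cos(pitch) * math.cos(heading)
        y += length * math.cos(pitch) * math.sin(heading)
        z += length * math.sin(pitch)
        path.append((x, y, z))
    return path

# Grow 1 m level into the burrow, then 1 m pitched 30 degrees downward:
path = dead_reckon([(1.0, 0.0, 0.0), (1.0, 0.0, -math.pi / 6)])
```

Logging temperature and humidity against each estimated position is then enough to produce the depth-versus-temperature profiles mentioned below.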
So we need to fit all of these different sensors into it, and we couldn't fit them all in one segment without it being so large that it would prevent the robot from following some of the turns in the burrows. So we have a version in the lab where we can actually see what it's doing, but we've also done this underground, and we're able to track its position and record data, which is very exciting. And just as an example, since we can't really see it underground, here is a trace of the temperature as you go into the burrow. You can see that the outside temperature versus the temperature deep inside the burrow is quite different. Yeah, that's exciting. I told you that pipes and burrows are relatively easy, because you have this constraint of the environment basically helping your robot go where it needs to go. But there are vine robot applications of other types. One example is a contract we have with the FBI, who is interested in using it for inspection in places where there might be, say, a weapon of mass destruction. You might want to cut a hole into a mystery van and then put the robot in and have it go wherever. In that case, we need it to lift up against gravity, and we need it to have truly 3D motion. And we don't necessarily want to rely a whole lot on pressing on the environment to get where we want to go; we might press on the wrong thing. If we want to control this thing in free space, we need to go beyond tip steering, and we need to go beyond having one actuator that runs along the whole length of the body, as I showed before. Instead we need multiple segments. So what used to be a very simple and elegant idea is quickly getting more and more complicated. In order to do multiple turns, we want to do this without touching the environment, we want to have many degrees of freedom, and I'm trying to avoid having many different parts in the vine that would make it wider.
And we'd like to have stability without an active air supply. This is what we're going to present at RoboSoft next week, or the week after, very soon. I'm going to play this video and kind of talk you through it. Here, what we did is we invoked a tip steering method, but what we do is locally turn on the pouches that create bends in the robot, and that now happens just as it grows. There's a mechanism in the tip, and there's one pressure supply line for the whole body. At these different pouches we have these special magnetic valves: there's a magnet in the tip that can open a valve if we want to turn it on. And when this valve turns on, you can selectively turn on little pouches that will then create a bend. I had a fun time talking with some of Dan Goldman's students about how this might mimic some types of plant growth. I told you before that we had very simple tip mounts; now we have very complicated tip mounts. And it's complicated because the vine robot doesn't have a constant tip; the material keeps changing, so the material has to be able to slide through the tip mount. This is showing a magnet opening some valves. And here you can see the tip mount going along and then selectively turning on pouches in order to do a steering move. And when you retract, which I think the video will eventually show, maybe I'll skip forward to that. Yeah. When you retract, you can also turn off and release the air from those pouches as well. So this thing has motors in it that basically steer it backwards so that you can retract the vine robot as well. So you can now get lots of more complicated shapes that actuate in free space. We have not yet achieved 3D; we can do this on the ground, but we're going to need more pressure and more strength in order to be able to lift it up against gravity.
And we've tried; it turns out that there's not enough volume of helium that you could put in this thing to counterbalance its own weight. I should say that this work was done in collaboration with Roland Siegwart's group at ETH Zurich. The next thing to do, instead of just steering at the tip, is to think about what if we want to put segments of steering along the robot. I don't think this is hugely groundbreaking, but it has really made us think about the manufacturing techniques for these. Over the last year or so, we've gotten this really cool tool called an ultrasonic welder. It kind of looks like a sewing machine, and you can basically weld materials together. People in the textile industry, it turns out, use this all the time. One of our goals now is to do these kinds of unibody fabrications. But instead of my postdoc doing it by hand, as here, this should totally be automated. And so we'd like to have an ultrasonic welding technique where we automate the fabrication. And although vine robots are cheap, the materials are cheap, the human labor costs, even for graduate students and postdocs, are enormous. And so we really need to find ways, as a whole in the soft robotics community, but in particular there's a great opportunity for these inflated beam vine robots, to come up with better, more efficient manufacturing techniques. That's something we're aiming to work on next. Great, I'm making good time. So I'm going to conclude with the last section of the talk, speaking about haptics, and in particular soft wearable haptic devices. The motivation for this is that haptic interfaces, interfaces that provide stimulation to the human using their sense of touch, can enable all sorts of useful communication. I'll touch at the end on social applications; human-human communication is really important.
Many people realized this even more during the pandemic: we were missing that aspect of human-human communication. But even now, it would be great for people to be able to communicate more through the sense of touch when they're remote. Then there's human-agent communication. I like to run, and sometimes I have my watch giving me directions, but it usually just vibrates to tell me to turn left or right, and I have to look at the watch to see which one it's telling me. So can we increase the information throughput of these haptic devices to get useful information from intelligent agents in our world, as well as from autonomous robots? If we're going to be interacting and sharing physical spaces with robots, it may be useful to have this more natural form of communication that can tell you where things are located in your space. So when we first started designing soft haptic devices, this is the first thing we thought of. This is really fun, but it is not a good idea. The goal here was to create an active surface, literally a kind of digital clay, with kudos to Wayne for pushing this digital clay idea quite a while ago. In this case, we said, well, maybe these soft surfaces can be used for medical simulation: we wanted to do palpation, to find hard lumps in soft tissue. And it maybe works okay for that. But the model here is that in order to provide haptic feedback, you are literally trying to recreate the physical world, and that is just a really hard road to travel. Instead, in haptics, what we prefer to do is use haptic illusions: try to figure out how to trick the human brain into sensing something from the environment that's not actually there, providing just enough stimulation at the right time and in the right place to do so. Just in case anyone's curious, before I switch to how I think we should do it, let me say how this works. This is a silicone elastomer material that's divided into four cells.
Each cell is filled with coffee grounds, and just like the layer jamming technique I talked about before, here we use particle jamming. Coffee grounds are great for this: when you vacuum the air out of a sealed chamber filled with coffee grounds, the grounds lock together and get really stiff. People have used this for other types of cool surgical robots and other applications as well. So that's why it got hard: it was vacuumed, and then there was also air pressure from below. That gives you a lot of degrees of freedom to try to do some crazy mimicking of soft and hard tissues in the human body. But nonetheless, it's not the approach we typically want to take with haptics, because what we'd really like to do is utilize these haptic illusions. So one way to do that is to provide skin deformation, and a way to do this that is cheaper and more mobile than classical desktop rigid kinesthetic haptic devices, which are more like robots. In this case it's still like a robot, but you wear it on your fingertip. As my student Sam touches these virtual environments, he gets skin stimulation, not net kinesthetic feedback, but local skin stimulation that mimics the contact he would have with the environment if he were manipulating those objects directly. And it turns out that if you're lifting fairly light objects, maybe a little lighter than this bottle of water, you would mostly be relying on your cutaneous, or skin-based, mechanoreceptors to sense the mechanical properties of that object, and not muscle spindles and Golgi tendon organs or joint movements. So as a haptic illusion, this skin deformation is a great idea. But those devices are really, really hard to make. They're tiny; they involve putting little tiny pins in little tiny holes. And so our next step was, well, maybe we should try to make an origami haptic device, one that lies flat and then you fold it up.
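Backing up to the jamming surface for a moment, the stiffening behavior just described can be captured in a toy model: lowering a cell's internal pressure raises the confining pressure on the coffee grounds, which locks them together and raises the effective stiffness. This is purely illustrative; the numbers, the linear stiffness law, and the function names are assumptions, not the device's actual characterization.

```python
# Toy model (illustrative only) of one cell of the four-cell
# particle-jamming surface: vacuuming the cell increases the confining
# pressure on the coffee grounds, so the cell gets stiffer, while
# positive pressure from below shapes the surface.

ATMOSPHERIC_KPA = 101.3

def cell_stiffness(internal_kpa, k_soft=1.0, gain=0.5):
    """Effective stiffness (arbitrary units) of one jamming cell.

    Confining pressure = atmospheric minus internal pressure; more
    vacuum (lower internal pressure) means the grains lock together
    and the cell stiffens. Linear law and constants are made up.
    """
    confining = max(0.0, ATMOSPHERIC_KPA - internal_kpa)
    return k_soft + gain * confining

# An unjammed cell (vented to atmosphere) vs. a hard "lump" (strong vacuum):
soft = cell_stiffness(internal_kpa=101.3)
hard = cell_stiffness(internal_kpa=20.0)
print(soft, hard)   # the vacuumed cell comes out far stiffer
```

Again, this recreate-the-physical-world approach is the one the talk argues against; it is shown only to make the jamming mechanism concrete.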
And it turns out the origami approach was a pretty good idea. It was easier to make. However, the whole thing was still bulkier overall than we would have liked, in order to truly explore the ability of someone to manipulate a virtual environment unencumbered, and especially to put devices on multiple fingers at the same time. The origami solution is getting there, and we're continuing to try to improve it. But what I want to talk about today is using fully 3D printed soft haptic devices that combine some of these origami approaches with pneumatic soft robotics approaches in order to get a multi-degree-of-freedom system. My postdoc did this work; he's interviewing for faculty jobs right now, so hopefully you'll see him soon in a faculty position. And this is an example of what we're calling Fingerprint. It's a monolithically 3D printed haptic device done on a Formlabs Form 3, for anybody that's a Formlabs 3D printer fan, with one of their standard resins, which is kind of semi-soft. That is, if you make a piece of that material very thick, it's basically rigid, but you can also print it very thin and the material will bend and flex, so it kind of becomes like the flexure elements in an origami device. Unlike the other devices I showed on the last slide, which were three degrees of freedom, this one has four degrees of freedom: it can provide some local skin stretch and pressure as well as vibration. It has this monolithic construction; you just print the whole thing. Saying you "just print" it might be a bit of an oversimplification, because you do have to clean it out. It's not that easy, but it is fully printed at one time out of one material and just requires some cleaning steps afterwards. It looks like, and is, a soft robot driven by pneumatics, and in fact the actuators here, which I'll show on the next slide, are driven by vacuum rather than positive pressure.
The underlying idea is that it's an origami mechanism with folding actuators rather than stretching actuators. It's really small and lightweight, if you don't count the giant building compressor that is driving it. So what does this look like in a little more detail? This is what the whole device looks like in a nice CAD drawing, and there are some things to point out. You have to have channels for the air, which get embedded in the thicker parts of the material because we're vacuuming air out. Vacuum is a little tougher than positive pressure, because if you vacuum, you can collapse the very channels that you're trying to use to move the air, so those all have to be in a very thick area. It can easily be worn on the fingertip, and because it's a little bit soft, it's actually very forgiving across a range of finger sizes. It was hard to see in the pictures on the previous slide, but there's the end-effector, which is the point that contacts the skin. Just like the other device I showed, which had the red dot in the middle, it contacts the finger pad right here. So it's as effective as those traditionally constructed robotic fingertip devices, but now it is based on this softer material. As for the origami, for anybody that's a fan of these kinds of devices and wonders what the fold pattern is: there's a famous mechanism known as the waterbomb. Basically, it's a fold pattern for making a little origami thing out of paper that you can fill with water. We use the same fold pattern to get those degrees of freedom, and you get these different roll, pitch, and yaw degrees of freedom; if you assemble them all correctly, you get the overall four degrees of freedom of motion that you want. In a close-up, each pneumatic actuator looks like this: you vacuum air out and essentially it folds closed.
And again, we're using a judicious selection of material thicknesses to get the behavior we want, so that we can make it out of a single type of material. The thing does pretty well compared to basically any wearable haptic device you can imagine. It can produce forces on the order of newtons in x and y, which is actually pretty good, since we're very sensitive in shear. We're actually a little less sensitive to normal force, the z direction, and that works out well for this device, because this is where it is really powerful: in the normal direction, all of the pneumatic actuators can work in parallel, and you can get tens of newtons of force. So this is really exciting, with the caveat that we are attached to a building compressor. We are connecting it now to portable pumps; it's still not great, because you have to carry them along with you. So you really have to be cognizant when folks say, "Oh, I made this tiny haptic device or soft robot": if it's required to be attached to air, that is really going to limit its applications. I think that's another thing in general that we're exploring: how do you use some clever techniques to really make these things portable? So far I've been talking about the fingertips, but there are a lot of reasons to move away from the fingertips and put haptic devices on other parts of the body. One is generally for wearability and portability. But another is that we're very interested in augmented reality. In augmented reality from a visual point of view, you see your visual world and you get visual overlays that kind of cover up the real world. The problem with doing this for haptics at the fingertips is that if I want to touch this desk, but then I want to somehow have augmented haptic feedback, I can't have that at the fingertips, because then I can no longer touch the desk.
So what we've been exploring in my lab is how to put haptic devices on other locations of the body, both in general for portability, but also for augmented reality, so that we touch things in the real world with our fingertips and we augment that with additional haptic feedback at these other body locations. This is difficult for a couple of reasons. One, and these purple dots are not meant to signify some kind of disease, they're supposed to show the distribution, the density, of mechanoreceptors, the touch sensing within our skin. We have very dense coverage on the palm and the fingers, which is known as the glabrous, or non-hairy, skin, and we have a much sparser distribution at other locations like the forearm. Then another challenge, which is maybe a little more subtle, is that we are very used to exploring the world around us by active exploration with our fingertips. I don't learn about the mechanical properties of this table by leaning over and rubbing my forearm against it. My body is not designed to do that kinematically, and neither are the mechanoreceptors. So one of the things that makes haptic devices in general so challenging is that you have to let a person explore, and then the haptic feedback has to come in a way that's appropriate for whatever exploration that human wants to do. You typically don't want to constrain them; "you can only touch it this way" becomes really unnatural. So this passive versus active touch distinction means that, at least for this augmented reality idea, we're doing active touch with our fingers, but the feedback is coming in elsewhere, and we're not really sure how well that is going to work. I would say that's still an open question, but we're trying to drive toward technology that will help us answer this active versus passive touch question. So, soft wearable haptic devices that now go off the fingertips: what might those look like?
These are both pretty old projects, from 2019 and 2018 respectively, where we tried to take other ideas in soft robotics and put them out there. This is one where we used the classic fiber-reinforced soft robotic actuator, and the idea is that you have a single point of contact and it pushes in multiple directions, like we did on the fingertip. And then we also had this nice work by Nathaniel to try to repurpose our vine robot to grow out of a bracelet onto the arm, and then have distributed pneumatic pouches that could squeeze and provide communication. In this case, it's a single direction: it squeezes, or it provides pouches that inflate, but each one of those is very simple, just a single direction. So these are a couple of approaches which were fun and kind of helped expand our ideas about this, but I don't really think they're practical. One of the big challenges I'll point out here in particular is that you need something to push against. So a beautiful soft robotic device now has this rigid frame around it, which gives us a grounding against which to push; it's not really a soft robot any longer. But with these newer ideas, like this Fingerprint I showed you here, we found that we can put these kinds of things all over the body, and they actually strap on quite easily. So we've moved them to the thumb and the index finger. The first time we showed this one, someone told us it looked like a diamond ring, and so now our students like to show them off around the lab on their fingertips. And then we have larger-force versions that use a different mechanism and go on other locations on the body. To put them here, we need even larger forces; these go up to 20 newtons. That might sound like a lot, but it actually doesn't hurt at all, and that's about what you need to be able to display a range of forces that conveys a lot of information on the forearm. So this is where we're going.
My postdoc calls them haptic voxels, or "haptixels." Not sure if that's going to catch on, but we're selling it. And the idea is, can we take these monolithically 3D printed devices, along with what we hope will be new techniques in portable pneumatic power, to start being able to mount multiple devices on the body in different locations? When we do that, and this is in a couple more slides, we're going to have to figure out a couple of things. These are really open problems that I'm going to throw out here. One is understanding how we map locations for this augmented reality problem that I talked about: if you touch something with these fingertips, where should the feedback come in? My student Jasmine just published a paper at ICRA where she demonstrated that, actually, it doesn't matter much where you put the feedback. It turned out people could interpret all of the mappings, but they did have a preference, and they preferred the mapping that was obvious to us from the beginning: you should make them geometrically aligned. So that's nice, but we would still like to understand a little better how to optimize the placement, so that's ongoing work. Then another aspect of this work is social touch. I kind of prefaced all of this talk about haptics with the need for social touch during the pandemic, and this is still true. So we've done some fun work where we've characterized social touches between people using a capacitive array sensor worn on the arm, and then we've mapped those social touches onto a wearable haptic device. This one's not really soft, other than the fact that the voice coil motors are mounted in a soft sleeve, but the idea is that it's wearable. And it turns out that even with a very sparse distribution of stimulation locations, you can communicate a number of social touches very well, ranging from sadness to love to attention. I'll give some examples: "pay attention to me" is attention; "I love you" might be a stroke like this, and it doesn't feel like individual touches, it will feel like a stroke if you get the timing right. Using our data-driven approach based on our sensing work here, we've been able to figure out what actually feels good, what feels continuous and pleasant and compelling to users. This is work that's going to be ongoing; we're interested in more real-time teleoperation of touch, whereas in this case we curated haptic emojis based on acquired data. So that concludes my story about soft robotics, going from medicine to these vine robots and now the soft wearable haptic devices. I really want to thank all my awesome students and postdocs who did all the work, and our sponsors, and all of you for listening, and I hope there's time for questions. Thanks.

Thank you. So I've been following your vine robots for a while; I was actually an intern with Margaret in 2020. Once you have these complicated cameras or magnetic valves that you stick on the front of your vine robot, how do you control them? How do you pass wires? Is it all wireless? And how do you keep that all connected? Wire routing is an issue that I've had for many of my soft robots, so I'm just curious.

Yeah, absolutely, great question. So in the particular one that has the valves, where we do this kind of addressable control of individual actuators, the valves are actually passive devices that are embedded in the vine robot itself. All that work is done up front: when you manufacture that robot, every pouch has its own valve, which made Alex's life very difficult putting it together. So that is really hard, but it all comes on the body of the vine robot itself. And then the cap that you saw on that particular robot has batteries in it that drive a motor to push it forward and backward, and that also change the orientation of the magnets that switch the passive valves, already embedded in the body, on and off.
So I hope I answered your question. But yeah, we always struggle with this: we have this great robot, it grows forever, and you'd think you could just put cables down the center, but you can't, because the center actually goes forward twice as fast as the tip. You can all think about that a little bit; the geometry is such that the material inside moves twice as fast as the growing tip. So it's still a challenge, and I'm sure there are lots more brilliant ideas out there to be discovered on how we handle power as we increase the complexity. Thanks for your question. Any other questions?

It would seem, from getting your axes of rotation aligned in a desirable direction, that you could rotate about the centerline of the structure. Like with my elbow: if I want to reach over there, I have to rotate it this way and then go that way. Is that incorporated in some way?

Do you mean for the vine robots? Well, most of our vine robots steer in 3D by having three actuators aligned along the sides, and any two of those actuating can get it to bend in a particular direction. I'm not sure if that's what you're asking.

Yeah, I guess I didn't see the ability to rotate about the centerline.

Yeah, I see what you mean. No, we haven't tried to spin it; we're usually trying to avoid having it spin. That said, if we have the distributed actuation, the hope is that you can point in any direction, and for the sensors at the tip, if it's vision data, you can just rotate the image. That is the hope. But there might be some utility. I mean, we have created caps, tip mounts, that have grippers on them, and they've had a little motor that just rotated the gripper at the very tip, like a little wrist, so that we can orient the gripper the right way. Any other questions?

I'm just curious, how do you avoid having the tip mount fall off as you're retracting?

Yeah.
So the first one did fall off: it was like a sock, and when you retracted, it would come off. In the newer ones, the material of the vine robot actually interdigitates with an inside piece and an outside piece, so the outside piece kind of locks in and the material of the vine robot snakes its way through it, making it physically impossible for it to come off. But that does increase friction, which means we need more pressure to grow with these sorts of interlocking pieces. It's also a challenge when you have various steering mechanisms: the thicker and more complex the steering mechanism, the more likely the material is to get gummed up in it. But that's the idea.

That's cool, thanks.

Thanks. So I'll ask the last question. You talked a lot about the mechanism, but I'm just curious: if you want to retract, what's the protocol? Do you just pull a vacuum in it so it gets sucked back?

Yeah, if you pull a vacuum, it just goes flat. So one thing you typically do to gracefully retract is that our tip mounts have motors in them that kind of steer backwards and shove the material back inside. But you can also do it by just pulling on the tail, which is inside. That doesn't work great when there's a tip mount, because usually there's too much friction. But other groups, like Mike Tolley's group at UC San Diego, have shown that if you just pull on the tail and there's no tip mount, it might first do a weird thing, but eventually it will go back in.

Okay, thank you.

Thank you.

Okay, so let's thank Dr. Okamura again. Thank you.
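On the growth-geometry point raised in the Q&A, that the material inside the vine robot moves twice as fast as the tip, the conservation argument can be checked with a quick sketch. The function name and example numbers here are illustrative, not from the talk.

```python
# Illustrative check of the vine robot growth geometry discussed in the
# Q&A. The everted outer wall stays fixed to the ground while the tip
# advances at speed v. Material continuity through the everting fold
# means the inner, not-yet-everted material must move at v relative to
# the tip (to supply new wall), and the tip itself moves at v, so in
# the world frame the inner material travels at 2*v.

def inner_material_speed(tip_speed):
    """World-frame speed of the inner tail material for a given tip speed."""
    return 2.0 * tip_speed

# A tip growing at 0.1 m/s pulls its inner material along at 0.2 m/s,
# which is why a cable anchored to the tail can't simply ride along inside.
print(inner_material_speed(0.1))
```

This is the geometric reason the speaker gives for why wiring down the center of a growing vine robot is harder than it first appears.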