Let me say some things about Aaron. The first thing to be said is that he has an academic pedigree my mother wishes I had: he went first to Harvard, then MIT, and now he's at Yale, so he's moved up through the East Coast schools with some facility. He is an associate professor at Yale, and he has won all of the young investigator awards and prizes one could care to name, from NASA, DARPA, and the NSF, and he was an RSS Early Career Spotlight speaker not so long ago. So he's widely recognized for the things he's done, and you're going to hear about some of them today. You can read about the stuff he's done, download the stuff he's done, and steal the stuff he's done, please. He's right at the front of the whole open-everything movement in robotics: if you want to design robot hands, steal his designs, they're out on the web for free through the Yale OpenHand project. He's at the vanguard of democratizing robotics, not just the research but also the education. And if you think, "I wish I had a bunch of stuff for my robot to pick up, but I just don't want to go shopping at IKEA," Aaron will bail you out: get some stuff through the Yale-CMU-Berkeley (YCB) project, which puts a common set of benchmark objects out in the world so our ideas become more transferable and more comparable to one another. The last thing I'll say before I let Aaron speak: if you want to do bio-inspired robotics, you should do it the way Aaron does it, which is to study the biology, do the research on the biology, write about the biology, and bring the relevant concepts over to robotics, where you do the design and the mathematical verification and the investigation, and you do it all in a serious way, rather than being a lightweight who cherry-picks the occasional pop-science article about biology before writing about it in a robotics paper. This is the real deal in terms of bio-inspired robots. I'm delighted to have him here; let's welcome Aaron.

Thanks. Well, that pretty much sums up my talk, so I'll take any questions. All right, thank you for the amazing introduction, Seth. I stand here in a mechanical engineering department, and I'm a through-and-through mechanical engineer; most of the problems that excite me began from my interest in mechanical design. Those of us interested in mechanical design in the robotics community are a small but vibrant group, and we now have a Technical Committee on Mechanisms and Design, so if you are one of us, please join our TC. I'm basically going to be talking about good design of robot hands. I call it "mechanical intelligence" because that sounds a whole lot cooler, but it's really about how, from the ground up, you design hands to work as well as possible without the need for the sensing and control and the other things that we usually associate with robotics.

I call my lab the GRAB Lab; it loosely stands for Grasping and manipulation, Rehabilitation robotics, And Biomechanics, but the center of it is really robotic grasping and manipulation. Because of that, I also do some stuff on mechanisms and theory that I find interesting but that doesn't usually play well in a seminar, so I'm going to avoid it. We do a lot of work on studying human manipulation; this is actually about half of my lab these days, really looking at how humans manipulate. There are a lot of applications. We began working in that direction because, as we were developing our hands, we wanted to know: do they work well? From a benchmarking and performance-comparison standpoint, everybody holds up the human hand when thinking about what makes for a good grasping and manipulation system, but very little work had been done on really quantifying human hand function, understanding the various ways that humans use their hands, especially from the standpoint of being able to put some hard numbers on things that we can use to compare our artificial hands to.
That naturally plays into upper-limb prosthetics, which I'll touch on briefly today. As for stuff I won't talk about: I've done a lot of other things. You get neat ideas, and one of the great parts of being a professor is that you can just start exploring areas you find interesting. We put hands on aerial robots; I did my postdoc with Hugh Herr at MIT doing some lower-limb exoskeleton work that I'm not pursuing anymore; and we do some things at the intersection of fabrication and modular robots that we call active cells. I have about 12 full-time people in the lab right now, postdocs and grad students, and these are some of them, the people who did the actual work I'm going to talk about.

I don't think I have to go over this, but just so everybody starts from the same place: what's the current state of the art in robot manipulation? We can't currently, reliably, physically interact, in the grasping and manipulation sense, with objects in unstructured environments. The key word there is "unstructured," and especially "autonomously." We've made great strides since I made this slide, but we still don't have the ability to just send a robot out into a home and do even the simple task of grasping the objects lying around. A lot of what we imagine for robotics, having Rosie the Robot as our housekeeper, is never going to come to fruition without this ability. Think about what you would or would not be able to do without your hands; that's the way I see manipulation in terms of the needs of robotics.

So why is it difficult? There are a lot of reasons; I'm going to tell you about the aspect I focus on. One of the major reasons has to do with the variability of the kinds of things you're trying to do with your hands, but especially in the context of the uncertainty that comes about in real, unstructured tasks. An assembly line, a very structured environment with narrow tasks, is relatively easy to deal with, but getting into home environments and so forth, which is really the end goal, just becomes too much. So why is there uncertainty? I'm not going to go through all the limitations of our sensing and perception technologies, but even if we were to drastically improve all of those, the reality is that there is going to be some amount of uncertainty in the properties of the object and in the location of the object that makes traditional approaches of highly controlled movements difficult. The object is going to be two inches to the left and 20% larger than what you think it is, and dealing with that kind of uncertainty, or even a smaller amount of that type of uncertainty, is going to be challenging.

I think the primary reason it's challenging, and something that helps frame the way I approach this problem, is thinking about things in terms of contacts and the constraints they impose. Controlling contacts and forces in the presence of uncertainty is particularly difficult. Think about low-level control of actuators: whenever you have a gearhead on an actuator, at the end of the day you're typically doing position control, and dealing with uncertainty while controlling position is incredibly difficult; you're either pushing through the object or not even in contact with it. So you end up with either unintended contact or large forces, and things break. Hands tend to have lots of small, fragile components, and they break regularly, you drop the object, and so forth. This is especially difficult when objects are stiff, and we generally make our robots, and the things we want them to interact with, stiff. Here's a smattering of traditional hands, very "robotic" in their construction, made out of metal with rigid joints and so forth.

The fundamental problem I see with this, I'll try to illustrate here. Think about a simple three-link planar manipulator. If this were a robot finger or a robot arm, I could put an actuator on each of the joints and move it in free space: a three-degree-of-freedom manipulator with three actuators, a simple problem. The problem comes when you add a contact, which adds a constraint, so your open chain becomes a closed chain. If I take that same finger, or arm, or however you want to think about it, with three actuators, and add an additional contact down here, I go from a three-degree-of-freedom mechanism down to a one-degree-of-freedom closed-loop chain; a four-bar linkage has one degree of freedom, if you aren't aware. Controlling one degree of freedom with three actuators requires you to deal with that redundancy in some kind of nice way. Now, if you had no uncertainty, you could deal with it: you could precisely coordinate all those actuators and control that one degree of freedom with three actuators. But as soon as you add uncertainty, it becomes a much more difficult problem to address. To me, this sums up the challenges of grasping and manipulation in the traditional robotic sense really nicely: we have fully actuated robot systems that work really well in free-space movement, but as soon as you add constraints due to contact, those mechanisms become overconstrained, and you have to handle that through some kind of redundant control scheme, which is challenging even without uncertainty, and in the presence of uncertainty has resulted in these systems generally not working at all.
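That degree-of-freedom bookkeeping can be checked with the planar Grübler-Kutzbach criterion. Here's a small sketch (an editorial illustration, not code from the talk), with the simplifying assumption that the fingertip contact acts like one extra pin joint to ground:

```python
def planar_mobility(num_links, num_joints):
    """Grubler-Kutzbach mobility for a planar linkage whose joints are
    all 1-DOF (revolute or prismatic). num_links includes the fixed
    ground link; each 1-DOF joint removes 2 of a link's 3 planar
    freedoms."""
    return 3 * (num_links - 1) - 2 * num_joints

# Free-space finger: ground + 3 moving links, 3 revolute joints -> 3 DOF.
print(planar_mobility(num_links=4, num_joints=3))  # 3

# Fingertip contact idealized as one more pin joint to ground: the open
# chain closes into a four-bar (same 4 links, now 4 joints) -> 1 DOF.
print(planar_mobility(num_links=4, num_joints=4))  # 1
```

Three actuators driving that single remaining freedom is exactly the overconstraint described above.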
You can take a control-based approach, like impedance control, that addresses a lot of these issues, and it does work in certain senses, but it still relies on sensing that can be noisy and imperfect; having those sensors there adds a lot of additional mechanical complexity, cost, and fragility, since the sensors themselves are very easy to break; and it really doesn't help much with impacts and transients. So it would be really nice if you could address this from a design perspective.

Adding some softness, compliance, springs, helps. Basically, what springs can do is add degrees of freedom into the mechanism, so you can address that redundancy to some extent by adding passive or semi-passive degrees of freedom, and they also allow some passive conformation. We add springs or soft coatings to a lot of our mechanisms to make these kinds of things easier to deal with in the robotic sense. You've got the very soft Chris Atkeson-style robots at one extreme, but people have been doing this in the context of legged robotics, springy legs like RHex, for quite a while. Those actually mimic a lot of what was seen in insects, where the passive compliance in the joints lets them run over rough terrain open-loop, purely due to the passive adaptability that the springs provide.

That's actually how I came into this problem. I did my PhD with Rob Howe at Harvard, and we were funded under the sprawl robots MURI, which involved Mark Cutkosky as the PI and Bob Full, the cockroach biologist. The idea was: can we build robots with springy joints that allow them to passively adapt to uneven, uncertain terrain, open-loop, purely through their mechanics? And then there was a little add-on to the project: they thought maybe compliance in hands would have similar benefits and allow you to passively reject disturbances. For the whole MURI project I was the only person working on hands, and this was literally my PhD, the statement: go investigate how compliance in hands can benefit passive adaptability.

So I started by building a finger. This wasn't the exact finger, this was something slightly later, but I borrowed some of the ideas from the sprawl robots, with these flexible joints, and the manufacturing process was embedded shape deposition manufacturing, where we had components embedded in the structure. I had a two-link finger, and I embedded a cable in the distal link and another cable in the proximal link, so I had two actuators and two joints. As I was playing with that finger, I realized that I could pull on the distal cable and it would make the proximal joint move as well. I ended up basically never using the proximal cable, because I could drive both degrees of freedom of the finger with that single cable. But it also had this really nice benefit: I can make contact on the proximal link and keep pulling that cable until the distal link makes contact too. I've made two contacts, controlled two degrees of freedom of the mechanism, and passively conformed to the shape of the object, just due to the mechanics of this cable running in a differential fashion, which I'll talk about in a minute, across the two degrees of freedom. So I was able to use a single actuator to control two degrees of freedom of the hand and get this nice passive wrap behavior.

That pushed me to think: can I do this across multiple fingers? The idea came about: what's the simplest effective hand I can design? The challenge was to build a single-actuator hand that can grasp the widest range of objects possible in a completely open-loop way, with no sensing and no control needed to establish the grasp. You're still going to need some high-level perception; when I talk about open loop, I'm talking about low-level control. You still need to know where the object is to be able to grasp it. But when it comes to actually acquiring the grasp, keep everything completely sensorless, if you will, and open-loop. Almost everything I show you in the videos from now on is open-loop unless I say otherwise.

After spending six and a half years on my PhD working on this, I came up with the design of what we called the SDM Hand. It's a four-fingered hand, basically four of those two-link fingers I just showed you, with a single actuator. I put the actuator back at the elbow just because it was a heavy actuator we had lying around the lab, but it runs differentially, as I'll describe in a minute, to control all eight joints of the four-fingered hand in an open-loop manner. I talked about the adaptability within a single finger: you can see in this representation a single finger where I'm pulling the one tendon, and it can passively make contact on the proximal link and then continue to close until it makes contact on the distal link. Then I came up with this design of a floating pulley stage, which essentially enforces an equal amount of tension in each of the tendons without specifying their positions separately. You can give and take tendon across the pulleys differentially, so the mechanism routes the displacement where it needs to go to get a passive wrap behavior on the object. I can just take a single actuator at the base and pull on the tendon until it stops moving; we literally just put a current limit on the actuator and run it to stall, and you get that wrap behavior across all eight joints of the hand.
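Here's a toy model of that floating pulley (an editorial sketch with invented numbers; the real hand's elastic joints and geometry are much richer). The pulley constrains total tendon travel, x_a + x_b = 2 × actuator pull, while leaving the split free, so travel blocked by contact on one side is automatically rerouted to the other:

```python
def tendon_travel(actuator_pull, limit_a, limit_b):
    """Floating-pulley differential feeding two finger tendons.

    Constraint: x_a + x_b = 2 * actuator_pull (equal tension, free
    relative displacement). Assuming identical joint springs, both
    tendons pay out together until one finger reaches its contact
    limit; remaining travel is rerouted to the free side. If both
    sides are blocked, the actuator stalls on its current limit.
    """
    total = 2.0 * actuator_pull
    shared = min(limit_a, limit_b, total / 2.0)  # symmetric closing phase
    x_a = x_b = shared
    leftover = total - 2.0 * shared
    x_a = min(limit_a, x_a + leftover)           # reroute to side A first...
    leftover -= x_a - shared
    x_b = min(limit_b, x_b + leftover)           # ...then the rest to side B
    return x_a, x_b

# Finger A hits the object after 2 units of travel, finger B after 5:
print(tendon_travel(3.0, 2.0, 5.0))  # (2.0, 4.0): A stops, B keeps wrapping
```

The equal-tension property is what makes the "run to stall" strategy work: whichever finger is still free keeps closing until everything is in contact.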
Here's a video showing that in action; I'm just running the actuator to stall. I did a lot of design optimization on the link lengths and the joint stiffnesses and so forth, to make the hand adaptive to as many sizes and shapes of objects as possible. Notice also that we're varying the object positions; that matters when you have positional error in your task. But this is all open-loop: we run the actuator to stall at the same level every single time, and the hand just nicely, passively conforms to the shape of the object. Because of the fabrication process we also got this nice soft behavior, which was critical in allowing the hands to be durable. I'm going to skip through that, but it's really important when you're talking about grasping and manipulation in unstructured environments.

Here's an example of some difficult grasping tasks. That's a WAM robot arm; I'm only using three degrees of freedom, essentially positioning the base of the hand in x, y, z, with the orientation following from the position, and I'm teleoperating it using a PHANTOM haptic interface nearby. Once I position the base of the hand, I just hit a button to trigger the open-loop closing behavior of the grasp. So the motion of the arm is teleoperated, and the grasping is all done automatically, open-loop. You can see the passive compliance really helping with the contact with the table; a lot of times when you're grasping objects they're sitting on a table or under some other kind of constraint, so passively conforming to that is really nice as well. I'm going to stop it for time, but I had this nice demo of grasping a glass of red wine that might spill all over the tablecloth. I had probably done it thirty times, and then we had a DARPA program manager visiting, and at the end of the meeting my adviser said, "come and see this, I want to show it to you; Aaron, do the thing." So I did it, and of course I spilled the wine, the first and only time, and we didn't get the funding. We got funding later, though.

Okay, so I call these hands underactuated, and that is a term used in the community, but it's underactuated in a different sense than the way the term is often used in robotics; for these hands we basically mean there's a differential involved. You may have heard of differentials in the context of your car; basically all cars have them. They're the solution to the problem of routing a single input from the engine to multiple wheels. You could do that in a fixed way and just couple all the wheels together, but as soon as you go around a bend, the wheels on the inside have to spin at a lower rate than the wheels on the outside, and you start to get slippage, and you all know slippage is bad when you're driving. What the differential in your car does is enforce an equal amount of torque on each of the wheels without specifying their velocities directly; there's a relative-velocity freedom, and the wheels can change speed with respect to one another subject to that coupling. Similar to the hands I'm talking about, what resolves that freedom is the contact with the road. It's an underactuated mechanism: there are two degrees of freedom in the two wheels, if it's two-wheel drive, and a single input, so there's an extra degree of freedom that's free, and that degree of freedom gets specified by the contact with the road, by the curvature of the road. It just allows the wheels to turn at the rates they need to in order not to slip as you passively go around bends. So, I talked about compliance and springiness.
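As a quick sanity check on that picture, here's a toy calculation (an editorial illustration with made-up numbers, not from the talk): in a steady turn the road demands different speeds of the inner and outer wheels, and an open differential accommodates this while the carrier, the engine side, sees exactly their average:

```python
def cornering_wheel_speeds(v_car, track_width, turn_radius):
    """Wheel speeds the road imposes in a steady no-slip turn (toy model).

    The inner wheel traces a smaller arc than the outer one, so it must
    move slower. An open differential lets the two speeds differ freely
    while keeping the carrier speed at their average and splitting the
    drive torque equally between the wheels.
    """
    v_inner = v_car * (turn_radius - track_width / 2.0) / turn_radius
    v_outer = v_car * (turn_radius + track_width / 2.0) / turn_radius
    return v_inner, v_outer

# 10 m/s through a 20 m radius turn, wheels 1.5 m apart:
vi, vo = cornering_wheel_speeds(10.0, 1.5, 20.0)
print(vi, vo)  # 9.625 10.375 -- and their average is the carrier's 10.0
```

The hand analogy: replace "curvature of the road" with "shape of the object", and the same free degree of freedom is what lets the fingers settle where contact dictates.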
Compliance helps to some extent: it helps with conformation, and it helps by adding degrees of freedom. But a differential transmission does a lot more. Compliant degrees of freedom have a soft-constraint effect, F = kx; that might not be an issue, but you're stuck with it. You're going to be limited in the forces you can apply, or at the very least those forces are coupled to some displacement, and real springs have a practical range of motion in terms of the displacement they can exhibit. I think the biggest thing, though, is that at the end of the day you have series compliance that you can't get away from, and that limits all of these things. With a differential, you add springs only if you need them; in a pure differential there are no springs at all. In my differentials I use springs as antagonists, as well as to make the fingers compliant before I tighten the tendons. I can place the springs in parallel, which allows things to be soft, and then once I actually tighten up, the springs essentially come out of the kinematic chain and I get a really tight grasp with the springs effectively not in the chain at all. That's the difference, as I see it, between soft robots or soft hands on the one hand and differential, underactuated mechanisms on the other.

I came up with all of that on my own, and then realized that other people had done it before. So, credit where credit's due: Shigeo Hirose is one of the most brilliant mechanism designers in the robotics community, I believe retired now but still semi-active, and he had a mechanism called the soft gripper, which is shown up there. More significantly, Clément Gosselin at Université Laval in Canada has been working in this area with linkage-based transmissions, doing similar things, and they have a company, Robotiq, that makes underactuated hands from a linkage-based perspective. And people have been doing successive work in this area since then.

But just to summarize why it works: contacts add constraints. A fully constrained system, a fully actuated mechanism, which most of our robot hardware is, becomes overconstrained at contact, and that makes it extremely difficult to control, especially in the presence of uncertainty. If you turn that on its head, from the design perspective, the main idea of our work on hand design is to design effective hands that are underconstrained before contact, such that when they do make contact they become either exactly constrained or remain underconstrained.

Just a little lingo here (am I going to get a time check? what am I shooting for?). This isn't necessarily established terminology, but it's the way I like to think about the scope of grasping and manipulation. I like to think about grasping as essentially attaching an object to an arm: the fingers are trying to stabilize the object, and after you've grasped it, you do any manipulation with the wrist and the arm. Holding on stably to the object has been discussed in terms of power grasping, where you wrap your digits around the object as much as possible, usually to exert large forces or to grasp large objects, and precision grasping, which is usually fingertip-based, usually for small objects. There's another spectrum of hand-based manipulation that we call within-hand manipulation, where you use your fingers to manipulate an object you've already grasped. That's much more difficult, and I'll talk about some of it; it's all hard, but this requires dealing with many highly redundant closed-loop chains and with finger individuation. I think it's really what makes humans so effective. Everybody's heard that the human hand is largely responsible for why we are so effective, and even intelligent, as a species, and it's the ability to do within-hand manipulation that lets us do complicated tool-making tasks and so forth. The robotics community is still mostly stuck up here at grasping, but my lab is also starting to look at within-hand manipulation a bunch.

The hand from my PhD work was a power-grasping hand; now let me talk about applications in precision grasping. This is the same idea: a different hand with slightly different kinematics, but the same context, thinking about dealing with contacts and grasping small objects using a hand that's underconstrained, all open-loop. We just servo the hand to a certain position and then execute an open-loop trajectory of the fingers, and all that grasping that looks very complicated just happens nicely as a product of the mechanical design of the system. This is all autonomous: we're using a Kinect overhead, sensing only the location of the center of each object, then servoing the hand to that location and executing the open-loop sweep trajectory. And this is all one continuous take; I don't even think it's sped up, and if it is, it's only 2x, but I don't think so, because we usually mark that on our videos. You can see how robust this is; we have a paper evaluating it formally, but it's just a very nice way of dealing with an incredibly difficult task that comes about as a product of good mechanics, good design.

Okay, moving on to what we're working on in within-hand manipulation; let me start with the two-fingered example. If you look at the hand in the top right corner, that's basically a representation of the two-finger grasp we just showed, and that hand has two fingers with an actuator for each finger, so we have two controllable degrees of freedom.
Now, there's a spectrum of outputs for the object: the grasp force on the object, the position of the object's center in x and y, and the orientation of the object. With two actuators we can control at most two of those terms; there's a manifold in that x-y-theta space over which we can manipulate the object. The way we're able to do this is that the mechanism has additional degrees of freedom that come into play and passively adapt to keep contact with the object, and to keep a grasp force on it, without us needing to precisely control everything. Here's an example of grasping an object and manipulating it in a fingertip grasp within the hand; I'll get into this a bit more in a minute. This is a really simple example; this one's easy, although if the ball rolls on the finger surfaces that can create some problems I don't have time to get into right now. But there's a lot going on there, and the mechanism is really making up for a lot of the uncertainty and error in our control of it.

I'm going to mention this real quick: we participated in the DARPA ARM program. ARM had two tracks: a hardware design track with three teams, and a software track with something like seven teams. The hardware track was designing hands, and the software track was using existing hands. My team was led by iRobot; we pulled in iRobot because we wanted a company to do the software and all the integration. My group did the hand design and Rob Howe's group did the sensing, though everybody was involved in everything. This was the hand we came up with. The question here was what a multifunctional hand would look like that has the lowest level of complexity possible while giving you as much functionality as possible. This hand had five actuators. There was one per finger: just like the fingers I showed you earlier, there's a tendon anchored on the distal link that runs differentially to get passive wrap within the finger. Then we had an additional actuator that can take these two fingers and rotate them so that they oppose one another, so we can do pinch grasps or within-hand manipulation movements, or oppose the thumb while interdigitating; the interdigitation is good for grasping both small and large objects from a power-grasping perspective. Then, with the thumb, we had an additional actuator that allowed us to come in from the side and manipulate objects held in a pinch grasp, so we can rotate those.

We ended up being the top team, winning the ARM competition, and here's a little highlights reel. You can see that same sort of power grasp; almost all of what's shown here is teleoperated, or actually in this one I think a human is holding the hand. This is where we came up with the idea, not a completely novel one, of putting fingernails on it to grasp the thinnest objects. I don't know why they didn't show it in a single take here, but we picked up the key, inserted it into the lock, turned the key to unlock the door, and then turned the handle and opened the door, all in one go. This is autonomous again, in my lab. Same idea: we're using a Kinect overhead to sense the centroid of the object, and here also the long axis of the object if there is one, then servoing the hand to that configuration. You can see how much positional error there is; it's all taken up by the passive adaptability of the hand. And then we're doing this little finger-twiddle thing that helps move objects from a fingertip grasp to a palmar grasp, which is an important transition for tool use, for instance. Again, really robust. I'll skip to, yeah, here's the batting practice; this was before I put sensors in the hand, because they wouldn't have withstood this. And then a poor man's slow-mo here.

The idea was that we were supposed to deliver those hands to the software teams, who were supposed to then use them, but the DARPA Robotics Challenge happened. Gill Pratt had taken over the program and started the DRC, and all of the software teams ended up being finalists in the DRC, so it just sort of went away. They did ship our hands with the Atlas robots to the DRC teams who got them, so the left hand there is one of ours, but you all probably know what happened with the DRC: manipulation ended up being almost an afterthought because of how difficult the locomotion aspects were. As for the people who worked on this: Lael Odhner, who was my postdoc at Yale, along with Leif Jentoft and Yaro Tenzer from Harvard, started RightHand Robotics; you may have heard of them. They sell the ReFlex hand, which is basically a version of the DARPA hand, and you can still buy it for research applications, though they're mostly focused on bin picking now.

I'm not going to get into detail on this, but we've done a lot of work on a variety of hand designs, especially for within-hand manipulation, where we're really just exploring its various modes. It's a very complicated, multifaceted functionality that humans have, and we're trying to think of nifty ways to implement various within-hand manipulation modalities while still wrapping our heads around what we need them for. I'm just going to skip this.

So, to summarize: that whole contacts-and-constraints way of thinking, considering the overall degrees of freedom of the mechanism after contact, has given us a lot of great insight into the behaviors and capabilities of these mechanisms.
mechanisms as well as their capabilities and in thinking about within hand manipulation one analog that we thought about is parallel platforms so you're probably all familiar with the Delta platform or the Stuart platform these are by far the most successful parallel platform architectures the Stuart is used in a lot of flight simulation it's a six degree of freedom platform they'll put like a you know captain's cockpit up here and then manipulate it around it's very rigid the Delta is extremely fast there's very low inertia moving bits here it's a really effective way of doing picking place for manufacturing and assembly and so forth but the community of people interested in parallel mechanisms and parallel platforms there's all kinds of you know other other ideas out there but one of the things that is consistent across all of them is that these the the practical ones are exactly constrained and what I mean by that is if you have the output platform having six degrees of freedom the the parallel mechanism has exactly six actuators the Delta has a three degree of freedom output there's one that has four and the the the act there's exactly three actuators but if you look at each of the legs that they call these legs but you could think about them as fingers if you want they're they're physically connected to the platform and if that platform is going to move in six degrees of freedom that leg has to move in six degrees of freedom as well but there's only one actuator there so the way this is dealt with in parallel mechanisms is you have in this example five purely passive degrees of freedom in that leg with the one actuator so each leg has six degrees of freedom because it also has to move in six degrees of freedom but there's a lot of completely passive degrees of freedom there now if I think about this in terms of a hand with the platform being the object right I can have an exactly constraint hand but the issue is is that if I cut the connection to this platform 
So in thinking about this as a hand, you have to deal with that aspect of things. What we wanted to do is build essentially exactly constrained hands, but make them able to support their own weight under gravity, and the solution is underactuated mechanisms, which basically allow you to exert force across all the degrees of freedom of the mechanism without adding constraints, or only adding minimal constraints. So what we want to think about is designing hands where you have one degree of manipulation for every actuator in the entire hand, or about that. The first work we did on this was with some postdocs from the parallel mechanisms community, just looking at kinematic architectures of hands: this is the platform, which is the object, and these are three fingers with rotary joints here and then another rotational joint that's not easily represented. These are just some example workspaces: for that architecture, if you move the object in x and y, here's what it would look like, and then rotating the object. So these are the kinds of hands we were going for in this work. I won't tell you much more about it, other than what I think is a really nifty thing that we published fairly recently, what we call the Stewart hand. It's basically that same Stewart platform where the object is the platform: we can grasp it and then manipulate it in six degrees of freedom. The hand has seven actuators. There's one linear extension actuator per finger, which makes it move like a standard Stewart platform, and then there's an additional actuator that closes the hand, routed differentially from that single actuator to the sort of three fingers of the hand. It keeps a grasp force on the object and passively reconfigures as the object is manipulated through the workspace.
I'm fairly certain (it's always risky to say this kind of thing) that if you want to think about dexterity in terms of the range of motion of a manipulated object, this is the most dexterous hand that's ever been built: we can grasp a wide range of objects and manipulate them all in six degrees of freedom, similar to what a Stewart platform can do. So, as Seth alluded to, you can download and build most of our hands for free through the Yale OpenHand Project. This really came about because, if your research is developing novel hardware, how do you get other people to benefit from it? Typically it would only be if you start a company and sell the hardware, but that takes a long time, and there's not exactly a hot market for robot hands out there, so we wanted to be able to disseminate and distribute our hands without having to do that. We had already been doing a lot with 3D printing, and one of the challenges is that if I want, say, a computer-science robotics group to use one of my hands, they're not going to have the know-how or the interest to go into a machine shop and do a bunch of milling to make a hand; it just won't happen. But with relatively high-quality 3D printing now at almost every university, you can do a lot. I won't get into the details, but these are either off-the-shelf components or 3D-printed parts with some additional after-the-fact steps (we do some molding here as well), and I think well over a hundred hands have been built from our OpenHand Project. Related to that, we have another website we call Open Robot Hardware, which mostly links to other people's open hardware efforts. Okay, I'm going to end with some ongoing work. Now that we have some decent hand designs, now what? We're starting to do more traditional robotics in terms of planning, learning, and control. The benefit, from my perspective, is that
these hands and their passive adaptability really facilitate a lot of tasks that are very difficult to do in the traditional sense, as I've been talking about; the passive reconfiguration lets you do things with very little information. Here's one effort where we have a similar two-fingered hand and we're manipulating this object in two degrees of freedom with no model of the hand-object system. We have a signed binary Jacobian, if you even want to call it that, which basically says that if I turn a given actuator one way, the fingertips are going to move a particular way, and we just wrap a controller around that. We have a camera up top tracking the center of the object (you can see the black dot there), and everything I'm showing you here is a very traditional visual servoing approach trying to drive the center of that object to the target location as it moves. We've started to look at fancier control (model predictive and adaptive control) to make it better, and it does, but even without fancy control we're able to drive the object to those target locations with essentially no information about the system. The reason it works is that the mechanism passively reconfigures internally to make up for the imprecise information you're giving it as you try to control it to the target locations. We're able to do a bit of planning on top of that; this is work in collaboration with Kostas Bekris at Rutgers, and we're also working with Abdeslam Boularias there on the learning work we're doing. This is accepted to ISER, which I guess is in a month or so. (Is this playing okay?) This is just illustrating some things I'll talk about in a minute: we've learned some operating modes of the hand; in particular, we want to know which regions in the actuator space would result in dropping the object or hitting a singularity, so that we can plan around them.
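As a rough sketch of that signed-binary-Jacobian idea (my own toy illustration, with made-up sign entries and gain, not the actual controller from the talk): the only "model" is the sign of each actuator's effect on the tracked object center, and a proportional visual-servoing loop is wrapped around it.

```python
import numpy as np

# Hypothetical signed binary Jacobian: entry (i, j) is just the sign of
# how actuator j moves the object center along axis i (no magnitudes).
J_sign = np.array([[+1.0, -1.0],   # actuator 0 pushes +x, actuator 1 pushes -x
                   [+1.0, +1.0]])  # both actuators push +y

def servo_step(object_xy, target_xy, gain=0.1):
    """One visual-servoing update: command actuator velocities along the
    signed directions that reduce the tracked object-center error."""
    error = np.asarray(target_xy) - np.asarray(object_xy)
    return gain * (J_sign.T @ error)

# Object left of target: actuator 0 is driven forward, actuator 1
# backward, and the predicted object motion points toward the target.
cmd = servo_step([0.0, 0.0], [1.0, 0.0])
print(cmd, J_sign @ cmd)
```

The point being made in the talk is that a real hand tolerates such a crude model because the mechanism passively reconfigures to absorb the error.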
We operate only in the safe modes and plan around those risky regions, but I'm showing this mostly as an example of doing some more functional tasks; in case you didn't notice, it's writing "ISER". Here's an interesting idea that I think I need to write a proposal on. Essentially, a lot of challenging within-hand manipulation tasks necessitate sliding, and sliding can be really difficult to execute because of friction and our lack of knowledge of it, or our inability to control around it, especially with soft contacts. So a lot of what we were thinking about uses the idea of caging, where you have a safe cage around the object so that if things slip or go wrong, you don't drop it. A lot of the time caging is considered outside the context of contacts (it's just having bars around the object); we're trying to think about caging within contacts. Once you have a grasp on the object, like in that image above, because there are additional degrees of freedom in the kinematic chain, you can squeeze and manipulate the object, for instance from a fingertip grasp into a palmar grasp, which necessitates sliding along the finger pads. And we started to think about (let me play this) representing the hand-object system in an energy space. In a given configuration, you'd have to exert a certain amount of work or force on the object to get it to move; that is, it's going to fight against you. You can map out this energy space of the hand-object system and look at the low-energy positions, and, not surprisingly, the object will gravitate toward those low-energy regions. It turns out that this is really robust no matter what the starting position of the object is, so we
can grasp the object wherever it lies and then servo our actuators to a certain target location, which produces an energy map like that, and essentially without fail the object moves toward the low-energy configuration in that energy space. Then you can imagine doing some planning around that: offline, you sample this energy space as a function of the actuator inputs, decide where you want to move the object, look up the actuator commands that put the low-energy region at the target location, and servo the object there. This is work we submitted to ICRA, so it's still in review. We are using a hand with low-friction surfaces here, which makes sliding easier, but one thing I like about this approach is that even if the object doesn't slide because it's stuck, you're at least driving it toward sliding in that direction, and when it does slide, it will move in that general direction. So at the very least it lets you start to move the object in the desired direction, and you can wrap some controllers around that to do more robust, planned manipulation within the grasp. Okay, and it's not a robotics talk these days unless you sprinkle some learning into it, so I'll do that. From my perspective, the robustness to uncertainty that I've talked about is a major win for data-driven approaches. You may have seen the Google arm farm; I don't even know how many arms they had working day and night, or for how many months, but it was a lot. You're probably familiar enough to know that with a lot of these blind, data-driven manipulation approaches you're basically working with (this is a complete exaggeration, just for illustration) a 99.9 percent failure rate, so you need tons of data to even get useful data.
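A toy version of that offline energy-map planning (my own illustration: a one-dimensional spring potential stands in for the real hand-object energetics, which would come from the hand mechanics):

```python
import numpy as np

# Toy hand-object energy model: for a fixed actuator command, treat the
# grasp as an elastic system whose energy is a spring potential centered
# on an equilibrium set by that command. (A stand-in, not the real map.)
def energy(obj_x: float, equilibrium_x: float, stiffness: float = 1.0) -> float:
    return 0.5 * stiffness * (obj_x - equilibrium_x) ** 2

def settled_position(equilibrium_x: float, samples: np.ndarray) -> float:
    """Sample the energy map and return the low-energy position the
    object will gravitate toward, regardless of where it started."""
    energies = np.array([energy(x, equilibrium_x) for x in samples])
    return float(samples[np.argmin(energies)])

# Planning then reduces to choosing the actuator command (here, the
# equilibrium) whose energy minimum sits at the target location.
xs = np.linspace(-1.0, 1.0, 201)
print(settled_position(0.3, xs))   # sampled minimum near 0.3
```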
For us, let's say we can do a 99.9 percent success rate: we can collect data a lot more easily, with a lot less money and fewer arms, and we can also do things online and in real time. Because we've already shown how robust things are to uncertainty, we can manipulate without knowing a precise model, gather data as we're manipulating, and update our models and improve things in real time. I'm going to skip this one for time, but this is the mode recognition example I mentioned earlier. We generated a training set by manipulating the object with that visual servoing approach, tracked a number of features from the vision system as well as a few on the actuator side, and labeled the modes of the object by hand. This here is what we call the normal mode, where the object is moving stably and we're not at risk of dropping it; then, as we change the commanded direction, we start to sense, thanks to the training set, that if we keep moving in that direction we're going to drop the object, and so forth. So this is showing the online prediction of the mode we're in, given the desired trajectory shown by the red arrow; here we're hitting a singularity. We can then operate in the sweet spot of the normal mode and do all of the within-hand manipulation work there, and the video I showed earlier of us writing "ISER" is planning around those risky regions. Now the newest work, which is also under review for ICRA. It turned out that in the project published at this past ICRA, we just used a random set of visual and actuator features, looked at which correlated best to the output, and selected the best ten or so from that set of forty-some features. But those weren't necessarily very meaningful from the physical-system side of things.
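To make that mode-recognition setup concrete, here is a deliberately tiny stand-in (made-up features and a nearest-centroid rule; the actual work used learned classifiers over vision and actuator features):

```python
import numpy as np

# Hand-labeled training frames: two made-up features per frame, with the
# operating mode labeled by a person, as described above.
train_X = np.array([[0.10, 0.10], [0.20, 0.00],   # "normal": moving stably
                    [0.90, 0.80], [1.00, 0.90],   # "drop": about to lose it
                    [0.00, 1.00], [0.10, 0.90]])  # "singular": near singularity
train_y = ["normal", "normal", "drop", "drop", "singular", "singular"]

# One centroid per mode.
centroids = {m: train_X[[i for i, y in enumerate(train_y) if y == m]].mean(axis=0)
             for m in set(train_y)}

def predict_mode(features: np.ndarray) -> str:
    """Online prediction: label the current frame with the nearest mode,
    so the planner can stay in the 'normal' sweet spot."""
    return min(centroids, key=lambda m: float(np.linalg.norm(features - centroids[m])))

print(predict_mode(np.array([0.15, 0.05])))  # normal
print(predict_mode(np.array([0.95, 0.85])))  # drop
```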
They were just the ones that happened to correlate best for the application we were doing, and it turned out, maybe unsurprisingly, that when we ported this over to a new hand, it didn't work very well. So we started to think about which features in the vision space would make the most sense in terms of the physics of what's going on, such that we might transfer them to a novel hand system. Essentially what we ended up with (and we're still thinking about this) is terms that speak to the local contact geometry between the finger and the object: agnostic of the whole hand configuration, what's the local curvature on the finger side and the object side that results in these different behaviors? Those should transfer relatively well to different types of hands. We also looked at terms that speak more to the stability of the hand-object system, and with those terms we ended up being much more effective. These were the test hands: we varied fingertip geometry and link length, which you can't easily see here, but we did short fingers, short links, and so forth, and started to investigate this general space of what kinds of features you can learn that are transferable. Across these tested systems we had something like an 86 percent success rate at that same online prediction of whether we're going to drop the object or have it get stuck and so forth, whereas with the sort of dumb features we used previously we had something like a 30 percent success rate. So these were a much more effective set of features to learn from. All right, I'm going to skip that; Seth already mentioned YCB. We're doing work on benchmarking, which is
very important in helping evaluate different approaches in robot manipulation, so if you're interested in manipulation and benchmarking, check out this project. And then the prosthetics work, which is still very much ongoing. You can imagine how the things I've been talking about apply to prosthetics: you want something simple, lightweight, and durable, and it turns out that amputees are essentially doing open-loop control of their prostheses, executing grasp commands either through a myoelectric signal or through a cable that they pull, so you'd really benefit from a hand that's adaptive and can do all these same sorts of things. We've done some work on designing hands that incorporate some of this. We're doing anthropomorphic hands here, which we haven't really done before; personally, I don't think anthropomorphism is that important for robotics, but for prosthetics you can imagine that an anthropomorphic look has a lot of benefits. So we're looking at which grasp types are important and how to implement them in hardware. We're also doing a lot of work on wrists, which I won't get into, but it turns out that being able to orient your end effector is a very important functionality. Other people have thought about this as well: if you're going to have, say, four degrees of freedom in your hand-arm system, how is it best to spread them out? Is one open-close actuator plus three degrees of freedom of wrist better than one degree of freedom of wrist and three actuators in the hand? It's not a simple question, but at the very least we're investigating the role of the wrist in grasping and manipulation tasks and whether we can design effective prosthetic wrists that are lightweight, durable, and so forth. Okay, so with that, these are the people from the lab, alumni and current students, who did all the work I presented, and my funding acknowledgments, and I'll take questions. So, as I mentioned at the
beginning, we've been doing a lot of work looking at human hand function, and it's a really complicated question. Separating function from form, especially when a lot of the objects you interact with were designed for the hand's form, there's not really a right answer. We've tried to whittle things down as much as possible by thinking about the functionality of grasp types: there have been taxonomies of human grasp types, but you can boil those down by looking at what you can do with each grasp type, and then think about implementing similar grasp types that do the same things, and try to span that space. But yeah, it's really difficult to decouple all of that, especially when the human hand has 21-plus degrees of freedom. You can do a lot with the human hand, but you can also do a lot with a one-degree-of-freedom gripper, so where's the sweet spot? That's not an easy answer. We're not thinking about dynamics at all, if that's what you mean. We're trying to design for the transition that's happening there, but everything we're doing after the fact is done blind; there is no control. Granted, hands are kind of an easier problem, because you're moving low-inertia things pretty slowly, so I think in manipulation we have an easier problem than in, say, legged locomotion, where dynamics can really come into play, especially when you're dealing with balance on top of it. So we've been in the comfortable position of being able to ignore it, and I think for the most part you can ignore it in grasping and manipulation. Probably, yeah. We write a lot of really boring MATLAB simulations, and most of the optimization we're doing is simulating a representation of, usually, the grasping aspect of the hand, and just
varying all the mechanical parameters we can vary in software. We tend to do a scorched-earth, complete sampling of all the different spaces, as much as we can; we non-dimensionalize and that sort of thing, but we're usually looking at simple representations of objects, often just circles of different sizes, or circles and squares of different sizes. There are so many parameters that you can't completely optimize everything, but we try to capture the important ones; I think size is one of the most important, and concave objects probably aren't that important to look at, since the passive reconfiguration can make up for a lot of those things. So we're optimizing in a very informal sense of the term. Mm-hmm; by breaking contact you mean physically, like gaiting or regrasping or something like that, right? Well, if you break contact while holding the object, you have to have a lot of redundant degrees of freedom to maintain a stable grasp while releasing a contact, and most of our hands don't have that amount of control yet. I think you can do a lot with breaking contact and regrasping, but humans don't do it that often, or probably don't do it if they don't have to, I don't know; and especially when you have simple hands, you often don't have the degrees of freedom to make it happen. Regrasping, where you set an object down and pick it up in a different orientation, you certainly can do, but we're trying to think about... yeah, I don't know if that answers anything. No, we used a single training set with the original hand configuration, and the different hands were the test scenarios, so we're not actually considering the hand kinematics in the training set at all; we were just testing on hands with different kinematics. Right. So, I don't know where sensing is strictly needed in grasping and
manipulation. You certainly need sensing to perceive the environment, and enough information about the environment and the object to execute a grasp, for instance, but when it comes to sensing for the sake of maintaining a grasp, you can do a lot without any sensing at all. Humans do use sensing (we get some tactile perception of the object), and it has been shown, I think somewhat convincingly, that a lot of what we use our tactile sensing for is just to modulate around slip. Obviously it's efficient to exert as low a grasp force as necessary while still maintaining stable contact with the object, and you tend to increase that grasp force as you sense the object slipping. Johansson, the Swedish neuroscientist, or whatever you want to call him, basically showed that people tend to exert roughly 10 percent higher grasp forces on objects than what they really need to keep them from slipping, as a safety buffer. That kind of margin would be useful to make things more efficient, but most objects you grasp can be squeezed pretty hard without breaking. Obviously some things are fragile (this bottle is actually a great example of something with a relatively small safe range), but for most objects I don't think there's a really strong need for sensing; you can improve certain things, but, right. Yeah, so Johansson did these studies (there are some nice videos you can find online) where they anesthetized the finger pads and had people strike a match, and people do use the tactile sensing, but they're able to do the task without it; it just takes something like 20 minutes of learning. Granted, there's visual feedback and probably auditory feedback, and who knows what sensing is actually involved, but in terms of actual tactile sensing, you can get by without it. So, in almost all the examples that we have, we just set a current threshold and drive the actuator to stall.
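Johansson's safety-margin finding is easy to write down as a rule of thumb (a sketch with assumed numbers, not from the talk): the minimum grip force under Coulomb friction is the load divided by the friction coefficient, plus roughly 10 percent.

```python
# Slip-margin rule of thumb from the discussion above: the minimum grip
# (normal) force to prevent slip under Coulomb friction is load / mu,
# and humans add roughly a 10% safety margin. Numbers are illustrative.
def grip_force(load_n: float, mu: float, margin: float = 0.10) -> float:
    return (load_n / mu) * (1.0 + margin)

# Holding a 5 N object with friction coefficient 0.5: 10 N is the bare
# minimum, about 11 N with the human-like margin.
print(grip_force(5.0, 0.5))
```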
It just rests at that current level, which might not be right for all the objects we grasp; we don't put much thought into what that level is, so if it were important you could probably change it, and maybe have some perception that informs it. You can get a lot of the same passive behavior by doing torque control on a low-impedance actuator or something like that, but why, if you can do it in hardware without that? There may be good reasons, I don't know, but there's also a good reason you can't really do pure torque control: most useful actuators have to have a gearhead, and that makes it difficult to do.
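The current-threshold grasp just described can be sketched as a simple control loop (a toy illustration; `read_current` and `set_velocity` are hypothetical stand-ins for whatever motor-driver calls a real hand would use):

```python
# Close the hand until motor current reaches a preset stall threshold,
# then hold: the actuator simply rests at that current level, as in the
# talk. The driver functions here are hypothetical stand-ins.
def close_until_stall(read_current, set_velocity,
                      threshold_a: float = 0.8, close_speed: float = 1.0):
    while read_current() < threshold_a:
        set_velocity(close_speed)   # keep closing the fingers
    set_velocity(0.0)               # stall threshold reached: hold

# Fake driver for illustration: current ramps up as the fingers contact
# the object, and we log the commanded velocities.
currents = iter([0.1, 0.3, 0.6, 0.9])
commands = []
close_until_stall(lambda: next(currents), commands.append)
print(commands)  # [1.0, 1.0, 1.0, 0.0]
```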