So I'm very excited to introduce my friend and colleague, Professor Joohyung Kim, who is an associate professor at UIUC. He received his PhD from Seoul National University, and then he worked with the Samsung Advanced Institute of Technology to develop some awesome, then-secret humanoid technology. He then quit that job just to participate in the DARPA Robotics Challenge, and at that time he worked with Carnegie Mellon University. After that he joined Disney Research to create very interesting, joyful robots. I actually met him when he was at Disney Research. After that, he moved to academia; he's now, as I mentioned, an associate professor at UIUC. Today, I believe he will show us super interesting robots — a lot of them, not just one. So I will hand this over to Professor Kim.

Hello everyone. I actually have slides for all of that introduction, so I'll just start. For this kind of general talk I always use the same title, "Towards Human-Friendly Robots," but the contents are always different; today I will talk about our most recent research. This is me, and I want to show you this: there's a YouTube channel for my lab, and I keep uploading cool videos to it. My research goal — my life goal, actually — is to make better human-like robots. It's not just about the humanoid shape: a robot with a human shape can go where people go, and can do what people do. That line is not mine; it's from Jerry Pratt, the IHMC researcher, who is now with another company. He said it, and I like it a lot. What I think is that through humanoid robots we can understand humans and robotics itself, and understand nature better. And the actual goal is that we want to help people in daily life using this robotic technology. So starting in grad school, I worked on bipedal locomotion for a long time.
At that time we didn't have any commercialized legged robots, so I built these myself. These are all old ones — very slow, but they worked pretty well. They range from very simple designs to this one, which is actually very similar to ASIMO and other robots of the time, using harmonic drives and a pulley system to get six degrees of freedom per leg. So I worked on this for a long time in grad school and at Samsung Electronics. I'm pretty sure no one here has seen this before: this is a robot from around 2008 to 2012; Samsung Electronics developed it, including manipulation, perception, everything. The interesting part of this robot is that it does not walk with bent knees. Most robots walk with bent knees; this one walks with straightened knees, which was a pretty different approach at the time. There are some papers about it, but the project is gone now. In 2012, at Samsung Electronics, I really wanted to participate in the DARPA Robotics Challenge with this robot, but that didn't happen. So I joined Carnegie Mellon University to work on the DARPA Robotics Challenge, and during the DRC this hierarchical approach to bipedal locomotion was more or less completed — by many research groups, not just one. I was on Chris Atkeson's team, Team WPI-CMU. We couldn't win the competition, but our robot was the only robot that didn't need a physical intervention, so we think the controller was pretty good. After the DRC, I had a chance to join Disney. I spent six years at Disney developing robotics technology to bring animation characters to life. A specific animation character has many features — it can hop, it can move in its own way — and there is a lot of software and electronics behind that at Disney. We can use retargeting methods to make natural motions for animatronic versions of animation characters.
So I spent some time developing 3D-printed soft skins for these kinds of characters, and developing other new types of actuators. This one uses a voice coil and a parallel elastic component — springs — to make this hopping motion. I also worked on the retargeting problem, which is really interesting: if we have source data — animation or motion capture — and we want to map it to a different configuration, we need to solve a dimension-reduction problem for the specific robot, plus scaling problems. Graphics people have already explored this a lot, but a robot has its own constraints, like speed limits or collisions, so it is still a very interesting topic. Then in 2020, right before COVID, I joined UIUC to build my own lab and work on some interesting robots. The lab is named KIMLAB — it's not just my last name; it stands for Kinetic Intelligent Machine LAB, which is robots. As you saw, what I do is build robots from scratch, make controllers for them, and do interaction work; I think that covers most areas of the robotics field. What I can say I'm doing is task-based robot design and control: if there's a target task — bipedal locomotion, or an interaction like hugging — we collect data for that task, design a specific mechanism (or use some new material) to do that task, run user studies or experiments to collect data, and improve the design and control, again and again. In my research life I've worked on many robots. The yellow-boxed ones are robots I built from scratch, and the red ones are legged robots, so you can see I've been working on legged robots for a long time. I will start with bipedal robots. A bipedal robot is very simple to define: if a robot has two legs, we can call it a bipedal robot.
The history of bipedal humanoid robots goes back around 50 years, starting with WABOT from Waseda University. Most of the early robots were from Japan. In the US, Marc Raibert, the Boston Dynamics founder, started with this one-legged hopper, and there is a series of legged robots from the Leg Lab in the '90s. From 2000, when Honda announced the ASIMO robot, many more have been developed up to now: recently, Boston Dynamics' Atlas, Tesla's Optimus, and Xiaomi's CyberOne — those are all bipedal robots. These are some examples of actual robots. All of these robots have hip, knee, and ankle joints. There are other structures and mechanisms out there, but these robots use revolute joints, and there are segmented linkages: pelvis, thigh, calf, and foot links. These links are all rigid, and the robot is standalone — compared with a fixed-base manipulator, it is a floating-base system. These are very obvious facts when describing a bipedal robot, but they are really important assumptions: we need them to model the robot and run simulations. Then we also need to define what walking is. From the dictionary you get this simple definition of walking: "move at a regular or fairly slow pace by lifting and setting down each foot in turn, never having both feet off the ground at once." So if there's no aerial time, it's walking — a very simple definition, and there are lots of synonyms because of that. When the definition is that simple, you might think walking is easy to implement. It's actually harder, because some people will think a given motion is walking and others won't — the definition of walking in everyone's mind is different precisely because the dictionary definition is so simple. Interestingly, walking has been studied in many different fields; this slide is from medical science.
In medical science, normal walking has been studied for a long time and is very well analyzed, with all the temporal and spatial information, and the function of each phase of the gait cycle is well understood. Researchers in robotics use this kind of information a lot to design their own controllers, and how to make a bipedal robot walk is the main research topic in this field. As I said, after the DRC this community kind of merged toward one direction, and now we have a standard process for bipedal walking. Bipedal walking is basically locomotion: there's a starting point and a goal point, and we first need some kind of planner to produce a set of footsteps. So footstep planning is the first stage in designing a walking algorithm. You cannot just place random footsteps; there are constraints on step length and turning angle based on your hardware or your model, and you need that information to plan the footsteps. Then, once you have the footsteps, you know when each foot is on the ground. Based on that, you can place a target center of pressure (COP) on the ground at the right timing, which gives you a target COP trajectory generated from the footstep plan. Both footstep planning and COP generation are still on the ground — the robot is in 3D, but these stages are not using a 3D model of the robot. Researchers in this field then use a very simple model — the linear inverted pendulum, the spring-loaded inverted pendulum, or the cart-table model — with one mass (there are multi-mass models, but mostly one mass) to find the center-of-mass (COM) trajectory consistent with the COP information on the ground. By solving the dynamics to satisfy the COP constraints, you get the COM trajectory in 3D space.
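To make the COM-generation stage concrete, here is a minimal numerical sketch of the linear-inverted-pendulum idea described above. This is my own illustration, not the speaker's code; the COM height, time step, and COP plan are assumed values.

```python
# Sketch of the linear inverted pendulum (LIP) stage: given a planned
# center-of-pressure (COP) trajectory on the ground, integrate the LIP
# dynamics  x_ddot = (g / z_c) * (x - p)  to obtain a center-of-mass (COM)
# trajectory. Constants below are assumptions for illustration only.

G = 9.81     # gravity [m/s^2]
Z_C = 0.8    # assumed constant COM height [m]
DT = 0.005   # integration time step [s]

def com_trajectory(cop, x0=0.0, v0=0.0):
    """Semi-implicit Euler integration of the COM under the LIP model."""
    x, v = x0, v0
    xs = []
    for p in cop:
        a = (G / Z_C) * (x - p)  # LIP dynamics: COM accelerates away from COP
        v += a * DT
        x += v * DT
        xs.append(x)
    return xs

# Example: hold the COP fixed at the origin with the COM slightly ahead;
# the COM diverges away from the COP, which is exactly the instability a
# walking controller exploits and must manage.
print(com_trajectory([0.0] * 40, x0=0.05)[-1])
```

With a constant COP, the closed-form solution is a hyperbolic cosine, so the numerical result can be checked against it; a real pipeline would solve for the COP/COM pair jointly, but the dynamics are exactly these.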
After that, we have the footsteps and the COM movement in 3D, and we can use all of this information to compute the joint-level commands that make the robot walk. This is not the feedback controller; this is about how to generate the walking motion. During the DRC, optimization-based methods for this stage emerged: instead of an analytic solution for the full model, we iterate to get a good solution for each layer, and we could solve this problem pretty well. The point of this process is dimensionality. For footstep planning, each footstep is a position and an orientation, so it's at most a six-degree-of-freedom problem per step. The COM trajectory stage is based on a very simplified model, so the dimension of that space is also pretty low. Using the model and all this information together, we can then solve the high-dimensional problem. By high-dimensional I mean this: a full-size humanoid has around 30 motors, and if you want to solve the dynamics of a 30-motor system, you need velocities and accelerations, plus the floating-base coordinates, so it can be a 100-dimensional problem. Through this hierarchical approach you turn it into a series of low-dimensional problems and actually obtain the walking motion. So this is the standard process for bipedal walking; I wanted to show it first because it connects to a later topic. Now let me introduce one robot that Sehoon and I worked on together before: Snapbot, which is also a legged robot. Snapbot looks like a normal hexapod, but it can actually lose its limbs, like this, and can change its locomotion style based on its current configuration. It's a very interesting design, and there were multiple motivations for it. I was at Disney at the time, so one was a character: what is the special feature of Olaf?
Olaf can survive changes to his body shape: he can lose a body part and still move around in whatever his current configuration is. That was one good motivation at Disney. But the really interesting part to us was this — sorry about the creepy video. This is a daddy-long-legs spider. It voluntarily sheds a leg when there's danger, and after losing it — after changing its configuration — it moves at almost the same speed. That is really interesting: it was born with eight legs, but it can move at almost the same speed even with five. It matters because in robotics research, if there is even a small failure in the system, we often cannot use the robot at all. A fail-safe system is really important if you want to use robots in the real world, and handling that kind of situation is very important. Let me show you the Snapbot we developed at Disney. I wanted to add some other features, so we developed a second version of Snapbot at UIUC. The mechanism is very simple. There are many commercialized motors now: you can use just three or four wires to power the motor and communicate with it over a serial connection. We use that kind of servo motor to control each two-degree-of-freedom leg, and spring-loaded pogo pins are used for the electrical connection, with multiple magnets for the mechanical connection — the coupling is done by magnetic force. All the batteries, the processor, and the camera are on the body side. The previous Snapbot could do configuration recognition — which is really important in this system — using just motor current, which was not the best method. Now we use the camera to check which leg is connected to each port. These kinds of motors each have their own ID.
When you ping all the IDs, you can tell how many legs are connected. But you still need to check which port each leg is actually plugged into — that is what the camera is for. Think about babies: babies play with their hands and feet for a long time when they're very young, and part of that is learning to recognize their own current configuration by watching their hands and feet. Snapbot's self-recognition was developed in a similar spirit. This version is totally open source — I'll explain that later — and it's built for studying legged locomotion. This is where Sehoon and I worked together, around 2017 and 2018. When he saw this robot, he was super interested in using hardware alone for reinforcement learning. As far as I know, this was almost the first work to do reinforcement learning purely on hardware, using one camera. The key idea was a resetting system to bring the robot back to its initial point, plus using the camera to measure the walking distance and how straight the walking was; that was part of the reward function. It's very interesting work, and I've continued in this direction with other collaborators. After this, we got lots of requests from the machine-learning field to share the design, so I made the second version open source. If you check the GitHub repository for this second version of Snapbot, you can also find this other cool design: it has a camera inside, and if you know the old Toy Story — the first one — you can tell this design comes from Toy Story. So, to recap: for almost 15 years I've been working on legged locomotion, developing systems and implementing better and better controllers. Walking is a really interesting behavior — an important, unique human behavior, and a safety-critical one. That's why so many researchers work on it.
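The configuration-recognition step described above — pinging servo IDs over the shared serial bus to see which legs answer — can be sketched as follows. The `ping` callable here is a hypothetical stand-in for the real servo-library call, and the ID-to-leg mapping is made up for illustration.

```python
# Smart servos on a daisy-chained serial bus each carry a unique ID, so
# pinging every candidate ID reveals which legs are attached. Which port
# each leg occupies still needs another sensor (Snapbot uses a camera).

LEG_IDS = {1: "leg_A", 2: "leg_B", 3: "leg_C",
           4: "leg_D", 5: "leg_E", 6: "leg_F"}   # hypothetical mapping

def detect_legs(ping):
    """Return the set of leg names whose servo IDs answer a ping."""
    return {name for sid, name in LEG_IDS.items() if ping(sid)}

# Example with a fake bus where only three legs are plugged in:
attached = {1, 4, 5}
print(detect_legs(lambda sid: sid in attached))
```

In practice `ping` would wrap the servo vendor's SDK call and also return the model number, so differently geared legs could be told apart by ID alone.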
Walking is well explored across various fields, and there is plenty of data — that's the appealing part of walking. But what we want to do is not just walking. What do we want to do with robots? There are many more tasks, many more behaviors, we could do with a robotic system, and I wanted to tackle all of them. But it's really hard to find a good general solution, and the data for all those tasks is very limited — it's not enough. So I started thinking: robots can interact with the environment, with objects, and with other agents, so we can just start collecting data in the real world for various tasks. That led me to conclude that we need more robots — to solve these practical tasks and to collect real-world data. So I moved on to manipulation tasks at home. Robotic manipulation goes back to the very start of robotics as a field, I think, but dexterous manipulation is still an open problem. Industrial robots do their tasks very well in constrained environments — very fast, very strong — but using a large robot like that at home is not easy, and how to use robotic manipulation at home is a very difficult problem that lots of researchers are currently working on. There are many systems: this interesting robot, and this one from Hello Robot, which is a really good solution because it has a small base like this — I like Hello Robot a lot. Google's Everyday Robots, Samsung's robots, and others — there are many robots now, but the market is still not there. I'll talk about the robot vacuum cleaner a little, because the Roomba-style robot vacuum is the only really successful robot application for the home. It's affordable — well, it's much more expensive now, I think, but still affordable — it does one important task, cleaning, and users know what it can do. That understanding took some time, I think.
The most important part is that it is really well designed for the human environment and for real usage. Everyone has some bad memory of early robot vacuums, but after some time people understood how they work, so now we can actually use them as a tool. The history of vacuum cleaners is really interesting. I thought they were 50 or 60 years old — not that old — but they actually started around 1869. The first mechanism is fascinating: you operate this pump by hand to clean the carpet. That was the first vacuum mechanism, and this one is a hand-held vacuum like this. But the important design point is that most vacuum machines came to use this wide, thin suction head, like this, and over the past hundred years the human environment has actually changed along with it. When we purchase a sofa or a bed, we now consider whether we can run a vacuum underneath the furniture — the furniture changed together with the vacuum cleaner. That's a really interesting point to me. Like the companies and robots I showed, my target environment is also the home, specifically the kitchen. If you look at kitchens across the decades — the 1920s, '40s, '60s — the structure and design are almost the same. After the 1960s, only one home appliance was added, somewhere here: the dishwasher. The home kitchen is the most structured and most standardized space at home, and it's also relatively easy to modify. So the goal is simply dishwashing — simple manipulation that is still a very difficult problem, one many researchers want to solve. The idea was simple: I wanted an arm somewhere that could do this. I'm not religious, to be clear, but this is Guanyin, a Buddhist bodhisattva. Guanyin is interesting to me because of the thousand arms: the reason she has a thousand arms is to help people.
I was kind of inspired by that, and made this kind of system. If you can plug a robotic arm in somewhere — if the robot can do a task at the proper location for that task — then I think we can use it while modifying the environment only slightly, say a ten-centimeter gap there. The current problem with manipulation is that manipulator arms are way too expensive, even for research use — easily over $20,000. All the companies are working on mobile manipulation, because we would need multiple arms if we used them as fixed fixtures; but the relevant locations at home — the kitchen, the table — and the hours and frequency of utilization are actually pretty limited. So the design goal was a pluggable home appliance. It's not about developing the manipulator itself; it's about making some kind of standard, like a USB port, through which we can power the robotic arm and communicate with it, so it can do tasks at the proper location. That was the concept behind the design. The current version has six degrees of freedom, a gripper, and one Intel RealSense RGB-D camera attached. The important part is that it's really lightweight — under six kilograms — while the payload is still about 3 kg. The mechanism inside is actually similar to Snapbot's; that's why I wanted to show Snapbot first. We use about 20 pins for power, motor communication, and USB communication, and the arm can be carried simply, like this. There are some small design features covering the important parts, and a locking mechanism for portability. You use it by plugging it in like this. Of course, you need to change the environment slightly, because we need to install this socket, along with a power supply and one computer for the motion controller. But still, it's just a small gap wherever we want to use the arm, so I think that amount of modification is okay.
We also checked the workspace and tested the payload — here it is holding a five-pound dumbbell. Currently the software stack is all based on ROS, so we can use the existing ROS packages for perception, manipulation planning, every layer, which lets us build new applications in a very short time. We submitted a paper on this, currently on arXiv, so if you're interested in the structure you can check that paper. As I said, for interfacing with other devices as controller inputs, you can simply use a hand-held controller or a skeleton-estimation function: if you can get joint commands from somewhere else, you can easily use that interface to control the arm. This is low-level control, mostly basic ROS functionality, so I'll skip the details and just show you the current applications in my lab. This is the PAPRAS table — we have three ports on this table — running a simple coffee-dripping demo. How to create this kind of motion, how to arrive at this kind of solution, is my current interest. I believe there are now many methods going from language to robot actions, and I expect to use those a lot. This is the simple form: we have many arms, and since we're using ROS, distributed control is simple. If we want them synchronized — it's not actually fully synchronized, but we can just share a topic to control the robots. Making a mobile manipulator is also very simple on this platform. This one looks like a double-headed dog: we're using Spot, with a separate battery and one computer to control the arm system. Making Spot into a mobile manipulator is simple, and the good part is that we can use Spot's (or another base robot's) built-in functions directly in our system, without adding many extra sensors on the upper-body side.
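The payload test above can be sanity-checked with back-of-the-envelope statics. This is my own worked example, not the speaker's analysis; the arm reach and the lumped mass distribution are assumptions.

```python
# Rough static-torque estimate for the payload test: a 5 lb dumbbell held at
# full horizontal reach. Reach and mass distribution are assumed values.

G = 9.81                    # gravity [m/s^2]
PAYLOAD_KG = 5 * 0.4536     # 5 lb dumbbell ~ 2.27 kg
ARM_MASS_KG = 6.0           # "under six kilograms" total arm mass (upper bound)
REACH_M = 0.6               # assumed horizontal reach [m]

def shoulder_torque(payload=PAYLOAD_KG, arm_mass=ARM_MASS_KG, reach=REACH_M):
    """Worst case: payload at the tip, arm mass lumped at mid-reach."""
    return payload * G * reach + arm_mass * G * (reach / 2.0)

print(round(shoulder_torque(), 1))  # shoulder torque in N*m
```

Under these assumptions the shoulder motor must hold roughly 30 N·m statically, which is the kind of number that drives motor selection and hence the cost issue discussed later in the talk.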
We can just use the API to access all of the base's sensory information. This demo runs everything autonomously from a script: for the mobility part, we use a pre-recorded walking mission, a built-in function of Spot, and what we add is just the upper-body manipulation side. So using other devices together is very simple. And this is a slightly weird application, but let me explain. Frankly, I wanted to make an Iron Man suit-like system. It's not just for the movie: if wearable robots are widely used in the future — in manufacturing environments, especially upper-body wearable robots — someone always needs to help the user put the system on. How to make that an autonomous system will be an interesting problem. That was a good excuse, so I wanted to make this system, and actually did. This was last year's Christmas video. As you can see, if there's a target wearable device like this, part of putting it on is easy, because the subject can do that part themselves; this part is hard, because we need to actually control the robot for safe interaction. Currently it's just using a mock-up, but the interaction between this kind of arm system and a person will be a very interesting problem, in my point of view. Now, how we created this next application is very simple. You may know EOH — Engineering Open House, a big event at UIUC. About two weeks before EOH, we decided we wanted a live demo with some robots, and we also wanted to show some cool stuff. So we got a table from IKEA and spent two days making this work in simulation, then three days attaching a chess engine, which was not developed in my lab. We actually had this robot arm play chess in front of the audience.
So we built everything in a week or two, for a two-day live demo. If you have the 3D model, you can easily modify the 3D CAD, and the planning and control are all based on ROS, where we already have all the pieces. The attachment point — its location and orientation — is important, but once you set those, you can easily do any task... well, not any task, but you can control the arm. What I want to get at with this is a simple concept, my current research topic: task retargeting. Given multimodal data for a specific target task, we want to find the optimal robot configuration for that task, and then solve the task with it. For example, coffee dripping: the program is simply a high-level sequence — move something somewhere, perform some motion for the coffee dripping. The important information is the task configuration; we need to design that part for the task, and all the sequential information for the task is actually already available somewhere. How to gather all of this information to build the sequence, and how to build this pipeline on such a system, is the question. In this direction I don't have the solution yet — this is my current research topic — but I think these are the challenges to overcome. The configuration and workspace are really important. Skills for the task are more on the reactive-control side: the hardware, the sensing, and the interaction between object and robot, or even human and robot. How to build those skills, how to handle that interaction with the object, and how to run the experiments — all of that matters. The first part can be solved with PAPRAS or some other similar system. I'm not here to sell my system; what I want is to share this idea so that this kind of system can be used even with different arms.
Usually, arms are bolted down on top of your lab bench, and you can't move an arm from there to some other place within a day. What I want to do is this: if we know the task — say dishwashing, which is mostly moving lots of dishes from the sink to the dishwasher — we can get that kind of data from egocentric vision, or just collect data and use a heuristic approach, to figure out the optimal installation point for solving that task. This is the table environment; this space is the overlap between the workspaces of the three arms. Or, if you have a target wearable device, you can optimize the number of arms and their locations, orientations, and workspace volumes just by analyzing the process of putting the suit on. You can address many, many tasks at home just by analyzing those kinds of data and the skills for the task. I had this slide in the walking section, when I explained what walking is. Walking, as I said before, is really well studied across different fields: we know which muscles are activated, and we have all this kind of data from somewhere. How to collect this, how to learn the skills — for walking, I think the current good answer is the data-driven way: we need more data to solve these kinds of tasks, plus real-world experiments and user studies, obviously. So, this is one scenario using all the arms for different tasks. We can attach an arm to the wall, or — this is just a Roomba — attach one arm to the Roomba to use it as a mobile base. This experiment is a single run, fully autonomous: the robot is cleaning the table and loading the dishwasher. The video is at three-times speed, and it's still slow — but I think that's okay; it's fine for it to take some time, because when we use a vacuum machine we don't think much about the actual duration of the cleaning. And while the robot is working hard, users can play with this robot.
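The installation-point idea above — pick the socket from which the arm can cover the task — can be sketched with a deliberately simple reach model. This is my own illustration, not the speaker's method; the reach value, socket positions, and waypoints are all made up for the example.

```python
# Sketch: given task waypoints collected from, e.g., egocentric video,
# score candidate mounting sockets by how many waypoints fall inside the
# arm's reachable sphere, and pick the best socket.

import math

ARM_REACH = 0.7  # assumed arm reach [m]

def reachable(mount, point, reach=ARM_REACH):
    """Crude reach model: a sphere around the mounting point."""
    return math.dist(mount, point) <= reach

def best_socket(sockets, task_points, reach=ARM_REACH):
    """Return the socket covering the most task waypoints."""
    return max(sockets,
               key=lambda s: sum(reachable(s, p, reach) for p in task_points))

# Hypothetical kitchen: two candidate sockets, waypoints clustered at the sink.
sockets = [(0.0, 0.0, 0.9), (1.5, 0.0, 0.9)]                      # (x, y, z) [m]
task = [(0.2, 0.3, 0.8), (0.4, 0.1, 0.9), (1.4, 0.2, 0.9)]
print(best_socket(sockets, task))
```

A real version would replace the sphere with the arm's actual reachability map and also optimize orientation and the number of arms, as the talk describes for the suit-donning case.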
What I'm showing is actually your lonely life in the future. It's all based on perception, so you can move things from the mobile base to the kitchen. I know this is a very, very simple demo — you don't need the full arm system for it. But it became very relevant during COVID: we all found that some people who needed very simple physical help couldn't get it. If we can make this kind of system actually usable in the home environment, we can help those people. So, this is task retargeting. The light-blue items are the current topics I want to solve first — the more home-labor-related work — because what I'm doing is configuration-free, and I want to solve these tasks in a fairly general way. I'm pretty sure we can reuse the same solutions in a humanoid form. My goal hasn't changed: I still want to do humanoid robots, but currently they are way too expensive, and I think we can tackle these kinds of problems with much simpler systems first, to collect more data, and then apply those solutions to human-shaped robots later. That's it for my talk; I can take questions. — Yes, we actually made that one fly in my lab; I'll show you that later. — That's a really good question. I've also thought about it: if you want to do dishwashing, the dishwasher is the robot — a home appliance built for that task — and it's much more efficient than a six-degree-of-freedom arm, right? But space-wise, the dishwasher is using one slot in the kitchen. If we had a really, really large house, we could have a dedicated appliance for everything — a company or hotel can have a cooking robot in the basement, that's fine. But if you want to do this at home, or in some other space with space constraints, maybe a more dexterous robot that can do multiple tasks is better.
— The question is about bipeds and planning the motion of the body: footstep planning, center of mass, inverse kinematics — do we always need that? It depends. For example, the Samsung robot didn't have a footstep planner; it used more reactive control, or slightly pre-planned motion. I can't quite call it pre-planned motion; it's more like driving the robot into a limit cycle — a passive-dynamics kind of approach, right? But if you want the robot to go up stairs, there are tasks where we need accurate footstep planning, and in most cases, for a competition or challenge, we need that part. If we wanted totally rough terrain — space, or somewhere like that — I might use the energy-efficient way, even counting the computation cost. But for now, what we're seeing is that planning gives better stability, so I think the planning approach is currently the best solution. — The next question: with a robot like this, where the arms are in a configuration different from a human's, you could even go beyond what a human can do, right? That's true, and it's also a really good question. If we want the robot to be human-like, then a human-like mechanism and human shape are important. But if we use these kinds of different shapes and configurations, users usually don't know how the robot works, right? So how do we solve that inverse problem? Even for a simple task, we sometimes cannot predict the motion, because the motion is usually computed with the robot's own objective function, to reduce energy or something. So how do we make the motion human-like for the users, so they can predict it — for their safety? That's what human-likeness is for. For now, with this system, I'm focusing more on the user's point of view: how to make the motion natural, or how to support more personal-habit kinds of tasks.
When you use a dishwasher, everyone actually arranges their dishes in a different way. How to convey that kind of human-likeness — that's my current interest. And what you said is there's no mobility here, right? Well, there's a small amount of mobility here, but a human can actually move this arm around. What I'm saying is: if one arm costs, let's say, $20,000, that's very expensive, but we can use just one arm in turn, with the human moving the arm around, or we can share the arm between households, right? So: let's make this service, this robot, available in many houses first, and then we can learn something. After that, if Tesla or some other company makes a humanoid for, as they claim, around $20,000, we can apply the same solutions in that different form. That's my research direction now. — Could you say that again: the number of users is too small? That's, I think, the current problem. Motors are expensive. Frankly, that $20,000-per-arm figure is what it costs in my lab, developing these one by one. The only way to reduce the cost is mass production, and that's not something we can do in academia, right? So what I want to do is share this idea — it's simply about plugging in. If we can make this a standard, so other commercialized arms can be used too, then I think the number of users will increase, and then the cost will go down.