Let me introduce our prestigious speaker, Professor Hae-Won Park from KAIST. Today, he will give us a talk titled "Hardware Design and Control Algorithms for Agile and Fast Legged Robots." As Tyler suggested, Hae-Won has made numerous impactful contributions in legged robotics, control algorithms, learning, and so on. He is also a recipient of the NSF CAREER Award, as well as the RSS Early Career Award. So please welcome our today's speaker, Hae-Won; the floor is all yours.

Hello. Can you hear me? Yes, we hear you. Okay, thank you for the nice introduction, and thank you for inviting me here to give this talk. I had a chance to talk with folks at Georgia Tech before; it's actually not my first time visiting Atlanta. I visited here during my postdoc, and today's weather is actually very unusual to me. I think it's also unusual to you. I'm a little bit upset that I missed the chance to give the seminar in person. But anyway, let me start.

So today, my topic is hardware design and control algorithms for agile and fast legged robots. I'm basically a legged-robot researcher: I do control design and hardware design for legged robots. Still, the greatest examples of legged systems are biological animals, especially as you can see in this video. Humans have extreme balance capability: they can climb a rock wall without even using the hands. And dynamic balance can be achieved in this funny-looking, very fast kicking performed by Ukrainian dancers. This agile and very dynamic capability provides great balance and very agile performance in biological systems, including humans and some quadrupedal animals.

But this video shows another level of accelerating mobility: this is a squirrel. We assume this animal does not have a very sophisticated intelligence, but it can quickly jump over gaps and obstacles while exploiting its full-body capability within its physical limitations. It can climb a rope, use ladders as you can see here, and even ride on elevator-like mechanisms to navigate through very complex environments. So here we can see very intelligent behavior: combining many locomotion skills to navigate very complex terrain.

In our lab, we are trying to reproduce this capability in legged robot systems. We do hardware design and implementation ourselves, so we manufacture our own hardware, and then we implement our control designs on our robots. This video shows our robot HOUND climbing a hilly mountain near the KAIST campus.

In this talk, I will show two robots. The first robot is HOUND, which is designed for fast running. To do that, we designed our own custom actuators and installed them on the robot. The second robot I will show is MARVEL, which is designed to climb vertical surfaces very quickly. This robot is equipped with a novel magnetic foot that provides the adhesion force. To implement that, we utilize an electro-permanent magnet. Unlike an electromagnet, this magnetic device does not need electrical energy to keep the magnetic force on; we only need electrical energy when switching between the on and off states. So you can save energy by utilizing this electro-permanent magnet, and the switching can be done in a very fast manner.
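To make the energy argument concrete, here is a minimal, hypothetical sketch (my own illustration, not the actual MARVEL firmware; the pulse energy is an assumed value) of how an electro-permanent magnet differs from an electromagnet in energy bookkeeping: holding is free, and energy is only spent on the switching pulse.

```python
class ElectroPermanentMagnet:
    """Toy energy model: an EPM holds force with zero electrical power;
    energy is consumed only by the brief pulse that flips its state."""

    def __init__(self, switch_energy_j=5.0):  # pulse energy: assumed value
        self.switch_energy_j = switch_energy_j
        self.on = False
        self.energy_used_j = 0.0

    def set_state(self, on: bool):
        if on != self.on:                      # pay only when switching
            self.energy_used_j += self.switch_energy_j
            self.on = on

    def hold(self, duration_s: float):
        pass  # holding costs nothing, unlike an electromagnet (P = I^2 * R)


# An electromagnet would pay continuous I^2*R power the whole time it is
# attached; the EPM's one-time pulses win over any long hold.
foot = ElectroPermanentMagnet()
foot.set_state(True)       # attach: one pulse
foot.hold(60.0)            # hold for a minute: no energy
foot.set_state(False)      # detach: one pulse
print(foot.energy_used_j)  # 10.0 J total, independent of hold time
```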
And then for this novel hardware, we design our own control algorithms. We utilize two frameworks, two approaches. The first one is model-based control; specifically, we utilize model predictive control, and we also do some study on the model itself and on the optimization algorithm itself. The second approach is learning-based: we employ reinforcement learning algorithms, and we are now also studying hybrid-type algorithms that combine model-based optimization and reinforcement learning, which is an ongoing study.

To design the controller, we have to come up with a control algorithm that respects the hardware characteristics as well as its limitations. Hardware like the magnetic foot has unique functions that did not exist in previous hardware, so we need to come up with a simulation model that reflects this new hardware and its characteristics and limitations. In this way, we can fully utilize the hardware capability beyond existing systems, okay? So sometimes we have hardware-specific simulation models, and we add these models to existing software for dynamic simulation. For example, actuator dynamics can be implemented: an actuator is not just a pure torque source; it has its own dynamics. An electric motor has its own dynamics, so we add that to the simulation model. Likewise for the novel hardware model: we mathematically model the new hardware and add it to the simulator, for example to reproduce vertical climbing in simulation.

Designing the control algorithm then requires a tailored control design that respects this specific simulation model. For example, your reward function has to be carefully designed so that it reflects the hardware specifics, as well as the constraints, in your reinforcement learning algorithm or in your model-based control design, okay? So today I will talk about how to come up with hardware-specific simulation models and the corresponding model-based or learning-based control designs.

And of course, as experimental roboticists we encounter a lot of moments of catastrophic failure, and we can learn something from these experiences. For example, this is the MABEL robot from my PhD, and we broke MABEL's foot and ankle; don't worry, it takes only about 30 minutes to fix this. Then during my postdoc on the Cheetah robot, we had some amplifier explosions, and some weird behaviors that were not designed by me. And here the robot fell because of bad control. These failures provide us some lessons, and these two failures in particular motivated us to design new control algorithms; we'll get back to them later.

First of all, let's talk about the actuator design. We are utilizing quasi-direct-drive actuators, the so-called proprioceptive actuators, which were originally proposed by Professor Sangbae Kim at MIT. And we agree that the QDD actuator was the technological enabler for the success of quadrupedal robots, as well as some humanoid robots, okay? The idea is actually very simple: we utilize a high-torque motor together with a low-gear-ratio reduction mechanism, in order to improve the transmission transparency and improve backdrivability. So here I'm showing you the proprioceptive actuator design with a low reduction ratio. But how much gear reduction should we use?
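For context, the standard gearmotor relations behind this tradeoff (textbook physics, not from the slides) are, for gear ratio \(N\), motor torque \(\tau_m\), motor speed \(\omega_m\), and rotor inertia \(J_m\):

```latex
\tau_{\text{out}} = N\,\tau_m, \qquad
\omega_{\text{out}} = \frac{\omega_m}{N}, \qquad
J_{\text{reflected}} = N^2 J_m
```

A larger \(N\) buys output torque but shrinks the speed range, and the reflected inertia grows quadratically, which is exactly what degrades transparency and backdrivability.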
To answer this question, we utilize a trajectory optimization algorithm. In the previous method, the design procedure is very iterative and sequential. You select a gear ratio, and the selected gear ratio determines the operating region of the motor with respect to torque and omega, the angular velocity. With this motor operating region, you solve a constrained trajectory optimization and obtain a trajectory. Then you check whether the obtained trajectory satisfies your desired performance; if it doesn't, you go back, select a new gear ratio, and do this again.

In our work, we combine all of this process into one nonlinear optimization problem. Here, we try to maximize jumping height while reducing the impact force upon landing and respecting the limitations on actuator torque and speed, and we formulate this as a nonlinear programming problem with those performance specifications. Running the nonlinear programming algorithm, we obtained a gear ratio of 22.9 to 1, which is a bit higher than the MIT Cheetah case, which was 5.8 to 1. To realize this gear ratio, we had to design a compound planetary gear mechanism. This is the jumping-height performance of the leg built with the actuator we designed, and this leg is implemented in the robot design, okay?

The next design optimization we did was to include the numbers of gear teeth in our optimization problem. The number of teeth is an integer variable, so now the problem becomes a mixed-integer nonlinear programming problem, right? You have the tooth counts of the ring gear, sun gear, and planet gear; this combination provides the gear ratio, and the gear ratio affects many things like motor selection, swing-trajectory energy cost, and so on. The benefit of having the tooth counts in your optimization is that you can include hardware-specific, detailed design considerations. For example, the number of teeth of a gear should be larger than or equal to some value to avoid undercut, and the number of teeth of the sun gear should be larger than or equal to some number so that the shaft can pass through; the hole goes through the hip gear, for example. This kind of specific design consideration can be included in the optimization problem if you treat the tooth counts as variables, okay? And finally, there is the planetary assembly constraint, so that the chosen tooth counts can actually be assembled into a planet set, okay? If you solve this optimization problem, you obtain the gear ratio as well as the tooth counts, and the tooth counts can be used directly to design the planetary gear set. So we designed our planetary gear set and motor module, and we installed them in the KAIST HOUND robot, okay? So this is how we design the quasi-direct-drive actuator and select the gear ratio, utilizing our own planetary gear set.

Since I don't have much time, I will go directly to the control design. The first control method is model-based control design, specifically model predictive control. In MPC, we solve a small, finite-time optimal control problem, which entails cost functions and a transition model. If you solve this finite-time optimization problem, you obtain a sequence of actions, and you implement only the first input, the first action at t equals zero. Then, at the next sample time, you solve the optimization problem again, obtain a new sequence of actions, apply only the first action again, and so on; that way you obtain feedback. This optimization problem has to be solved within around ten milliseconds to run in real time.
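In code, the receding-horizon loop just described looks roughly like this. This is a minimal sketch on a toy double integrator; the real model (single rigid body with ground reaction forces) and the real solver are far more elaborate, and everything named here is my own illustration:

```python
import numpy as np

# Toy double-integrator model standing in for the robot dynamics.
A = np.array([[1.0, 0.01], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.0], [0.01]])

def solve_finite_horizon_ocp(x0, x_ref, H=20):
    """Minimal finite-time OCP: choose u[0..H-1] to minimize terminal error
    plus a tiny effort penalty, via unconstrained least squares. A real MPC
    adds stage costs and constraints; this is only the receding-horizon
    skeleton. Terminal state is linear in the inputs:
        x_H = A^H x0 + sum_k A^(H-1-k) B u_k
    """
    M = np.hstack([np.linalg.matrix_power(A, H - 1 - k) @ B for k in range(H)])
    target = x_ref - np.linalg.matrix_power(A, H) @ x0
    u, *_ = np.linalg.lstsq(np.vstack([M, 1e-3 * np.eye(H)]),
                            np.concatenate([target, np.zeros(H)]), rcond=None)
    return u

x, x_ref = np.array([0.0, 0.0]), np.array([1.0, 0.0])
for t in range(300):                      # each iteration = one sample time
    u_seq = solve_finite_horizon_ocp(x, x_ref)   # re-plan (budget ~10 ms)
    x = A @ x + B.flatten() * u_seq[0]           # apply only the first action
print(x)  # approaches the reference through repeated re-planning
```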
But the problem is that the transition model is very complex, because the robot consists of multiple rigid bodies; it is a high-degree-of-freedom model. So what we do is reduce the model complexity: we take out the contact dynamics model and assume the robot is just one rigid body, controlled by ground reaction forces. By doing that, we can eliminate the contact dynamics, okay? But because we no longer have contact dynamics, the contact sequence has to be predefined; it cannot come from the optimization algorithm, because the optimization does not have a contact model. So we normally predefine the sequence of foot contacts for the trotting gait, bounding gait, and galloping gait, and then we design the MPC algorithm. This method had already been used, for example, for autonomous jumping with the MIT Cheetah.

Okay. If you look at the single-rigid-body dynamics, our two robots, HOUND and MARVEL, actually have the same mathematical structure: exactly the same model, but different inertial parameters and different foot contact constraints. HOUND has to satisfy friction cone constraints, while MARVEL has to satisfy the foot constraints imposed by the magnetic adhesion, which is a bit different. But they share the same dynamics, so they can use the same optimization algorithm, okay?

In the 3D single-rigid-body dynamics, you have the orientation variable R, which lives on the SO(3) manifold. Normal MPC algorithms convert it into Euler angles, but Euler angles have some mathematical issues: they do not provide a consistent distance between two orientations, and they suffer from the gimbal lock problem, the singularity. For a normal robot this is not an issue, because it works on flat ground and does not climb vertical surfaces. But the MARVEL robot has to climb a vertical wall, so it passes through the vertical posture, which is exactly a singularity for Euler angles. This experiment actually failed because of that problem: there are many types of Euler angles, some of which have a singularity when an angle reaches 90 degrees, and I picked the wrong Euler angle convention, which resulted in this failure, okay?

So now we formulate the MPC problem on the SO(3) manifold. The optimization becomes an optimization on the SO(3) manifold, and then we utilize a reparameterization trick to turn it into an optimization over a vector space. To solve this problem efficiently, we need the analytical gradient, the Jacobian, and the Hessian, so we derive them analytically, adopting the approach presented in a paper published in T-RO in 2016. After this, it becomes a quadratic programming problem, and we utilize our own custom QP solver. Okay. And this is the result of fast running: we achieve three meters per second, and the robot can handle very wild kicks.
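The reparameterization mentioned above commonly takes the following form in the variation-based MPC literature (my notation, not copied from the slides): perturb the current rotation multiplicatively and optimize the perturbation as a plain vector,

```latex
R = \bar{R}\,\exp\!\big([\xi]_\times\big), \quad \xi \in \mathbb{R}^3,
\qquad
e_R = \log\!\big(R_d^\top R\big)^{\vee} \in \mathbb{R}^3
```

The error \(e_R\) to a desired rotation \(R_d\) gives a consistent distance on SO(3) with no gimbal lock, and the analytical Jacobian and Hessian are taken with respect to \(\xi\), so the solver works entirely in vector space.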
And we can transfer this control algorithm to the climbing robot as well; the only thing we change is the foot contact constraint. So we derive the contact constraint for the magnetic foot such that the foot neither slides nor peels off, okay? This contact constraint is the intersection of those two constraints. Okay? When we implemented our algorithm on the robot, we obtained a pacing gait with a speed of 0.7 meters per second, as well as an inverted trotting gait with a speed of 0.5 meters per second. The magnetic feet and the robot are powerful enough to handle a two-kilogram payload for vertical climbing and a three-kilogram payload for inverted trot walking. If you add a trajectory optimization algorithm, you can come up with this trajectory that traverses a gap, and the robot can transition from the vertical wall to the inverted surface. The robot was also tested in an outdoor experiment. This is a water tank made of steel but covered with very thick paint; the paint thickness is about 0.3 millimeters, yet the magnetic foot is still strong enough to provide the adhesion force. And this method can also be applied to state estimation, because state estimation is basically another optimization problem: we proposed an invariant smoother for legged robot state estimation, and it provided better results than state-of-the-art algorithms.

Now let's go back to the second failure. This is the second failure: the robot stepped onto the non-moving part of the treadmill. Even in corner cases like this, the robot does not change its gait to recover, because it does not have that ability: we predefined the contact sequence by design, right? So now we want to include the contact model in our MPC problem. Here we add a contact impulse to our transition model, and this contact impulse is obtained by solving an optimization problem. This optimization problem has a quadratic cost, but it has a special constraint, called the complementarity constraint. Because of this special structure, the solution falls into three cases. The first is the separating case: after the impact, the robot's foot leaves the ground; this one is easy. The second is the clamping case: after the impact, the foot sticks to the ground and doesn't move, so the velocity becomes zero after the impact. The third is the sliding case, where the foot slides after the impact. For each case we can obtain an analytical solution for the contact impulse, and because we have an analytical expression, we can obtain analytical gradients with respect to physical quantities. Okay.

For the clamping case, after the impulse the foot velocity becomes zero, so any gradient of the contact velocity with respect to some variable becomes zero, right? And this actually results in local minima. For example, say the reference position for this robot is high up; the robot has to jump in order to match the reference position, right? But it does not jump, because it is in the clamping mode: it starts in the clamping mode, so the gradient dies, okay? The desired solution is something like this, periodic jumping to match the reference position. If you look at the gradient landscape, you can see there is a local minimum on the left side of the space. The solution set of a complementarity constraint looks like this: it can be either a vertical line or a horizontal line, okay? To solve this problem, we approximate that set with a smoother, more continuous one, something like this.
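In standard notation for a single normal contact (mine, not the slides'), with contact impulse \(\lambda\) and post-impact separation velocity \(v^+\), the exact complementarity constraint and its smoothed relaxation read:

```latex
0 \le \lambda \;\perp\; v^+ \ge 0
\qquad \rightsquigarrow \qquad
\lambda\, v^+ = \rho, \quad \lambda \ge 0, \quad v^+ \ge 0
```

The exact constraint confines solutions to the two axes, the vertical and horizontal lines just mentioned (either zero impulse or zero velocity), while the relaxed version is a smooth hyperbola that approaches those axes as \(\rho \to 0\).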
We obtain this more continuous and smooth set by introducing the variable rho, which is a positive value. If rho is very large, the set does not approximate the original constraint well; if rho is very small, it approximates the original constraint very well. So rho adjusts how closely we approximate the original constraint. With the original analytical gradient, the robot tends not to jump. There is also a previous method with some heuristics, but it does not produce the periodic jumping behavior. If you use our method, it produces very periodic jumping and it can reduce the cost a lot, okay? And if you look at the gradient, you can see that when rho is zero, it is the same as the original constraint, and there is still a local minimum on the left; as we increase the rho value, the solution converges to the global minimum, even when it starts close to the local minimum point. But if you increase rho too much, the quality of the solution becomes worse. Okay. So there should be a sweet spot in the value of rho: if the value is too small, the robot does not jump; if it is too large, the robot jumps too often, resulting in a bad cost value. In practice there is a pretty wide sweet spot that provides pretty good cost values, okay?

Now we integrate this analytical gradient with differential dynamic programming. In the forward pass, we use the exact impulse model; in the backward pass, where we need a gradient, we use the approximate, smooth analytical gradient. We also employ the multiple-shooting variant of DDP to improve the convergence properties. If you run this algorithm on the simulation model, on the top you have a reference trajectory; there is no swing trajectory and no foot contact sequence, but the algorithm automatically comes up with a new contact sequence. Here, on a slippery surface, the robot sometimes exploits the slipperiness and sometimes propels itself, so that it can catch up with the reference when required. And the algorithm can come up with this nice motion as well. We then extended this idea to the 3D model; we added some additional costs that prefer a trotting, symmetric gait, and those are the only added cost functions. Then it comes up with this nice motion. For this very unstable posture, the robot comes up with these nice motions to catch up with the reference; there is no equilibrium in this unstable posture, and the only way to follow this reference is to make contact. You can even obtain humanoid-like robot motion by changing the reference: now the robot only utilizes its hind feet. We implemented this algorithm on the real robot. Of course, we did a tremendous amount of engineering to make this algorithm work on the real robot, which sadly cannot all be contained in the journal publication. But anyway, the computation time satisfies the real-time constraint, and this unstable pose can be followed. There is no static or dynamic equilibrium, so the robot cannot hold this posture, but it tries as much as possible to follow the reference trajectory and reference posture, okay?
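As a one-dimensional toy version of the smoothed impulse used in that backward pass (my own illustration built on the relaxed constraint above, not the lab's code), solving the relaxed conditions in closed form gives an impulse whose gradient no longer dies in the clamping regime:

```python
import numpy as np

def contact_impulse(v_minus, m=1.0, rho=0.0):
    """Normal-direction impact for a point mass: impulse lam and post-impact
    velocity v_plus satisfy m*(v_plus - v_minus) = lam and the relaxed
    complementarity lam * v_plus = rho (exact contact when rho = 0).
    Eliminating lam gives m*v_plus^2 - m*v_minus*v_plus - rho = 0, whose
    positive root is taken below."""
    v_plus = 0.5 * (v_minus + np.sqrt(v_minus**2 + 4.0 * rho / m))
    return m * (v_plus - v_minus), v_plus

# rho = 0 reproduces the exact cases: separating (lam = 0) if v_minus >= 0,
# clamping (v_plus = 0, d v_plus / d v_minus = 0, the "dead" gradient)
# otherwise. Any rho > 0 keeps the derivative alive everywhere.
print(contact_impulse(-1.0, rho=0.0))    # (1.0, 0.0): clamping, dead gradient
print(contact_impulse(-1.0, rho=1e-2))   # small positive v_plus, live gradient
```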
So the second approach we are currently utilizing is reinforcement learning, and we recently published a Robotics and Automation Magazine paper on it. My student really wanted to work on reinforcement learning, and this is his first controller. This controller achieves four meters per second of fast running, which is already better than the model-based control, right? The model-based control's top speed was three meters per second. And it was also able to handle the corner cases: even when it stepped onto the non-moving part of the treadmill, it could recover. But if you look at the gait, it looks strange to me, not natural, right? I mean, animals do utilize this kind of gait, as you can see. I'm not sure why they use it; it doesn't look energy-efficient or natural to me, but they say animals exhibit this gait when they are happy, right? So, but we don't have a happiness term in our reward function. To distribute the motion evenly into a symmetric gait, we added a gait preference to the reward function, and then we obtained a more natural gait that provides a faster speed. The speed increased to 5.16 meters per second, okay? That is a 1.16 meter-per-second speed increase.

To further increase the speed, we employed the motor operating-region constraint. In the URDF file, you can specify an angular velocity constraint and a motor torque constraint, but they are separate, so together they give a box constraint. The actual operating region of the motor is not a box; it looks like this, because of back-EMF and resistance. If you include inductance in your model, it changes again, but we only consider resistance in our model, and we obtain this operating region. So if you look at this torque versus angular velocity curve, you see that the controller does not utilize much of the region, okay? So we implemented the motor operating region in our reinforcement learning, and we gained higher speed again: we obtained a 0.5 meter-per-second speed increase, just by adding that constraint. Okay. And you can see that the policy now utilizes the motor operating region much more fully. Then, as mechanical engineers, we have the choice of changing our mechanical hardware, so we changed the foot to make it lighter, and we obtained 6.5 meters per second running. And this was actually two years back; we have since broken this record, and some other institution broke our record as well recently. The robot finished the 100 meters within 19.87 seconds, which earned our robot a Guinness World Record. I mean, yeah. And the reinforcement learning algorithm also came up with galloping. If you have a dog at home, you know that they don't utilize a trotting gait for fast motion, right? So we can actually implement the galloping motion, but sadly it doesn't provide faster performance; we don't know why. But as I showed you already, we utilize this approach for robust walking on the mountain.

Okay, let's skip this. We are also trying to apply this learned running to the climbing robot. On a non-magnetic patch of the surface, the MPC algorithm actually fails, as you can see, because the foot cannot attach to the non-magnetic surface. But with RL, even when the foot fails to attach to the surface, the robot can actually recover from it, okay? And if you integrate this RL algorithm with vision feedback, we obtain this performance: this is about 4.0 meters per second running in an outdoor experiment.
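The voltage-limited operating region mentioned above follows from the standard DC motor model with winding resistance only (textbook relations; all constants below are illustrative, not HOUND's actual motor):

```python
import numpy as np

def torque_limit(omega, v_bus=48.0, R=0.2, k_t=0.1, tau_peak=30.0):
    """Max motor torque vs. speed for a resistance-only DC motor model.
    Back-EMF k_t*omega reduces the voltage available to drive current:
        i_max = (v_bus - k_t*|omega|) / R,   tau_max = k_t * i_max
    so the available torque falls off linearly with speed."""
    tau_voltage = k_t * (v_bus - k_t * np.abs(omega)) / R
    return np.clip(np.minimum(tau_peak, tau_voltage), 0.0, None)

# A box constraint (tau <= tau_peak, |omega| <= omega_max) overestimates what
# the motor can deliver at high speed; penalizing torque commands outside this
# curve in the RL reward keeps the learned policy inside the true region.
for w in [0.0, 200.0, 400.0]:            # rad/s
    print(w, torque_limit(w))            # 24.0, 14.0, 4.0 N*m respectively
```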
Then recently, many humanoid robots have been shown by companies and industry. We don't have a humanoid robot yet, but we can show that, just utilizing the actuator technology and the reinforcement learning algorithms that we have now, we can probably come up with humanoid robot control algorithms. Even without a humanoid, we can make our quadruped robot walk like one without changing any hardware design. We didn't change any hardware, but it can actually run at up to 3.5 meters per second now, and it can recover from kicking in the experiments. Okay. The nice thing about this control design is that the robot does not have enough degrees of freedom for side stepping, because we don't have an actuator for that, but it can actually recover from sideways kicks as well. You can also make the robot lift a payload of about 7.5 kilograms. Okay.

And now we are actually trying to improve the quadrupedal robot running further. The fastest speed we can obtain is now 9.5 meters per second, and the robot can finish the hundred-meter sprint in 11.6 seconds, which is probably faster than the audience here, including me, right? And we are also trying to break the human record set by Usain Bolt. But as you know, yeah, we have a lot of failure moments, and we are hoping that we can learn something from these failure moments as well, okay? So with that, I want to conclude my talk, and I want to express my gratitude to my collaborators as well, including Sehoon at Georgia Tech. Thank you for your attention.

I think we have time for a couple of questions, maybe via Zoom or maybe from the in-person audience; we have a handful of audience here. So any questions? Wait, let's start with the in-person question, and then we will go back to Zoom. Right. You can ask your question.

Okay. The question is whether we have implemented any machine learning algorithm for the combined design problem, the co-design of hardware design and control design, right? Okay. Not yet; I mean, we have not done that yet, okay? The difficult thing about hardware design work is that, let's say we come up with some values; we then have to actually manufacture and assemble the real hardware to prove the method's efficacy, okay? So we cannot generate many cases to prove that this method works, which makes it very difficult. And the optimization algorithm is not perfect; sometimes it provides weird answers. So I have not actually extended this work yet. Okay. Yeah.

Right, I think there was a Zoom question, so maybe we can start from the top. Yes. Okay, so the first question is about a very practical issue. In order to be certified by Guinness World Records, you need resources, time, and effort, and they are not negligible; you actually need some money, like publishing a paper. So if you are planning to break a Guinness World Record, you can contact me. Okay. We actually had a consultation with Jonathan Hurst's group from Oregon State University, because they broke the world record for bipedal robot running, right? So we actually took their advice, and then we successfully contacted Guinness World Records. So it's actually a practical problem. And even if you break the record again, I don't think it's directly related to publishing another academic journal paper, right? So that's the reason.

Right, next is, I think, Frank. Is it? I think so. Frank. Hey, hope you can hear me. Yes. Awesome talk, really awesome talk.
My question is: you switched from MPC to reinforcement learning, and so I wonder whether you were able to learn something from reinforcement learning that we can use in the MPC world. What is your lesson from switching to RL?

Yeah. So, I mean, theoretically, I'm not sure what I can learn from the reinforcement learning algorithm. But for practical reasons, I think the world of MPC and model-based algorithms needs well-packaged open-source code, a suite of code, to make these methods more widely used, okay? In the area of reinforcement learning, they have really good packages, really good open-source code bases that you can download and implement, and I think that actually spreads the ideas to wider audiences, to more engineers. I think the model-based control community needs that more. And also, RL algorithms are really good at coming up with emergent behaviors that were not expected by humans. So actually, when the robot collides with stairs, it automatically changes the swing trajectory so that it picks the leg up higher than usual to step onto the next stair, okay? With model-based control, this kind of behavior was only enabled by additional, explicit design of swing trajectories, right? But in RL, you don't need an explicit algorithm for this; it's actually embedded in your neural network weights. So this was not something we designed by hand. I hope this answers your question. Yeah, that's awesome. Thank you.

Right. Yeah, let's do one more Zoom question and then we will get back to the room. The question asks what mechanism keeps the quadruped attached to vertical and inverted surfaces. So you're asking about the MARVEL robot, right? We use a magnetic foot. We use magnets, and the surface consists of steel, so the robot is only able to walk on magnetic steel surfaces, okay? So, I mean, people have actually complained when they see our video: why doesn't your robot work on more general surfaces, like a concrete surface, something like that? But we believe there are many industrial applications for this magnetic walker, because in many industrial environments, for example the shipbuilding industry, the ship is actually made of steel, and they still utilize scaffolding so that people can work on the ship's surfaces, which is very dangerous and very costly. So we are hoping that this kind of robot can be used in the shipbuilding industry, or for inspection of facilities like very large storage tanks, something like that.

So let's go back to one in-person question, and we will get back to another online question. Can you ask your question? That's true; that's actually very difficult to answer. I mean, I'm not sure there exists an optimal length of the leg that provides optimal behavior. It's a difficult question, because the design parameter for the length of the leg is actually multiplied by the control torque, so it's a very complex bilinear mathematical problem. So I don't think there is a true optimum for this problem. We have not found one. Thank you.

Thank you for the talk. Can you please reiterate how you overcame the problem with gimbal lock? Thanks. Okay, so the previous optimization problems were formulated utilizing Euler angles. We don't utilize Euler angles; we formulate all the cost functions and constraints utilizing only rotation matrices. Then this rotation matrix representation can be converted into a vector-space representation, utilizing reparameterization. Okay.
So that makes the problem a vector-space optimization, and now you can utilize traditional optimization solvers to solve it, okay? So yeah, that's how we avoid the gimbal lock problem.

The next question is: when the robot is at its max speed, is there any control policy that enables it to stop safely and rapidly without breaking the hardware? Okay. I think currently the robot can accelerate and decelerate in a pretty quick manner. We haven't measured the maximum deceleration that still provides a stable transition, but right now it's pretty good; it satisfies our needs. But I think another way to get better deceleration and acceleration would be to design a dedicated transition control policy that provides quick transitions.

But actually, we need to wrap up soon. Sorry, I believe you have many questions, but I believe the next question will be the last one: any thoughts on RL-augmented MPC? Yeah. Hey, Hae-Won, this is a very nice talk. So I'm just curious: since you mentioned that the MPC, model-based approach is really good at giving very reliable motions, but maybe less versatile or emergent than what RL can do, I'm just curious if you have any thoughts on combining them. Or maybe for certain types of tasks you just use RL, and for certain types of tasks you just use MPC? Or do you think a combined RL-plus-MPC can solve a lot of different kinds of tasks?

Okay, yeah, thank you for your question. So, I mean, we have some preliminary work in collaboration with Sehoon's group, as you know: we do imitation learning of MPC and then train that policy further with reinforcement learning, and we are still working in that direction. And another direction we are taking is utilizing an autoencoder structure, like a foundation model. So if we have lots of trajectory data samples, we can learn an autoencoder from them, sample from it to imitate, and then we can probably add reinforcement-learning fine-tuning, like the fine-tuning of large language models. Probably we can come up with some new controller that way. Actually, the vision-feedback running that I showed in this talk is controlled by that kind of algorithm now, but we haven't published a paper on it yet; we are working on writing it.

Right. Thank you for all the questions, and also, let's thank the speaker again. And yeah, if you have more questions, sure, feel free to reach out to me or directly to Professor Hae-Won Park. Thanks for all the participation, and see you. Alright. Thank you.