You hear me now? We're good. Thank you. Okay. Hello. Hi. Okay, I think we're going to start. So, first off, apologies: this projector is not working. We tried our best, but it doesn't seem to be working. So everyone sitting over here, if you want to try to move your chairs over, that might be good, 'cause we just have this projector. Or you can just listen to Bobby. There's a few chairs in the back over there, too. Yeah, but you're gonna have a hard time seeing. You want this screen? Do you care? Oh, you just want it lower. I don't need it, 'cause I have my own preview here. There's some kinematic singularities in the table move? Oh, it's rigidly attached. Alright, this is just my warning: move now or forever hold your peace. Okay. So, introduction. Today we have Professor Robert Gregg visiting us from the University of Michigan. For anybody who doesn't know him, I'm going to do a short little introduction, but I assume everyone knows him 'cause he's great. So yeah. Professor Robert Gregg has been the Associate Director of Robotics at Michigan. He became that in the fall of 2020, which was before the Department of Robotics was founded, so he was one of the original founding members of the Department of Robotics at the University of Michigan, which was founded in July of 2022. Prior to joining Michigan — my notes just went away — he was an assistant professor at the University of Texas at Dallas. He received his BS degree in EECS, which is electrical engineering and computer science, from UC Berkeley, and then his master's and PhD degrees in ECE from UIUC. So, as we were just talking about, he's very multidisciplinary and has been associated with many different departments, which is, I think, what robotics is, right? So his research is centered around control of lower-limb robotic assistive devices, such as powered prostheses.
I think we're going to see a lot of powered prostheses today, but his work has also been extended to autonomous bipedal robots. His research is conducted through his lab, the Locomotor Control Systems Laboratory at the University of Michigan, so check that out if you want more information. His work has been recognized by several awards — I'm just going to name two: the NSF CAREER Award and the NIH Director's New Innovator Award. Those are both pretty big awards to receive. And his research has been funded by several large NIH R01 grants, which are very good to have when you're working with patients. So, thank you again to Professor Gregg for visiting us today. We look forward to hearing more about your research. — Thank you, everyone. Thank you, Megan, for the invitation and the kind introduction as well. It's great to be here at Georgia Tech for my first time, and I'm looking forward to talking to some of you afterwards as well. So yeah, I'm going to just jump right into it. To start with some motivation for the talk, I want to first discuss challenges with amputee locomotion. Using conventional devices, which are mechanically passive, this patient population tends to have difficulty with variable-cadence walking, navigating ramps and stairs, and standing up out of a chair. And because the conventional devices are mechanically passive, the users have to compensate with their intact side or their intact joints to make up for the deficit in mechanical power from the prosthetic limb. This overuse of their intact joints increases the risk of osteoarthritis and back pain. So powered legs, on the other hand, have the potential to restore normative biomechanics by being able to provide positive mechanical work and active control, but they add substantial complexity: you have to measure things, you have to control things, and there's compute in the middle. And so that has significantly limited the commercial deployment of powered prosthetic legs to date.
And it's especially challenging when you have two powered joints, like the knee and the ankle shown in this picture — this is actually the device that Aaron Young and I worked with back in the RIC days, which was fun. So I'm going to talk a little bit now about how we tend to control these complex devices. Typically, there's a hierarchical control architecture with high- and mid-level control approaches. The high level tends to deal with the intent or the activity of the user. So you might need to differentiate between slow, natural, and fast walking, but then there are also inclinations in the environment that you have to differentiate, and different types of inclinations like stairs, and then there's sitting and standing as well. And so typically you have a different control mode or policy for every one of these possible activities, which is already starting out with a reasonable amount of complexity. Then for each one of those modes, you have a mid-level controller, which has its own discretization of that activity into different phases of gait. So, for example, you might see this finite state machine here, which articulates the five common phases of gait, where each one of those may have its own set of control parameters — like impedance parameters for compliant interaction with the ground, or proportional-derivative parameters for swing phase. And each one of those has its own set of parameters that need to be tuned to an individual. And then every activity has its own finite state machine, which again has its own set of parameters. So already we've seen this explosion in the parameter space of these devices — you're talking over 100 distinct parameters. Maybe not all of them need to be tuned for every user, but a significant number of them do. And in past papers, it's been reported that a well-oiled team of researchers and clinicians may take 5 hours tuning these things.
And that's clearly not a clinically viable option, right? There are also some challenges with the discretization of gait that makes it less smooth compared to natural, smooth, continuous locomotion. So my research has been focused, over the past decade or more, on trying to go from this discrete characterization of human locomotion to a more continuous representation, allowing us to control amputee gait in a more continuous manner. We started this by looking at a continuous representation of the gait-cycle phase. So again, that's the mid level I was talking about, where we have the gait cycle. Instead of having a discrete phase of gait, we have a continuous representation that is measured in real time. One way we do this is by measuring the residual thigh angle on the amputated side. You can imagine that that thigh angle is reflecting hip motion, which is under the volition of the user. And so that acts like the crank to the cycle, right? The hip motion is swinging forward and back, so it's like a pendulum or an oscillator, and you map that to the phase plot that we see here — I guess you can't see my mouse on that screen, so I apologize. So that allowed us to unify the gait cycle under one continuous nonlinear control approach. And then more recently, we've been working on having continuous representations of the activity as well, at the higher level. You can imagine that for any estimated value of the ground inclination or the walking speed, it would be nice if you could map that estimate — which is a real number, or real numbers for these two variables — to the biomechanically correct joint patterns. That might be joint angle trajectories; it could be impedance values, which we'll talk about later as well.
But the idea is that we have this activity space, which is a multidimensional space, and we want to be able to model how the joint patterns adapt as the activity varies within this multidimensional activity space. Dr. Gregg, I'm sorry to interrupt, but your Zoom slides? — Let's first talk about the sense of phase. As I had already said, this is based on the residual thigh angle, and we can measure that with an IMU attached at the top of the prosthetic knee, because that's then connected to the socket of the amputated limb. It's not a rigid connection, but we assume it's rigid, and it's close enough. And so you see we have a descending region, where we can map that to the first 50% to 60% — sorry, first 40% — of the gait cycle. Then you have a kind of non-monotonic region where things are less clear, so we use a feed-forward, or time-based, mapping there. Then we have an ascending mapping, and then again another feed-forward period where we're using time, because again we lose monotonicity, and so we lose uniqueness of the mapping from thigh angle to phase. In the end, we've mapped this sinusoidal thigh trajectory to this fairly linear phase estimate, which should be linear for steady-state walking. And it's robust to multiple ground inclinations. It's stuck on Zoom — what's shared? Oh, shoot. Okay. Did it just happen or start that way? Let's try that. Better. Good to go. All right. So we're able to get a reasonably consistent sense of phase. This is also nice because if people slow down, phase will slow down — the progression through the joint patterns will slow down — and if they speed up, the progression through the joint patterns will speed up. You can even do some things like stepping backwards, which we'll see later, and so on. Stuck for me now, yeah. Alright. Hopefully, there. Yes. Okay, we're back. Okay.
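The piecewise thigh-angle-to-phase mapping described above can be sketched in a few lines. This is a minimal illustration, not the actual implementation from the talk: the phase breakpoints (0.40, 0.55, 0.95), the velocity dead-band `eps`, and the angle conventions are all made-up placeholders.

```python
def estimate_phase(theta, theta_dot, theta_max, theta_min,
                   phase_prev, phase_rate, dt, eps=0.1):
    """Map residual thigh angle to a gait phase in [0, 1) (sketch).

    Monotonic regions of the thigh trajectory map directly to phase;
    near the turning points (|theta_dot| <= eps), where monotonicity
    and thus uniqueness of the mapping are lost, fall back to a
    feed-forward, time-based estimate that advances phase at its
    recent average rate.
    """
    span = theta_max - theta_min
    if theta_dot < -eps:
        # Descending region: maps onto roughly the first 40% of the cycle.
        frac = (theta_max - theta) / span
        phase = 0.40 * min(max(frac, 0.0), 1.0)
    elif theta_dot > eps:
        # Ascending region: maps onto a later monotonic segment.
        frac = (theta - theta_min) / span
        phase = 0.55 + 0.40 * min(max(frac, 0.0), 1.0)
    else:
        # Non-monotonic region: feed-forward (time-based) progression.
        phase = phase_prev + phase_rate * dt
    return phase % 1.0
```

Because phase is driven by the thigh, slowing the hip slows the progression through the joint patterns, and reversing the hip motion walks the phase backwards, as in the door-opening example later in the talk.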
All right — I would love to minimize this thing, too. See. Hide. Nice. All right, we'll get through this together, okay? So here is how we use the phase variable. During stance phase, we have an impedance controller, which is emulating the behaviors of a spring and damper — but these are variable springs and variable dampers, and also a variable equilibrium angle. So this is called variable impedance control, which allows compliant ground interaction, which is important for comfort and also for robustness to uncertainty with the ground. And so we parameterize models for these parameters as a function of phase. During swing phase — I don't know... there you go — during the swing phase, we have position control, which is useful because it gives predictable swing-foot positioning. You want to make sure that the prosthetic foot is ready to accept your body weight at the conclusion of the swing phase, so having this consistent mapping from hip motion to foot position is desirable for this application. Here the desired angular position of the joints is parameterized by this continuous phase variable, and then we use constant proportional and damping gains. Together, we call this hybrid kinematic impedance control, or HKIC, which is a term I'll be referring to later in the talk, so you might want to keep that acronym in mind. In terms of generating these gait models, we take a data-driven approach. We have a parameterization that's based on a mathematical formula, but we use data to train the parameters. For example, we take a large dataset — like here, as an example, we have walking over multiple walking speeds and ground inclines — and with that data, we're able to generate this parametric gait model using convex optimization techniques. And once we've generated that model, which is all offline, then online in real time we estimate the gait phase and the task variation.
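The two halves of HKIC described above can be sketched as a single torque law for one joint. The four parameter functions are caller-supplied stand-ins for the trained, phase-indexed gait models; the PD gains are arbitrary placeholders, not the tuned values.

```python
def hkic_torque(phase, theta, theta_dot, in_stance,
                k_fn, b_fn, theta_eq_fn, theta_des_fn,
                kp=80.0, kd=2.0):
    """One-joint sketch of hybrid kinematic impedance control (HKIC).

    Stance: variable impedance -- a phase-varying spring k(phase),
    damper b(phase), and equilibrium angle theta_eq(phase), giving
    compliant ground interaction.
    Swing: position control tracking a phase-indexed desired angle
    theta_des(phase) with constant proportional/damping gains, giving
    predictable swing-foot positioning.
    """
    if in_stance:
        return (-k_fn(phase) * (theta - theta_eq_fn(phase))
                - b_fn(phase) * theta_dot)
    return kp * (theta_des_fn(phase) - theta) - kd * theta_dot
```

For example, a constant 100 N·m/rad stance spring about a zero equilibrium commands about -10 N·m for a 0.1 rad deflection; in swing, the same call tracks whatever angle the gait model emits at the current phase.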
So when I say task variation, I mean: what is the walking speed? What is the ground incline? And then the gait model will spit out kinematics or impedance profiles for the prosthetic leg to follow. This is one example of that. We see here that we generate these surfaces — and in fact these are higher-dimensional than 3D, but I can only show you a 3D plot, unfortunately. So what we've done is we've fixed the walking speed for this example, but there's also a walking-speed dimension. And so for any measurement of phase or task variable, we can generate the biologically inspired reference for the impedance parameters. In swing phase, we do the same thing, but for joint angular positions. Okay. All right. And so when we implement this in real time, we have ways of estimating walking speed and ground slope using inertial measurement units. That allows us to have this continuously adapting walking gait as the treadmill speed or incline changes in this video — it's kind of a slow change, but trust me, it is changing, as fast as the treadmill will incline. The device is able to continuously adapt to that. And so, within some reasonable bounds, it's able to accommodate any combination of walking speed and ground slope in a biomimetic way. In the end, this is showing how the recorded prosthetic joint angles and moments — torques — compare to the able-bodied references over several ground inclines. These are actually measured from the prosthetic leg's encoders, and the torque we're commanding, so we know what that is. And this is just the offline data that we used to train the system. You can see that we're able to reproduce these patterns. It's not perfect, of course, but the patterns hold: we see flexion increasing or decreasing as appropriate with the change in incline, and we see similar patterns with the joint torques as well. Yes? — The references here?
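The surfaces above come from fitting a parametric model over (phase, task). Here is a plain least-squares stand-in for the convex optimization mentioned in the talk; the particular basis — a Fourier series in phase (periodic) crossed with a polynomial in the task variable — and the basis sizes are my assumptions, chosen only to illustrate the idea of a continuous gait surface.

```python
import numpy as np

def gait_basis(phase, task, n_harmonics=4, task_order=2):
    """Fourier-in-phase crossed with polynomial-in-task feature matrix."""
    phase = np.atleast_1d(np.asarray(phase, float))
    task = np.atleast_1d(np.asarray(task, float))
    cols = []
    for h in range(n_harmonics + 1):
        for p in range(task_order + 1):
            cols.append(np.cos(2 * np.pi * h * phase) * task**p)
            if h > 0:
                cols.append(np.sin(2 * np.pi * h * phase) * task**p)
    return np.column_stack(cols)

def fit_gait_model(phase, task, y, **kw):
    """Least-squares fit of a gait surface y(phase, task), offline."""
    A = gait_basis(phase, task, **kw)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w
```

Online, evaluating `gait_basis(phase_now, task_now) @ w` at the estimated phase and task then yields the reference (a joint angle or an impedance parameter) for the current instant.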
This entire row is the reference; this entire row is the prosthetic. And the colors represent different inclines. Yep. And so, as a byproduct of restoring somewhat normative joint kinematics and kinetics, if we look at joint work, we're able to show that we either decrease joint work as incline decreases or increase joint work as incline increases. This is the total work of the two joints; in particular, we see that the knee should have net negative work and the ankle should have net positive work, and we see that. As a comparison point, AB is the able-bodied reference, HKIC is the controller I've been talking about, and FSMC is just a finite-state-machine controller that we hand-tuned as a baseline reference controller. It's our best attempt at a state-of-the-art controller — again, that's not to say that others couldn't do a better job tuning these things, but it was our best attempt, right? In some cases, there's not a lot of difference between the two controllers — like right here, there's not a lot of difference. There's more of a difference at the ankle, and then the total, of course. But something to keep in mind is that there was no tuning for the HKIC controller; it was all data-driven with the able-bodied references. There was no tuning at all, whereas the FSMC controller required probably 30 or 40 minutes of tuning. Okay. So, we've also been able to do continuous variations of stair ascent and descent, using similar kinematic models for swing phase and impedance models for stance phase, where instead of a variable ramp angle, we have a staircase angle or step height — these are related things. As long as you're able to estimate what that step height or stair angle is, then again we can update the patterns of the prosthetic leg. And so we're working on ways to do that with dead reckoning with IMUs, and also some perception methods as well.
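The joint-work metric above is just the time integral of torque times angular velocity. A minimal Riemann-sum version, with units and sign convention as assumptions for illustration:

```python
def net_joint_work(torque, velocity, dt):
    """Net mechanical joint work over a stride: W = sum(tau * omega) * dt.

    torque: sampled joint torque (N*m), velocity: sampled angular
    velocity (rad/s), dt: fixed sample period (s). In normal walking
    the knee comes out net negative (energy absorbing) and the ankle
    net positive (energy generating), which is the pattern referenced
    in the talk.
    """
    return sum(t * w for t, w in zip(torque, velocity)) * dt
```

A damper-like joint (torque always opposing velocity) produces negative work with this sign convention, while a motor pushing along the motion produces positive work.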
And then for sitting and standing, we can also accommodate continuous variations. Here, the variations would account for different-height chairs, okay? Some chairs are taller than others, right? And we don't actually explicitly parameterize the chair height in the model. What we do have is a slight change between standing up and sitting down in terms of the equilibrium angle we see here. So we have a slight change here, which makes it a little bit easier to initiate the sitting motion, because you don't want to have too much resistance to sitting. But the thing that really allows the different chair heights is the way that we automatically normalize this phase variable: we're able to tell what the starting angle of the thigh is for one chair versus another, and then that helps us normalize the thigh range of motion to a phase — a variable between 0% and 100%, or zero and one, I guess, is our convention here for phase. And so here's a demonstration of tall, medium, and short chairs for an amputee participant. Here we're not pre-programming it; it's all normalized automatically to the chair height. I'm going to talk a little bit about outcomes later with a different study, but just as a little hint at that: we're able to increase the speed at which the participants are able to do sit-stand repetitions, and we're also able to reduce asymmetry between the sound side and the prosthetic side with the help of the powered device. I'll show you more data on that later. And so, taking all of these kind of adaptable mid-level controllers — essentially, what I've been describing is mid-level controllers that are adaptable to different variations in the activity; we call these continuous task variations — this allows us to actually reduce the set of activity modes required for classification.
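The automatic chair-height normalization above amounts to rescaling the thigh angle between its captured starting value and a standing reference. A minimal sketch — the degree convention (seated thigh near 90°, standing near 0°) is an illustrative assumption:

```python
def sit_stand_phase(theta, theta_seated, theta_standing):
    """Normalize residual thigh angle into a 0-to-1 sit-to-stand phase.

    theta_seated is sampled automatically at motion onset, so the same
    controller spans tall, medium, and short chairs with no
    pre-programmed chair height. Output is clamped to [0, 1].
    """
    frac = (theta - theta_seated) / (theta_standing - theta_seated)
    return min(max(frac, 0.0), 1.0)
```

A tall chair simply yields a smaller captured `theta_seated` range than a short one, and the phase still sweeps zero to one over the motion.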
So we go from walking, which previously had several different modes to accommodate different speeds and ramp angles, to one walking controller that accounts for continuous variations in incline and speed. And then, whereas usually there are distinct sitting and standing modes in a high-level controller, we were able to unify that as well. With stairs, technically we're able to generate a single control model for stair ascent and descent, much like we did with walking. But you really want to separate those for classification purposes, because with stairs, it's too late by the time you've hit the ground, right? You don't want to find out that you're on a staircase after you've hit the staircase — it's way too late. So we do actually still separate these in terms of our classification space, but they are technically the same model. Alright. So then, how do we do classification between these activity modes? Well, because we've been able to reduce this classification space to just four modes, we've reduced a lot of complexity. There's no need to differentiate between ramps and walking on level ground anymore, which is a difficult thing to do — it's a common source of error in classifiers. It's much easier to differentiate between walking and stair ascent or stair descent, and the same with sit-to-stand. And in fact, what we do is we use heuristic, rule-based transitions, which are hand-designed through biomechanical intuition. But these rules are simple enough that we can explain them to our participants; they can learn them, and then they can correct errors based on their understanding of these heuristic rules. Another thing in our toolbox that's really important here is some minimal perception of the environment.
So especially for stair ascent transitions, in order to have a sufficiently early transition, we find it very helpful to have some perception of the environment through an ultrasonic sensor that we see here. This is a really tiny little thing — you know, milligrams in weight, milliwatts in power consumption, and just a few dollars in cost. These are the same sensors that are in our cars these days, right? And so this gives us the distance to the nearest object. If we can consistently see that the nearest-object distance is decreasing over time, that means you're approaching that object, right? And so that, coupled with kinematic rules, tells us when we have high confidence that a stair ascent transition is happening. We're then able to have a sufficiently early transition stride, so you get proper knee flexion to clear the step, and so on. This also allows us to do what we call ambilateral activity transitions: either the prosthetic leg or the intact leg can lead the transition. They're not forced to lead with one specific leg, which is common in commercial products — those usually have to initiate the motion with a specific leg using a specific type of motion. And so I'll quickly summarize some of our findings from a recently accepted paper at Transactions on Robotics: we're able to get about 99% accuracy for the combination of steady-state and transitional strides, and the recovery rate is 100%. What I mean by that is that we have backup logic. This 99% number is for early detection of transitions — an ideal detection, where we can change the mode at the perfect timing to make those fluid, natural transitions. But you can also do a late transition, which is just not as nice. And so we have backup logic, also heuristic-based, and that allows about 50% of misclassifications, or maybe higher, to be corrected.
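The "distance consistently decreasing, plus a kinematic rule" trigger described above can be sketched as a tiny heuristic. The window length `n`, the `min_drop` threshold, and the reduction of the kinematic rules to a single boolean flag are all illustrative simplifications, not the tuned logic from the study.

```python
def stair_ascent_detected(distances, kinematic_cue, n=5, min_drop=0.05):
    """Heuristic stair-ascent trigger (sketch).

    Fires only when the ultrasonic range to the nearest object has
    decreased monotonically over the last n samples by at least
    min_drop meters (i.e., we are approaching something) AND a
    hand-designed kinematic rule is simultaneously satisfied.
    """
    if len(distances) < n or not kinematic_cue:
        return False
    recent = distances[-n:]
    approaching = all(b < a for a, b in zip(recent, recent[1:]))
    return approaching and (recent[0] - recent[-1]) >= min_drop
```

Requiring both cues at once is what keeps the trigger early enough to shape the transition stride but conservative enough not to fire on ordinary walking.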
And then we also have a manual override — essentially, we tell the participants to abduct their hips, and it will reset the classifier to the previous mode if it was incorrect, or maybe reset it to walking, actually. Between those two things, the researchers never needed to intervene in the experimental protocol. It was always either the user self-correcting through the backup logic, or through a manual cue via hip abduction. We also had two different types of experiments. One was self-paced — comfortable walking speeds, just going back and forth between a seated position, walking on ramps — sorry, on stairs — and back. This is a more typical protocol, where we just get a lot of back-and-forth trials. And you see here, we do both intact-leading and prosthetic-leading transitions, both for ascent and descent. Okay. Alright. And you can see the TV in the background is just showing what the leg thinks is happening. So that's the self-paced results. Then we also have what we call rapid-pace endurance trials, which is, I think, a more uncommon protocol, but really important for showing that this could work in the real world — because when people are well rested and focused, the results may differ from when they're fatigued and exhausted after walking all day long, right? In fact, some prior studies have shown that classification errors can go up with time. And so we did a really grueling set of tests, where they did this activity circuit, which includes the ramps. Essentially, they had to keep up with a certain pace that was timed based on their maximum safe walking speed, and if they couldn't keep up with that pace for three laps in a row, we considered that a failure, and then the experiment ended. So essentially, as they tired, they'd be more likely to either make errors or just slow down because they're tired.
And so we see one participant actually made it to the very end of our two-hour experiment — we had to cut them off. That was 3,000 walking steps, in particular, and you see almost 500 sit-to-stands, and so on. So one of them was very impressive, and the other one did eventually fatigue and did end the experiment before the cutoff time. But what's interesting is that we did not find any decrease in classification accuracy. In fact, if anything, classification accuracy got better over time, because they got in the zone and were able to make more consistent motions and so on. So that was interesting. And I already talked about the recovery rate: they did have misclassifications, but they were able to fix them on their own. I will point out the caveat, though, that they did have handrails and such. So maybe it's easier to deal with those misclassifications when you can lean on something, and maybe they're not quite to the point where they could just go out into the real world and deal with those misclassifications yet. So, we've been trying to expand our impact by deploying this on the Open Source Leg, which is a project spearheaded by Elliott Rouse, my colleague at Michigan. And so we have provided some ready-to-use controllers on opensourceleg.com that you can load onto your Open Source Leg right now and use — that's the sit-to-stand and walking functionality. More recently, we've been trying to transfer these control methods to the Össur Power Knee. So we've been collaborating with Össur; they were kind enough to send us the hardware and give us access to the firmware so we could reprogram the controller using our methods. And we just completed a study with seven above-knee amputee participants to investigate the clinical benefits of our controller compared to the stock controller and the users' take-home passive prostheses.
So this shows a demonstration that the same basic functionality exists — the video is a little choppy, I'm not sure what's going on there — between the two devices. On the left is our in-house hardware, much bulkier and less pretty compared to the Power Knee on the right. Now, what's really unique, I think, about our approach is that it has this phase variable encoded into it. I'm going to skip down a bit. What you're seeing here is that as he opens the door, he needs to move his leg back out of the way to avoid hitting the door, right? And so that's his hip motion driving the phase variable, which then tells the swing phase to back up. So this is actually a backward progression through the gait cycle that allows him to open the door pretty conveniently. And then we can also do things that were not explicitly designed, such as crouching, tying your shoe, and then standing up — this is all just the sit-to-stand controller. Alright, so in terms of results, this is a quick preview of some of our findings with this study — you're the first to see it. The HKIC Power Knee, again, is our controller on the Össur Power Knee, which is a commercially available device. We increased the number of repetitions during a 30-second sit-to-stand test. The idea is that you do as many reps as you can, sit-to-stand, in 30 seconds, and then we repeated that ten times — ten sets. We increased the number of reps by 12% compared to the passive leg, and by 29% compared to the stock Power Knee controller — which does mean that the stock controller was slower than the passive condition. We also reduced ground-reaction-force asymmetry between the sound side and the prosthetic side by 31% compared to passive and 14.5% compared to the stock controller. And what's interesting is that if you look over time here — the x-axis is set number, so time by sets, discrete time, I guess —
We see that they actually improved their performance over time with our controller, whereas in the other conditions they either stayed flat or actually got a little bit worse. I don't know if that's just learning and becoming more familiar with a new controller over time, or if it has something to do with fatigue — I haven't really teased that out — but it was an interesting observation that was unique to the HKIC controller. For walking, we showed that, for a fixed speed, we were able to increase toe clearance by 23% compared to passive, and by 6% compared to the stock controller. And then — I think this is one of my favorite results — we reduced the peak hip flexion moment by a large amount: by 27% compared to passive and 23% compared to the stock Power Knee, and that's what we see in the plot here. Why that matters is that's the swing pull-off moment to initiate swing. Typically, in above-knee amputee gait, the patients have to use excessive hip flexion to cause the knee to bend, because with a passive device, the knee's not going to bend itself, right? So they have to really kind of whip their prosthesis to cause the knee to bend. And that is an overuse of their amputated-side hip, right, which can lead to hip pain and back pain. So we were able to reduce that moment by a decent amount, through what we think is better energy injection and synchronization with the gait cycle. Okay, so that's the preview of the Power Knee results. And so now, just to change topics to exoskeletons: we've also been working on partial-assist exoskeletons for broad populations. So we're working on modular devices that can be used at different joints for different patient populations, but focusing on individuals with mild to moderate impairments — or actually maybe no impairment at all, because it could just be fatigue due to repetitive tasks, like in a warehouse.
And you can see that the numbers here are huge in terms of the implications for partial-assist exoskeletons. And so our objective with the hardware was that it should be lightweight and backdrivable, which means that the human joints can cause the robotic joint to move freely, right? We want to facilitate and support volitional motion, because we're dealing with mild to moderate impairments, and so backdrivability is really key for that. Comfort is an obvious one, and then generalizable or modular, meaning you can apply it to different joints in different use cases. And so we have what's called the M-BLUE system — a fun name to have M and blue in it, right? This is based on an off-the-shelf actuator which has now become quite common — I know that Aaron's group uses it, and other groups do as well — the T-Motor AK80-9. It's a fantastic motor, with an integrated planetary gear set with a nine-to-one gear ratio. It can provide 15 to 30% of the biological torque of the joint, depending on what joint it is, how heavy the person is, and what activity they're doing — so it's a range. Then we have these different modules, where the hip and the knee are built from essentially our aftermarket modifications of commercial braces. So we buy a T-Scope brace from Amazon, and then we replace the joint and the uprights with some sheet-metal parts that we can either laser-cut or water-jet. And so the assembly process is very fast and cheap. And so for the controller, our objective was for it to be versatile — able to adapt to different activities — to be predictable and safe, so that it doesn't have unexpected oscillations or unexpected behaviors. We want it to be generalizable and customizable to different users, and, of course, it should also be effective — effective at something, whether that's reducing effort, improving performance, or mitigating a deficit.
And so our approach to doing this is to take a nonlinear controls or dynamical systems approach, which is related to passivity-based control. The idea is that we aren't going to enforce any trajectories, because trajectories would inhibit volitional motion. We don't want to prescribe the joint pattern of the user — their own joints should be doing that. Instead, we want to change the dynamics of the body to make it easier for them to move — reduced effort, right? So if you think about the human-robot system, the coupled system here, you can model it through a Lagrangian function, which, through the Euler-Lagrange equations, gives us these canonical equations — which I'm sure you can't see back there, but you can check out the papers if you'd like. And then what we do is we have access to this control input at these joints, for example, and so we close the feedback loop with a control law that changes the closed-loop dynamics, okay? Once you close the loop, the dynamics are now altered: for example, maybe we've reduced the perceived gravity of the human, or the perceived inertia or stiffness, or something like that. And this can be done in an energetically passive manner, which means that in the continuous-time dynamics you're not injecting energy, but you can inject energy through the hybrid dynamics — through switching. For example, if you change the set point of a virtual spring, you can inject energy that way, but it would be at a transition, like going from stance to swing. Okay. And so it's locally passive, which has benefits in terms of safety and predictability. Examples of the types of features that we can design into this framework are gravity compensation, inertia compensation, and virtual springs and dampers as well. We now have a relaxed version of passivity as well, which allows some energy injection in the continuous-time dynamics.
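The "reduced perceived gravity" idea above can be illustrated with the simplest possible case: gravity compensation for a one-link segment. This is a sketch under stated assumptions — a single rigid link with known mass and center-of-mass distance — not the full multi-body energy-shaping framework from the talk.

```python
import math

def exo_gravity_torque(q, m, l_c, lam, g=9.81):
    """Gravitational energy shaping for a one-link pendulum (sketch).

    For a segment with mass m (kg) and center-of-mass distance l_c (m)
    at angle q (rad from vertical), gravity exerts a joint torque
    G(q) = m * g * l_c * sin(q). The exoskeleton supplies lam * G(q),
    so in closed loop the user feels the same limb under
    (1 - lam) * gravity: "reduced perceived gravity."
    """
    G = m * g * l_c * math.sin(q)  # gravitational torque about the joint
    return lam * G                 # exo's share; the human supplies the rest
```

Because the exo torque is the gradient of a scaled potential energy, this shaping preserves the passive, spring-like character of the closed loop rather than imposing a trajectory — which is the safety/predictability argument made above.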
I'm going to kind of skip over that, but we do have relaxed forms of this that aren't so strictly passive, and that has helped us do things like predicting human joint torques from biological data. So what we do is we optimize over an energy-shaping basis, essentially a set of nonlinear functions, to predict normative joint torques given normative kinematic or ground reaction force inputs. And this is over a multi-activity dataset. So you can see here that we have ramps, we have stairs, we have level walking and sit-stand, for the knee and for the ankle. And so essentially, what's happening is, instantaneously, we're mapping the kinematics and ground reaction forces to the biological joint torque, or at least our best estimate of that. And that way, we can provide a fraction of that biological joint torque to the human to reduce their effort at the joint. And so this is end-to-end. It's not end-to-end in the same way a neural network is, but it's end-to-end in terms of taking inputs and directly mapping them to outputs. There's no classification involved. And so you can see that it's a modular framework as well. We have a modular basis that can be used on different joints. I know there's a lot of videos going on here, but you can see we have a hip, we have a knee, we have an ankle, we have unilateral, we have bilateral. And so this framework can be used to design any joint configuration. And so then we had a recent study using this framework, or at least a control method inspired by this framework, to try to mitigate fatigue during multi-terrain lifting, lowering, and carrying. And so the objective here, using a knee exoskeleton, was to reduce the contributions to quadriceps fatigue over this multi-terrain lifting, lowering, and carrying circuit. The idea there is that if we can reduce the contributions to quadriceps fatigue, maybe we can prevent or delay fatigue in the first place, right? But why do we want to do that?
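(The "optimize over a basis of nonlinear functions" step can be sketched as an ordinary least-squares fit. The basis functions, signals, and synthetic data below are assumptions for illustration; the lab's actual energy-shaping basis and multi-activity dataset differ.)

```python
import numpy as np

# Sketch: fit weights on a small basis of nonlinear functions so that
# kinematics and ground reaction force map instantaneously to joint torque.
# Basis choice and the synthetic "biological" data are illustrative only.

rng = np.random.default_rng(0)
n = 500
q = rng.uniform(-1.0, 1.0, n)        # joint angle (rad)
qd = rng.uniform(-2.0, 2.0, n)       # joint velocity (rad/s)
grf = rng.uniform(0.0, 800.0, n)     # vertical ground reaction force (N)

def basis(q, qd, grf):
    """One column per assumed basis function."""
    return np.column_stack([np.sin(q), qd, grf, grf * np.sin(q)])

# Synthetic normative torque generated from known weights plus noise.
true_w = np.array([12.0, -1.5, 0.02, 0.01])
tau = basis(q, qd, grf) @ true_w + rng.normal(0.0, 0.3, n)

# Least-squares fit over the basis; at run time the exo would apply a
# fraction of the predicted torque directly -- no gait-phase classification.
w, *_ = np.linalg.lstsq(basis(q, qd, grf), tau, rcond=None)
tau_hat = basis(q, qd, grf) @ w
assist = 0.3 * tau_hat               # e.g. 30% of predicted biological torque
print(w)
```

The same fitting procedure can be reused per joint with a different basis, which is one way to read the "modular basis" point: the pipeline is identical for hip, knee, and ankle, only the feature set and data change.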
Well, because when people are fatigued, they tend to take riskier lifting postures, like stooping, lifting with their back instead of their knees, right? And then people get injured. So we want to prevent fatigue in the first place, if possible. But then let's assume that you are fatigued. Let's say you've already gotten there because you've been working in the warehouse for your eight-hour shift. Then are we able to mitigate the deleterious effects of fatigue? Okay? So we tested both of these things in this study. And so first, for the first objective, we focused on quadriceps EMG effort, which is essentially the integral of muscle activity, averaged across all the quadriceps muscles that we measured, three muscles. And we were able to show that for all the activities, the average muscle effort decreased with the exoskeleton compared to no exoskeleton, but that was not statistically significant for level walking. It was statistically significant for all the other activities. It's not surprising that it wasn't significant for level walking, because the quadriceps don't do very much during level walking anyway. The baseline is so much lower; it's hard to reduce that. So we weren't surprised by that, but the real benefit is during the harder activities, like lifting, lowering, and ascent/descent tasks, where reducing quadriceps effort is valuable. And then this video shows a repetitive lifting and lowering fatiguing task. So the idea here is that we had them. It was not a fun experiment. We had the participants do repetitive lifting and lowering as quickly as they could until they couldn't do it anymore, until failure. So, you know, their quads are burning; they can't do it any further. And then on the left, we have not wearing an exoskeleton, and on the right, the exoskeleton is donned, but it's passive. It's in its zero-torque mode. So it's not helping, maybe resisting ever so slightly.
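(The EMG effort metric described above, the time integral of muscle activity averaged across the measured quadriceps muscles, can be sketched in a few lines. The sampling rate, normalization, and toy signals are assumptions, not the study's actual processing pipeline.)

```python
import numpy as np

# Sketch of the "EMG effort" metric: integrate each rectified, normalized
# EMG channel over time, then average across the measured muscles.

def emg_effort(emg_channels, dt):
    """emg_channels: shape (n_muscles, n_samples), assumed already
    normalized (e.g. to a maximum voluntary contraction)."""
    rectified = np.abs(emg_channels)
    per_muscle = rectified.sum(axis=1) * dt   # rectangular-rule integral
    return per_muscle.mean()                  # average across muscles

dt = 0.001                                    # assumed 1 kHz sampling
t = np.arange(0.0, 2.0, dt)
# Toy signals for three quadriceps muscles; "with exo" is scaled down to
# mimic the reduced activity reported for the harder activities.
no_exo = np.vstack([0.6 + 0.1 * np.sin(2 * np.pi * f * t) for f in (1, 2, 3)])
with_exo = 0.7 * no_exo
print(emg_effort(with_exo, dt) < emg_effort(no_exo, dt))  # True
```

Averaging the integral across muscles gives one scalar per condition, which is what makes the with-exo versus no-exo comparison and the per-activity significance tests straightforward.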
We also accounted for that in the analysis. And then once they declare failure, that they can't go any further, we give them a second to compose themselves, and then we activate the exoskeleton and we say, okay, go again. Now do ten reps. Do ten more reps as fast as you can. And you'll see in this video that the exoskeleton, once we turned it on, did really help them. So this is the fatiguing process. This is the not-fun part. It takes a while for some participants. So here they're beginning to really hurt. And then they say, okay, no more. So they can pause and compose themselves until they're able to do the next rep with proper form. With the exoskeleton active, the participants are a little more confident, so they maybe don't pause as long. And then they're able to knock out those ten reps very quickly. All right, so just for the sake of time, this is an easier way to look at that in terms of the data. This is the completion time increase percentage on the left plot, a percent increase relative to baseline, relative to pre-fatigue. And you can see that some participants, understandably, had a very large increase in completion time to do the lifting-lowering cycle when they were fatigued, right? Very understandably so. With the exoskeleton on, they actually returned to their pre-fatigue performance level, or very, very close to it. I think it was off by, like, one or two percent. And so, at least in performance, we were able to mitigate the effect of fatigue. But we also looked at the postural changes. The amount of stooping, in particular, is of interest. And we were able to show that they reduced the amount of stooping with the exoskeleton compared to no exoskeleton after fatigue. Now, that doesn't guarantee that they wouldn't injure themselves, but it's one of those things that suggests that the lifting posture is a safer, more proper form.
Okay, so now, wrapping up, I want to talk about some of our ongoing work that's hot off the press. We just got a paper accepted at ICORR related to reducing pain in knee osteoarthritis. And so hopefully I'll see all of you at ICORR in Chicago. We did a four-participant pilot study where these individuals had chronic knee osteoarthritis, but a specific form of it called patellofemoral knee osteoarthritis. So the knee has three compartments; the patellofemoral compartment is essentially the kneecap area, and existing OA braces don't really address pain in that compartment of the joint. That compartment's pain is exacerbated by quadriceps force, because the quadriceps tendon is pulling across, spanning that kneecap. And so traditional OA braces can only do mediolateral adjustments to deal with the medial and lateral compartments, but they don't help with patellofemoral pain. And so here we targeted patients who have diagnosed patellofemoral osteoarthritis. They may have additional compartments inflamed as well, but they at least had patellofemoral OA, and it's an unaddressed population. And so you can see here that over all these activities, the primary activities of daily life, we were able to consistently reduce pain, reported as the perceived pain of the user, with the exoskeleton compared to no exoskeleton. And the hypothesis here is that we're reducing that quadriceps force that loads the patellofemoral compartment. We were also able to show through inverse dynamics that we reduce the joint moment at the knee as well, so that helps explain why the pain reduction is happening. And then we also extended this to the hip with a pilot study of three participants, where a very similar effect was observed: pain was consistently reduced across activities with the exoskeleton. And we have a similar hypothesis. I mean, the hip is, of course, different than the knee. The sources of pain are a little different, and the hip is a ball-and-socket joint, so it's very complex.
But here, we're just providing flexion-extension assistance. So we're targeting just one degree of freedom of the hip. But that alone was enough to reduce pain considerably for our participants. So I'm excited about sharing those results at ICORR. And then we also have, on my last slide, an ongoing clinical trial on mitigating age-related weakness. This is also known as sarcopenia or frailty, where this individual in particular is unable to do a step-over-step stair ascent gait, which is the more natural gait, because of quadriceps weakness. So he has to do a step-to-step gait. You'll see this kind of, you know, shuffling up the stairs. And this is his first try with the knee exoskeleton, and suddenly he's able to do the step-over-step. So that was a very cool moment for him. He was excited about it because he hasn't been able to do that for many years. So that's an ongoing study; hopefully we'll have some papers on that soon. And so, yeah, in conclusion, I just want to thank you for your time. And if you have any interest in seeing this work in more detail, we have a YouTube channel as well as the lab website. I just want to thank, of course, the students and postdocs who made this possible, and I'm happy to answer any questions. Okay, I'm gonna be running around with this little microphone. This is what we have today. So if anybody has any questions, raise your hand. I see Veron, then I'll come over here. Oh, thank you for the talk, by the way. When you collect data from the human participants and use that to train the trajectories of the prosthetic, how do you normalize for, like, the able-bodied participants having different body geometries? Yeah. Okay, so with impedance, we can normalize the impedance parameters by body mass. Torque is sensitive to, like, body mass, for example. But the kinematics, like the joint angle trajectories, those are not sensitive to body mass.
I mean, of course, we all have different kinematic patterns, but you cannot regress that as a function of body mass or height or other geometric properties that I can measure. So you just stick with the average joint trajectories, and it's a good starting point. It's not optimal for anyone, right? But it's a good starting point. We do have a clinical tuning interface that lets us, like, drag points on the trajectory and change things that way. The clinical interface also allows clinicians to increase stiffness at heel strike and features like that. So we have a way of tuning it, but at the moment, we don't have a good way of making it subject-specific just through their geometric properties. Okay, next question. Thanks for sharing your research today. I just wanted to ask about the, um, what do you call it, the lifting test you did with the exoskeleton. Would you see any benefit in using, like, a placebo group where you tell them that the controller is active, but it's not, to, like, separate, you know, the physical and mental aspect? Yeah, yeah, improvement in exercise. Yeah, it's a great question, and I will freely admit, like, this is not a perfect experiment. It is really hard to, like, control for all these things, because people know when they're wearing an exoskeleton. To answer your question directly, they can tell when the exoskeleton's doing something. There's no hiding when there's a torque applied to you. So we thought about that. Yeah. And you might also wonder, why didn't we, like, have them fatigue without the exoskeleton and then put the exoskeleton on, so that the fatiguing process was the same between the two conditions? The reason for that is that it takes like two or three minutes to don the thing. And by then, the acute fatigue has passed. Then, of course, they may have longer-term fatigue, right? But it was a really hard experiment to design, yeah.
And yeah, there are potential sources of bias due to their knowledge of what's happening. That's true. Good question. Another one. Professor, I just had a question: you talked about how lifting posture deteriorates as people fatigue when lifting. Did you mandate a certain lifting posture when they started? Yeah, yeah, we talked to them about the difference between a squat lift and a stoop lift. We told them, maintain a proper squat lift with your, you know, back as vertical as possible. And so, yeah, we did. Oh, one thing I forgot to mention is that every participant in this study had prior experience with squat lifting in the gym, for example. They weren't necessarily warehouse workers, but they were people that had done squats in the gym. They were familiar. That was our best way of getting closer to the population that matters for this particular use case. Gotcha. Can I ask a quick follow-up question? So when you assist at the knee, do you think, if you didn't cue a person to have a different lifting posture, that assistance profile would change how a person by default wants to lift, using their knees versus using their back? So you're talking about the assistance profile from the exoskeleton? Yeah, let's say they're wearing an exoskeleton. Do you think that'll change their lifting form, because they have a different kind of energy expenditure at the different joints? Well, it did change their lifting form in fatigue. But if I go back here, let's see. Did it change them before fatigue? Uh, kind of hard to say, right? So, this negative region of the trial is pre-fatigue, or before they declared fatigue. Zero is when they declared fatigue. I probably should have explained that. Anyway, this is before they declared fatigue. And there's not a lot of difference in the amount of stooping between the two conditions. Okay, yeah. I guess I don't have an answer to your question about the exoskeleton.
Yeah. Thank you. I saw this one next. Thank you. All right. My question was about the ICORR paper with the knee exo; it might be a bit quicker, but were there any modifications to the controller between the lifting/lowering/carrying study and reducing pain in this case? Yeah, so we did modify the lifting-lowering functionality to be better for sitting and standing. They actually are very similar; the range of motion of the knee is similar for lifting-lowering versus a squat lift versus a sit-stand. So we did make some modifications for that. And there might have been one other modification that is eluding me. I'd say these are minor modifications; I would not call that the contribution of the paper. But what was nice is that the controller was customizable to this population, which I think worked to our benefit. Okay, one more question. I'm sorry if I missed anyone. This might be coming from unfamiliarity with knee osteoarthritis, but is it usually bilateral, in the sense that were you giving assistance to the same extent, and did they have pain to the same extent? Do you know if that would affect things? Right, so yeah, some patients have bilateral OA and some don't. And what we did is we used a bilateral exoskeleton for all patients, because the controller, at the moment, involves bilateral sensing. I do have a student working on a unilateral version of that. But for this study, it was bilateral for all patients, although some only have unilateral OA pain. Does that answer your question? Yes. Oh, okay. Sure. Yeah, yeah. So you're saying maybe an exoskeleton can help compensate for the overuse of the sound or intact joints, right? It's a really interesting point, yeah. If you use an exoskeleton for that, you could, in theory, have the prosthesis communicate with the exoskeleton, and then you can do some really cool stuff. That's interesting. Thank you, everyone. Let's thank the speaker.
If you have more questions, maybe just ask him after, but thank you.