Okay, cool. So my goal today is really to try to give you an overview of what's going on at Georgia Tech Lorraine, and basically to make sure Georgia Tech Lorraine is also known here. There's some cool stuff. We're more focused on field robotics, and that's what I'm going to present, along with some work related to environmental science. So let's get started.

I'll start with some context. By training I'm a software engineer, and as was said in the introduction, I don't have to say much more there. Mostly I work on different field applications, trying to get robots to do stuff. We're quite applied, so we have easy links with GTRI, for instance, because we share this focus on getting things applied.

What I want to do here is let you know what's there at Georgia Tech Lorraine, because one of the side effects might be that some of you want to go and spend some time over there. I'll show you some of the hardware we have, some of the infrastructure, some of the people, and then I'll spend some time on the research I'm doing and supervising there: a lot on natural-environment mapping, some on automation for environmental science, and a bit on robotics as well.

So, let's get started with some hardware. We have a few robots, with a focus on going outdoors. The bottom ones are our two main research tools: the little boat, and the Husky on the right. On top we also have an arm, but you have thousands of them here, and we have plenty of TurtleBots that we use mostly for teaching. The thing I put on the top right is a laser tracking system. I put it there because, for some time now, we've had all the tools you need to run benchmarks in outdoor settings: ground truth for 3D localization, ground truth for 3D mapping. It gets you points over a kilometer, so it's really, really nice. And the thing on the right, which I'll talk about in a minute, is a robot that drives on walls with magnetic wheels, used mostly for inspection. We have a tiny testing ground at the university, but the goal is to go for large tankers and storage tanks. I'll discuss that in a minute.

So, quite a bit of hardware. For localization, just know that we have the equipment. We also have GPUs; actually, GPUs are much easier to use over there than here, from what I've heard. When we want to reserve them, we just write our name into a spreadsheet, and that's it. A good argument, I would say. Right now we have around nine PhD students, though that number is changing, and to give you an idea of the state of the group, we run 10 to 20 semester projects per year, plus master's theses. Okay, that gives you an idea of the scale.

That was all for the context; now let's talk about research activities. I'll start with some inspection robotics, beginning with BugWright2. BugWright2 is a project on ship hull inspection. It's one of these big EU projects; I don't know exactly what the equivalent is in the NSF landscape. It covers 21 partners across Europe, around 9 million euros, and we are coordinating it. The goal is to build tools that can work towards the inspection of large ships. In this type of project we try to get the full value chain: we have some ship owners, some legal experts, some people building robots, some people bringing services, the whole chain.
I wanted to show these pictures because I find them quite interesting. What you see on top is a big tanker being inspected and cleaned. For scale, the little dot here is a human. Let's see, the mouse might be easier: there's a human there, just to give you an idea of the scale. This is scaffolding, five-story scaffolding, and there's another human there, again just to give you the size of these things. They need to be inspected regularly, and obviously dry-docking these guys is a huge cost; if it could be done without dry-docking them, it would be much cheaper. So we're looking at bringing in robots: on the hull, flying around the hull for the aerial part, and underwater systems. So, 21 partners, 11 universities across Europe, trying to work on that. COVID didn't really help, but okay.

Here are just a few tests we did when we first started, to give you an idea of the problems. We have this robot here driving on the wall. It's a bit weird, but the magnetic wheels are just big cylindrical magnets, and we run 3D mapping tasks. You can see it's a pretty featureless environment, but there are things to do. Obstacle avoidance is interesting: if you want to look at the math side, it's a 2D manifold embedded in 3D, so you can do some nice localization and planning tasks on that. The mapping is mostly with lidar because it's so featureless, but there are some interesting things you can do in there. So those were the first tests. The real challenge is how to localize in there, how to control, how to plan, because you really have to account for the fact that it's a big, smooth surface. And that's just for the crawler, the magnetic-wheeled robot, which is the part we're doing.

At the global scale, the problem is that we have this heterogeneous team of robots: UAVs, AUVs, crawlers. How do we coordinate all of these teams, even with multiple robots of each type? How do we distribute them across this big surface? There's also the question of how we bring in the operator with immersive interfaces. We do something quite fun on this aspect: we bring in psychologists to help us design the HRI, trying to take into account how to create trust in the system. Because at some point someone will have to sign off that this ship is seaworthy, and when you sign that, you're taking on responsibility, so you have to trust that the measurement has been done. The regulation says that all the measurements have to be done at hand's reach; that's the definition in the rules. So we also work with legal experts and try to work at the UN level to change the policy. It takes time, it's a very slow process, but we have some experts from the International Maritime Organization, so we can get access to the UN level. And there are also the inspection technologies, which I'll discuss a little bit: how to work with acoustic waves. There are some fun things to do there.

Some other examples of what we did: that's the robot on the left here, on a storage tank, 20 meters high, 40 meters in diameter. It's not as big as a tanker, but it's significant, and it's quite featureless. One of the things we'd like to try there is to use photogrammetry and some albedo-type mapping, which may actually be useful.
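Since localization and planning on the hull both happen on a 2D manifold embedded in 3D, here is a minimal sketch of the basic operation they share: advancing a crawler pose by projecting a planar step back onto a curved surface. The analytic height field is a stand-in I made up for illustration; on a real hull the surface would come from the lidar map.

```python
import numpy as np

# Hypothetical local hull model: a smooth height field z = f(x, y).
# On a real hull this would be interpolated from the lidar map, not a formula.
def hull_height(x, y):
    return 0.05 * np.sin(0.5 * x) + 0.03 * np.cos(0.4 * y)

def surface_normal(x, y, eps=1e-4):
    """Normal of the height field, via finite differences."""
    dzdx = (hull_height(x + eps, y) - hull_height(x - eps, y)) / (2 * eps)
    dzdy = (hull_height(x, y + eps) - hull_height(x, y - eps)) / (2 * eps)
    n = np.array([-dzdx, -dzdy, 1.0])
    return n / np.linalg.norm(n)

def step_on_manifold(pos, heading_2d, step=0.1):
    """Advance a crawler pose by projecting a planar step back onto the surface."""
    x = pos[0] + step * heading_2d[0]
    y = pos[1] + step * heading_2d[1]
    return np.array([x, y, hull_height(x, y)]), surface_normal(x, y)

pose = np.array([0.0, 0.0, hull_height(0.0, 0.0)])
for _ in range(5):
    pose, normal = step_on_manifold(pose, heading_2d=(1.0, 0.0))
    print(pose.round(3), normal.round(3))
```

The point is that a position on the surface plus the local normal is enough to keep a wall-climbing robot's planner working in the tangent plane.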
And what you see on the right is actually a video, one of the most boring inspection videos. When these systems are working, they're boring, because nothing happens; everything works. You have a robot that makes a straight line, okay? That's all. But behind that, there is a crawler localization system which is constrained by the manifold, a bit of control that takes into account the shape of the environment, and so on.

Okay, I'm going quickly through all that. You'll see, I'll jump from topic to topic; I hope you don't find that too confusing. At the end, we can take questions on specific topics if you want.

Let's move to something related to reinforcement learning, and something called Dreamer. You may have heard of Dreamer; I think it comes from Google. It's pretty cool, but it's mostly papers in simulation, so we tried to see how to get Dreamer to work in a real environment. We have papers on that; I'm not going to go into the details, partly because I'm not sure I'd do them justice, since it's a PhD student's work, but I want to show some of the ideas.

We tried to apply it on the boat. We basically have a GPS and a compute board on board, and we look at the task of shore following, which looks like reactive navigation. With such a system, the dynamics are tricky: it's got a lot of inertia, and the water is very non-linear. So we need a way to synthesize a controller that is robust to different conditions. What you don't really see on the right, it's actually not that clear, is that in this situation the lake is frozen. There's a thin layer of ice, and we still want the system to work. The ice is thin enough that we can actually drive straight through and crush it, but it changes the dynamics when we try to turn, so we want to build a robust controller. And that's where reinforcement learning, specifically Dreamer, comes in.

In this case, we have one paper on how to build a curriculum to get the system to learn, by creating more and more complex situations, and also by attacking the dynamics: creating more and more complex dynamics so that Dreamer can get better and better. We also do zero-shot transfer, which means that we learn everything in simulation and then deploy on the real system; training on the real system wouldn't really work. Dreamer, because it's so sample-efficient, is actually quite a good tool for this. It's still a pipeline, but it's around one day of training. The question is how to get over the sim-to-real barrier, and that's quite challenging. I'm not going to go into the details of how Dreamer works; either you know it or you don't, and if you want to know more, look at the papers. What I want to show are some of the results.

This is one of the lakes on which we're testing: a kilometer of lake shore, all driven by Dreamer. By now we've got kilometers of going around there, so it's pretty stable. One of the challenges is that if you don't have a good controller, you can't do the tight turn on the left there. You really need to anticipate it; it's a bit like drifting in a car, but in much, much slower motion.

The interesting thing is that we trained Dreamer purely in simulation, with a model which is what it is, in Gazebo. It's not a great simulation, especially for hydrodynamics, but we tried to randomize the conditions: we make the water choppy, we add wind, we break one of the motors, and we make the robot start at very weird places.
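To make that concrete, here is a minimal sketch of per-episode domain randomization with a difficulty knob driving the curriculum. All parameter names and ranges are hypothetical, not the ones from the actual papers.

```python
import random
from dataclasses import dataclass

@dataclass
class EpisodeConditions:
    wave_height: float       # m, a rough proxy for how choppy the water is
    wind_speed: float        # m/s
    current: float           # m/s of along-shore drift
    left_motor_gain: float   # 1.0 = healthy, lower = degraded motor
    right_motor_gain: float
    spawn_offset: float      # m away from the nominal shore distance

def sample_conditions(difficulty: float) -> EpisodeConditions:
    """Sample one episode's dynamics; difficulty in [0, 1] drives the curriculum."""
    d = difficulty
    fail = random.random() < 0.2 * d        # occasionally break one motor
    left = random.random() < 0.5
    return EpisodeConditions(
        wave_height=random.uniform(0.0, 0.4 * d),
        wind_speed=random.uniform(0.0, 8.0 * d),
        current=random.uniform(-0.3 * d, 0.3 * d),
        left_motor_gain=0.3 if (fail and left) else 1.0,
        right_motor_gain=0.3 if (fail and not left) else 1.0,
        spawn_offset=random.uniform(-5.0, 5.0) * d,
    )

# Curriculum: ramp difficulty up as training proceeds.
for episode in range(10):
    print(episode, sample_conditions(difficulty=episode / 9))
```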
Let me go through these three graphs, which are measured on the real system. The task is to follow the shore at one meter per second, at ten meters from the shore. In green is the ideal deployment: you train Dreamer, deploy it on the system, and let it run. You can see that the task, even though it was learned in simulation, is done very well. When we add the ice, the distance control is still doing fairly well. When we remove any notion of velocity, it becomes a bit harder: the system tends to overshoot the distance and the velocity, but it's doing okay. In terms of velocity, clearly it becomes harder when you have ice; it slows down. When we don't tell the system its velocity, it tends to overshoot and not drive at full speed, but it has some knowledge of how to cope. What's interesting is that we get a controller which is much more efficient in power. That's a complete side effect, not something we planned at all, but it's much better than any of the other controllers we have in terms of power. I'm just giving you the broad picture here; if someone's interested, we can discuss more, or I can refer you to the student's work.

Some of the things we still need to do are to add some kind of safety layer and more interaction with other agents. The swans, especially, are pretty good at attacking the boat. They're very smart; they're not attacking it physically. We've been doing tests very regularly, and they've understood that this thing is following the shore. They've understood that if they approach the boat from the outside, the boat will ignore them, but if they approach it from the inside, the boat will actually try to avoid them. So some of the swans have learned to come inside, peel the boat off the shore, take it to the middle of the lake, and then fly away. That's animal learning versus machine learning.

The other thing I wanted to show is a bit of Isaac. I don't know who here has worked with Isaac; it's amazing if you've worked with Gazebo before. In Gazebo, we put in a few trees and everything slows down. With Isaac, we put in 10,000 assets and everything flows at hundreds of FPS, and it's beautiful. So now we're moving a lot of our simulation to Isaac: much faster simulation, a lot more agents. There's really cool stuff there, and it's linked to ROS. What you see on the right is the robot's perception, directly out of the simulator. It didn't work right out of the box; you need to interact a bit on the forum with the designers, but it's really, really good, and really, really fast.

Okay, moving on: a little bit about exploration, related to RL as well, again working with natural environments. How do we map, and how do we explore, in a natural environment? On the left we have the Husky moving around between the trees; I think it's a forest. On the right is Gazebo, in this case with a kind of wasteland of dead trees. From a perception point of view, it's a very similar environment: what you get from the lidar, a 16-beam lidar in this case, is very similar.
So the question is: how do we use this information to build a map and to drive an exploration policy? What is a good-quality map in this type of environment? If you look at all the papers, they talk about maps mostly as surfaces; they assume everything is dense. When the laser is hitting something like a wall, you're looking at a line. Trees are different. I like to call trees semi-transparent fractal structures, but let's just say they're hard: they're really difficult to work with with a lidar. Sometimes the light goes through, sometimes it doesn't, and you get a very sparse map in the end.

So we tried to see how to explore this. First, how do we quantify the quality of the reconstruction? It looks like a trivial problem: once you've got a map and you have the ground truth, how do you say whether it's a good reconstruction or a bad one? It's not that easy. We looked at different solutions and ended up with something called the Wasserstein distance, also known as the earth mover's distance. It's a way to get the distance between two distributions. We also tried to show that we can create proxies that can be used to predict that the reconstruction will be good. So first, how do you measure that the reconstruction is good; and second, can you get something that can be measured in real time, without ground truth, that tells you: normally my distance metric should be good now, I have enough measurements. In the end, the goal is to link that with an exploration policy, but this is ongoing.

The results are somewhat intuitive, but we can prove, in this case, that if you get a good spherical variance of the viewpoints and enough points, then there is a threshold above which you can somewhat guarantee that the reconstruction will be good, and we have a way to estimate that threshold. Somewhat intuitive, but it's nice to be able to prove it, and to get some statistical testing that shows what the right threshold is, above which we can predict that it's a good-quality reconstruction, even though there's no way to get ground truth in a natural environment. Even with a fancy lidar like the one I showed at the beginning, you always have occlusions; it's super hard to get ground truth.
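For the map-quality metric, here is a minimal sketch of an earth mover's distance between two point clouds, using optimal assignment for the equal-size, equal-weight case. A real pipeline would subsample the clouds or use a dedicated optimal-transport solver; this is just to show the idea.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def emd_point_clouds(A, B):
    """Earth mover's (Wasserstein-1) distance between two equal-size,
    equal-weight point sets, via optimal assignment."""
    # Pairwise Euclidean distances between every point of A and every point of B.
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)   # optimal matching
    return cost[rows, cols].mean()

rng = np.random.default_rng(0)
ground_truth = rng.uniform(0, 10, size=(200, 3))
good_map = ground_truth + rng.normal(0, 0.05, size=(200, 3))  # small noise
bad_map = ground_truth + rng.normal(0, 1.0, size=(200, 3))    # very noisy

print(emd_point_clouds(ground_truth, good_map))  # small distance
print(emd_point_clouds(ground_truth, bad_map))   # much larger distance
```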
Just one last thing on robotics; this is just starting, so I don't have results yet, only ideas. We're starting to look at GANs for generating plans, GANs for planning. I don't know where that goes; it's a collaboration with German colleagues that is just beginning. The goal is to link GANs for planning with scene understanding, and to actually do disassembly tasks on things that all look the same. In particular, here we look at servo motors. They're all somewhat similar, but they're all somewhat different. At this stage you can't take them apart automatically, so you just have to crush them, which is recycling, but not a good way to do it. If we could find ways to generalize, and that's the point of the GAN, to create a plan for the disassembly, then maybe there's a chance to recycle them in a better way. It's the kind of dream you sell when you write a grant; we'll see where we get. Nobody really checks that we do exactly what we said, but at some point you have to sell some dream. Anyway, this is just starting.

Now I'll move on a bit to mapping and SLAM, and start with something which is very weird. It's a topic where we have nearly all the published papers, because we're almost the only ones doing it; there's another lab at USC which does something vaguely similar. It's mapping in metal plates. So what do we have? We have a robot carrying an acoustic transducer. It's like hitting a bell, but you hit it at 100 kilohertz. You get this kind of signal, where the waves that are a bit stronger correspond to echoes from the sides of the plate. What I'm showing here looks like waves in water, but these waves are about ten nanometers high and they move at roughly 3,000 meters per second. So part of this is a bit like mapping a pond from the waves in the water.

And we can actually run something like FastSLAM on it. What you see on the left, sorry, I need to play that again, is that we are reconstructing the shape of a plate. What you see here are edges: we're moving on the plate, detecting echoes, and reconstructing the trajectory. What you see here is a kind of beamforming map that you can get by mixing all these ultrasonic signals. It's a bit weird and random; no, there's not really an application for this exact thing, but it's interesting: just by moving around, we can get some estimate of the shape of the plate. And that's on a real robot, on a test plate we were working with in the lab. What you see here are the real signals; some of those are echoes, the waves actually bouncing back and forth in the plate, very noisy. And what you see here is the reconstruction of the map, with an occupancy grid that tells us where the inside is, all in a FastSLAM framework. The beamforming map is visible there; at the beginning it's very, very noisy, but it gets better as the robot moves around, and it quite quickly converges to a fairly good estimate of the map. Here it's driven by hand, because the challenge is that the echoes tell you where the inside is, but not what is outside. When you see an echo, you know that the border is probably at most that close, but you may have missed the first echoes, so it's hard to tell.

There are a few things that make this difficult. The first is that the acoustic wave propagation is very complicated. In air, sound moves at the same speed independent of pitch. In metal, if you go to higher frequencies, the wave typically travels faster. There are models that can be tuned; it depends on the thickness and on the material. So what we have here is a bit of machine learning that tries to optimize the propagation model as the SLAM runs, and gets a better estimate of the reconstruction by estimating the propagation speed for a given frequency, without knowing the material and without knowing the thickness.
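As a toy version of that idea: if low-frequency flexural waves in a thin plate disperse roughly as c(f) = k * sqrt(f), a common thin-plate approximation, then you can fit the single unknown k from echo arrival times to an edge at a known distance, without knowing the material or the thickness. The numbers below are made up to be self-consistent.

```python
import numpy as np

# Toy dispersion data: per-frequency group speeds inferred from the echo
# time-of-flight to an edge at a known distance (speed = 2 * d / t).
freqs_khz = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
distance_m = 0.5                                    # known edge distance
echo_times_s = np.array([5.39e-4, 4.41e-4, 3.82e-4, 3.41e-4, 3.12e-4])
speeds = 2 * distance_m / echo_times_s              # out-and-back path

# Fit c(f) = k * sqrt(f) by least squares: k = sum(c * sqrt(f)) / sum(f).
k = np.sum(speeds * np.sqrt(freqs_khz)) / np.sum(freqs_khz)
print("fitted k:", round(k, 1))
for f in (50.0, 90.0):
    # Predict the speed at frequencies we never measured.
    print(f, "kHz ->", round(k * np.sqrt(f), 1), "m/s")
```

In the real system the same idea runs inside the SLAM loop, with the edge distances themselves being estimated at the same time.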
What you see at the bottom is what happens when you have a tiny reflector on the plate. In blue you have a defect-free signal; in red you have the same signal, but with the echoes from the defect. You don't really see them, because they're very similar; there are only tiny variations due to the reflector. If you build the SLAM map, you get what you have here, an occupancy map, let's say. But once you know where the edges are and you can estimate the geometry, you can remove that from the signal, and what you see on the right is basically an estimate of where the reflector is. The difference is really tiny, and these are simulation data, so it's probably feasible. We haven't drilled a plate yet, because every time you change the setup you need to drill again, so it's a bit painful. I guess nobody has seen this type of stuff before, but there's quite a lot of interesting work in there. Inspection is the main application, but we also have a Facebook engineer working with us on this, because you can have the same problem when you try to map a room to cancel the echoes from the room, to do noise cancellation. Here we have a moving sensor, so it's a SLAM problem; if you don't have a moving sensor, it becomes a bit harder to estimate the geometry of the room.

Now a bit more image processing, still around natural environments. We have this lake I showed you earlier. One thing that helps us is that for three or four years we've been going around the same lake every two weeks, collecting images. We have around 6 million images, collected every two weeks in the same environment: with fog, snow, very strong sun, very low sun, and all that. From the 16,000 image pairs that we've collected, we've made an image localization benchmark, with the challenge of trying to identify which images show the same place and which don't. If you want to train a network to recognize a place across seasons, this is something you can use.

Just to give you a few examples: these are the same place, so it's fairly difficult. You have reflections in the water, changes of foliage, slight changes of viewpoint. Sometimes the water moves by one meter in height, on top of some viewpoint changes. Very challenging. But one thing we found is that semantic segmentation is fairly robust to most of these changes, and what's interesting is that most of the contours of the semantic segmentation stay the same independently of the season. So we looked at semantic edges and built on that. One thing which is interesting: the hardest problem we have all the time is sun glare. It's very painful, because when you move, it stays at the same place in the image, and it's very hard to handle. Maybe you could just train a network to remove it, or a GAN to inpaint it; that would be interesting. In the end, we got some results on place recognition based on combining the geometry of the contour profiles, the semantic edges, with some elements of the appearance.

With monocular vision it's very hard, so we went to 3D. 3D is better; it's easier. We built a little 3D mapping backpack. Basically, the goal is to map these complicated environments. If you want to build a robot that can move in there, either you buy one of these quadrupeds, but they're expensive, or you find a few sandwiches and a PhD student and a backpack. Way more efficient. So on the backpack we have a lidar with a half-sphere field of view and 50-meter range, which gives a lot of points in 3D, and three cameras. We have very good extrinsics, because we 3D printed a lot of the mounting, so we can project the image pixels onto the point cloud and get a colored point cloud; and if we run the images through semantic segmentation, we can project the labels onto the point cloud as well. In the end we can build a 3D semantic point cloud.
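A minimal sketch of that projection step, assuming a standard pinhole model with known intrinsics K and a lidar-to-camera extrinsic transform. The same function colorizes the cloud with RGB or with per-pixel semantic labels, depending on which image you pass in.

```python
import numpy as np

def colorize_points(points_lidar, image, T_cam_lidar, K):
    """Project lidar points into a camera image and attach pixel values.

    points_lidar: (N, 3) points in the lidar frame
    image:        (H, W, 3) RGB image, or a (H, W, C) label image
    T_cam_lidar:  (4, 4) extrinsic transform, lidar frame -> camera frame
    K:            (3, 3) camera intrinsics
    """
    h, w = image.shape[:2]
    # Homogeneous transform into the camera frame.
    pts = np.c_[points_lidar, np.ones(len(points_lidar))] @ T_cam_lidar.T
    pts = pts[:, :3]
    in_front = pts[:, 2] > 0.1                    # keep points ahead of the camera
    uvw = pts[in_front] @ K.T                     # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    # Attach the pixel value under each projected point.
    return np.c_[pts[in_front][valid], image[uv[valid, 1], uv[valid, 0]]]
```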
It's all linked with localization from the IMU. What you see here is the accumulated point cloud over time, and we have a GPS and INS system on top of it. Just the raw INS data gives us this kind of precision: maybe half a meter of precision at the scale of one kilometer. So we can get a nice 3D reconstruction; we add a pose graph and ICP, and we get a nice 3D reconstruction.

And then we go multi-season. For one year, every month, we went through the same environment, on the same path, collecting the data, starting from January with all the snow, and then every month after that. So we have all of this, and if someone's interested in this kind of dataset for 3D reconstruction, place recognition, season-invariant features, that kind of thing, the dataset is, or will be, available. It's a few terabytes, but we can share it easily. There are also some human-made changes; some construction happened in a few places, so if you want to work on change detection, that's in there as well. Okay, that was the mapping part.

Now I'll finish with some work on applying computer vision and perception to environmental science, with two main topics and then some miscellaneous items. The first topic is diatoms. Diatoms are small microorganisms; you actually drink some, since even in pure water you always have a few of them. They are super tiny; you can't really get rid of all of them, and it doesn't matter, because they're completely harmless. They're small algae that you see with a microscope, and these are microscope images. There are many, many different species, called taxa, and they're all very similar. But it's important to identify them and count them, because that's part of the indices that define the quality of the environment.

We worked on the detection pipeline first, which is mostly a typical machine learning detection problem, and then a bit of classification. On the detection side, it's painful to label, so we work with synthetic images: we have individual samples that have been pre-segmented, and we create artificial slides and build on that. As you would expect, though it's actually interesting from a biology point of view: if you just train on synthetic data and deploy, you get decent results. If you just train from scratch with only a few real images, you get results that are okay. But if you do one step after the other, pre-training on synthetic data and then fine-tuning, then, no surprise, you get very good detection results. We've recently been moving to YOLOv5; if some of you have been using it for rotated bounding boxes, it doesn't work out of the box, and we're trying different solutions for rotated boxes. And that's really a life changer. It's very important for us, because the biologists want the size of the individuals, and an axis-aligned bounding box is not that useful for size. So the rotated boxes were really nice. Again, pre-training the detector on the synthetic dataset helps a lot.
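Here is a minimal sketch of the synthetic-slide idea: paste pre-segmented diatom crops at random positions and rotations onto a clean background, and emit one rotated-box label per individual. It's simplified (the box uses the crop canvas rather than the tight mask, and corners can clip on rotation); a real generator would be more careful.

```python
import random
import numpy as np

def make_synthetic_slide(crops, size=(1024, 1024), n=30):
    """Build one synthetic microscope slide with rotated-box labels.

    crops: list of (image, mask) pairs, each a small grayscale diatom cutout.
    Returns the slide and one (cx, cy, w, h, angle_deg, class_id) label
    per pasted individual.
    """
    import cv2  # OpenCV, used here for rotation

    slide = np.full(size, 230, dtype=np.uint8)      # bright, clean background
    labels = []
    for _ in range(n):
        cls = random.randrange(len(crops))
        img, mask = crops[cls]
        angle = random.uniform(0, 180)
        h0, w0 = img.shape[:2]
        M = cv2.getRotationMatrix2D((w0 / 2, h0 / 2), angle, 1.0)
        rot = cv2.warpAffine(img, M, (w0, h0))
        rmask = cv2.warpAffine(mask, M, (w0, h0))
        cx = random.randint(w0, size[1] - w0)       # keep the paste inside the slide
        cy = random.randint(h0, size[0] - h0)
        roi = slide[cy - h0 // 2: cy - h0 // 2 + h0,
                    cx - w0 // 2: cx - w0 // 2 + w0]
        np.copyto(roi, rot, where=rmask > 0)        # paste through the mask
        labels.append((cx, cy, w0, h0, angle, cls))
    return slide, labels
```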
Another thing, for classification, and this one is fun. It's a problem you don't see very much in most of the standard datasets: the diatoms are very similar and, at the same time, very different. What you see on the left are three different species; the number of lines on top is a bit different, the tapering at the end is a bit different, but they are completely different species. What you see on the right is the same species, seen from different points of view. So you get huge inter-class similarity and huge intra-class variance.

So what did we do? For the inter-class similarity, that's what you use a triplet loss for; that's common. What we did for the intra-class variance is to introduce clustering into the learning. You pre-train, then cluster the features from your network for a given class, with x-means for instance; you identify whether you can separate them, create some virtual classes, retrain, and iterate. In the end, we don't get a huge boost in overall accuracy, but we gain a lot on one metric: out of the 160 species, there are maybe 20 more that we can classify without any errors. For us that's a good improvement; it's one of the metrics that matters for our detection. So in this specific setting, with this huge inter-class similarity and intra-class variance, this was actually a fairly easy way to get good classification. It's also a very unbalanced dataset, but if you work with real data, that happens a lot.
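A minimal sketch of one round of that virtual-class trick. The talk uses x-means to choose the number of clusters per class; for brevity this version uses plain k-means with a fixed k.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_into_virtual_classes(features, labels, k=2, min_size=30):
    """One round of the virtual-class trick: cluster each class's embedding
    (from the triplet-loss network) and relabel well-populated clusters as
    new, virtual classes."""
    new_labels = labels.copy()
    next_id = labels.max() + 1
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if len(idx) < 2 * min_size:
            continue                         # too few samples to split
        km = KMeans(n_clusters=k, n_init=10).fit(features[idx])
        for cluster in range(1, k):          # cluster 0 keeps the original label
            members = idx[km.labels_ == cluster]
            if len(members) >= min_size:
                new_labels[members] = next_id
                next_id += 1
    return new_labels

# Then retrain the network on new_labels, re-embed, and iterate; at evaluation
# time, virtual classes are mapped back to their parent species.
```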
Another topic: tree trunks. You've seen some of these images of 3D reconstruction and wood. Here we work with cut wood: we try to predict the interior of wood products, to guess what's inside the wood from the shape of the outside. If you own a sawmill, you can buy a five-million-euro X-ray machine for trees: the trees go through, you get a kind of map of the interior, and then you decide how to cut. Every piece you can cut without knots is worth ten times a piece you cut with knots, but if you wait until after you've cut to decide, you've lost the value. So you need some kind of cutting plan. What we try to do is to see whether we can predict the interior from the outside appearance, from the geometry.

There are different ways to do that. One way is to do it in the field, with 3D reconstruction. In this case we use this gimbal-stabilized camera, the small Osmo; it's 400 euros, or dollars, I don't know. Very cool, very good images, and because it's gimbal-stabilized, you can easily process them. Then a bit of machine learning extracts the foreground and the bark, and you get a big dataset of images just looking at a tree, from which you can reconstruct; this is a 3D model of a tree. Pretty neat, very high precision. One thing I'm not discussing here: we trained a GAN to generate artificial trees, and they look really cool. We went to forestry experts and asked them to distinguish between the synthetic trees and the real trees, and they couldn't tell the difference. So that may be something for asset generation in games. I'm going very quickly here.

Let me just pause this for a second. What you see on the left is the 3D structure of a piece of trunk. You can see a little bit where there are bumps, where there are scars from old branches; this is a scar, for instance. Pine trees have a fairly regular structure, and we try to exploit that. What you see on the right, in red, is the X-ray of the interior of the tree. The tree goes through the X-ray and you get the reflectance; in white, at a given position, is the border of the tree, the geometry of the outer layer. If you integrate the white surface along the tree, you get the video on the left. In yellow are the predicted knots, while the real knots are given in 3D; it's transparent, so you can see the knots below. We go along the length of the tree, so basically you see it going through the X-ray, and you can see where there are predictions. We also use Monte Carlo dropout to get some uncertainty estimation. So what you see here is that we are predicting where the knots are, just from the geometry. At least for pine trees, it looks feasible. On the right here, in green, I think, are the real knots, and in red the predicted ones. The offset is mostly due to the alignment of the LSTM that's doing this; it's just the visualization, we left the offset in, but there is a slight offset due to the LSTM itself. So for pine trees, okay; now we're trying to go for deciduous trees, and it's much harder. Beech is much harder to predict, because it's not so structured inside. That's the machine-learning side of the wood work.
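Since Monte Carlo dropout came up, here is a minimal sketch of how it's typically done in PyTorch: keep only the dropout layers stochastic at test time, and read the spread across several forward passes as an uncertainty estimate. The knot-network names in the usage comment are hypothetical.

```python
import torch

def enable_dropout(model):
    """Switch only the dropout layers back to train mode."""
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

def mc_dropout_predict(model, x, n_samples=30):
    """Monte Carlo dropout: average several stochastic forward passes to get
    a mean prediction and an uncertainty estimate (std across passes)."""
    model.eval()                 # batch norm etc. stay deterministic
    enable_dropout(model)        # dropout stays stochastic
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

# Hypothetical usage on a knot-prediction network:
# mean_knots, knot_uncertainty = mc_dropout_predict(knot_net, trunk_geometry)
```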
Now I'll finish with my miscellaneous things. First, some AI for biodiversity: something we're just starting, trying to see whether we can use AI to quantify biodiversity. What you see here is plankton, just to show you the diversity of classes. There are datasets out there with billions of plankton images, some labeled as well, so there's quite interesting stuff to do there.

I've put up some weird publication titles, because I like them. I think my favorite is the one whose title is something like "Do southern elephant seals behave like weasels?" They actually worked on IMU data from elephant seals. You may know, though probably not, that most of the data for weather prediction in the Southern Ocean is actually collected by elephant seals carrying sensors, which collect data and send it twice a day by satellite. Elephant seals basically go up and down all the time: roughly 30-minute dives, then hunt, come back up, breathe, dive again, 24 hours a day; it never stops. Every 12 hours they send data by satellite, and you get the whole water column, the temperature of the whole water column and all the fields. I'm also quite proud of the one in the Journal of Antimicrobial Chemotherapy, on using computer vision to help track the effect of the genes involved in the dissemination of antibiotic resistance. That's pretty cool: mostly simple computer vision, but a big impact. And there's also detecting charcoal platforms from the 19th century in LIDAR images, using a bit of deep learning as well.

Then some special problems on weird stuff. There's some teeth detection, well, actually a bit more than teeth detection, but it starts with detection, on X-rays of lower jaws. Some bird tag detection and reading. You don't see it very well here, but this is an underpass: legally, when you build a road through a wetland in France, you have to create underpasses to avoid crushing the amphibians, and if you create them, you also have to monitor them. So these people put cameras in the underpasses and count toads. Right now it's done by hand: for this year's season they have 300,000 images that they have to go through one by one. They're not happy about that, so we're trying to see whether we can do it with a detection system.

These here are roots: you put a transparent pipe in the soil and a camera in the pipe, and you watch how the roots grow. It's a bit like watching grass grow, very slow. But again, here we have thousands of images hand-labeled by technicians, who have drawn all the roots. So the question is: can we use segmentation for that? How can we reconstruct the network structure of the roots? How do we deal with very thin, tiny objects, which is challenging, actually, even for segmentation networks. These are shrimps, which are a good indicator of the presence of pollutants in water; their activity is linked to the pollutant levels. So again, mostly tracking and detection, Kalman filters, typical things. And the one on the bottom here is geyser prediction: using a bit of AI to predict the time between eruptions. I think that's Old Faithful, in, which park? Yellowstone. We got the data from the rangers, and we did the processing in France, because one of the MS students went for a fall semester in France and wanted to do that. So yeah, sure, why not?

It may look like a mess, but in principle we have a lot of applications and only a few core common techniques that we use everywhere: machine vision, machine learning, optimization. The same tools get used in different contexts. There's a bit of a gap with the robotics side, but it's still the same techniques, and some of them get reused; RL is starting to help as well.

I'd like to finish with something on collaboration and partnership opportunities. If you want to come to GTL, it's easy, even just for a semester; faculty, students, research staff are all welcome. There's the GTL office here on campus, and they can help you with all the visa stuff; it's super easy to do a semester or more. We have a lot of data and we're happy to share it, and we'd be happy to do joint experiments, especially outdoor field deployments; we have the tools for it now, I think. The master's programs, one semester here, one semester there, are also pretty cool, and I think out-of-state tuition turns into in-state tuition, as a side note.

And I think that's all. Okay, two reminders. First thing: tomorrow is the robotics event, and at the end of the day there'll be snacks and demos out there, with at least a mobile manipulator and a humanoid called Digit roaming around, which should be cool; I think Matt Mason is giving the keynote. Second thing: they have better experimental robots than we do in terms of field robots, so if you're a field robotics person, you would benefit from spending time over there. And if you're a professor who might like to go teach a class in France, that's easy to do; a professor here has done it and can tell you all the benefits and wonders of doing such a thing. I think Cédric is here until Friday, so talk to him; he's accessible. Now, questions? Yes.
The question is about the acoustics part: the transducer needs to be coupled, because otherwise you have a layer of air between the transducer and the steel. I see I have an expert here. Yes: in these systems, they actually inject water. You get a small water chamber, made of plastic, touching the steel; the transducer sits a millimeter or so away from the steel, and you inject water to fill this space. The impedance of water is good enough that it transmits the waves into the metal plate. Water at 100 kilohertz is not a very good medium, but for one millimeter it's sufficient.

On the question of multiple layers or composite materials: we mostly work with plain steel plates, and it depends what application you're thinking about. If you have multiple layers, or a composite material, it really depends on the material properties. That's an ME question; there's an acoustics expert here who is really good at that. It's harder in composite materials because they are dispersive and the waves attenuate. I'm not considering that, because from a robotics point of view you just need to be able to move on the surface. Some people are working on robotic systems for applying this to aircraft wings, with different solutions: either you stay on the upper part, or you have some vacuum-based adherence mechanism, or some Venturi effect. But it gets harder for composites, and it's mostly an ME question in the sense that you have to know how your waves propagate. You can't do anything magical if the waves don't propagate; if they attenuate, we won't see as far. In metal, the metal rings very far, so we can send waves very far too, but the energy still dissipates. And a side question is whether we can work with something other than a square plate. Yes, we can: as long as it's convex, it's possible to estimate the geometry. With non-convex structures it becomes a bit too complicated.

Yes, corrosion is the main application. The main problem is that we try to detect corrosion, which is basically removal of material. Actually, on a real ship the corrosion happens on the inside, so you scan from the outside and the corrosion is on the other side; that's how you measure thickness. The way it's done now, you move to different places and take point measurements, so it's obviously quite imperfect. We hope to get something better with propagating waves, but that's actually quite challenging, because many things interact: the welds interact, and the stiffeners behind the plates also interact. We're working now on detecting the stiffeners, and it looks like, once we get the geometry from the edges, we can remove that and detect the stiffeners in some conditions. But that's very much ongoing.

On composites again: you're right, but you mostly do that if you want point thickness measurements. Here we're interested in propagating over a long distance, and that doesn't work very well with composites; even if you change the frequency, it's just too absorbing.

On the GANs for planning: honestly, I have no idea; these German colleagues want to do that. They want to use GANs for planning, and I don't know exactly how that works, but I think the outputs of the GAN in this case would be graphs or trees. I can't really answer that. On our side, we assume that if we get the plan, we will instantiate it based on scene understanding, where we can detect the different objects. It's mostly oriented toward industrial demonstrations.
The UAVs and AUVs are part of the BugWright2 project; since it's a European-scale project, other partners are dealing with the AUVs and the UAVs. I don't know, do you have anything like nationwide programs like that in the US? Okay, yeah, I think that's the same kind of spirit we have. The industry partners hope to get new products out of it. For instance, the company building the underwater systems gets a new product out of it: they can develop a product which is more autonomous, with better sensors and better perception, and it's already on the market. Okay.