Organizational Unit:
Daniel Guggenheim School of Aerospace Engineering



Robust autonomous navigation framework for exploration in GPS-absent and challenging environment

2024-04-25, Chen, Mengzhen

The benefits of autonomous systems have attracted industry attention during the past decade. Autonomous systems of many kinds have been applied in fields such as transportation, agriculture, and healthcare. Tasks that are impossible or too risky for humans alone can now be handled efficiently by autonomous systems, greatly reducing labor costs. Among the many tasks an autonomous system can perform, the ability to understand its surrounding environment is of particular importance. Whether an Unmanned Aircraft System (UAS) delivering packages or a self-driving vehicle, the autonomous system must remain robust across different operating scenarios. This work improves the robustness of autonomous systems in challenging, GPS-absent environments. When exploring an unknown environment without external information such as a GPS signal, mapping and localization are equally important and complementary, so building a map while simultaneously localizing within it is essential. For this purpose, the robotics community developed Simultaneous Localization and Mapping (SLAM), which constructs a map of an autonomous system's surroundings while localizing the system during operation. SLAM architectures have been designed for many kinds of sensors and scenarios over the past several decades. Among the SLAM categories, visual SLAM, which uses cameras as its sensors, stands out: it extracts rich information from images that other sensors alone cannot provide. Because the images captured by the camera serve as the inputs, the accuracy of the results depends heavily on image quality. Most SLAM architectures easily handle high-quality images or video streams, while poor-quality ones remain challenging.
The first challenging scenario for visual SLAM is motion blur, which severely degrades its performance. The second is the low-light environment: because poor illumination conveys less information to the camera, it likewise degrades the accuracy of a visual SLAM system. Furthermore, visual SLAM imposes an additional requirement of computational efficiency, since it must operate in real time. Based on these observations, the research objective of this dissertation is to improve visual SLAM performance under these two challenging conditions. Three research areas have been defined to achieve this overarching objective. The first focuses on recovering, in real time, the poor-quality images captured under these challenging scenarios; two highly efficient deep learning models, a single-image deblurring model and a low-light image enhancement model, have been developed and evaluated in this dissertation. The second research area focuses on uncertainty quantification for the results generated by visual SLAM systems. Because some visual SLAM systems behave nondeterministically, a statistical approach has been developed in this dissertation to reduce and factor out the uncertainty in the results and to provide a quantitative method for performance evaluation. The third research area focuses on creating a visual SLAM validation dataset for testing performance under motion blur, since most existing datasets either lack sufficient blurriness or are limited to indoor environments. To that end, a synthetic blurry SLAM dataset has been created with the help of a physics-based virtual simulation environment.
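The statistical evaluation idea above can be sketched in a few lines. This is a hypothetical illustration, not the dissertation's actual method: a nondeterministic SLAM run is stood in for by a noisy copy of a ground-truth trajectory, the run is repeated many times, and the Absolute Trajectory Error (ATE) RMSE is summarized by its mean and standard deviation so that no single lucky or unlucky run decides the comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

def ate_rmse(estimated, ground_truth):
    """Absolute Trajectory Error (RMSE) between two aligned trajectories."""
    return float(np.sqrt(np.mean(np.sum((estimated - ground_truth) ** 2, axis=1))))

# Hypothetical stand-in for the true camera trajectory (100 poses in 3D).
ground_truth = np.cumsum(rng.normal(size=(100, 3)), axis=0)

def run_slam_once():
    # Stand-in for one nondeterministic SLAM run: ground truth plus
    # run-dependent estimation noise.
    return ground_truth + rng.normal(scale=0.05, size=ground_truth.shape)

# Repeat the run many times and report the error distribution rather than
# a single number, factoring out run-to-run variability.
errors = [ate_rmse(run_slam_once(), ground_truth) for _ in range(30)]
print(f"ATE RMSE: mean={np.mean(errors):.4f}, std={np.std(errors):.4f}")
```

Comparing two systems then becomes a comparison of two error distributions, which is the quantitative evaluation the dissertation's statistical approach aims to provide.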
Combining the three research areas, a visual SLAM framework is proposed and tested on several visual SLAM datasets captured under the two challenging scenarios. In the experiments, accuracy improvements, evaluated through the statistical approach, were observed for all use cases when compared with the benchmark visual SLAM system. The proposed framework, in which the image enhancement modules have been added, therefore does improve visual SLAM performance under challenging conditions. This work makes two key contributions: first, a visual SLAM framework designed to tackle real-world challenging conditions such as motion blur and low-light environments; second, a novel pipeline that utilizes a physics-based simulation environment to generate a realistic synthetic blurry visual SLAM dataset.
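The synthetic-blur pipeline can be illustrated with a common technique for generating motion blur from simulation: averaging several consecutive sharp frames rendered at a high frame rate, approximating the camera's exposure interval. This is a minimal sketch under that assumption, with random arrays standing in for simulator frames; the dissertation's actual pipeline is not specified here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for consecutive sharp frames from a simulator:
# 8 grayscale frames of size 48 x 64 with intensities in [0, 1].
sharp_frames = rng.uniform(0.0, 1.0, size=(8, 48, 64))

def synthesize_blur(frames):
    """Average consecutive sharp frames to mimic blur over one exposure."""
    return frames.mean(axis=0)

blurred = synthesize_blur(sharp_frames)
# One blurred frame is produced per group of sharp frames.
print(blurred.shape)
```

Rendering the sharp frames in a physics-based simulator, rather than blurring real photos, also yields exact ground-truth trajectories for the blurred sequence, which is what makes such a dataset usable for SLAM validation.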