Organizational Unit:
Daniel Guggenheim School of Aerospace Engineering


Publication Search Results

Now showing 1 - 10 of 311
  • Item
    Uncertainty-Based Methodology for the Development of Space Domain Awareness Architectures in Three-Body Regimes
    (Georgia Institute of Technology, 2024-04-29) Gilmartin, Matthew Lane
The past decade has seen massive growth in interest in lunar space exploration. An increase in global competition has led a growing number of countries and non-governmental organizations toward lunar space exploration as a means to demonstrate their industrial and technological capabilities. This increase in cislunar space activity, and the resulting increase in congestion and conjunction events, poses a significant safety risk to spacecraft on or around the Moon. This risk was demonstrated on October 18, 2021, when India’s Chandrayaan-2 orbiter was forced to maneuver to avoid a collision with NASA’s Lunar Reconnaissance Orbiter. To mitigate the safety impacts of increased congestion, enhanced space traffic management capabilities are needed in the cislunar regime. One foundational component of space traffic management is space domain awareness (SDA). Current SDA infrastructure, a network of Earth-based and space-based sensors, was designed to track objects in near-Earth orbits and is not suitable for tracking objects in distant, non-Keplerian cislunar orbits. As a result, new infrastructure is needed to fill this capability gap. The cislunar regime presents a number of challenges and constraints that complicate the SDA architecture design space. Unlike the near-Earth regime, cislunar space is a three-body environment, violating many of the simplifying assumptions and models used in the near-Earth domain. Furthermore, instability in cislunar dynamics means that state uncertainty plays a much more dominant role in system performance. This research identified three technology gaps, exposed by the transition to the cislunar regime, that impede the ability of designers to explore the design space and perform many-query analyses such as design optimization. A new uncertainty-based methodology was then proposed to both address these gaps and enhance design space exploration.
The first technology gap identified was that three-body dynamics violate the analytic two-body models of spacecraft motion, meaning that cislunar trajectories must be numerically integrated at much greater computational cost. A method was proposed that combines surrogate modeling techniques with an orbit-family approach to develop an analytic parametric model of spacecraft motion. An experiment was carried out to interrogate the efficacy of this approach. Multiple surrogate models were generated using the approach, and each was compared to the state-of-the-art numerical integration approach. The surrogate modeling approach was found to greatly reduce the computational cost required to determine the initial state of an arbitrary periodic cislunar trajectory, while maintaining accuracy comparable to existing full-order methods. Of the surrogate model formulations tested, the interpolation methods were found to have the best combination of accuracy and speed for the proposed application. The second technology gap identified was a reliance on Gaussian distributions in most tracking filter implementations. In non-linear domains such as the cislunar regime, initially Gaussian distributions may deviate from a Gaussian shape when propagated through the system's non-linear dynamics. This creates convergence issues that limit the robustness of tracking schemes that rely on Gaussian characterizations of uncertainty. This in turn creates a need to characterize the realism of Gaussian approximations of potentially non-Gaussian uncertainty distributions. The characterization of uncertainty realism was identified to be a computationally intensive process, limiting the breadth of potential design space exploration. To ameliorate this issue, a surrogate modeling process was proposed for the development of models to characterize the realism of uncertainty estimates produced by tracking filters. An experiment was executed to evaluate the efficacy of this approach.
The surrogate modeling process was found to greatly improve on the computational cost of the full-order analysis. While the surrogate models were found to have non-negligible errors, these errors were on the same order of magnitude as the variability of the full-order model. Of the models tested, the model based on boosted decision trees was found to have the best balance of speed and accuracy. This massive increase in computational efficiency enables designers to evaluate much larger volumes of design cases using the same hardware. The third identified technology gap was the exponential increase in the computational cost required to evaluate tracking uncertainty using full-order cislunar SDA simulations as the number and diversity of systems in an SDA architecture increases. As a result of this ballooning computational cost, detailed uncertainty quantification can rapidly become intractable in a many-query analysis context, limiting the scope of design space exploration and uncertainty quantification. A surrogate modeling method was proposed to provide a volumetric assessment of tracking performance at reduced computational cost compared to existing methods. As part of this proposed approach, changes in tracking uncertainty were evaluated with respect to the search volume. Changes in uncertainty were evaluated using a novel equivalent-radius metric to estimate the rate of information gain for individual sensor systems, which is then aggregated for the overall architecture. As part of this approach, field surrogates and reduced-order models were investigated as potential techniques to improve the computational cost and quality of the generated surrogate models. An experiment was performed to investigate the efficacy of the proposed method in comparison to existing methods. The generated surrogate models were found to significantly reduce the computational cost of the tracking analysis.
Furthermore, this experiment found scalar surrogate models to provide the most accurate modeling of the full-order models. The field surrogates generally under-performed their scalar counterparts in terms of goodness-of-fit. Of the models tested, the scalar boosted decision tree model was found to have the best balance of speed and accuracy. In practice, this model was able to reduce the computational cost of evaluating SDA architecture tracking performance by several orders of magnitude, enabling designers to increase the breadth of design space exploration by similar orders of magnitude. Finally, each of the developed modeling approaches was integrated into a unified methodology, named VENATOR, to evaluate SDA architectures. A demonstration experiment was proposed, wherein the VENATOR uncertainty-based methodology was compared to a state-of-the-art methodology using equivalent full-order analyses. The experiment was broken into two phases. In the first phase, both frameworks were used to evaluate the same architecture. In the second phase, the VENATOR uncertainty-based methodology was used to evaluate a simple optimization problem. The first phase of this analysis found the VENATOR uncertainty-based methodology to offer an improvement in computational cost of over three orders of magnitude. During the second phase, a simple optimization was run using the VENATOR uncertainty-based methodology, evaluating over 82,000 cases in a total of 1.6 days. A short design space exploration was carried out, identifying the Pareto front of non-dominated cases, to demonstrate the utility of this approach. Using the run time of the state-of-the-art system when evaluating a single architecture, it was estimated that the reference methodology would have taken over 14 years to evaluate the same number of cases using the same hardware.
This massive increase in computational efficiency allows designers to greatly increase the breadth of design space exploration, enabling them to examine far larger case loads, reducing design risk and increasing design knowledge. For this reason, the uncertainty-based methodology was deemed to be a significant improvement over the state-of-the-art methodologies.
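The orbit-family surrogate idea in the first contribution can be illustrated with a minimal sketch (the family data below is synthetic and purely illustrative, not from the dissertation): precomputed initial states along a one-parameter family of periodic orbits are tabulated once, and interpolation then supplies the initial state of an arbitrary family member without further numerical integration.

```python
import numpy as np

# Hypothetical precomputed family: the parameter is the orbit period
# (nondimensional), and each row of `states` is an initial state
# [x0, z0, vy0] for one planar-symmetric periodic orbit in the family.
periods = np.linspace(2.8, 3.4, 7)                  # family parameter grid
states = np.column_stack([
    0.82 + 0.05 * (periods - 3.0),                  # x0 trend along the family
    0.10 - 0.03 * (periods - 3.0) ** 2,             # z0 trend
    0.19 + 0.02 * np.sin(periods),                  # vy0 trend
])

def surrogate_initial_state(period: float) -> np.ndarray:
    """Interpolate the initial state of an arbitrary family member,
    avoiding a fresh differential-correction / integration run."""
    return np.array([np.interp(period, periods, states[:, k]) for k in range(3)])

state = surrogate_initial_state(3.05)
```

The dissertation reports that interpolation-based formulations offered the best speed/accuracy combination; higher-order splines or multi-parameter families follow the same tabulate-then-interpolate pattern.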
  • Item
    A Multi-Objective Deep Learning Methodology for Morphing Wings
    (Georgia Institute of Technology, 2024-04-27) Achour, Gabriel
    Due to design constraints, conventional aircraft cannot achieve maximum aerodynamic performance when operating under varying missions and weather conditions. One of these constraints is the traditional approach of optimizing aircraft wings to achieve the best average aerodynamic performance for a specific mission while maintaining structural integrity. Previous studies have shown that changing the shape of wings at different points of a mission profile improves the aerodynamic performance of aircraft. As such, stakeholders have explored the viability and feasibility of changing or morphing the shape of aircraft wings to enable aircraft to adapt to varying missions and weather conditions. However, as with any other aspect of aircraft design, some challenges currently exist that hinder the development of conventional aircraft with morphing wings. First, the computational cost of flow solvers makes aerodynamic shape optimization time-consuming and computationally expensive due to its iterative nature. When designing a morphing wing, different configurations are computed for different points in the flight envelope, multiplying the computational cost necessary for morphing wing aircraft design. Consequently, a framework capable of performing shape optimization at a reduced computational cost is needed. Second, morphing can lead to a high variation of wing shapes, generating high aerodynamic loads and minimizing the aerodynamic benefits of morphing wings. Moreover, structural analyses are also computationally expensive, replicating the same challenges as aerodynamic optimization. As such, a multi-objective framework capable of optimizing morphing wings to increase aerodynamic efficiency while addressing aeroelastic constraints at a lower computational cost is needed. 
Finally, even though changing the shape of an aircraft’s wing at each segment of a mission profile is the most efficient approach to maximize the benefits of morphing wings, this is not ideal as flight and weather conditions are not constant throughout a flight segment. A framework that can adapt the wing shapes to varying flow conditions during the flight is needed. Consequently, this thesis aims to address these gaps by 1) developing a Conditional Generative Adversarial Network-based algorithm capable of generating optimal wing shapes of a morphing wing vehicle for each segment of a given mission profile, 2) training a Reinforcement Learning agent to modify the optimized shape and design the wing structure to ensure the structural integrity of morphing wings throughout the flight while maintaining high aerodynamic performance, and 3) implementing a Meta Reinforcement Learning agent to make aircraft wings adapt their shapes to variations in flow conditions during each mission segment. The experiments outlined in this thesis involve designing each network architecture, collecting the training datasets, and training each model. These models are then applied to various aerodynamic and aero-structural optimization tasks across various demonstrated morphing wing mechanisms. Each model demonstrated accurate optimization results when compared to classical optimization methods. Additionally, the results indicate a significant reduction in computational power required by the deep learning models. As such, this thesis demonstrates the immense benefits of training and implementing deep learning models to perform various optimization tasks related to morphing wing aircraft design at a lower computational cost than traditional optimization algorithms. Furthermore, this thesis demonstrates the benefits of morphing wings throughout flight to maximize aerodynamic efficiency while minimizing structural constraints, which can lead to non-negligible fuel savings.
Finally, this thesis demonstrates how meta-learning can be applied to continuously adapt the shape of a wing to unexpected changes in flow conditions throughout flight.
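The conditional-generation idea behind the first contribution can be sketched as a single forward pass through a toy generator (all layer sizes, weights, and conditioning variables here are illustrative assumptions, and the adversarial training against a discriminator is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cGAN generator: maps (noise, flight condition) to wing-section
# control points. Weights are random here; in a trained model they would be
# learned adversarially from a dataset of optimized shapes.
NOISE, COND, HIDDEN, N_CTRL = 8, 2, 32, 10
W1 = rng.normal(0.0, 0.1, (NOISE + COND, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_CTRL))

def generate_shape(mach: float, cl_target: float) -> np.ndarray:
    """One forward pass: the condition vector (flight condition) is
    concatenated with the noise vector, so the generator can produce
    different optimal shapes for different mission segments."""
    z = rng.normal(size=NOISE)
    cond = np.array([mach, cl_target])
    h = np.tanh(np.concatenate([z, cond]) @ W1)
    return h @ W2       # control-point offsets for this flight condition

shape = generate_shape(mach=0.78, cl_target=0.55)
```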
  • Item
    An Approach for Risk-Informed UAS Mission Planning in Urban Environments to Support First Responders
    (Georgia Institute of Technology, 2024-04-27) Pattison, Jeffrey T.
The past decade has seen a tremendous increase in the use of Unmanned Aerial Systems (UAS). What was once exclusively used by the military is now a critical component for a wide range of applications, including deliveries and logistics, construction, and law enforcement. Police departments around the world are beginning to see the potential UAS have for responding to emergency events. These UAS can reduce response times, be used to deescalate events, and provide crucial information to ground personnel prior to arrival. However, introducing UAS presents significant difficulties. Human operators and pilots are required to ensure safe operations and regulatory compliance. The Federal Aviation Administration has imposed strict regulations on the use of UAS in populated areas, restricting the autonomous capabilities of UAS. For UAS to be able to operate more autonomously with less human input, additional safety measures and assurance of acceptably safe operations are required. This thesis explores how to incorporate risk assessment into UAS mission planning for emergency response to introduce additional safeguards without significantly sacrificing the UAS response capability. The major areas of research studied in this thesis include UAS risk estimation methods, UAS route planning with risk incorporated, and optimizing a system of UAS to respond to emergencies when risk is considered. Because UAS are relatively new compared to manned aircraft, UAS lack the historical flight data that risk assessment for manned aircraft relies on. A new machine learning model is proposed that can be used for evaluating UAS risk in a more time-efficient manner than the physics-based modeling and simulation methods commonly used for risk estimation. Response time is critical for emergency events, and the route a UAS takes to reach the emergency directly affects its ability to respond.
This work also studies various route planning methods that can account for UAS risk, to find a suitable route planning configuration that meets the demands of using UAS as a first responder. Another critical component of the response time is intelligently selecting UAS launch locations. The performance of Integer Linear Programming, Genetic Algorithms, and a hybrid algorithm is compared to determine the most suitable method for finding the optimal launch locations to minimize response time for a system of three UAS when ground risk is incorporated into the emergency response route planning. Using a software-in-the-loop flight simulator and a vehicle simulation environment, an overarching experiment demonstrates the effectiveness of the proposed approach and shows how incorporating risk into mission planning impacts UAS emergency response.
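At toy scale, the launch-location problem reduces to a p-center selection, which a brute-force search can illustrate (the candidate sites, incident locations, and cruise speed below are invented for illustration; the thesis compares ILP, Genetic Algorithm, and hybrid solvers on the realistic version, with ground risk folded into the routes):

```python
import itertools
import math

# Hypothetical candidate launch sites and historical incident locations (km).
candidates = [(0, 0), (0, 8), (8, 0), (8, 8), (4, 4), (2, 6)]
incidents = [(1, 1), (7, 2), (3, 7), (6, 6), (4, 1)]
SPEED = 15.0  # assumed cruise speed used to convert distance to response time

def response_time(site, incident):
    return math.dist(site, incident) / SPEED

def best_launch_sites(k=3):
    """Exhaustive p-center search: choose k launch sites minimizing the
    worst-case response time over all incidents. Each incident is served
    by its nearest selected site."""
    best, best_cost = None, float("inf")
    for combo in itertools.combinations(candidates, k):
        cost = max(min(response_time(s, e) for s in combo) for e in incidents)
        if cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost

sites, worst_time = best_launch_sites()
```

Exhaustive search is only viable for a handful of candidates; the combinatorial growth in realistic problems is what motivates the ILP and metaheuristic solvers compared in the thesis.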
  • Item
    Methodological Improvements for the Integration of Spacecraft Trajectory Optimization into Conceptual Space Mission Design
    (Georgia Institute of Technology, 2024-04-27) Bender, Theresa Elizabeth
    As humans continue to send spacecraft further into space and explore uncharted territories, the implementation of space mission design becomes of paramount importance. Trajectory design and optimization is a key element of space mission design that provides information on the specific route a vehicle will take, as well as numerical estimates pertaining to fuel consumption and transfer time. Due to the complexity, high computational costs, and long runtimes of high-fidelity trajectory analyses, less accurate methods are typically used. Low-fidelity estimates provide sufficient accuracy for initial analyses; however, they often lack valuable information about the trajectory that is important to consider during the conceptual design phase. The overall objective of this research is to develop methodological improvements for spacecraft trajectory design and optimization that provide increased flexibility and better enable trajectory considerations to be incorporated into conceptual mission design studies. This research proposes a design space exploration-based approach to the integration of trajectory design and optimization into conceptual space mission design. It aims to provide a strong characterization of the design space and understanding of the problem behavior, as well as be better suited for early phase design studies that possess unknown or evolving mission requirements. The first phase of this research introduces a design of experiments and sensitivity analysis into the traditional trajectory design process in order to identify the behaviors, sensitivities, and trends of trajectory optimization problems. A regression-based approach for the selection of initial guesses is proposed in order to perform more efficient design studies and gain additional insight about the relationships between variables. 
The second phase of this research investigates the integration of additional evaluation criteria, namely robustness and sensitivity analyses, that are often performed independently of the trajectory design problem. A methodology is proposed for their quantification and integration into design space exploration studies so that they may be analyzed and visualized alongside performance-based metrics. The third phase of this research integrates mission design considerations into this parametric environment through the superimposition of constraints onto the design space, which results in a set of feasible trajectories that meets performance, robustness, stability, and mission design requirements and constraints. The overarching methodology is then applied to a cislunar demonstration in order to illuminate how its application results in trade studies between trajectory design and other mission design considerations that are more comprehensive and flexible than the traditional design approach allows.
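The regression-based initial-guess idea from the first phase can be sketched as follows (the design variables and response below are synthetic stand-ins for converged optimizer solutions, not the dissertation's trajectory problem): a design of experiments over the trajectory design variables is evaluated once with the expensive optimizer, and a regression over those results then supplies warm-start initial guesses for new cases.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical problem: design variables (time of flight, departure angle)
# map to a converged control parameter found by a costly optimizer. A
# synthetic response stands in for those converged solutions here.
X = rng.uniform([10.0, 0.0], [40.0, 6.28], size=(50, 2))   # space-filling DOE
y = 0.3 * X[:, 0] - 1.2 * np.cos(X[:, 1]) + rng.normal(0, 0.05, 50)

# Least-squares regression over the DOE results; its predictions warm-start
# the full optimizer on unseen cases and expose variable sensitivities.
A = np.column_stack([np.ones(len(X)), X[:, 0], np.cos(X[:, 1])])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def initial_guess(tof: float, angle: float) -> float:
    return float(coef @ np.array([1.0, tof, np.cos(angle)]))

guess = initial_guess(25.0, 1.0)
```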
  • Item
    A Data-driven Methodology for Aircraft Trajectory Analysis to Improve Mid-air Conflict Detection in Terminal Airspace
    (Georgia Institute of Technology, 2024-04-27) Zhang, Wenxin
A mid-air collision occurs when two aircraft come into contact while airborne. It stands as one of the most devastating accidents in aviation history and remains a pressing safety issue in current flight operations. Presently, Air Traffic Control (ATC) serves as the primary means to ensure safe separation between aircraft and prevent mid-air collisions. This heavily relies on human operators, Air Traffic Controllers (ATCOs), who manage critical tasks under significant workload. The anticipated expansion of aviation, both in terms of traffic volume and diversity, particularly in terminal airspace, presents substantial challenges to the existing ATC system. The workload on ATCOs may exceed their capacity, potentially compromising safety. To address forthcoming aviation demands and maintain high safety standards, ATC is gradually integrating automated systems to assist ATCOs in transitioning from manual to supervisory roles. This dissertation is driven by the need for advanced analytics and automated decision support concerning air traffic within terminal airspace. By leveraging Global Navigation Satellite System (GNSS) technologies, specifically Automatic Dependent Surveillance–Broadcast (ADS-B), ATC is able to access real-time and extensive historical operational data. Hence, this research presents a novel data-driven methodology to conduct thorough aircraft trajectory analysis, aiming to improve mid-air conflict detection within terminal airspace. The outlined methodology comprises three key steps: (1) traffic flow identification and recognition, (2) trajectory prediction, and (3) conflict detection. The traffic flow identification and recognition step entails two key requirements: (1) an effective method to identify air traffic flows in terminal airspace, and (2) a fast and accurate method to recognize the air traffic flow of individual flights.
Achieving the first requirement demands a clustering approach capable of filtering out non-nominal trajectories commonly encountered in daily operations. While Density-Based Spatial Clustering of Applications with Noise (DBSCAN) may be applied, it can struggle with density variations in traffic flows observed in historical trajectories. Thus, Ordering Points to Identify the Clustering Structure (OPTICS) is proposed as an alternative clustering algorithm. Additionally, Weighted Euclidean Distance is suggested as a distance metric to account for the significance of different trajectory points. An experiment is designed to implement the OPTICS and DBSCAN algorithms using Weighted Euclidean Distance as the distance metric to identify air traffic flows in terminal airspace, and the results demonstrated the superior effectiveness of OPTICS over DBSCAN in enhancing identification. Addressing the second requirement involves employing a method capable of multi-class classification with rapid training and high accuracy. Ensemble models such as Random Forest and Extreme Gradient Boosting (XGBoost) provide a favorable balance between accuracy and efficiency, rendering them viable choices. Conversely, the Long Short-Term Memory (LSTM) model is anticipated to yield even higher accuracy, albeit with longer training time. An experiment is designed to implement Random Forest, XGBoost, and LSTM models for multi-class classification of aircraft trajectory segments, aiming to recognize air traffic flows of individual flights. Subsequently, their performance in terms of accuracy and training time is compared. The results of the experiment indicate that Random Forest achieves accuracy levels comparable to LSTM while significantly reducing training times. The trajectory prediction step necessitates a method for aircraft trajectory prediction.
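One practical way to use a weighted Euclidean distance with off-the-shelf clustering algorithms such as OPTICS or DBSCAN (a sketch, not necessarily the dissertation's implementation) is to scale each coordinate by the square root of its weight: ordinary Euclidean distance on the scaled data then equals the weighted distance on the original, so standard implementations can be used unchanged.

```python
import numpy as np

def weight_scale(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Scale flattened trajectory features so that plain Euclidean distance
    on the result equals the weighted Euclidean distance on the original:
    d_w(a, b) = sqrt(sum_i w_i (a_i - b_i)^2) = ||sqrt(w)*a - sqrt(w)*b||."""
    return features * np.sqrt(weights)

# Two toy flattened trajectories; the later points are weighted more heavily
# (an illustrative choice, e.g. to emphasize the segment nearest the runway).
a = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([0.5, 1.5, 2.0, 4.0])
w = np.array([0.5, 1.0, 2.0, 4.0])

d_weighted = np.sqrt(np.sum(w * (a - b) ** 2))
d_scaled = np.linalg.norm(weight_scale(a, w) - weight_scale(b, w))
```

After this scaling, the pre-scaled feature matrix can be handed directly to a standard OPTICS or DBSCAN implementation with its default Euclidean metric.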
Existing methods typically employ an encoder-decoder architecture with LSTM trained on entire trajectory sets, leading to potential challenges: (1) difficulty in effectively learning hidden features due to significant differences in input trajectories, and (2) sequential nature of LSTM resulting in prolonged training durations. To overcome the first challenge, this study proposes to train multiple predictors on subsets with distinct traffic flows identified earlier, rather than a monolithic predictor on the entire dataset. An experiment is devised to implement the encoder-decoder architecture with LSTM to train a monolithic predictor and multiple predictors, on datasets containing all trajectories and subsets with distinct traffic flows respectively, then compare the accuracy and training time of the two approaches. The results have revealed that employing multiple predictors leads to increased accuracy and decreased training time compared to the single predictor approach. To address the second challenge, Transformer is proposed as an alternative to LSTM, benefiting from attention mechanisms to eliminate sequential operations and enable parallelization. An experiment is designed to train trajectory predictors for distinct traffic flows using the encoder-decoder architecture, first with LSTM and then with Transformer, followed by a comparison of the prediction accuracy and training time between the two approaches. The implementation results indicate a considerable reduction in training time and comparable accuracy achieved by Transformer compared to LSTM, particularly for extended prediction horizons. The conflict detection step requires an automated method to identify conflicts within terminal airspace, with a critical focus on addressing uncertainty. Utilizing historical trajectory data is crucial, especially in the context of aircraft position estimation, which conventionally relies solely on mathematical tools without leveraging real-world data. 
Kernel Density Estimation (KDE), a statistical technique for deriving Probability Density Functions (PDFs) from sampled data, emerges as a promising tool to enable robust estimation of aircraft positions based on historical trajectories. Furthermore, the intersection of PDFs from different flights serves as a means to identify potential conflicts. Hence, a novel Weighted KDE method is proposed to estimate aircraft positions by integrating outputs from traffic recognition and trajectory prediction, subsequently facilitating conflict detection in terminal airspace through the intersection of flight PDFs. To validate the proposed method, an experiment is designed to implement Weighted KDE to synthesize the outcomes of traffic flow recognition and trajectory prediction to estimate aircraft positions and then perform conflict detection by representing conflict with the intersection of aircraft position PDFs. The implementation results reveal that the conflict probabilities calculated by the Weighted KDE method show an inverse relationship with actual distances between aircraft, in both horizontal and vertical planes, thereby demonstrating the effectiveness of the proposed conflict detection method. Several representative real flight scenarios serve as use cases to showcase the efficacy of the proposed data-driven methodology for analyzing aircraft trajectories to improve mid-air conflict detection in terminal airspace. The exploratory nature of this research suggests its potential evolution into a real-time decision support tool that offers conflict detection advisories for ATCOs. Transitioning from research to practical application may require real flight tests and incorporation of Real-time Assurance (RTA) mechanisms.
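The PDF-intersection idea can be sketched with `scipy.stats.gaussian_kde` (the position samples below are synthetic, and the grid-based overlap integral is one illustrative way to quantify the intersection, not necessarily the Weighted KDE formulation of the thesis):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Synthetic 2-D position samples for two flights near a predicted waypoint,
# standing in for the historical-trajectory samples used in the thesis.
flight_a = rng.normal([0.0, 0.0], 0.5, size=(300, 2)).T  # shape (2, n)
flight_b = rng.normal([1.0, 0.0], 0.5, size=(300, 2)).T

kde_a, kde_b = gaussian_kde(flight_a), gaussian_kde(flight_b)

# Overlap of the two position PDFs, integrated numerically on a grid:
# a simple conflict indicator that grows as the flights converge.
xs, ys = np.meshgrid(np.linspace(-2, 3, 80), np.linspace(-2, 2, 80))
grid = np.vstack([xs.ravel(), ys.ravel()])
cell = (xs[0, 1] - xs[0, 0]) * (ys[1, 0] - ys[0, 0])
overlap = np.minimum(kde_a(grid), kde_b(grid)).sum() * cell
```

Moving the two sample clouds further apart drives `overlap` toward zero, mirroring the inverse relationship between conflict probability and aircraft separation reported in the abstract.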
  • Item
    SPAAD: A Systems Design Methodology for Product and Analysis Architecture Decomposition
    (Georgia Institute of Technology, 2024-04-27) Omoarebun, Ehiremen Nathaniel Ogbon
Increasing complexity in engineering design has resulted from the continuous advancement of technology over the past few decades. Over the years, engineers have explored various ways to manage complexity during the different phases of design and have recently shifted from document-based approaches to model-based approaches in the form of Model-Based Systems Engineering (MBSE). However, MBSE comes with its own set of challenges. Despite the introduction of MBSE, many systems engineering practices are still based on heuristics, and engineers rely on prior experiences or trial-and-error approaches to implement systems engineering methods. Although existing methodologies outline important aspects of the system design process, they do not define or provide guidance on how these aspects should be achieved. Recently, INCOSE, the systems engineering professional society, has sought to establish formal and theoretical methods in systems engineering that are grounded in science and mathematics. Using formal and theoretical methods, a system can be represented, and the relationships between its elements can be better understood. Also, in recent years, Integrated Product and Process Development (IPPD) has emerged as a systematic approach to manage the development of complex systems from early integration through a system's life cycle and could be considered the overall construct for system design problems. A fundamental aspect of the IPPD process is the decomposition of the system. With the emergence of MBSE, Requirements, Functional, Logical, and Physical (RFLP) is an important framework used in system decomposition. However, similar to many MBSE approaches, the RFLP framework operates at a high level and does not provide guidance on decomposing stakeholder requirements into the system's functional, logical, and physical architecture.
This led to the motivating question for this dissertation, with the aim to explore ways to improve and effectively translate the decomposition process within the RFLP framework into a system design that satisfies the stakeholder requirements. A research objective was identified with the aim to develop and implement a method that facilitates a rigorous system decomposition process in a more formal and structured manner, using a set of theoretical foundations based on mathematical principles to effectively characterize a system. From this research objective, an overarching research question for this dissertation was formulated with the aim to establish structure between the product and analysis architectures during system decomposition to allow for the design of better and improved systems, especially during the conceptual stages of design. To improve the decomposition process and create structure within the RFLP framework, Axiomatic Design Theory (ADT) was identified as the most suitable method to aid in the structured decomposition of a system, while placing emphasis on minimizing coupling and improving the system's robustness. An in-depth examination of ADT and its potential integration with the RFLP framework revealed several limitations, which this dissertation addresses across the various research areas. The first research area focuses on improving the requirements process in ADT and RFLP. A requirements analysis process is developed that categorizes stakeholder requirements into functional and non-functional requirements, provides a framework to establish the relationships between the different types of requirements, and allows high-level requirements to be broken down into concrete and clear requirements within the product and analysis architectures. The second research area focuses on integrating concepts from Axiomatic Design Theory (ADT) into the RFLP framework.
The Independence axiom from axiomatic design, together with its zigzagging attribute, is used to decompose the functional and logical layers of the RFLP framework and help create a structure during design. The third research area focuses on the identification of suitable analysis methods during system decomposition within the analysis architecture. During conceptual design, the selection of a suitable analysis method may be challenging, especially when model data is limited. The ability to properly identify a suitable analysis method facilitates informed decision-making during system design. From a combination of the three research areas, a ten-step methodology, SPAAD, is proposed that outlines the steps to perform a systems decomposition from the stakeholder requirements to the development of the functional, logical, and physical architectures for both the product and analysis architectures or domains. A test case problem involving the design of a suite of systems to aid in the fight against wildfires in remote locations substantiated the developed methodology.
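The Independence Axiom that SPAAD borrows from Axiomatic Design can be illustrated by classifying a design matrix as uncoupled, decoupled, or coupled (a minimal sketch; the matrices below are toy examples, not from the dissertation):

```python
import numpy as np

def classify_design(dm: np.ndarray) -> str:
    """Classify a square design matrix (rows = functional requirements,
    columns = design parameters) per the Independence Axiom:
    diagonal -> uncoupled (each FR set by one DP), triangular -> decoupled
    (FRs solvable in sequence), otherwise coupled."""
    nz = dm != 0
    off = nz & ~np.eye(len(dm), dtype=bool)   # off-diagonal dependencies
    if not off.any():
        return "uncoupled"
    if not np.triu(off).any() or not np.tril(off).any():
        return "decoupled"
    return "coupled"

# Toy matrices: a nonzero entry marks a DP that affects an FR.
uncoupled = np.eye(3)
decoupled = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 1]])
coupled = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]])
```

The zigzagging decomposition described above would apply a check like this at each level, steering the functional and logical layers toward uncoupled or decoupled structures.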
  • Item
    Volatile molecular species and their role in planetary surface morphology and spacecraft design and performance
    (Georgia Institute of Technology, 2024-04-27) Macias Canizares, Antonio
The geologic processes that govern the surface morphology of ice-covered, airless (i.e., without atmospheres) bodies in the Solar System have gained increasing interest over the past several decades, both for the scientific questions such worlds present and for their relevance to future in-situ exploration. Though engineering capabilities for landing spacecraft, such as terrain relative navigation and hazard avoidance, can mitigate meter- and sub-meter-scale hazards, it is nevertheless important to understand the morphological evolution and steady state of the surface ice on these worlds. Considerable work has been dedicated to the large- and small-scale geology of these worlds, but much remains to be understood about the centimeter- to meter-scale morphology of these icy, cryogenic (~100 K) surfaces. One specific hypothesis is that blade-like structures, called penitentes, form on the surface of Europa and rise up to 15 meters in height, though it has been argued that the physics of penitente formation, as applied for such a hypothesis, does not apply to the exosphere and surface conditions of Europa. Interestingly, penitente-like structures have been observed on Pluto, which does have a significant, albeit seasonal, atmosphere. Penitentes are also predicted to form under certain conditions on Mars, though they have yet to be observed. On Earth, penitentes are made from compact snow or ice and achieve quasi-stability in high-altitude, low-latitude regions as the result of sublimation and melting processes, and importantly, they only occur in regions of net sublimation (or melting) loss of water.
Penitentes are erosional features that form as a series of corrugated ridges and troughs running parallel to the path of the Sun across the sky; mature structures often yield fields of individual spikes or blades, which bear some resemblance to a pair of hands praying toward the Sun, hence the name ‘penitentes.’ The first sightings of penitentes date back to the era of Darwin, who, on a perhaps anecdotal note during his travels through Chile and La Plata, described their characteristic shape as “...pinnacles or columns...” and hypothesized their formation process: “...the columnar structure must be owing to a ‘metamorphic’ action, and not to a process during deposition.” As it turns out, Darwin was correct, since deposition during snowfall is the end of the life cycle for a penitente field on Earth. More importantly, Darwin described in his journal the possible hazards for travel and commerce that he experienced in different scenarios during his journey. Darwin attributed the discovery of penitentes to Scoresby and later to Colonel Jackson. Nevertheless, the true discoverers were the local inhabitants, who had already named places like Cerro de los Penitentes (Hill of the Penitents) and Rio de los Penitentes long before Darwin arrived. It should be noted that during the 19th century these snow structures may not have been locally referred to as penitentes; even today, the name is often applied on the basis of the physical process causing the formation rather than the characteristic shape. Hence, places such as the Cerro de los Penitentes were most likely named after the penitents (repenting people) of the church. 
To advance our understanding of the surface morphology of airless, ice-covered worlds, and to address the limitations of current models, the work in this dissertation focused on developing numerical models that accurately represent the irradiance and physical evolution of ice on such worlds and used those models to investigate the possible presence of penitentes on Europa and their hazardous implications for a future lander.
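The irradiance bookkeeping at the heart of such models can be illustrated at its simplest: the absorbed flux on each surface facet scales with the cosine of the solar incidence angle, and shadowed facets receive nothing. This is only a hedged sketch of that single ingredient, not the dissertation's models; the solar constant is roughly appropriate for Jupiter's distance, but the albedo value is an assumed placeholder.

```python
import math

# Minimal sketch of per-facet absorbed solar flux, the basic input
# to sublimation-driven morphology models: flux depends on the angle
# between the facet normal and the Sun direction, and shadowed or
# night-side facets receive nothing. Constants are illustrative.

SOLAR_FLUX_AT_JUPITER = 50.0   # W/m^2, approximate (~5.2 AU)
ICE_ALBEDO = 0.6               # assumed placeholder value

def absorbed_flux(facet_normal, sun_dir, shadowed=False):
    """Absorbed flux (W/m^2) on a facet; both vectors unit length."""
    if shadowed:
        return 0.0
    cos_i = sum(n * s for n, s in zip(facet_normal, sun_dir))
    return SOLAR_FLUX_AT_JUPITER * (1.0 - ICE_ALBEDO) * max(0.0, cos_i)

# A facet tilted 60 degrees from the sub-solar direction absorbs
# half the flux of a normal-incidence facet.
flat = absorbed_flux((0, 0, 1), (0, 0, 1))
tilted = absorbed_flux(
    (0, math.sin(math.radians(60)), math.cos(math.radians(60))),
    (0, 0, 1))
print(flat, tilted)
```

Self-shadowing between neighbouring ridges, which the full models must resolve, is what makes troughs deepen preferentially and penitente fields self-organize.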
  • Item
    Development and Acquisition Modeling for Space Campaign Architecting
    (Georgia Institute of Technology, 2024-04-26) Zhu, Stephanie Y.
    Contemporary and future space exploration endeavors are highly complex. Multiple system deployments and concurrent missions, examined together as a space campaign, are needed to fulfill long-term goals. Given the large time horizon and the development of new space systems to perform those missions, not all of the enabling technologies have been matured, nor are the new systems completely designed, developed, and tested for implementation. The current conceptual-phase methodology for Space Campaign Architecting (SCA) does not adequately account for the development and acquisition (D&A) of new systems and their dependent technologies. The D&A of new systems encompasses the time and resources for technology research and development (R&D), as well as for subsystem and system design, development, testing, and evaluation (DDT&E). This timing and resource allocation for D&A must be included when planning the main missions within a space campaign; otherwise, its omission creates programmatic gaps when architecting a campaign baseline, which affects the feasibility and viability of the constituent mission architectures. This dissertation presents an approach to model and assess system D&A for conceptual-phase SCA. The SCA methodology is formalized to establish a System-of-Systems (SoS) taxonomy of campaign elements, and an ontology is then defined to map campaign elements to the D&A subproblem space, representing high-level programmatic decision-making. A Binary Integer Programming (BIP) optimization method is applied to the resulting D&A subproblem definition to formulate a programmatic assessment that spans multiple SoS levels of a campaign. Two objective formulations are presented; each simulates a separate programmatic decision-making scenario with differing campaign-level goals and high-level interests. 
The first formulation is analogous to the programmatic goal of achieving a target end state or capability as quickly as possible: the choice of which systems’ D&A to invest in, and when, is organized so that the total time required to reach the final mission architecture is minimized. The second formulation models decision-making scenarios in which qualitative programmatic goals are quantified, and the choice of system D&A achieves the mission architecture that best fulfills those goals through the assignment of interest points and the maximization of total points. Both formulations are demonstrated on test campaigns of varying sizes and campaign scenario parameters to investigate the resilience of the presented approach in the face of first-order campaign variations. The approach is able to represent campaigns under both the minimize-time and maximize-value scenarios, and both feasible and infeasible optimization results are shown to have utility in application. Experimental results provide further insight into scoping a given campaign’s representation, signaling possible programmatic trade-offs, and the need for proper quantification exercises. The Design Reference Architecture (DRA) 5.0 Nuclear Thermal Propulsion (NTP) mission architecture serves as the final demonstration of the programmatic assessment, representing a larger campaign and capturing the impact of system D&A.
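The shape of the minimize-time selection problem can be caricatured with a toy portfolio search. This is a brute-force stand-in for the actual BIP solve, not the dissertation's formulation: the system names, durations, and capability sets are invented, and development is assumed fully parallel so the makespan is the longest single D&A effort.

```python
from itertools import chain, combinations

# Toy illustration of the minimize-time scenario: pick which systems'
# D&A to fund so that every capability the final mission architecture
# needs is covered, minimizing the makespan (parallel development is
# assumed, so makespan = longest single D&A duration). All names and
# numbers are invented for illustration.

SYSTEMS = {                       # name: (D&A months, capabilities)
    "lander":       (48, {"surface_access"}),
    "heavy_lander": (72, {"surface_access", "cargo"}),
    "tug":          (36, {"cargo"}),
    "habitat":      (60, {"crew_support"}),
}
REQUIRED = {"surface_access", "cargo", "crew_support"}

def best_portfolio(systems, required):
    """Exhaustively search subsets; return (makespan, chosen names)."""
    names = list(systems)
    best = None
    for subset in chain.from_iterable(
            combinations(names, r) for r in range(1, len(names) + 1)):
        covered = set().union(*(systems[s][1] for s in subset))
        if required <= covered:                    # feasibility check
            makespan = max(systems[s][0] for s in subset)
            if best is None or makespan < best[0]:
                best = (makespan, subset)
    return best

print(best_portfolio(SYSTEMS, REQUIRED))
```

A real BIP model replaces the exhaustive search with binary decision variables and linear constraints, which is what makes campaign-scale problems tractable; the sequencing and precedence structure of an actual campaign would add many more constraints than this coverage check.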
  • Item
    Robust autonomous navigation framework for exploration in GPS-absent and challenging environment
    (Georgia Institute of Technology, 2024-04-25) Chen, Mengzhen
    The benefits of autonomous systems have attracted industry attention over the past decade. Autonomous systems have been applied in fields such as transportation, agriculture, and healthcare. Tasks that humans cannot complete alone, or can complete only at significant risk, can now be handled efficiently by autonomous systems, and labor costs have been greatly reduced. Among the many tasks an autonomous system can perform, its ability to understand the surrounding environment is of particular importance: whether an Unmanned Aircraft System (UAS) is delivering packages or a vehicle is driving itself, the autonomous system must remain robust across different operating scenarios. This work improves the robustness of autonomous systems in challenging, GPS-absent environments. When exploring an unknown environment without external information such as a GPS signal, mapping and localization are equally important and complementary; a system must therefore build a map and localize itself simultaneously. Under such conditions, Simultaneous Localization and Mapping (SLAM) was created in the robotics community to give an autonomous system the capability of building a map of its surroundings while localizing itself during operation. SLAM architectures have been designed for many kinds of sensors and scenarios over the past several decades. Among the different SLAM categories, visual SLAM, which uses cameras as its sensors, stands out: it can extract rich information from images that other sensors alone cannot provide. Since the images captured by the camera are the inputs, the accuracy of the results depends heavily on image quality. Most SLAM architectures easily handle high-quality images or video streams, while poor-quality ones remain challenging. 
The first challenging scenario for visual SLAM is motion blur, which severely degrades its performance. The second is the low-light environment: poor illumination conveys less information to the camera and likewise degrades the accuracy of the visual SLAM system. Furthermore, visual SLAM adds a requirement for computational efficiency, since operation must be real-time. Based on these observations, the research objective of this dissertation is to improve visual SLAM performance under these two challenging conditions. Three research areas are defined to achieve this overarching objective. The first focuses on developing the capability to recover, in real time, the poor-quality images captured under these challenging scenarios; two highly efficient deep learning models, a single-image deblurring model and a low-light image enhancement model, have been developed and evaluated in this dissertation. The second research area focuses on uncertainty quantification for the results generated by visual SLAM systems. Because some visual SLAM systems behave nondeterministically, a statistical approach has been developed to reduce and factor out the uncertainties in the results and to provide a quantitative method for performance evaluation. The third research area focuses on creating a visual SLAM validation dataset for testing performance under motion blur, since most existing datasets either lack sufficient blurriness or are limited to indoor environments. In this dissertation, a synthetic blurry SLAM dataset has been created with the help of a physics-based virtual simulation environment. 
Combining the three research areas, a visual SLAM framework is proposed and tested on several visual SLAM datasets captured under the two challenging scenarios. Based on the experimental results, accuracy improvements over the benchmark visual SLAM system were observed, via the statistical approach, for all use cases. Therefore, the proposed framework, in which the image enhancement modules have been added, does improve visual SLAM performance under challenging conditions. Two key contributions are made through this work: the first is a visual SLAM framework designed to tackle real-world challenging conditions such as motion blur and low-light environments; the second is a novel pipeline that uses a physics-based simulation environment to generate a realistic synthetic blurry visual SLAM dataset.
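The framework's routing idea, scoring incoming frames for quality and sending poor ones through an enhancement step before the tracker sees them, can be sketched in miniature. This is only an assumed front-end shape, not the dissertation's implementation: the thresholds are invented, the "enhancement" choices are string placeholders for the learned deblurring and low-light models, and the blur score is the common variance-of-Laplacian heuristic.

```python
# Minimal sketch of quality-based frame routing in a visual SLAM
# front end. A frame is an 8-bit grayscale image as a list of rows.
# Thresholds and routing labels are illustrative placeholders.

def laplacian_variance(gray):
    """Blur score: variance of a 4-neighbour Laplacian (higher = sharper)."""
    h, w = len(gray), len(gray[0])
    vals = [4 * gray[y][x] - gray[y - 1][x] - gray[y + 1][x]
            - gray[y][x - 1] - gray[y][x + 1]
            for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def route_frame(gray, blur_thresh=10.0, dark_thresh=40.0):
    """Decide which enhancement (if any) a frame needs before tracking."""
    brightness = sum(map(sum, gray)) / (len(gray) * len(gray[0]))
    if brightness < dark_thresh:
        return "low_light_enhance"   # stands in for the learned model
    if laplacian_variance(gray) < blur_thresh:
        return "deblur"              # stands in for the learned model
    return "track_directly"

dark = [[10] * 8 for _ in range(8)]                       # underexposed
sharp = [[(x % 2) * 200 for x in range(8)] for _ in range(8)]  # high contrast
print(route_frame(dark), route_frame(sharp))
```

Real-time operation is why the dissertation stresses model efficiency: every enhanced frame adds latency to the tracking loop.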
  • Item
    A Methodology for Design Space Exploration of Novel Supersonic Aircraft Using High-Fidelity Aerodynamic Analysis
    (Georgia Institute of Technology, 2024-04-15) Sampaio Felix, Barbara
    The growing market of ultra-high-net-worth individuals has prompted research institutions and commercial organizations to design a novel Supersonic Transport (SST). The design of these vehicles follows the traditional aircraft design process, in which Design Space Exploration (DSE) is used to identify geometries expected to achieve the project requirements. This procedure assesses vehicle performance metrics using a multi-disciplinary approach, for which a database of vehicle aerodynamic characteristics as a function of operating conditions and geometry design parameters is needed. Because thousands of vehicle geometries must be evaluated in a timely manner, lower-fidelity models are commonly used to generate the necessary data in conceptual design. These models are calibrated against historical databases to improve their accuracy; however, the database of commercial supersonic vehicles is sparse, so decisions made using these empirical models pose a risk to the system design. For this reason, designers aim to use higher-fidelity numerical simulations in aircraft conceptual design. Nonetheless, the computational resources these models demand hinder their use on a large scale. This is a technical challenge for the generation of high-fidelity drag polars used in mission analysis, as these datasets contain thousands of entries. Consequently, the literature has not yet produced a methodology to obtain a parametric representation of the aerodynamic tables at a level of fidelity and computational cost suitable for SST DSE in early design. Hence, the goal of this dissertation is to enable the use of expensive aerodynamic numerical models in conceptual design applications. The combined use of data-fusion and Reduced Order Model (ROM) techniques to approximate the aerodynamic drag polar was studied to accomplish this research objective. 
The first research area investigated how to decrease the computational resources needed to generate the drag polar for a fixed vehicle geometry. A study examined the use of multiple data sources to approximate the drag coefficient as a function of flight conditions typical of SST flight, comparing the computational requirements and model accuracy obtained when different datasets are used to build this approximation. It was shown that data-fusion methods built with numerical simulations that capture the non-linear flow physics of SST flight outperform a single-fidelity approximation of the drag coefficient trained with the high-fidelity simulation, while also decreasing the computational resources needed. The goal of the second research effort was to develop a method to emulate drag polars as a function of geometry design parameters based on high-fidelity simulations. These models depend on a detailed description of the vehicle to estimate its aerodynamic characteristics, which, due to the curse of dimensionality, increases the size of the dataset required to train an approximation as a function of geometry design variables. For this reason, work was performed to identify the variables that capture most of the variation in the vehicle performance metrics when a parametric drag polar is used in mission analysis, and the use of ROMs to emulate these datasets as a function of the selected parameters was assessed. The results suggest that this technique can approximate these tables with accuracy and computational cost appropriate for conceptual design applications. The findings of both research areas led to the ROM-based Multi-Fidelity Drag Polar Approximation method developed in this dissertation. This method uses data-fusion techniques to approximate the drag coefficient of a fixed aircraft geometry. 
This surrogate model is used to generate the drag polar for the same vehicle. Using this procedure, a database of the aerodynamic tables can be obtained and used to train a parametric ROM representation of SST drag polars. The use of the developed methodology to predict SST aerodynamic characteristics in conceptual design studies was compared to the state-of-the-art approach for SST design with high-fidelity aerodynamic models. The conventional method had to bypass the mission analysis evaluation for thousands of vehicle geometries because of the large computational resources needed to generate the required aerodynamic datasets; as a result, it produced an aerodynamically optimum vehicle. The ROM-based methodology developed in this thesis leveraged advanced design methods to emulate SST drag polars appropriate for mission analysis, enabling DSE of the SST with respect to vehicle-level metrics and the selection of a performance-optimum SST. A comparison between the two approaches on a common SST design study showed that the performance metrics of the vehicle obtained with the ROM-based method surpass those of the L/D-optimum aircraft. The demonstration of the ROM-based methodology for approximating drag polars suitable for SST conceptual design showcased several advantages of the method, including the ability to explore the SST geometry space, to select multi-disciplinary optimum SSTs, to visualize performance trends with varying vehicle geometry, and to execute early design studies using high-fidelity aerodynamic models. Furthermore, adding the ROM-based approximation to a design framework extends the high-fidelity drag polar generation capability to other members of the aerospace community, unlocking the potential to perform pioneering multi-disciplinary exploratory studies using high-fidelity aerodynamic simulations. 
The ability to use models that capture SST flight physics in exploratory conceptual design ultimately increases designers’ confidence in the results obtained and allows them to make informed decisions, decreasing the risk of SST design projects.
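The additive-correction idea at the core of data fusion can be shown with a toy drag polar: run the cheap model everywhere, run the expensive one at a few points, fit the discrepancy, and predict with low-fidelity plus discrepancy. This is a hedged sketch under invented models; both "fidelity levels" here are analytic stand-ins, not CFD, and the linear discrepancy fit is far simpler than the methods studied in the dissertation.

```python
# Minimal sketch of additive multi-fidelity data fusion for a drag
# polar: fused(cl) = lofi(cl) + fitted discrepancy. Both models and
# all coefficients are invented for illustration.

def lofi_cd(cl):
    """Cheap drag model: parabolic polar."""
    return 0.02 + 0.07 * cl ** 2

def hifi_cd(cl):
    """'Expensive' truth model with an extra non-linear term."""
    return 0.022 + 0.075 * cl ** 2 + 0.004 * cl ** 3

# Fit a linear discrepancy delta(cl) ~ a + b*cl from three hi-fi
# samples via least squares (normal equations, pure stdlib).
xs = [0.2, 0.5, 0.8]
ys = [hifi_cd(c) - lofi_cd(c) for c in xs]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

def fused_cd(cl):
    """Low-fidelity prediction corrected by the fitted discrepancy."""
    return lofi_cd(cl) + a + b * cl

# At an unsampled lift coefficient, the fused model tracks hi-fi
# far better than the cheap model alone.
cl = 0.65
err_fused = abs(fused_cd(cl) - hifi_cd(cl))
err_lofi = abs(lofi_cd(cl) - hifi_cd(cl))
print(err_fused < err_lofi)
```

The savings come from needing only a handful of expensive evaluations to anchor the correction, which is what makes populating thousands-of-entry drag-polar tables affordable.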