Organizational Unit:
Daniel Guggenheim School of Aerospace Engineering


Publication Search Results

Now showing 1 - 10 of 2620
  • Item
    Uncertainty-Based Methodology for the Development of Space Domain Awareness Architectures in Three-Body Regimes
    (Georgia Institute of Technology, 2024-04-29) Gilmartin, Matthew Lane
    The past decade has seen massive growth in interest in lunar space exploration. An increase in global competition has led a growing number of countries and non-governmental organizations towards lunar space exploration as a means to demonstrate their industrial and technological capabilities. This increase in cislunar space activity, and the resulting increase in congestion and conjunction events, poses a significant safety risk to spacecraft on or around the Moon. This risk was demonstrated on October 18, 2021, when India's Chandrayaan 2 orbiter was forced to maneuver to avoid a collision with NASA's Lunar Reconnaissance Orbiter. In order to mitigate the safety impacts of increased congestion, enhanced space traffic management capabilities are needed in the cislunar regime. One foundational component of space traffic management is space domain awareness (SDA). Current SDA infrastructure, a network of earth-based and space-based sensors, was designed to track objects in near-earth orbits and is not suitable for tracking objects in distant, non-Keplerian cislunar orbits. As a result, new infrastructure is needed to fill this capability gap. The cislunar regime presents a number of challenges and constraints that complicate the SDA architecture design space. Unlike the near-earth regime, cislunar space is a three-body environment, violating many of the simplifying assumptions and models that are used in the near-earth domain. Furthermore, instability in cislunar dynamics means that state uncertainty plays a much more dominant role in system performance. This research identified three technology gaps, exposed by the transition to the cislunar regime, that impede the ability of designers to explore the design space and perform many-query analyses, such as design optimization. A new uncertainty-based methodology was then proposed to both address these gaps and enhance design space exploration. The first technology gap identified was that cislunar three-body dynamics violate the analytic two-body models of spacecraft motion, meaning that cislunar trajectories must be numerically integrated at much greater computational cost. A method was proposed that combines surrogate modeling techniques with an orbit family approach to develop an analytic parametric model of spacecraft motion. An experiment was carried out to evaluate the efficacy of this approach. Multiple surrogate models were generated using the approach, and each was compared to the state-of-the-art numerical integration approach. The surrogate modeling approach was found to greatly reduce the computational cost required to determine the initial state of an arbitrary periodic cislunar trajectory, while maintaining comparable accuracy to existing full-order methods. Of the surrogate model formulations tested, the interpolation methods were found to have the best combination of accuracy and speed for the proposed application. The second technology gap identified was a reliance on Gaussian distributions in most tracking filter implementations. In non-linear domains such as the cislunar regime, initially Gaussian distributions may deviate from a Gaussian shape when propagated through the system's non-linear dynamics. This creates convergence issues that limit the robustness of tracking schemes that rely on Gaussian characterizations of uncertainty. This in turn creates a need to characterize the realism of Gaussian approximations of potentially non-Gaussian uncertainty distributions.
The characterization of uncertainty realism was identified to be a computationally intensive process, limiting the breadth of potential design space exploration. To ameliorate this issue, a surrogate modeling process was proposed for the development of models to characterize the realism of uncertainty estimates produced by tracking filters. An experiment was executed to evaluate the efficacy of this approach. The surrogate modeling process was found to greatly reduce the computational cost of the full-order analysis. While the surrogate models were found to have non-negligible errors, these errors were on the same order of magnitude as the variability of the full-order model. Of the models tested, the model based on boosted decision trees was found to have the best balance of speed and accuracy. This massive increase in computational efficiency enables designers to evaluate much larger volumes of design cases using the same hardware. The third identified technology gap was the exponential increase in the computational cost required to evaluate tracking uncertainty using full-order cislunar SDA simulations as the number and diversity of sensor systems in an SDA architecture increase. As a result of this ballooning computational cost, detailed uncertainty quantification can rapidly become intractable in a many-query analysis context, limiting the scope of design space exploration and uncertainty quantification. A surrogate modeling method was proposed to provide a volumetric assessment of tracking performance at reduced computational cost compared to existing methods. As part of this proposed approach, changes in tracking uncertainty were evaluated with respect to the search volume. Changes in uncertainty were evaluated using a novel equivalent radius metric to estimate the rate of information gain for individual sensor systems, which is then aggregated for the overall architecture. As part of this approach, field surrogates and reduced order models were investigated as potential techniques to improve the computational cost and quality of the generated surrogate models. An experiment was performed to investigate the efficacy of the proposed method in comparison to the existing methods. The generated surrogate models were found to significantly reduce the computational cost of the tracking analysis. Furthermore, this experiment found scalar surrogate models to provide the most accurate modeling of the full-order models. The field surrogates generally under-performed their scalar counterparts in terms of goodness-of-fit. Of the models tested, the scalar boosted decision tree model was found to have the best balance of speed and accuracy. In practice, this model was able to reduce the computational cost of evaluating SDA architecture tracking performance by several orders of magnitude, enabling designers to increase the breadth of design space exploration by similar orders of magnitude. Finally, each of the developed modeling approaches was integrated into a unified methodology, named VENATOR, to evaluate SDA architectures. A demonstration experiment was proposed, wherein the VENATOR uncertainty-based methodology was compared to a state-of-the-art methodology using equivalent full-order analyses. The experiment was broken into two phases. In the first phase, both frameworks were used to evaluate the same architecture. Next, in the second phase, the VENATOR uncertainty-based methodology was used to evaluate a simple optimization problem.
The first phase of this analysis found the VENATOR uncertainty-based methodology to offer an improvement in computational cost of over three orders of magnitude. During the second phase, a simple optimization was run using the VENATOR uncertainty-based methodology, evaluating over 82,000 cases in a total of 1.6 days. A short design space exploration was carried out, identifying the Pareto front of non-dominated cases, to demonstrate the utility of this approach. Based on the run time of the state-of-the-art system when evaluating a single architecture, it was estimated that the reference methodology would have taken over 14 years to evaluate the same number of cases on the same hardware. This massive increase in computational efficiency allows designers to greatly increase the breadth of design space exploration, enabling them to examine far larger case loads, reducing design risk and increasing design knowledge. For this reason, the uncertainty-based methodology was deemed a significant improvement over the state-of-the-art methodologies.
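The interpolation-based surrogate approach described above can be illustrated with a minimal sketch: a tabulated family of periodic cislunar orbits is interpolated so that the initial state of an arbitrary family member can be evaluated analytically instead of being re-converged numerically. The family parameter, table contents, and spline choice below are illustrative assumptions, not the thesis's actual models.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical tabulated orbit family: one row of (x, y, z, vx, vy, vz) initial
# conditions per family member, indexed by a single family parameter such as the
# Jacobi constant (nondimensional CR3BP units). Random data stands in for a real
# table produced by differential correction.
family_param = np.linspace(3.05, 3.15, 21)
family_states = np.random.rand(21, 6)

# One cubic spline per state component forms the analytic parametric model.
surrogate = [CubicSpline(family_param, family_states[:, i]) for i in range(6)]

def initial_state(c):
    """Interpolated initial state for an arbitrary member of the orbit family."""
    return np.array([float(s(c)) for s in surrogate])

x0 = initial_state(3.0721)  # evaluated near-instantly vs. re-converging numerically
```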
  • Item
    An Investigation of the Susceptibility and Practical Mitigation of Pitch-Roll Resonance in Fin-Stabilized Liquid Sounding Rockets
    (Georgia Institute of Technology, 2024-04-29) Nagarajan, Rithvik
    Sounding rockets are suborbital vehicles designed to carry scientific payloads and perform experiments in the upper atmosphere. Recently, there has been a focus on reusable liquid sounding rockets to allow faster launch rates and lower costs per mission. Georgia Tech’s Yellow Jacket Space Program aims to contribute to this field by developing a series of liquid rockets with the goal of launching a sub-orbital payload to the Karman line. One of these rockets, Darcy II, experienced a catastrophic anomaly mid-flight. Like other fin-stabilized sounding rockets, Darcy II was designed with a high length-to-diameter ratio for drag optimization. This made the craft susceptible to roll-yaw resonance, where the vehicle spins close to the pitch natural frequency. Previous literature has shown roll-resonant vehicles can exhibit abnormal rolling and yawing motion beyond predictions by linear theory. Referred to as roll lock-in and catastrophic yaw, respectively, these effects can cause an excessive angle of attack and induce high structural loads. This thesis investigates the susceptibility of liquid sounding rockets to roll resonance, using the Darcy-Series rockets as case studies. Drawing from previous literature on roll resonance dynamics, additions are made to a 6DOF numerical simulation – integrating fluid models, configurational asymmetries, and non-linear aerodynamics with Monte Carlo variables. A sensitivity analysis on model components highlights characteristics of liquid rockets that influence roll resonance. This research examines the contribution of roll resonance to the Darcy II anomaly and through this, validates the numerical simulation. Subsequently, a Monte Carlo simulation is established as a practical method to assess the susceptibility of future liquid sounding rocket designs to the roll resonance phenomenon. This method is applied to the Darcy Space design, revealing a high susceptibility to roll resonance. Mitigation strategies are presented by analyzing the effect of fin design and configurational asymmetries on simulation outputs. Additionally, a simple roll control scheme is designed that takes advantage of existing liquid rocket infrastructure. Four attitude control thrusters are fired once in pairs, implementing a bang-bang roll control scheme designed to prevent roll lock-in using minimal amounts of propellant. This research evaluates the effectiveness of this control system in mitigating roll resonance issues.
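The bang-bang roll control concept mentioned above can be sketched generically: when the roll rate approaches the pitch natural frequency, an opposing thruster-pair torque is commanded to keep the vehicle out of the resonance band. The thresholds, torque level, inertia, and disturbance below are illustrative placeholders, not Darcy-series values, and the sketch is a continuous-logic stand-in for the single-firing scheme described in the thesis.

```python
import numpy as np

def bang_bang_roll_torque(p, pitch_nat_freq, margin=0.8, torque=5.0):
    """Commanded roll torque (N*m) for roll rate p (rad/s): fire the opposing
    thruster pair whenever |p| climbs toward the pitch natural frequency,
    otherwise command zero torque."""
    if abs(p) < margin * pitch_nat_freq:
        return 0.0                    # roll rate safely below the resonance band
    return -np.sign(p) * torque       # opposing thruster pair drives |p| back down

# Toy roll-axis propagation: I_x * p_dot = L_aero + L_control
I_x, dt, p = 1.2, 0.01, 0.0
history = []
for k in range(2000):
    L_aero = 0.8 * np.sin(0.05 * k * dt)   # placeholder aerodynamic roll disturbance
    p += dt * (L_aero + bang_bang_roll_torque(p, pitch_nat_freq=6.0)) / I_x
    history.append(p)
```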
  • Item
    Development of an autonomous surveying vehicle for underground lunar environments
    (Georgia Institute of Technology, 2024-04-29) Jagdish, Nikita
    With impending plans for establishing the first long-term lunar base camp, there is a need to find sustainable habitation sites on the Moon. Discovered in 2009, underground lunar lava tubes have shown potential as future habitation sites and have been proposed for devoted exploratory missions. These underground environments could provide protection from the drastic changes in temperature, radiation, and other extreme conditions on the Moon. However, they have only been observed by lunar orbiters and little is known about their internal structure or suitability for habitable structures. Various on-ground robotic systems have been proposed to do this initial survey, but ground vehicles have a high risk of being immobilized in the event of rough terrain. This project aimed to begin the development of an Autonomous Surveying Vehicle (ASV) as a candidate to explore these lava tubes. The ASV will feature a self-contained, refillable propulsion system that provides full mobility, allowing the vehicle to explore the lava tubes with high agility and multiple short-span surveying missions. The propulsion system will utilize an inert cold gas as its propellant to preserve the natural environment and avoid contamination of any potential resources in the lava tubes. The vehicle will also be equipped with on-board sensors, such as inertial sensors and LiDAR, and an autonomous navigation system to simultaneously map and traverse the tubes. The ASV will be compact and inexpensive compared to other proposed systems, putting forth a simpler option for an initial survey of the tubes to determine whether a more extensive exploratory mission is warranted. The vehicle will also be applicable for other surveying missions, such as above-ground environments that are inaccessible or hazardous for rovers and humans. This thesis outlines the mission goals and requirements and begins the development of a prototype cold gas propulsion system for the ASV.
  • Item
    Autonomous and Robust Monocular Simultaneous Localization and Mapping-Based Navigation for Robotic Operations in Space
    (Georgia Institute of Technology, 2024-04-27) Dor, Mehregan
    The theoretical background, the synthesis, and the implementation details of estimation frameworks for target-relative spacecraft rendezvous and proximity operations (RPO) and small body probing and surveying (SBPS), predicated on modern simultaneous localization and mapping (SLAM), are considered. The challenges arising in the application of pure visual monocular SLAM to spacecraft relative navigation were identified by testing an off-the-shelf algorithm, ORB-SLAM, on real satellite servicing image sequences. It is additionally determined that the inclusion of inertial measurement unit-based (IMU) factors, predominantly used in visual-inertial simultaneous localization and mapping (viSLAM), may not provide observability of the ambiguous scale or of the inertial motion over extended arcs, and moreover would not facilitate the smoothing problem. A comprehensive SLAM framework, predicated on monocular image feature point tracking and sensor fusion for on-the-fly navigation and map building, is proposed. The work is contrasted with state-of-the-art methods, which instead exploit stereo imaging. A factor graph approach, allowing for the incorporation of asynchronous measurements of diverse modalities, and the inclusion of kinematic and dynamic constraints, is selected. A new relative dynamics factor predicated on the chaser-target relative orbital mechanics is devised and then augmented with the existing relative kinematics factor of Setterfield et al. to account for non-inertial motion of the target center of mass. AstroSLAM, an algorithm solving for the navigation solution of a spacecraft under motion in the vicinity of a small body by exploiting monocular SLAM, sensor fusion, and RelDyn motion factors, is proposed. The developed motion factor encodes a hybrid inertial rate gyro sensor model and vehicle dynamics model, based on the spacecraft-small-body-Sun system, incorporating realistic perturbing effects, which affect the motion of the spacecraft in a non-negligible manner. The RelDyn factor is readily specialized to the spacecraft rendezvous problem by removing the target gravitational pull variable. The data shows that RelDyn outperforms the state-of-the-art preintegrated IMU accelerometer factors, commonly used in visual-inertial SLAM solutions, in one instance of a legacy NASA small body surveying mission and in one instance of an in-lab-generated dataset. On-the-fly target dynamical parameter estimation, such as the center of mass location, the spin vector, and the gravity parameter, is also demonstrated. An existing robotics procedure, dubbed structure from small-motion (SfSM), is leveraged to tackle the challenge of map initialization with small camera baseline and weak-perspective projection.
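A factor-graph back-end of the kind described above can be sketched with GTSAM (assuming its Python bindings); only generic prior and between-pose factors are shown here as stand-ins, whereas AstroSLAM would add its custom RelDyn dynamics factor and monocular landmark factors to the same graph. Keys, noise values, and poses are placeholder assumptions, not values from the thesis.

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
pose_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 3 + [0.1] * 3))

# Anchor the first chaser pose and chain successive poses with odometry-like
# constraints (stand-ins for the kinematics/dynamics motion factors).
graph.add(gtsam.PriorFactorPose3(gtsam.symbol('x', 0), gtsam.Pose3(), pose_noise))
for k in range(3):
    delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.1, 0.0, 0.0))
    graph.add(gtsam.BetweenFactorPose3(gtsam.symbol('x', k),
                                       gtsam.symbol('x', k + 1),
                                       delta, pose_noise))

# Initial guesses for the smoother, followed by batch optimization.
initial = gtsam.Values()
for k in range(4):
    initial.insert(gtsam.symbol('x', k), gtsam.Pose3())
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
```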
  • Item
    Permuted proper orthogonal decomposition for analysis of advecting structures
    (Georgia Institute of Technology, 2024-04-27) Ek, Hanna Maria
    This work is motivated by the large and ever-increasing amounts of data from studies in experimental and computational fluid dynamics, and the desire to extract and analyze coherent structures from such datasets. Specifically, this thesis is concerned with vortex patterns in turbulent shear flows, which appear as advecting structures in planar measurements or slices through three-dimensional computational domains. Space-only proper orthogonal decomposition (POD) is one of the most widely used techniques for the analysis of coherent structures and decomposes mean-subtracted data into the space-time separated form q'(x,t) = Σ_j a_j(t) ϕ_j(x). This method is optimal in the spatial inner product and targets high energy spatial structures, but it is sensitive to input data alignment and cannot effectively handle translations. This work applies a re-orientation of the space-time coordinates in the POD framework, and the modified POD method, referred to as permuted POD (PPOD), is the focus of this thesis. PPOD decomposes data as q'(x,t) = Σ_j a_j(n) ϕ_j(s,t), where x = (s, n) is a general spatial coordinate system, s is the coordinate along the bulk advection direction in curvilinear space, and n = (n_1, n_2) are the mutually orthogonal directions normal to s. PPOD is optimal in the s,t inner product and, thus, targets advecting structures via their s,t correlations. Specifically, the PPOD modes, ϕ_j(s,t), portray advection as diagonal features in s,t space, where the slope of the features corresponds to the phase speed. Hence, these speeds are a natural output of the decomposition and can vary in an arbitrary and dispersive manner along the s coordinate. Generally, the PPOD modes have arbitrary s,t dependences, and a single mode can describe a broadband or multi-frequency disturbance, as well as time-varying characteristics, such as transient and intermittent dynamics. Additionally, one- and two-dimensional Fourier transforms of the PPOD modes provide useful alternative ways to portray the modal characteristics. For example, the wavenumber-frequency spectrum provides a compact visualization of disturbance advection velocity or dispersion. The PPOD properties are considered through the analysis of data from three high Reynolds number advection-dominated flows: an acoustically forced reacting wake, a swirling annular jet, and a jet in cross flow (JICF), and the results are compared with those from space-only POD. In the wake and swirling jet cases, the leading PPOD and space-only POD modes focus on similar features: advecting shear layer structures. However, low-rank approximations of the wake flow, which is characterized by a broad range of spectral and wavenumber content, show clear differences in the methods' ability to capture the spatial and temporal information. For equal low-rank approximations, space-only POD provides higher-fidelity spatial reconstructions, while PPOD provides higher-order frequency content. In contrast, the leading PPOD and space-only POD modes for the JICF datasets capture different types of flow structures: advecting shear layer vortices (SLVs) and bulk jet flapping, respectively, while the SLVs are spread over lower energy modes in the case of space-only POD. This shows that the s,t inner product allows the PPOD method to directly target the SLVs, despite them containing a smaller fraction of the energy compared to the jet flapping.
Additionally, the leading PPOD mode captures key characteristics of the SLV dynamics for each of the JICF cases, including those typical of convectively and globally unstable JICF, as well as intermittent characteristics and minor time-dependent differences or shifts in the dynamics. On the other hand, higher-order space-only POD approximations are required for comparable descriptions of these dynamics, and the rank depends on the operating conditions and stability characteristics of the JICF.
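The difference between space-only POD and PPOD can be illustrated with a minimal numpy sketch: both reduce to an SVD of mean-subtracted data, and the permutation only changes which axes play the roles of modes and coefficients. The snapshot array and coordinate layout below are illustrative, not the thesis datasets.

```python
import numpy as np

# Hypothetical planar dataset q(s, n, t): streamwise s, normal n, time t.
ns, nn, nt = 64, 32, 200
q = np.random.rand(ns, nn, nt)
qp = q - q.mean(axis=-1, keepdims=True)        # mean-subtracted fluctuations

# Space-only POD: modes phi_j(s, n), coefficients a_j(t).
# Reshape to (space, time) and take the SVD.
U, S, Vt = np.linalg.svd(qp.reshape(ns * nn, nt), full_matrices=False)
phi_space = U.reshape(ns, nn, -1)              # spatial modes phi_j(s, n)
a_time = S[:, None] * Vt                       # temporal coefficients a_j(t)

# Permuted POD: modes phi_j(s, t), coefficients a_j(n).
# Permute so the (s, t) pair plays the role of the "spatial" dimension.
qp_perm = np.transpose(qp, (0, 2, 1)).reshape(ns * nt, nn)
U2, S2, V2t = np.linalg.svd(qp_perm, full_matrices=False)
phi_st = U2.reshape(ns, nt, -1)                # advection-revealing modes phi_j(s, t)
a_normal = S2[:, None] * V2t                   # coefficients a_j(n)
```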
  • Item
    SPAAD: A Systems Design Methodology for Product and Analysis Architecture Decomposition
    (Georgia Institute of Technology, 2024-04-27) Omoarebun, Ehiremen Nathaniel Ogbon
    Increasing complexity in engineering design has resulted from the continuous advancement of technology over the past few decades. Over the years, engineers have explored various ways to manage complexity during the different phases of design and have recently shifted from document-based approaches to model-based approaches in the form of Model-Based Systems Engineering (MBSE). However, MBSE comes with its own set of challenges. Despite the introduction of MBSE, many systems engineering practices are still based on heuristics, and engineers rely on prior experience or trial-and-error approaches to implement systems engineering methods. Although existing methodologies outline important aspects of the system design process, they do not define or provide guidance on how these aspects should be achieved. Recently, INCOSE, the systems engineering professional society, has sought to establish formal and theoretical methods in systems engineering that are grounded in science and mathematics. Using formal and theoretical methods, a system can be represented, and the relationships between its elements can be better understood. Also, in recent years, Integrated Product and Process Development (IPPD) has emerged as a systematic approach to manage the development of complex systems from early integration through a system's life cycle and could be considered the overall construct for system design problems. A fundamental aspect of the IPPD process is the decomposition of the system. With the emergence of MBSE, Requirements, Functional, Logical, and Physical (RFLP) is an important framework used in system decomposition. However, similar to many MBSE approaches, the RFLP framework operates at a high level and does not provide guidance on decomposing stakeholder requirements into the system's functional, logical, and physical architecture. This led to the motivating question for this dissertation, with the aim to explore ways to improve and effectively translate the decomposition process within the RFLP framework into a system design that satisfies the stakeholder requirements. A research objective was identified with the aim to develop and implement a method that facilitates a rigorous system decomposition process in a more formal and structured manner, using a set of theoretical foundations based on mathematical principles to effectively characterize a system. From this research objective, an overarching research question for this dissertation was formulated with the aim to establish structure between the product and analysis architectures during system decomposition to allow for the design of better systems, especially during the conceptual stages of design. To improve the decomposition process and create structure within the RFLP framework, Axiomatic Design Theory (ADT) was identified as the most suitable method to aid in the structured decomposition of a system, while placing emphasis on minimizing coupling and improving the system's robustness. An in-depth examination of ADT and its potential integration with the RFLP framework revealed several limitations, which this dissertation addresses across the various research areas. The first research area focuses on improving the requirements process in ADT and RFLP.
A requirements analysis process is developed that categorizes stakeholder requirements into functional and non-functional requirements, provides a framework for establishing the relationships between the different types of requirements, and allows high-level requirements to be broken down into concrete and clear requirements within the product and analysis architectures. The second research area focuses on integrating concepts from Axiomatic Design Theory (ADT) into the RFLP framework. The Independence axiom from axiomatic design, together with its zigzagging attribute, is used to decompose the functional and logical layers of the RFLP framework and help create structure during design. The third research area focuses on the identification of suitable analysis methods during system decomposition within the analysis architecture. During conceptual design, the selection of a suitable analysis method may be challenging, especially when model data is limited. The ability to properly identify a suitable analysis method facilitates informed decision-making during system design. From a combination of the three research areas, a ten-step methodology, SPAAD, is proposed that outlines the steps to perform a systems decomposition from the stakeholder requirements to the development of the functional, logical, and physical architectures for both the product and analysis architectures or domains. A test case problem involving the design of a suite of systems to aid in the fight against wildfires in remote locations substantiated the developed methodology.
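The Independence-axiom check from Axiomatic Design Theory referenced above can be illustrated with a small sketch that classifies a functional-requirement-to-design-parameter matrix as uncoupled, decoupled, or coupled. The example matrix and classification routine are illustrative only, not a SPAAD artifact.

```python
import numpy as np

def classify_design_matrix(A, tol=0.0):
    """Classify coupling of a square FR-to-DP design matrix A (nonzero = dependency)."""
    B = np.abs(np.asarray(A, dtype=float)) > tol
    off_diag = B & ~np.eye(B.shape[0], dtype=bool)
    if not off_diag.any():
        return "uncoupled"            # each FR is satisfied by exactly one DP
    # Triangular (in the given ordering): FRs can be satisfied in sequence.
    if not np.triu(off_diag, k=1).any() or not np.tril(off_diag, k=-1).any():
        return "decoupled"
    return "coupled"                  # zigzagging/decomposition should revisit the DPs

A = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 1]]
print(classify_design_matrix(A))      # -> "decoupled"
```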
  • Item
    A Data-driven Methodology for Aircraft Trajectory Analysis to Improve Mid-air Conflict Detection in Terminal Airspace
    (Georgia Institute of Technology, 2024-04-27) Zhang, Wenxin
    A mid-air collision occurs when two aircraft come into contact while airborne. It stands as one of the most devastating accidents in aviation history and remains a pressing safety issue in current flight operations. Presently, Air Traffic Control (ATC) serves as the primary means to ensure safe separation between aircraft and prevent mid-air collisions. This heavily relies on human operators, Air Traffic Controllers (ATCOs), who manage critical tasks under significant workload. The anticipated expansion of aviation, both in terms of traffic volume and diversity, particularly in terminal airspace, presents substantial challenges to the existing ATC system. The workload on ATCOs may exceed their capacity, potentially compromising safety. To address forthcoming aviation demands and maintain high safety standards, ATC is gradually integrating automated systems to assist ATCOs in transitioning from manual to supervisory roles. This dissertation is driven by the need for advanced analytics and automated decision support concerning air traffic within terminal airspace. By leveraging Global Navigation Satellite System (GNSS) technologies, specifically Automatic Dependent Surveillance–Broadcast (ADS-B), ATC is able to access real-time and extensive historical operational data. Hence, this research presents a novel data-driven methodology to conduct thorough aircraft trajectory analysis, aiming to improve mid-air conflict detection within terminal airspace. The outlined methodology comprises three key steps: (1) traffic flow identification and recognition, (2) trajectory prediction, and (3) conflict detection. The traffic flow identification and recognition step entails two key requirements: (1) an effective method to identify air traffic flows in terminal airspace, and (2) a fast and accurate method to recognize the air traffic flow of individual flights. Achieving the first requirement demands a clustering approach capable of filtering out non-nominal trajectories commonly encountered in daily operations. While Density-Based Spatial Clustering of Applications with Noise (DBSCAN) may be applied, it can struggle with density variations in traffic flows observed in historical trajectories. Thus, Ordering Points to Identify the Clustering Structure (OPTICS) is proposed as an alternative clustering algorithm. Additionally, Weighted Euclidean Distance is suggested as a distance metric to account for the significance of different trajectory points. An experiment is designed to implement the OPTICS and DBSCAN algorithms using Weighted Euclidean Distance as the distance metric to identify air traffic flows in terminal airspace, and the results demonstrate the superior effectiveness of OPTICS over DBSCAN in identifying these flows. Addressing the second requirement involves employing a method capable of multi-class classification with rapid training and high accuracy. Ensemble models such as Random Forest and Extreme Gradient Boosting (XGBoost) provide a favorable balance between accuracy and efficiency, rendering them viable choices. In contrast, the Long Short-Term Memory (LSTM) model is anticipated to yield even higher accuracy, albeit with a longer training time. An experiment is designed to implement Random Forest, XGBoost, and LSTM models for multi-class classification of aircraft trajectory segments, aiming to recognize air traffic flows of individual flights. Subsequently, their performance in terms of accuracy and training time is compared.
The results of the experiment indicate that Random Forest achieves accuracy levels comparable to LSTM while significantly reducing training times. The trajectory prediction step necessitates a method for aircraft trajectory prediction. Existing methods typically employ an encoder-decoder architecture with LSTM trained on entire trajectory sets, leading to potential challenges: (1) difficulty in effectively learning hidden features due to significant differences in input trajectories, and (2) the sequential nature of LSTM resulting in prolonged training durations. To overcome the first challenge, this study proposes to train multiple predictors on subsets with distinct traffic flows identified earlier, rather than a monolithic predictor on the entire dataset. An experiment is devised to implement the encoder-decoder architecture with LSTM to train a monolithic predictor and multiple predictors, on datasets containing all trajectories and subsets with distinct traffic flows respectively, and then compare the accuracy and training time of the two approaches. The results reveal that employing multiple predictors leads to increased accuracy and decreased training time compared to the single predictor approach. To address the second challenge, Transformer is proposed as an alternative to LSTM, benefiting from attention mechanisms to eliminate sequential operations and enable parallelization. An experiment is designed to train trajectory predictors for distinct traffic flows using the encoder-decoder architecture, first with LSTM and then with Transformer, followed by a comparison of the prediction accuracy and training time between the two approaches. The implementation results indicate a considerable reduction in training time and comparable accuracy achieved by Transformer compared to LSTM, particularly for extended prediction horizons. The conflict detection step requires an automated method to identify conflicts within terminal airspace, with a critical focus on addressing uncertainty. Utilizing historical trajectory data is crucial, especially in the context of aircraft position estimation, which conventionally relies solely on mathematical tools without leveraging real-world data. Kernel Density Estimation (KDE), a statistical technique for deriving Probability Density Functions (PDFs) from sampled data, emerges as a promising tool to enable robust estimation of aircraft positions based on historical trajectories. Furthermore, the intersection of PDFs from different flights serves as a means to identify potential conflicts. Hence, a novel Weighted KDE method is proposed to estimate aircraft positions by integrating outputs from traffic recognition and trajectory prediction, subsequently facilitating conflict detection in terminal airspace through the intersection of flight PDFs. To validate the proposed method, an experiment is designed to implement Weighted KDE to synthesize the outcomes of traffic flow recognition and trajectory prediction to estimate aircraft positions and then perform conflict detection by representing conflict with the intersection of aircraft position PDFs. The implementation results reveal that the conflict probabilities calculated by the Weighted KDE method show an inverse relationship with actual distances between aircraft, in both the horizontal and vertical planes, thereby demonstrating the effectiveness of the proposed conflict detection method.
Several representative real flight scenarios serve as use cases to showcase the efficacy of the proposed data-driven methodology for analyzing aircraft trajectories to improve mid-air conflict detection in terminal airspace. The exploratory nature of this research suggests its potential evolution into a real-time decision support tool that offers conflict detection advisories for ATCOs. Transitioning from research to practical application may require real flight tests and incorporation of Real-time Assurance (RTA) mechanisms.
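The traffic-flow identification step described above can be sketched with scikit-learn's OPTICS implementation, applying a weighted Euclidean distance by scaling the trajectory features (which is mathematically equivalent to weighting the metric). Array shapes, weights, and clustering parameters below are illustrative assumptions, not the dissertation's settings.

```python
import numpy as np
from sklearn.cluster import OPTICS

# Hypothetical input: each trajectory resampled to 50 points of (lat, lon, alt)
# and flattened into one feature vector per flight.
n_flights, n_pts = 500, 50
X = np.random.rand(n_flights, n_pts * 3)

# Weighted Euclidean distance: emphasize trajectory points closer to the runway
# by scaling their coordinates before clustering (distance then carries weight w).
w = np.repeat(np.linspace(0.5, 2.0, n_pts), 3)     # per-point weights, one per coordinate
Xw = X * np.sqrt(w)

clust = OPTICS(min_samples=10, xi=0.05).fit(Xw)
labels = clust.labels_            # -1 marks non-nominal (noise) trajectories
flows = set(labels) - {-1}        # identified terminal-area traffic flows
```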
  • Item
    Volatile molecular species and their role in planetary surface morphology and spacecraft design and performance
    (Georgia Institute of Technology, 2024-04-27) Macias Canizares, Antonio
    The geologic processes that govern the surface morphology of ice-covered, airless (i.e., without atmospheres) bodies in the Solar System have gained increasing interest over the past several decades, both for the scientific questions such worlds present, and for the relevance for future in-situ exploration. Though engineering capabilities for landing spacecraft, such as terrain relative navigation and hazard avoidance, can mitigate against meter and sub-meter scale hazards, it is nevertheless important to understand the morphological evolution and steady state of the surface ice on these worlds. Considerable work has been dedicated to the large- and small-scale geology of these worlds, but much remains to be understood about the centimeter- to meter-scale morphology of these icy, cryogenic (~100 K) surfaces. One specific hypothesis is that blade-like structures, called penitentes, form on the surface of Europa, and rise up to 15 meters in height, though it has been argued that the physics of penitente formation, as applied for such a hypothesis, does not apply to the exosphere and surface conditions of Europa. Interestingly, penitente-like structures have been observed on Pluto, which does have a significant, albeit seasonal, atmosphere. Penitentes are also predicted to form under certain conditions on Mars though they have yet to be observed. On Earth, penitentes are made from compact snow or ice and achieve quasi-stability in high-altitude, low-latitude regions, as the result of sublimation and melting processes, and importantly, they only occur in regions of net sublimation (or melting) loss of water. Penitentes are erosional features that form as a series of corrugated ridges and troughs that run parallel to the path of the Sun across the sky, and mature structures often yield fields of individual spikes or blades, which bear some resemblance to a pair of hands praying toward the Sun, hence the name 'penitentes.' The first sightings of penitentes date back to the era of Darwin, who, on a perhaps anecdotal note during his travel through Chile and La Plata, described the characteristic shape of penitentes as "...pinnacles or columns...," and hypothesized their formation process: "...the columnar structure must be owing to a 'metamorphic' action, and not to a process during deposition." As it turns out, Darwin was correct since deposition during snowfall is the end of the life cycle for a penitente field on Earth. More importantly, Darwin described in his journal the possible hazards for travel and commerce as he experienced different scenarios during his journey. Darwin attributed the discovery of penitentes to Scoresby and later to Colonel Jackson. Nevertheless, the true discoverers were the local inhabitants who had already named places like Cerro de los Penitentes (Hill of the Penitents) and Rio de los Penitentes long before Darwin arrived. It should be noted that during the nineteenth century, these snow structures might not have been locally referred to as penitentes, and even nowadays, the name penitentes often refers to the physical process causing their formation rather than to their characteristic shape. Hence, places such as the Cerro de los Penitentes were most likely named after the penitents (repenting people) from the church.
To advance our understanding of the surface morphology of airless, ice-covered worlds, and to address the limitations of current models, the work in this dissertation focused on developing numerical models that accurately represent the irradiance and physical evolution of ice on such worlds and used those models to investigate the possible presence of penitentes on Europa and their hazardous implications for a future lander.
  • Item
    A Multi-Objective Deep Learning Methodology for Morphing Wings
    (Georgia Institute of Technology, 2024-04-27) Achour, Gabriel
    Due to design constraints, conventional aircraft cannot achieve maximum aerodynamic performance when operating under varying missions and weather conditions. One of these constraints is the traditional approach of optimizing aircraft wings to achieve the best average aerodynamic performance for a specific mission while maintaining structural integrity. Previous studies have shown that changing the shape of wings at different points of a mission profile improves the aerodynamic performance of aircraft. As such, stakeholders have explored the viability and feasibility of changing or morphing the shape of aircraft wings to enable aircraft to adapt to varying missions and weather conditions. However, as with any other aspect of aircraft design, some challenges currently exist that hinder the development of conventional aircraft with morphing wings. First, the computational cost of flow solvers makes aerodynamic shape optimization time-consuming and computationally expensive due to its iterative nature. When designing a morphing wing, different configurations are computed for different points in the flight envelope, multiplying the computational cost necessary for morphing wing aircraft design. Consequently, a framework capable of performing shape optimization at a reduced computational cost is needed. Second, morphing can lead to a high variation of wing shapes, generating high aerodynamic loads and minimizing the aerodynamic benefits of morphing wings. Moreover, structural analyses are also computationally expensive, replicating the same challenges as aerodynamic optimization. As such, a multi-objective framework capable of optimizing morphing wings to increase aerodynamic efficiency while addressing aeroelastic constraints at a lower computational cost is needed. Finally, even though changing the shape of an aircraft's wing at each segment of a mission profile is the most efficient approach to maximize the benefits of morphing wings, this is not ideal as flight and weather conditions are not constant throughout the flight segment. A framework that can adapt the wing shapes to varying flow conditions during the flight is needed. Consequently, this thesis aims to address these gaps by 1) developing a Conditional Generative Adversarial Network-based algorithm capable of generating optimal wing shapes of a morphing wing vehicle for each segment of a given mission profile, 2) training a Reinforcement Learning agent to modify the optimized shape and design the wing structure to ensure the structural integrity of morphing wings throughout the flight while maintaining high aerodynamic performance, and 3) implementing a Meta Reinforcement Learning agent to make aircraft wings adapt their shapes to variations in flow conditions during each mission segment. The experiments outlined in this thesis involve designing each network architecture, collecting the training datasets, and training each model. These models are then applied to various aerodynamic and aero-structural optimization tasks across several demonstrated morphing wing mechanisms. Each model demonstrated accurate optimization results when compared to classical optimization methods. Additionally, the results indicate a significant reduction in computational power required by the deep learning models.
As such, this thesis demonstrates the immense benefits of training and implementing deep learning models to perform various optimization tasks related to morphing wing aircraft design at a lower computational cost than traditional optimization algorithms. Furthermore, this thesis demonstrates the benefits of morphing wings throughout flight to maximize aerodynamic efficiency while minimizing structural constraints, which can lead to non-negligible fuel savings. Finally, this thesis demonstrates how meta-learning can be applied to continuously adapt the shape of a wing to unexpected changes in flow conditions throughout flight.
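A conditional generator of the kind used in item 1) above can be sketched in PyTorch: it maps a noise vector plus flight-condition inputs to a parameterized wing shape. Layer sizes, conditioning variables, and the shape parameterization are illustrative assumptions, not the thesis architecture.

```python
import torch
import torch.nn as nn

class ConditionalShapeGenerator(nn.Module):
    """Toy cGAN generator: noise + flight condition -> normalized shape parameters."""
    def __init__(self, noise_dim=16, cond_dim=3, shape_dim=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, shape_dim), nn.Tanh(),   # shape parameters scaled to [-1, 1]
        )

    def forward(self, z, cond):
        # cond holds the mission-segment flight condition the shape is generated for
        return self.net(torch.cat([z, cond], dim=-1))

gen = ConditionalShapeGenerator()
z = torch.randn(8, 16)                               # batch of noise samples
cond = torch.tensor([[0.78, 11000.0, 0.5]] * 8)      # e.g., Mach, altitude (m), CL target
shapes = gen(z, cond)                                # 8 candidate wing shapes
```

In a full training loop, a discriminator conditioned on the same flight-condition inputs would be trained adversarially against this generator, with the condition vector normalized before being concatenated.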
  • Item
    Direct and Large-Eddy Simulations of Spatially Evolving Supercritical Turbulent Shear Layers
    (Georgia Institute of Technology, 2024-04-27) Purushotham, Dhruv
    A given pure component supercritical fluid at a thermodynamic state in the vicinity of its critical point exhibits significant susceptibility to perturbations in the state. The variation of the thermodynamic and transport properties at these loci is strongly nonlinear as a result of non-negligible intermolecular forces in the fluid. These nonlinearities stress the formulations of existing LES subfilter closures, which are derived based on assumptions that break down at these states. The performance of certain subfilter closures under these conditions is largely unclear, and the extension of this argument to multi-component settings adds further uncertainty. The research in this dissertation aims to address a judiciously selected subset of these concerns through a multi-faceted approach based on the joint application of the DNS and LES techniques. Specific outcomes of the research are as follows. First, the DNS data set produced for this work shows that Lagrangian enstrophy is amplified by baroclinicity in an instantaneous sense, and is likely associated with highly-strained local vortical structures. At certain times, the baroclinic contribution can be as much as roughly half the dominant vortex stretching contribution. However, the importance of baroclinicity in the mean diminishes. Enstrophy generation through elemental dilatation is also instantaneously significant, but diminishes in the mean. A detailed analysis of turbulence anisotropy shows that some select points within the shear layer are subject to statistically two- or even one-component turbulence, implying attenuation likely stemming from regions of high density gradient magnitude which are known to appear in systems at these conditions. This is a particular manifestation of the thermo-fluid coupling present in such flows. Comparisons between three LES calculations indicate that coarser grids result in higher shear layer growth rates relative to that predicted by the reference DNS data. An evaluation of turbulent kinetic energy spectra and transport property ratios indicates that this could be a result of over-active subfilter models. Mean molecular transport properties are found to rival their corresponding turbulent analogs, and this is likely a unique behavior due to the thermodynamic setting. The rough equivalence of the molecular transport properties to their turbulent counterparts essentially doubles the action of the diffusive operator in the filtered system of equations, thus imparting additional diffusion to the field. This helps correct for amplified field anisotropies which likely arise not only naturally from the lack of grid resolution at the coarse limit, but also from the presence of regions of high density gradient magnitude which attenuate turbulent fluctuations and inhibit mixing. In this light, the extra diffusion imparted by the models serves as a corrective mechanism; however, it appears that in this thermodynamic setting, in the coarse-grid limit, the specific models employed ought to be attenuated to some degree, given the mismatch in shear layer growth rates. Finally, to isolate and analyze subfilter model performance in a rigorous fashion, an a priori analysis of three classes of subfilter closures is performed. The results indicate that, as expected, the dynamic mixed class of closure performs best.
However, quantitative data from this analysis indicates that performing LES using the mixed dynamic closures at grid resolutions 4-5x coarser in each coordinate direction than the required DNS resolution at a given Reynolds number yields acceptable performance. At these resolutions, modeled subfilter stresses remain well correlated with the true subfilter stresses, while the coarsening represents significant computational savings which can aid engineering design in practical settings. This resolution guideline in particular represents a novel outcome of this research in the area of subfilter modeling for LES.
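An a priori subfilter-closure test of the kind described above can be sketched as follows: filter a velocity field, form the exact subfilter stress from the filtered data, evaluate a modeled stress, and correlate the two. The box filter, Smagorinsky-like model constants, and the random stand-in field are illustrative; the dissertation evaluates several closure classes against DNS data.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box_filter(f, width=4):
    """Simple periodic box filter standing in for the LES filter."""
    return uniform_filter(f, size=width, mode="wrap")

# Stand-in velocity components on a periodic grid (replace with DNS data).
n = 64
u, v = np.random.rand(n, n, n), np.random.rand(n, n, n)

# "True" subfilter stress component: tau_12 = filter(u*v) - filter(u)*filter(v)
ubar, vbar = box_filter(u), box_filter(v)
tau_true = box_filter(u * v) - ubar * vbar

# Simple eddy-viscosity (Smagorinsky-like) model for the same component:
# tau_model ~ -2 * nu_t * S_12, with nu_t = (Cs * Delta)^2 * |S| (constants illustrative,
# and |2*S_12| used as a crude strain-magnitude surrogate).
dx = 1.0 / n
dudy = np.gradient(ubar, dx, axis=1)
dvdx = np.gradient(vbar, dx, axis=0)
S12 = 0.5 * (dudy + dvdx)
nu_t = (0.17 * 4 * dx) ** 2 * np.abs(2.0 * S12)
tau_model = -2.0 * nu_t * S12

# A priori correlation between modeled and true stresses over the whole field.
r = np.corrcoef(tau_true.ravel(), tau_model.ravel())[0, 1]
print(f"correlation(modeled, true) = {r:.3f}")
```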