Organizational Unit:
Daniel Guggenheim School of Aerospace Engineering


Publication Search Results

Now showing 1 - 10 of 778
  • Item
    Studies of turbulence structure using well-resolved simulations with and without effects of a magnetic field
    (Georgia Institute of Technology, 2018-12-20) Zhai, Xiaomeng
    This thesis presents results from a large-scale computational study motivated by the goal of advancing understanding of turbulence structure in isotropic turbulence as well as in magnetohydrodynamic (MHD) turbulence at low magnetic Reynolds number. Direct numerical simulations (DNS) are performed on state-of-the-art massively parallel computers, with care taken in the choice of simulation parameters so that the small scales are adequately resolved and the large scales are well contained in the simulation domains. Results for isotropic turbulence provide clarifications not only on the topological features of the small-scale motions that attain large amplitudes, but also on the values of the cancellation exponent, which quantifies the sign-oscillation characteristics. For topics in MHD turbulence, a central theme is the development of anisotropy from initial conditions that are either isotropic or that contain some degree of anisotropy resulting from axisymmetric contraction. Scalar mixing in MHD turbulence is also studied briefly, with and without a mean scalar gradient.
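    The cancellation exponent mentioned above can be illustrated on a synthetic one-dimensional signal. The sketch below is a generic illustration of the technique, not code from the thesis; the white-noise signal, box sizes, and the `cancellation_chi` helper are all assumptions chosen for the example.

```python
import numpy as np

def cancellation_chi(f, box):
    """Partition function chi(l): fraction of the total |f| that survives
    signed averaging over boxes of `box` samples."""
    n = (len(f) // box) * box
    blocks = f[:n].reshape(-1, box)
    return np.abs(blocks.sum(axis=1)).sum() / np.abs(f[:n]).sum()

rng = np.random.default_rng(1)
f = rng.standard_normal(2**16)          # sign-oscillating test signal

scales = np.array([4, 8, 16, 32, 64, 128])
chi = np.array([cancellation_chi(f, s) for s in scales])

# cancellation exponent kappa from the scaling chi(l) ~ l^(-kappa)
kappa = -np.polyfit(np.log(scales), np.log(chi), 1)[0]
```

For white noise the partition function scales roughly as l^(-1/2), so the fitted exponent comes out near 0.5; a signal with stronger sign correlation at small scales would yield a smaller exponent.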
  • Item
    Adjoint-based aeroelastic optimization with high-fidelity time-accurate analysis
    (Georgia Institute of Technology, 2018-12-18) Jacobson, Kevin Edward
    A methodology is proposed for adjoint-based sensitivities of steady and time-accurate aeroelastic analysis with high-fidelity models based on computational fluid dynamics and structural finite element modeling. The proposed methodology allows for aerodynamic, structural, and aeroelastic constraints to be formulated, and expressions for sensitivities with respect to aerodynamic, structural, and geometric design variables are derived and verified. Additionally, two types of explicit aeroelastic constraints are presented: flutter constraints based on the matrix pencil method and gust response constraints based on the field velocity method. Optimizations based on the proposed methodology and explicit aeroelastic constraints are demonstrated with two-dimensional and three-dimensional aerospace problems.
  • Item
    Formal verification and validation of convex optimization algorithms for model predictive control
    (Georgia Institute of Technology, 2018-12-13) Cohen, Raphael P.
    The efficiency of modern optimization methods, coupled with increasing computational resources, has led to the possibility of real-time optimization algorithms acting in safety-critical roles. However, this cannot happen without paying proper attention to the soundness of these algorithms. This PhD thesis discusses the formal verification of convex optimization algorithms with a particular emphasis on receding-horizon controllers. Additionally, we demonstrate how theoretical proofs about real-time optimization algorithms can be used to describe functional properties at the code level, thereby making them accessible to the formal methods community. In seeking zero-bug software, we use the Credible Autocoding scheme. We focus our attention on the ellipsoid algorithm solving second-order cone programs (SOCPs). In addition, we present a floating-point analysis of the algorithm and give a framework to numerically validate the method.
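    The ellipsoid algorithm singled out above can be sketched in a few lines for an unconstrained convex problem. This is a generic textbook version, not the verified SOCP implementation the thesis develops; the quadratic objective is a hypothetical stand-in.

```python
import numpy as np

def ellipsoid_minimize(f, grad, x0, R=10.0, iters=300):
    """Ellipsoid method: maintain an ellipsoid guaranteed to contain the
    minimizer and shrink it with one subgradient cut per iteration."""
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    P = (R ** 2) * np.eye(n)                 # initial ball of radius R
    x_best, f_best = x.copy(), f(x)
    for _ in range(iters):
        g = grad(x)
        gPg = float(g @ P @ g)
        if gPg <= 0:                         # ellipsoid degenerate: stop
            break
        b = P @ g / np.sqrt(gPg)
        x = x - b / (n + 1)                  # new ellipsoid center
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(b, b))
        if f(x) < f_best:
            x_best, f_best = x.copy(), f(x)
    return x_best, f_best

def f(x):        # hypothetical smooth convex cost
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])

x_best, f_best = ellipsoid_minimize(f, grad_f, np.zeros(2))
```

The appeal for verification is that each iteration is a small, fixed sequence of linear-algebra updates whose invariants (the ellipsoid containing the optimum, monotone volume shrinkage) can be stated and proved at the code level.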
  • Item
    A methodology to support relevant comparisons of Earth-Mars communication architectures
    (Georgia Institute of Technology, 2018-12-11) Duveiller, Florence B.
    Because of the human imperative for exploration, it is very likely that a manned mission to Mars will occur by the end of the century. Mars is one of the two closest planets to Earth; it is very similar to Earth and could be suitable to host a manned settlement. Sending humans to Mars is above all a technological challenge. Among the technologies needed, some of the most important relate to communications. Women and men on Mars need to be able to receive support from Earth, to communicate with other human beings on Earth, and to send back the data collected. A reliable and continuous communication link has to be provided between Earth and Mars to ensure a safe journey. However, communication between Earth and Mars is challenging because of the distance between the two planets and because of the obstruction by the Sun that occurs for about 21 days every 780 days. Given the cost of communication systems and the number of exploration missions to Mars, it has been established that a permanent communication architecture between Earth and Mars is the most profitable option. From these observations, the research goal established for this thesis is to enable reliable and continuous communications between Earth and Mars through the design of a permanent communication architecture. A literature review of communication architectures between Earth and Mars revealed that many concepts have been offered by different authors over the last thirty years. However, when investigating ways to compare the variety of existing architectures, it became apparent that there was no robust, traceable, and rigorous approach for doing so. The comparisons made in the literature were incomplete. The requirements driving the design of the architectures were not defined or quantified. The assumptions on which the comparisons were based differed from one architecture to another, and from one comparative study to another.
As a result, all the comparisons offered were inconsistent. This thesis addresses those gaps by developing a methodology that enables relevant and consistent comparisons of Earth-Mars communication architectures and supports gap analysis. The methodology is composed of three steps. The first step consists of defining the requirements and organizing them to emphasize their interactions with the different parts of the communication system (the architecture, the hardware, and the software). A study of the requirements for a deep-space communication architecture supporting manned missions is performed. A set of requirements is chosen for the present work. The requirements are mapped against the communication system. The second step consists of implementing and evaluating the architectures. To ensure the consistency, repeatability, and transparency of the methodology developed, a unique approach enabling the assessment of all the architectures based on the same assumptions has to be provided. A framework is designed in a modeling and simulation environment for this purpose. The environment chosen for this thesis is the software Systems Tool Kit (STK) because of its capabilities. A survey of the existing architectures is performed, the metrics to evaluate the architectures are defined, and the architectures are evaluated. The third step of the methodology consists of ranking the alternatives for different weighting scenarios. Four weighting scenarios are selected to illustrate some interesting trades. The ranking of the architectures is performed through a decision-making algorithm, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The results from the different weighting scenarios are discussed.
They underline the incompleteness of the comparisons performed in past studies, the lack of design space exploration for Earth-Mars communication architectures, and the importance of the definition of the set of requirements when designing and comparing architectures. This research provides a transparent and repeatable methodology to rank and determine the best Earth-Mars communication architectures for a set of chosen requirements. It fills several gaps in the comparison of Earth-Mars communication architectures: the lack of definition of the requirements, the lack of a unique approach to implement and assess the architectures based on the same assumptions, and the lack of a process to compare all the architectures rigorously. Before the present research, there was no robust, consistent, and rigorous means to rank and quantitatively compare the architectures. The methodology not only ranks but also quantitatively compares the architectures; it can quantify the differences between architectures for an infinite number of scenarios. It has various capabilities, including ranking Earth-Mars architectures based on a chosen set of requirements, performing gap and sensitivity analyses on communication technologies and protocols, and performing design space exploration on architectures. The methodology developed is demonstrated on a restricted scope and is intended to be extended.
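    TOPSIS, the decision-making algorithm used in the third step, admits a compact implementation. The sketch below uses hypothetical architecture scores and weights purely for illustration; the actual metrics and weighting scenarios are those defined in the thesis.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives (rows) on criteria (columns) by relative
    closeness to the ideal solution."""
    M = scores / np.linalg.norm(scores, axis=0)     # vector normalization
    V = M * weights                                 # weighted matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)       # distance to ideal
    d_neg = np.linalg.norm(V - worst, axis=1)       # distance to anti-ideal
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness), closeness

# hypothetical architectures scored on coverage (%), outage (days/cycle),
# and relative cost -- illustrative numbers only
scores = np.array([[98.0, 22.0, 10.0],
                   [90.0, 14.0,  6.0],
                   [85.0, 30.0, 12.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])   # only higher coverage is better

ranking, closeness = topsis(scores, weights, benefit)
```

Re-running `topsis` with different `weights` vectors reproduces the weighting-scenario trade described above: the ranking can flip between non-dominated architectures, while dominated ones stay at the bottom.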
  • Item
    Development and validation of 3-D cloud fields using data fusion and machine learning techniques
    (Georgia Institute of Technology, 2018-12-11) Huguenin, Manon
    The impact of climate change is projected to increase significantly over the next decades. Consequently, gaining a better understanding of climate change and being able to accurately predict its effects are of the utmost importance. Climate change predictions are currently achieved using Global Climate Models (GCMs), which are complex representations of the major climate components and their interactions. However, these predictions present high levels of uncertainty, as illustrated by the very disparate results GCMs generate. According to the Intergovernmental Panel on Climate Change (IPCC), there is high confidence that such high levels of uncertainty are due to the way clouds are represented in climate models. Indeed, several cloud phenomena, such as cloud-radiative forcing, are not well modeled in GCMs because they rely on microscopic processes that, due to computational limitations, cannot be represented in GCMs. Such phenomena are instead represented through physically motivated parameterizations, which lead to uncertainties in cloud representations. For these reasons, improving the parameterizations required for representing clouds in GCMs is a current focus of climate modeling research efforts. Integrating cloud satellite data into GCMs has proven essential to the development and assessment of cloud radiative transfer parameterizations. Cloud-related data is captured by a variety of satellites, such as those of NASA’s afternoon constellation (also named the A-Train), which collect vertical and horizontal data on the same orbital track. Data from the A-Train has been useful to many studies on cloud prediction, but its coverage is limited. This is because the sensors that collect vertical data have very narrow swaths, with widths as small as one kilometer. As a result, the area where vertical data exists is very limited, equivalent to a 1-kilometer-wide track.
Thus, in order for satellite cloud data to be compared to global representations of clouds in GCMs, additional vertical cloud data has to be generated to provide more global coverage. Consequently, the overall objective of this thesis is to support the validation of GCM cloud representations through the generation of 3D cloud fields using cloud vertical data from space-borne sensors. This has already been attempted by several studies through physics-based and similarity-based approaches. However, such studies have a number of limitations, such as the inability to handle large amounts of data and high resolutions, or the inability to account for diverse vertical profiles. These limitations motivate the need for novel approaches to the generation of 3D cloud fields. For this purpose, efforts have been initiated at ASDL to develop an approach that leverages data fusion and machine learning techniques to generate 3D cloud field domains. Several successive ASDL-led efforts have helped shape this approach and overcome some of its challenges. In particular, these efforts have led to the development of a cloud predictive classification model that is based on decision trees and integrates atmospheric data to predict vertical cloud fraction. This model was evaluated against “on-track” cloud vertical data and was found to have acceptable performance. However, several limitations were identified in this model and the approach that led to it. First, its performance was lower when predicting lower-altitude clouds, and its overall performance could still be greatly improved. Second, the model had only been assessed at “on-track” locations, while the construction of data at “off-track” locations is necessary for generating 3D cloud fields. Last, the model had not been validated in the context of GCM cloud representation, and no satisfactory level of model accuracy had been determined in this context.
This work aims at overcoming these limitations by taking the following approach. The model obtained from previous efforts is improved by integrating additional, higher-accuracy data, by investigating the correlation within atmospheric predictors, and by implementing additional classification machine learning techniques, such as Random Forests. Then, the predictive model is run at “off-track” locations, using predictors from NASA’s LAADS datasets. Horizontal validation of the computed profiles is performed against an existing dataset containing the Cloud Mask at the same locations. This leads to the generation of a coherent global 3D cloud fields dataset. Last, a methodology for validating this computed dataset in the context of GCM cloud-radiative forcing representation is developed. The Fu-Liou code is applied to sample vertical profiles from the computed dataset, and the output radiative fluxes are analyzed. This research significantly improves the model developed in previous efforts and validates the computed global dataset against existing data. Such validation demonstrates the potential of a machine learning-based approach to generate 3D cloud fields. Additionally, this research provides a benchmarked methodology to further validate this machine learning-based approach in this context. Altogether, this thesis contributes to NASA’s ongoing efforts towards improving GCMs and climate change predictions as a whole.
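    The classification approach described above (bagged tree ensembles predicting cloud occurrence from atmospheric predictors) can be caricatured with a toy model. The sketch below uses bagged decision stumps rather than full Random Forests, and entirely synthetic stand-ins for the atmospheric predictors and cloud labels; it only illustrates the bootstrap-and-vote idea, not the thesis model.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "atmospheric" predictors: relative humidity and a stability
# index (hypothetical stand-ins), with cloud when humid and unstable
n = 2000
rh = rng.uniform(0, 1, n)
stab = rng.uniform(-1, 1, n)
X = np.column_stack([rh, stab])
y = ((rh > 0.6) & (stab < 0.2)).astype(int)

def fit_stump(X, y):
    """Best single-feature threshold split by training accuracy."""
    best, best_acc = (0, 0.0, 0, 1), 0.0
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.05, 0.95, 19)):
            right = X[:, j] > t
            for a, b in ((0, 1), (1, 0)):       # labels left/right of t
                acc = (np.where(right, b, a) == y).mean()
                if acc > best_acc:
                    best_acc, best = acc, (j, t, a, b)
    return best

def forest_fit(X, y, n_trees=25):
    """Bagging of stumps on bootstrap samples: a toy ensemble."""
    return [fit_stump(X[idx], y[idx])
            for idx in (rng.integers(0, len(y), len(y))
                        for _ in range(n_trees))]

def forest_predict(stumps, X):
    votes = np.array([np.where(X[:, j] > t, b, a) for j, t, a, b in stumps])
    return (votes.mean(axis=0) > 0.5).astype(int)   # majority vote

model = forest_fit(X, y)
acc = (forest_predict(model, X) == y).mean()
```

A real Random Forest also randomizes feature choice at each split and grows deep trees, which is what lets it capture interactions (here, the humid-AND-unstable condition) that single stumps cannot.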
  • Item
    Computational fluid dynamics simulation of three-dimensional parallel jets
    (Georgia Institute of Technology, 2018-12-11) Liu, Zhihang
    High-speed air jets are often used in industry for manufacturing thin fibers through a process known as melt blowing (MB). In melt blowing, high-velocity gas streams impinge upon molten strands of polymer to produce fine filaments. To produce fibers in very high quantities, many small-scale jets placed side by side are needed. These jets draw air from the same compressed-air storage tank, so fiber formation is critically dependent on the aerodynamics of the impinging jet flow field. However, real-world MB devices always have complicated internal structures, such as mixing chambers and air channels between the air tank and the die tip, which may cause instability and cross flow in the jet flow field and have a significant impact on the formation of fibers and non-woven webs with small-scale jets. This study was motivated by the need to understand the effect of the internal geometry on the jet flow field and to mitigate the flow instability with fluctuation reduction devices. The MB process in this study was modeled as a pair of jets placed at an angle of approximately 60 degrees to each other; many such jet pairs can be placed side by side so that multiple streams of fibers may be produced simultaneously. All internal structures of the MB device were modeled based on US Patent 6,972,104 B2 by Haynes et al. The flow field resulting from the two similar converging-plane jet nozzles was investigated using a computational fluid dynamics approach. Cases with and without flow fluctuation reduction devices installed were studied. The k-ω turbulence model was used, and the model parameters were calculated according to the inlet conditions of the air flow.
This study consists of three parts: (a) a baseline case without any flow fluctuation reduction devices was studied to understand the mechanism of the instability and to investigate the details of the internal flow field; (b) a wire mesh screen was placed between the air plates and the die tip to study its effect on both the velocity and pressure distributions across the screen; and (c) a honeycomb was installed near the exit of the last mixing chamber to reduce the cross-flow velocity and the turbulence intensity. Finally, the effects of the two different flow fluctuation reduction devices were compared in detail using time series measurements and time-averaged flow contours.
  • Item
    A methodology for conducting design trades related to advanced in-space assembly
    (Georgia Institute of Technology, 2018-12-07) Jara de Carvalho Vale de Almeida, Lourenco
    In the decades since the end of the Apollo program, manned space missions have been confined to Low Earth Orbit. Today, ambitious efforts are underway to return astronauts to the surface of the Moon, and eventually reach Mars. Technical challenges and dangers to crew health and well-being will require innovative solutions. The use of In-Space Assembly (ISA) can provide critical new capabilities, by freeing designs from the size limitations of launch vehicles. ISA can be performed using different strategies. The current state-of-the-art strategy is to dock large modules together. Future technologies, such as welding in space, will unlock more advanced strategies. Advanced assembly strategies deliver smaller component pieces to orbit in highly efficient packaging but require lengthy assembly tasks to be performed in space. The choice of assembly strategy impacts the cost and duration of the entire mission. As a rule, simpler strategies require more deliveries, increasing costs, while advanced strategies require more assembly tasks, increasing time. The effects of these design choices must be modeled in order to conduct design trades. A methodology to conduct these design trades is presented. It uses a model of the logistics involved in assembling a space system, including deliveries and assembly tasks. The model employs a network formulation, where the pieces of a structure must flow from their initial state to a final assembly state, via arcs representing deliveries and assembly tasks. By comparing solutions obtained under different scenarios, additional design trades can be performed. This methodology is applied to the case of an Artificial Gravity Space Station. Results for the assembly of this system are obtained for a baseline scenario and compared with results after varying parameters such as the delivery and storage capacity. 
The comparison reveals the sensitivities of the assembly process to each parameter and the benefits that can be gained from certain improvements, demonstrating the effectiveness of the methodology.
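    The logistics trade described above (deliveries add cost, assembly tasks add time, and the weighting between them decides the strategy) can be sketched as a tiny arc model. All numbers and strategy names below are hypothetical; the thesis uses a full network-flow formulation rather than two fixed chains of arcs.

```python
# toy arc model: each strategy is a chain of arcs from "ground" to
# "assembled"; delivery arcs carry cost, assembly arcs carry time
strategies = {
    "dock-large-modules": [
        ("delivery", {"cost": 9, "time": 1}),   # many heavy launches
        ("assembly", {"cost": 1, "time": 2}),   # simple docking ops
    ],
    "weld-small-pieces": [
        ("delivery", {"cost": 4, "time": 1}),   # dense packaging, cheaper
        ("assembly", {"cost": 2, "time": 7}),   # lengthy in-space welding
    ],
}

def evaluate(arcs, cost_weight, time_weight):
    """Weighted objective plus the raw cost/time totals of one chain."""
    cost = sum(a["cost"] for _, a in arcs)
    time = sum(a["time"] for _, a in arcs)
    return cost_weight * cost + time_weight * time, cost, time

def best_strategy(cost_weight, time_weight):
    """Pick the strategy minimizing the weighted objective."""
    return min(strategies,
               key=lambda s: evaluate(strategies[s],
                                      cost_weight, time_weight)[0])
```

Sweeping the two weights reproduces the trade stated above: a cost-dominated objective favors the advanced small-piece strategy, while a schedule-dominated objective favors the simpler docking strategy.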
  • Item
    Relative Positioning and Tracking of Tethered Small Spacecraft Using Optical Sensors
    (Georgia Institute of Technology, 2018-12) Guo, Yanjie
  • Item
    Source location of subsonic and supersonic jets of various geometries via acoustic beamforming
    (Georgia Institute of Technology, 2018-11-30) Breen, Nicholas Paul
    Over the years, the need to understand and reduce aircraft noise emissions has led numerous researchers to apply various source location techniques to jet noise. Prior to 1985, several methods for determining jet-noise source locations were explored: acoustic mirrors, microphone arrays, two-microphone methods, causality correlation and coherence techniques, nearfield contour surveys, and automated source breakdown. More recently, there have been developments in microphone array techniques, notably acoustic beamforming, and in two-microphone methods. Many of the older techniques require a significant amount of time to acquire data at each jet condition; this requirement is often caused by the necessity of moving microphones in order to obtain source locations at all frequencies. The acoustic beamformer does not need to be moved during data acquisition, resulting in very rapid tests compared to other source-location methods. Upon examination of prior studies containing jet noise source location measurements, it is clear that there are a few areas in the field that need additional work: (1) no study has compared the results of the acoustic beamforming method with another method using the same nozzles and facilities, (2) no study has analyzed the effects of differing nozzle geometry, and hence the nozzle exit boundary layer, on the jet noise source location, (3) no study has performed a detailed analysis of the noise source distributions of supersonic jets, and (4) no study has examined the noise source distribution of twin jets and the effect of separation distance on that distribution. The goal of this thesis is to systematically address these areas with the use of source location measurements, schlieren flow visualization, farfield spectra, and jet velocity measurements. The source location measurements are primarily acquired using an acoustic beamformer.
Jet velocity measurements include both nozzle exit boundary layer profiles and downstream velocity profiles and are obtained with the use of boundary layer probes and particle image velocimetry.
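    The delay-and-sum idea behind acoustic beamforming can be sketched for a line array focused along a jet axis: steer the array to each candidate position, undo the propagation delay, and take the position where the microphone signals add coherently. Everything below (array geometry, frequency, source position) is a synthetic assumption for illustration, not data from the experiments.

```python
import numpy as np

c, f = 343.0, 2000.0                     # speed of sound (m/s), analysis freq (Hz)
mics = np.linspace(-0.5, 0.5, 16)        # line-array x-positions (m)
standoff = 1.0                           # array-to-jet-axis distance (m)
src_x = 0.2                              # true source position on the axis (m)

# simulated monochromatic phasors received at each microphone
r_true = np.hypot(mics - src_x, standoff)
p = np.exp(-2j * np.pi * f * r_true / c) / r_true

# steered response power over candidate source positions on the axis
grid = np.linspace(-0.4, 0.4, 161)
power = np.empty_like(grid)
for i, xs in enumerate(grid):
    r = np.hypot(mics - xs, standoff)
    w = r * np.exp(2j * np.pi * f * r / c)    # undo spreading and delay
    power[i] = np.abs(np.sum(w * p)) ** 2
x_hat = grid[np.argmax(power)]                # peak = estimated source
```

With the microphone spacing below half a wavelength there are no grating lobes, so the steered-response peak sits at the true source position; scanning many frequencies gives the frequency-dependent source distribution along the jet.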
  • Item
    Large scale stochastic control: Algorithms, optimality and stability
    (Georgia Institute of Technology, 2018-11-29) Bakshi, Kaivalya Sanjeev
    Optimal control of large-scale multi-agent networked systems which describe social networks, macro-economies, traffic and robot swarms is a topic of interest in engineering, biophysics and economics. A central issue is constructing scalable control-theoretic frameworks when the number of agents is infinite. In this work, we exploit PDE representations of the optimality laws in order to provide a tractable approach to ensemble (open loop) and closed loop control of such systems. A centralized open loop optimal control problem of an ensemble of agents driven by jump noise is solved by a sampling algorithm based on the infinite dimensional minimum principle to solve it. The relationship between the infinite dimensional minimum principle and dynamic programming principles is established for this problem. Mean field game (MFG) models expressed as PDE systems are used to describe emergent phenomenon in decentralized feedback optimal control models of a continuum of interacting agents with stochastic dynamics. However, stability analysis of MFG models remains a challenging problem, since they exhibit non-unique solutions in the absence of a monotonicity assumption on the cost function. This thesis addresses the key issue of stability and control design in MFGs. Specifically, we present detailed results on a models for flocking and population evolution. An interesting connection between MFG models and the imaginary-time Schr¨odinger equation is used to obtain explicit stability constraints on the control design in the case of non-interacting agents. Compared to prior works on this topic which apply only to agents obeying very simple integrator dynamics, we treat nonlinear agent dynamics and also provide analytical design constraints.