Organizational Unit: Daniel Guggenheim School of Aerospace Engineering

Publication Search Results

Now showing 1 - 10 of 420
  • Item
    A methodology to support relevant comparisons of Earth-Mars communication architectures
    (Georgia Institute of Technology, 2018-12-11) Duveiller, Florence B.
    Because of the human imperative for exploration, it is very likely that a manned mission to Mars will occur by the end of the century. Mars is one of the two closest planets to Earth; it is broadly similar to Earth and could be suitable for hosting a manned settlement. Sending humans to Mars is above all a technological challenge, and among the technologies needed, some of the most important relate to communications. Women and men on Mars need to be able to receive support from Earth, communicate with other human beings on Earth, and send back the data they collect. A reliable and continuous communication link therefore has to be provided between Earth and Mars to ensure a safe journey. However, communication between Earth and Mars is challenging because of the distance between the two planets and because of the obstruction by the Sun that occurs for about 21 days every 780 days. Given the cost of communication systems and the number of exploration missions to Mars, a permanent communication architecture between Earth and Mars has been established as the most cost-effective option. From these observations, the research goal of this thesis is to enable reliable and continuous communications between Earth and Mars through the design of a permanent communication architecture. A literature review of Earth-Mars communication architectures revealed that many concepts have been proposed by different authors over the last thirty years. However, when investigating ways to compare the variety of existing architectures, it became apparent that there was no robust, traceable, and rigorous approach for doing so. The comparisons made in the literature were incomplete: the requirements driving the design of the architectures were not defined or quantified, and the assumptions on which the comparisons were based differed from one architecture to another and from one comparative study to another. As a result, the comparisons offered were inconsistent. This thesis addresses those gaps by developing a methodology that enables relevant and consistent comparisons of Earth-Mars communication architectures and supports gap analysis. The methodology is composed of three steps. The first step consists of defining the requirements and organizing them to emphasize their interactions with the different parts of the communication system (the architecture, the hardware, and the software). A study of the requirements for a deep-space communication architecture supporting manned missions is performed, a set of requirements is chosen for the present work, and the requirements are mapped against the communication system. The second step consists of implementing and evaluating the architectures. To ensure the consistency, repeatability, and transparency of the methodology, a single approach enabling the assessment of all the architectures under the same assumptions has to be provided. A framework is designed in a modeling and simulation environment for this purpose; the environment chosen for this thesis is the Systems Tool Kit (STK) software because of its capabilities. A survey of the existing architectures is performed, the metrics used to evaluate the architectures are defined, and the architectures are evaluated. The third step consists of ranking the alternatives for different weighting scenarios. Four weighting scenarios are selected to illustrate some interesting trades.
    The ranking of the architectures is performed through a decision-making algorithm, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The results from the different weighting scenarios are discussed. They underline the incompleteness of the comparisons performed in past studies, the lack of design space exploration for Earth-Mars communication architectures, and the importance of the definition of the set of requirements when designing and comparing architectures. This research provides a transparent and repeatable methodology to rank and determine the best Earth-Mars communication architectures for a chosen set of requirements. It fills several gaps in the comparison of Earth-Mars communication architectures: the lack of definition of the requirements, the lack of a single approach for implementing and assessing the architectures under the same assumptions, and the lack of a process for comparing all the architectures rigorously. Before the present research, there was no robust, consistent, and rigorous means to rank and quantitatively compare the architectures. The methodology not only ranks but also quantitatively compares the architectures; it can quantify the differences between architectures for any number of weighting scenarios. Its capabilities include ranking Earth-Mars architectures based on a chosen set of requirements, performing gap and sensitivity analyses on communication technologies and protocols, and performing design space exploration on architectures. The methodology is demonstrated on a restricted scope and is intended to be extended.
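    A minimal sketch of a TOPSIS ranking step of the kind named above, assuming a small illustrative decision matrix; the metrics, weights, and architecture options shown are placeholders, not the thesis data:

    import numpy as np

    def topsis(scores, weights, benefit):
        # scores: (alternatives x criteria); benefit[j] is True when larger is better.
        norm = scores / np.linalg.norm(scores, axis=0)           # vector normalization
        v = norm * weights                                        # weighted normalized matrix
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))   # positive ideal solution
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))    # negative ideal solution
        d_pos = np.linalg.norm(v - ideal, axis=1)
        d_neg = np.linalg.norm(v - anti, axis=1)
        return d_neg / (d_pos + d_neg)                            # closeness: higher ranks better

    # Illustrative metrics: availability, data rate (Mbps), one-way latency (min), relative cost.
    scores = np.array([[0.95, 1.0, 22.0, 1.0],    # direct-to-Earth only
                       [0.99, 2.0, 24.0, 2.5],    # Mars-orbiting relay constellation
                       [1.00, 1.5, 30.0, 4.0]])   # Sun-Earth Lagrange-point relay
    weights = np.array([0.40, 0.30, 0.15, 0.15])  # one hypothetical weighting scenario
    benefit = np.array([True, True, False, False])
    print(np.argsort(-topsis(scores, weights, benefit)))  # architecture indices, best to worst

    Re-running the ranking with a different weights vector reproduces the kind of weighting-scenario trade discussed in the abstract.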
  • Item
    Development and validation of 3-D cloud fields using data fusion and machine learning techniques
    (Georgia Institute of Technology, 2018-12-11) Huguenin, Manon
    The impact of climate change is projected to increase significantly over the next decades. Consequently, gaining a better understanding of climate change and being able to accurately predict its effects are of the utmost importance. Climate change predictions are currently achieved using Global Climate Models (GCMs), which are complex representations of the major climate components and their interactions. However, these predictions present high levels of uncertainty, as illustrated by the very disparate results GCMs generate. According to the Intergovernmental Panel on Climate Change (IPCC), there is high confidence that such high levels of uncertainty are due to the way clouds are represented in climate models. Indeed, several cloud phenomena, such as cloud-radiative forcing, are not well modeled in GCMs because they rely on microscopic processes that, due to computational limitations, cannot be represented explicitly. Such phenomena are instead represented through physically motivated parameterizations, which lead to uncertainties in cloud representations. For these reasons, improving the parameterizations required for representing clouds in GCMs is a current focus of climate modeling research efforts. Integrating cloud satellite data into GCMs has proved essential to the development and assessment of cloud radiative transfer parameterizations. Cloud-related data is captured by a variety of satellites, such as those of NASA's afternoon constellation (also known as the A-Train), which collect vertical and horizontal data on the same orbital track. Data from the A-Train has been useful to many studies on cloud prediction, but its coverage is limited because the sensors that collect vertical data have very narrow swaths, with widths as small as one kilometer. As a result, the area where vertical data exists is limited to a roughly one-kilometer-wide track. Thus, for satellite cloud data to be compared with global representations of clouds in GCMs, additional vertical cloud data has to be generated to provide more global coverage. Consequently, the overall objective of this thesis is to support the validation of GCM cloud representations through the generation of 3D cloud fields using cloud vertical data from space-borne sensors. This has already been attempted by several studies through physics-based and similarity-based approaches. However, such studies have a number of limitations, such as the inability to handle large amounts of data and high resolutions, or the inability to account for diverse vertical profiles. These limitations motivate the need for novel approaches to the generation of 3D cloud fields. For this purpose, efforts have been initiated at ASDL to develop an approach that leverages data fusion and machine learning techniques to generate 3D cloud field domains. Several successive ASDL-led efforts have helped shape this approach and overcome some of its challenges. In particular, these efforts have led to the development of a cloud-predictive classification model that is based on decision trees and integrates atmospheric data to predict vertical cloud fraction. This model was evaluated against "on-track" cloud vertical data and was found to have acceptable performance. However, several limitations were identified in this model and the approach that led to it. First, its performance was lower when predicting lower-altitude clouds, and its overall performance could still be greatly improved.
    Second, the model had only been assessed at "on-track" locations, while the construction of data at "off-track" locations is necessary for generating 3D cloud fields. Last, the model had not been validated in the context of GCM cloud representation, and no satisfactory level of model accuracy had been determined in that context. This work aims to overcome these limitations through the following approach. The model obtained from previous efforts is improved by integrating additional, higher-accuracy data, by investigating the correlation among atmospheric predictors, and by implementing additional classification machine learning techniques, such as Random Forests. Then, the predictive model is applied at "off-track" locations, using predictors from NASA's LAADS datasets. Horizontal validation of the computed profiles is performed against an existing dataset containing the Cloud Mask at the same locations. This leads to the generation of a coherent global 3D cloud field dataset. Last, a methodology for validating this computed dataset in the context of the GCM representation of cloud-radiative forcing is developed: the Fu-Liou code is applied to sample vertical profiles from the computed dataset, and the output radiative fluxes are analyzed. This research significantly improves the model developed in previous efforts and validates the computed global dataset against existing data. Such validation demonstrates the potential of a machine-learning-based approach to generate 3D cloud fields. Additionally, this research provides a benchmarked methodology to further validate this machine-learning-based approach in the context of study. Altogether, this thesis contributes to NASA's ongoing efforts towards improving GCMs and climate change predictions as a whole.
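    A minimal sketch of the classification step described above, assuming synthetic placeholder predictors rather than the actual LAADS fields; it only illustrates the Random Forest technique the abstract names, so the scores it prints are meaningless:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    n = 5000
    X = np.column_stack([
        rng.uniform(180, 320, n),   # brightness temperature (K), illustrative predictor
        rng.uniform(0, 100, n),     # relative humidity (%), illustrative predictor
        rng.uniform(100, 1000, n),  # pressure level (hPa), illustrative predictor
        rng.uniform(0, 1, n),       # cloud-top reflectance, illustrative predictor
    ])
    y = rng.integers(0, 3, n)       # cloud-fraction class: 0 = clear, 1 = partly cloudy, 2 = overcast

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestClassifier(n_estimators=200, max_depth=12, random_state=0)
    model.fit(X_train, y_train)                       # fit the forest on the training split
    print(classification_report(y_test, model.predict(X_test)))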
  • Item
    Computational fluid dynamics simulation of three-dimensional parallel jets
    (Georgia Institute of Technology, 2018-12-11) Liu, Zhihang
    High-speed air jets are often used in industry for manufacturing thin fibers through a process known as melt blowing (MB). In melt blowing, high-velocity gas streams impinge upon molten strands of polymer to produce fine filaments. To produce a very high quantity of fibers, many small-scale jets placed side by side are needed. These jets draw air from the same compressed-air storage tank, so fiber formation is critically dependent on the aerodynamics of the impinging jet flow field. However, real-world MB devices have complicated internal structures, such as mixing chambers and air channels between the air tank and the die tip, which may cause instability and cross flow in the jet flow field and have a significant impact on the formation of fibers and non-woven webs with small-scale jets. This study was motivated by the need to understand the effect of the internal geometry on the jet flow field and to mitigate the flow instability with fluctuation-reduction devices. The MB process in this study was modeled as a pair of jets placed at an angle of approximately 60 degrees to each other; many such jet pairs are arranged side by side so that multiple streams of fibers may be produced simultaneously. All internal structures of the MB device were modeled based on US Patent 6,972,104 B2 by Haynes et al. The flow field resulting from the two similar converging-plane jet nozzles was investigated using a computational fluid dynamics approach, considering cases with and without flow-fluctuation-reduction devices installed. The k-ω turbulence model was used, and the model parameters were calculated according to the inlet conditions of the air flow. The study consists of three parts: (a) a baseline case without any flow-fluctuation-reduction devices was studied to understand the mechanism of the instability and to investigate the details of the internal flow field; (b) a wire mesh screen was placed between the air plates and the die tip to study its effect on both the velocity and pressure distributions across the screen; and (c) a honeycomb was installed near the exit of the last mixing chamber to reduce the cross-flow velocity and turbulence intensity. Finally, the effects of the two flow-fluctuation-reduction devices were compared in detail using time series measurements and time-averaged flow contours.
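    A minimal sketch of how inlet turbulence quantities for a k-ω model can be estimated from inlet conditions, using the standard textbook relations; the jet velocity, turbulence intensity, and length scale below are illustrative assumptions, not values from the thesis:

    import math

    def k_omega_inlet(U, intensity, length_scale, C_mu=0.09):
        k = 1.5 * (U * intensity) ** 2                          # turbulence kinetic energy [m^2/s^2]
        omega = math.sqrt(k) / (C_mu ** 0.25 * length_scale)    # specific dissipation rate [1/s]
        return k, omega

    # Illustrative inlet only: 300 m/s jet, 5% turbulence intensity, 1 mm slot width.
    k, omega = k_omega_inlet(U=300.0, intensity=0.05, length_scale=1e-3)
    print(f"k = {k:.1f} m^2/s^2, omega = {omega:.3e} 1/s")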
  • Item
    A methodology for conducting design trades related to advanced in-space assembly
    (Georgia Institute of Technology, 2018-12-07) Jara de Carvalho Vale de Almeida, Lourenco
    In the decades since the end of the Apollo program, manned space missions have been confined to Low Earth Orbit. Today, ambitious efforts are underway to return astronauts to the surface of the Moon, and eventually reach Mars. Technical challenges and dangers to crew health and well-being will require innovative solutions. The use of In-Space Assembly (ISA) can provide critical new capabilities, by freeing designs from the size limitations of launch vehicles. ISA can be performed using different strategies. The current state-of-the-art strategy is to dock large modules together. Future technologies, such as welding in space, will unlock more advanced strategies. Advanced assembly strategies deliver smaller component pieces to orbit in highly efficient packaging but require lengthy assembly tasks to be performed in space. The choice of assembly strategy impacts the cost and duration of the entire mission. As a rule, simpler strategies require more deliveries, increasing costs, while advanced strategies require more assembly tasks, increasing time. The effects of these design choices must be modeled in order to conduct design trades. A methodology to conduct these design trades is presented. It uses a model of the logistics involved in assembling a space system, including deliveries and assembly tasks. The model employs a network formulation, where the pieces of a structure must flow from their initial state to a final assembly state, via arcs representing deliveries and assembly tasks. By comparing solutions obtained under different scenarios, additional design trades can be performed. This methodology is applied to the case of an Artificial Gravity Space Station. Results for the assembly of this system are obtained for a baseline scenario and compared with results after varying parameters such as the delivery and storage capacity. The comparison reveals the sensitivities of the assembly process to each parameter and the benefits that can be gained from certain improvements, demonstrating the effectiveness of the methodology.
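    A minimal sketch of such a network formulation, using the networkx min-cost-flow solver; the node names, capacities, and per-piece costs are illustrative assumptions, not the thesis model:

    import networkx as nx

    G = nx.DiGraph()
    n_pieces = 6
    G.add_node("ground",    demand=-n_pieces)   # source: all pieces start on the ground
    G.add_node("assembled", demand=n_pieces)    # sink: every piece must end up assembled

    # Strategy A: deliver large pre-integrated modules (expensive launch, quick docking).
    G.add_edge("ground", "large_module_on_orbit", capacity=3, weight=8)
    G.add_edge("large_module_on_orbit", "assembled", capacity=3, weight=1)

    # Strategy B: deliver densely packed components (cheaper launch, lengthy in-space assembly).
    G.add_edge("ground", "packed_parts_on_orbit", capacity=4, weight=3)
    G.add_edge("packed_parts_on_orbit", "assembled", capacity=4, weight=5)

    flow = nx.min_cost_flow(G)                  # optimal split of pieces between strategies
    print(flow, nx.cost_of_flow(G, flow))       # arc-by-arc flow and total campaign cost

    Varying the arc capacities (delivery or storage limits) and re-solving mirrors the sensitivity comparisons described in the abstract.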
  • Item
    Relative Positioning and Tracking of Tethered Small Spacecraft Using Optical Sensors
    (Georgia Institute of Technology, 2018-12) Guo, Yanjie
  • Item
    A Preliminary Assessment of the RANGE Mission's Orbit Determination Capabilities
    (Georgia Institute of Technology, 2018-08) Claybrook, Austin W.
  • Item
    Probabilistic Resident Space Object Detection Using Archival THEMIS Fluxgate Magnetometer Data
    (Georgia Institute of Technology, 2018-05-02) Brew, Julian ; Holzinger, Marcus J.
    Although the detection of Earth-orbiting space objects is generally achieved using optical and radar measurements, these methods are limited in their capability to detect small space objects at geosynchronous altitudes. This paper examines the use of magnetometers to detect plausible flyby encounters with charged space objects, using a matched-filter signal-existence binary hypothesis test applied to archival fluxgate magnetometer data from the NASA THEMIS mission. Relevant data-set processing and reduction is discussed in detail. Using the proposed methodology, 285 plausible detections are claimed and several are reviewed in detail.
    Keywords: resident space objects; matched filter; admissible region; geostationary orbit; binary hypothesis testing
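    A minimal sketch of a matched-filter binary hypothesis test of the kind described, assuming white Gaussian noise, a made-up flyby signature template, and an arbitrary threshold; it is not the THEMIS processing pipeline:

    import numpy as np

    rng = np.random.default_rng(1)
    fs = 4.0                                   # sample rate [Hz], illustrative
    t = np.arange(0, 60, 1 / fs)               # 60 s analysis window
    template = np.exp(-((t - 30) / 3) ** 2)    # stand-in flyby signature shape
    template /= np.linalg.norm(template)       # unit-norm template

    sigma = 0.5                                # noise standard deviation [nT], illustrative
    x0 = sigma * rng.standard_normal(t.size)   # H0: noise only
    x1 = x0 + 2.0 * template                   # H1: noise plus scaled signature

    def matched_filter_stat(data, s, sigma):
        # Normalized correlation statistic: unit variance under H0 for a unit-norm template.
        return data @ s / sigma

    threshold = 3.0                            # would be set from a desired false-alarm rate
    for label, data in [("H0 sample", x0), ("H1 sample", x1)]:
        stat = matched_filter_stat(data, template, sigma)
        print(label, f"statistic = {stat:.2f}",
              "-> detection" if stat > threshold else "-> no detection")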
  • Item
    A scalable hardware-in-the-Loop simulation for satellite constellations and other multi-agent networks
    (Georgia Institute of Technology, 2018-05-01) DeGraw, Christopher F.
  • Item
    Coulomb-Force Based Control Methods for an n-Spacecraft Reconfiguration Maneuver
    (Georgia Institute of Technology, 2018-05-01) Swenson, Jason C.
    In an electrically charged space plasma environment, spacecraft Coulomb forces are shown to be a potential propellant-free alternative for an n-spacecraft formation reconfiguration maneuver with n_d deputy spacecraft. Two Coulomb-force-based methods (and one method without Coulomb forces) for reconfiguration maneuvers are developed, tested, and evaluated. Method 1a applies direct multiple shooting to calculate the optimal thrust inputs of a minimum-fuel trajectory. Method 1b uses the results from Method 1a to compare the optimal thrust input to the set of all possible resultant Coulomb force vectors at each point in time along a trajectory. Method 2, formulated from optimal control theory, solves directly for the n_d spacecraft charge states at each point in time using Clohessy-Wiltshire relative dynamics and minimizes the final relative state vector error. The overall performance of Method 2 is shown to be superior to that of Method 1b in terms of both relative state vector error and total computational time. Furthermore, Method 2 shows performance comparable to the optimal minimum-fuel trajectory calculated in Method 1a.
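    A minimal sketch of Clohessy-Wiltshire relative dynamics propagated under an applied acceleration (standing in for a commanded Coulomb force); the orbit rate, initial offset, and acceleration profile are illustrative assumptions, not values from the thesis:

    import numpy as np
    from scipy.integrate import solve_ivp

    n = 2 * np.pi / 86164.0                       # mean motion of a GEO chief orbit [rad/s]

    def cw_dynamics(t, state, accel):
        x, y, z, vx, vy, vz = state
        ax, ay, az = accel(t)
        return [vx, vy, vz,
                3 * n**2 * x + 2 * n * vy + ax,   # radial
                -2 * n * vx + ay,                 # along-track
                -n**2 * z + az]                   # cross-track

    accel = lambda t: (0.0, 1e-7, 0.0)            # placeholder commanded acceleration [m/s^2]
    state0 = [50.0, 0.0, 0.0, 0.0, 0.0, 0.0]      # deputy 50 m radially offset from the chief
    sol = solve_ivp(cw_dynamics, (0.0, 3600.0), state0, args=(accel,), max_step=10.0)
    print(sol.y[:3, -1])                          # relative position after one hour [m]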
  • Item
    Early Collision and Fragmentation Detection of Space Objects without Orbit Determination
    (Georgia Institute of Technology, 2018-05-01) Axon, Lyndy E.
    This paper demonstrates that, using the hypothesized constraints of admissible regions, it is possible to determine whether a set of new uncorrelated debris objects has a common origin that also intersects the orbit of a known catalog object, thus indicating that a collision or fragmentation has occurred. Admissible region methods are used to bound the feasible orbit solutions of multiple observations using constraints on energy and radius of periapsis; the solutions are propagated to a common epoch in the past, and sequential quadratic programming optimization is used to find a set of solution states that minimizes the Euclidean distance between the observations at that time. If this set of solutions intersects a catalog object's orbit, then that object is the probabilistic source of the debris objects. The proposed method is demonstrated on an example of a low-Earth-orbit object observation.
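    A minimal sketch of the optimization step, assuming a deliberately simplified placeholder propagation and illustrative admissible-region bounds; the paper itself uses full orbital dynamics and physically derived constraints:

    import numpy as np
    from scipy.optimize import minimize

    def propagate_to_epoch(obs_angles, rho, rho_dot, dt):
        # Placeholder mapping (observation angles, range, range-rate) -> 3D state at a past epoch.
        u = np.array([np.cos(obs_angles[0]), np.sin(obs_angles[0]), np.sin(obs_angles[1])])
        return rho * u + rho_dot * dt * u

    obs_a = (0.30, 0.10)   # illustrative angles for debris observation A [rad]
    obs_b = (0.32, 0.11)   # illustrative angles for debris observation B [rad]
    dt = -600.0            # propagate 10 minutes into the past [s]

    def objective(p):
        rho_a, rhod_a, rho_b, rhod_b = p
        sa = propagate_to_epoch(obs_a, rho_a, rhod_a, dt)
        sb = propagate_to_epoch(obs_b, rho_b, rhod_b, dt)
        return np.linalg.norm(sa - sb)            # distance between candidate common origins

    # Bounds stand in for admissible-region constraints on energy and periapsis radius.
    bounds = [(6.9e6, 8.0e6), (-100.0, 100.0)] * 2
    result = minimize(objective, x0=[7.0e6, 0.0, 7.2e6, 0.0], method="SLSQP", bounds=bounds)
    print(result.x, result.fun)                   # small result.fun supports a common-origin hypothesis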