Organizational Unit:
Aerospace Systems Design Laboratory (ASDL)


Publication Search Results

Now showing 1 - 10 of 18
  • Item
    A methodology to support relevant comparisons of Earth-Mars communication architectures
    (Georgia Institute of Technology, 2018-12-11) Duveiller, Florence B.
    Because of the human imperative for exploration, it is very likely that a manned mission to Mars will occur by the end of the century. Mars is one of the two closest planets to Earth. It is very similar to Earth and could be suitable to host a manned settlement. Sending humans to Mars is above all a technological challenge. Among the technologies needed, some of the most important relate to communications. Crews on Mars need to be able to receive support from Earth, communicate with other human beings on Earth, and send back the data they collect. A reliable and continuous communication link has to be provided between Earth and Mars to ensure a safe journey. However, communication between Earth and Mars is challenging because of the distance between the two planets and because of the obstruction by the Sun that occurs for about 21 days every 780 days. Because of the cost of communication systems and the number of exploration missions to Mars, it has been established that a permanent communication architecture between Earth and Mars is the most profitable option. From these observations, the research goal established for this thesis is to enable reliable and continuous communications between Earth and Mars through the design of a permanent communication architecture. A literature review of communication architectures between Earth and Mars revealed that many concepts have been offered by different authors over the last thirty years. However, when investigating ways to compare the variety of existing architectures, it became apparent that there was no robust, traceable, and rigorous approach to do so. The comparisons made in the literature were incomplete. The requirements driving the design of the architectures were not defined or quantified. The assumptions on which the comparisons were based differed from one architecture to another, and from one comparative study to another.
As a result, all the comparisons offered were inconsistent. This thesis addresses those gaps by developing a methodology that enables relevant and consistent comparisons of Earth-Mars communication architectures and supports gap analysis. The methodology is composed of three steps. The first step consists of defining the requirements and organizing them to emphasize their interactions with the different parts of the communication system (the architecture, the hardware, and the software). A study of the requirements for a deep-space communication architecture supporting manned missions is performed. A set of requirements is chosen for the present work. The requirements are mapped against the communication system. The second step consists of implementing and evaluating the architectures. To ensure the consistency, repeatability, and transparency of the methodology, a single approach enabling the assessment of all the architectures under the same assumptions has to be provided. A framework is designed in a modeling and simulation environment for this purpose. The environment chosen for this thesis is the software Systems Tool Kit (STK) because of its capabilities. A survey of the existing architectures is performed, the metrics to evaluate the architectures are defined, and the architectures are evaluated. The third step of the methodology consists of ranking the alternatives for different weighting scenarios. Four weighting scenarios are selected to illustrate some interesting trades. The ranking of the architectures is performed through a decision-making algorithm, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The results from the different weighting scenarios are discussed.
They underline the incompleteness of the comparisons performed in past studies, the lack of design space exploration for Earth-Mars communication architectures, and the importance of the definition of the set of requirements when designing and comparing architectures. This research provides a transparent and repeatable methodology to rank and determine the best Earth-Mars communication architectures for a set of chosen requirements. It fills several gaps in the comparison of Earth-Mars communication architectures: the lack of definition of the requirements, the lack of a single approach to implement and assess the architectures under the same assumptions, and the lack of a process to compare all the architectures rigorously. Before the present research, there was no robust, consistent, and rigorous means to rank and quantitatively compare the architectures. The methodology not only ranks but also quantitatively compares the architectures; it can quantify the differences between architectures for any number of weighting scenarios. Its capabilities include ranking Earth-Mars architectures based on a chosen set of requirements, performing gap and sensitivity analyses on communication technologies and protocols, and performing design space exploration on architectures. The methodology is demonstrated on a restricted scope and is intended to be extended.
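As an illustration of the ranking step, the TOPSIS procedure named in this abstract can be sketched in a few lines. This is the generic textbook TOPSIS, not the thesis's implementation; the architectures, criteria, weights, and scores in the example are invented placeholders.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS; returns closeness to the ideal (higher is better)."""
    m = np.asarray(decision_matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    v = m / np.linalg.norm(m, axis=0) * weights
    # Ideal / anti-ideal values per criterion, respecting each criterion's direction.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)   # distance to the ideal point
    d_worst = np.linalg.norm(v - anti, axis=1)   # distance to the anti-ideal point
    return d_worst / (d_best + d_worst)

# Hypothetical example: three architectures scored on link availability
# (benefit criterion) and total relay mass (cost criterion).
scores = topsis([[0.9, 100.0], [0.8, 80.0], [0.5, 50.0]],
                weights=np.array([0.6, 0.4]),
                benefit=np.array([True, False]))
```

Different weighting vectors simply re-run the same ranking, which is how the four weighting scenarios in the thesis could be compared.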
  • Item
    Development and validation of 3-D cloud fields using data fusion and machine learning techniques
    (Georgia Institute of Technology, 2018-12-11) Huguenin, Manon
    The impact of climate change is projected to increase significantly over the next decades. Consequently, gaining a better understanding of climate change and being able to accurately predict its effects are of the utmost importance. Climate change predictions are currently achieved using Global Climate Models (GCMs), which are complex representations of the major climate components and their interactions. However, these predictions present high levels of uncertainty, as illustrated by the very disparate results GCMs generate. According to the Intergovernmental Panel on Climate Change (IPCC), there is high confidence that such high levels of uncertainty are due to the way clouds are represented in climate models. Indeed, several cloud phenomena, such as cloud-radiative forcing, are not well modeled in GCMs because they rely on microscopic processes that, due to computational limitations, cannot be represented in GCMs. Such phenomena are instead represented through physically motivated parameterizations, which lead to uncertainties in cloud representations. For these reasons, improving the parameterizations required for representing clouds in GCMs is a current focus of climate modeling research efforts. Integrating cloud satellite data into GCMs has proven essential to the development and assessment of cloud radiative transfer parameterizations. Cloud-related data is captured by a variety of satellites, such as those in NASA’s afternoon constellation (also named the A-train), which collect vertical and horizontal data on the same orbital track. Data from the A-train has been useful to many studies on cloud prediction, but its coverage is limited, because the sensors that collect vertical data have very narrow swaths, with a width as small as one kilometer. As a result, the area where vertical data exists is very limited, equivalent to a 1-kilometer-wide track.
Thus, in order for satellite cloud data to be compared to global representations of clouds in GCMs, additional vertical cloud data has to be generated to provide more global coverage. Consequently, the overall objective of this thesis is to support the validation of GCM cloud representations through the generation of 3D cloud fields using cloud vertical data from space-borne sensors. This has already been attempted by several studies through physics-based and similarity-based approaches. However, such studies have a number of limitations, such as the inability to handle large amounts of data and high resolutions, or the inability to account for diverse vertical profiles. Such limitations motivate the need for novel approaches to the generation of 3D cloud fields. For this purpose, efforts have been initiated at ASDL to develop an approach that leverages data fusion and machine learning techniques to generate 3D cloud field domains. Several successive ASDL-led efforts have helped shape this approach and overcome some of its challenges. In particular, these efforts have led to the development of a cloud predictive classification model that is based on decision trees and integrates atmospheric data to predict vertical cloud fraction. This model was evaluated against “on-track” cloud vertical data and was found to have acceptable performance. However, several limitations were identified in this model and the approach that led to it. First, its performance was lower when predicting lower-altitude clouds, and its overall performance could still be greatly improved. Second, the model had only been assessed at “on-track” locations, while the construction of data at “off-track” locations is necessary for generating 3D cloud fields. Last, the model had not been validated in the context of GCM cloud representation, and no satisfactory level of model accuracy had been determined in this context.
This work aims at overcoming these limitations through the following approach. The model obtained from previous efforts is improved by integrating additional, higher-accuracy data, by investigating the correlation within atmospheric predictors, and by implementing additional classification machine learning techniques, such as Random Forests. Then, the predictive model is applied at “off-track” locations, using predictors from NASA’s LAADS datasets. Horizontal validation of the computed profiles is performed against an existing dataset containing the Cloud Mask at the same locations. This leads to the generation of a coherent global 3D cloud field dataset. Last, a methodology for validating this computed dataset in the context of GCM cloud-radiative forcing representation is developed. The Fu-Liou code is applied to sample vertical profiles from the computed dataset, and the output radiative fluxes are analyzed. This research significantly improves the model developed in previous efforts and validates the computed global dataset against existing data. Such validation demonstrates the potential of a machine learning-based approach to generate 3D cloud fields. Additionally, this research provides a benchmarked methodology to further validate this machine learning-based approach in this context. Altogether, this thesis contributes to NASA’s ongoing efforts toward improving GCMs and climate change predictions as a whole.
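The thesis relies on standard decision-tree and Random Forest implementations; purely as a minimal sketch of the underlying bagging idea, here is a toy "forest" of one-split stumps trained on invented data (a hypothetical humidity predictor driving a cloud/no-cloud label — the feature names and values are not from the thesis).

```python
import numpy as np

def fit_stump(X, y):
    """Fit a one-split decision stump: best (feature, threshold, left/right labels)."""
    best, best_acc = (0, 0.0, 0, 1), -1.0
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            for ll, rl in ((0, 1), (1, 0)):
                acc = np.mean(np.where(left, ll, rl) == y)
                if acc > best_acc:
                    best_acc, best = acc, (j, t, ll, rl)
    return best

def stump_predict(stump, X):
    j, t, ll, rl = stump
    return np.where(X[:, j] <= t, ll, rl)

def random_forest_fit(X, y, n_trees=25, rng=None):
    """Bagging: fit each stump on a bootstrap resample of the training data."""
    gen = np.random.default_rng(rng)
    stumps = []
    for _ in range(n_trees):
        idx = gen.integers(0, len(y), len(y))  # bootstrap resample with replacement
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def random_forest_predict(stumps, X):
    """Majority vote across the ensemble."""
    votes = np.stack([stump_predict(s, X) for s in stumps])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

A real Random Forest also subsamples features and grows full trees; the sketch keeps only the bootstrap-plus-vote core that distinguishes forests from single decision trees.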
  • Item
    A methodology for conducting design trades related to advanced in-space assembly
    (Georgia Institute of Technology, 2018-12-07) Jara de Carvalho Vale de Almeida, Lourenco
    In the decades since the end of the Apollo program, manned space missions have been confined to Low Earth Orbit. Today, ambitious efforts are underway to return astronauts to the surface of the Moon, and eventually reach Mars. Technical challenges and dangers to crew health and well-being will require innovative solutions. The use of In-Space Assembly (ISA) can provide critical new capabilities, by freeing designs from the size limitations of launch vehicles. ISA can be performed using different strategies. The current state-of-the-art strategy is to dock large modules together. Future technologies, such as welding in space, will unlock more advanced strategies. Advanced assembly strategies deliver smaller component pieces to orbit in highly efficient packaging but require lengthy assembly tasks to be performed in space. The choice of assembly strategy impacts the cost and duration of the entire mission. As a rule, simpler strategies require more deliveries, increasing costs, while advanced strategies require more assembly tasks, increasing time. The effects of these design choices must be modeled in order to conduct design trades. A methodology to conduct these design trades is presented. It uses a model of the logistics involved in assembling a space system, including deliveries and assembly tasks. The model employs a network formulation, where the pieces of a structure must flow from their initial state to a final assembly state, via arcs representing deliveries and assembly tasks. By comparing solutions obtained under different scenarios, additional design trades can be performed. This methodology is applied to the case of an Artificial Gravity Space Station. Results for the assembly of this system are obtained for a baseline scenario and compared with results after varying parameters such as the delivery and storage capacity. 
The comparison reveals the sensitivities of the assembly process to each parameter and the benefits that can be gained from certain improvements, demonstrating the effectiveness of the methodology.
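The network formulation described above, in which pieces flow from an initial state to a final assembled state via delivery and assembly arcs, can be caricatured as a shortest-path search under a weighted cost-and-time objective. The thesis's actual model is a richer logistics network optimization; the graph, costs, and durations below are invented for illustration only.

```python
import heapq

def best_assembly_plan(arcs, start, goal, w_cost=1.0, w_time=0.0):
    """Cheapest path through an assembly-state graph (Dijkstra).

    arcs: dict mapping a state to a list of (next_state, cost, time) tuples,
    where each arc is either a launch delivery or an in-space assembly task.
    The objective is a weighted sum of dollar cost and schedule time.
    """
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        score, node, path = heapq.heappop(frontier)
        if node == goal:
            return score, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost, time in arcs.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier,
                               (score + w_cost * cost + w_time * time, nxt, path + [nxt]))
    return None

# Invented trade: many small part deliveries plus lengthy welding, versus
# fewer, costlier module deliveries plus quick docking.
arcs = {
    "ground": [("parts_in_orbit", 300.0, 2.0), ("modules_in_orbit", 500.0, 1.0)],
    "parts_in_orbit": [("assembled", 50.0, 10.0)],     # advanced strategy: weld parts
    "modules_in_orbit": [("assembled", 20.0, 2.0)],    # state of the art: dock modules
}
```

Sweeping the weights reproduces the abstract's rule of thumb: cost-dominated objectives favor the advanced packaging strategy, while time-dominated objectives favor docking large modules.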
  • Item
    A nonparametric-based approach on the propagation of imprecise probabilities due to small datasets
    (Georgia Institute of Technology, 2018-04-25) Gao, Zhenyu
    Uncertainty quantification (UQ) is typically done using precise probabilities, which require a very high level of precision and consistency of information for the uncertain sources that is rarely available in actual engineering applications. For better accuracy in the UQ process, greater flexibility in accommodating distributions for uncertain sources is needed, to base inferences on weaker assumptions and avoid introducing unwarranted information. Recent literature has proposed a parametric-based approach for the propagation of uncertainty caused by a lack of sufficient statistical data, yet that approach still has notable limitations and constraints. This work proposes a nonparametric-based approach that facilitates the propagation of uncertainty in the small-dataset case. The first part of this work uses Kernel Density Estimation (KDE) and the bootstrap to estimate the probability density function of a random variable based on small datasets. As a result, two types of sampling densities for propagating uncertainty are generated: an optimal sampling density representing the best estimate of the true density, and a maximum-variance density representing the risk and uncertainty inherent in small datasets. The second part extends the first to generate two-dimensional nonparametric density estimates and capture dependencies among variables. After a process to confirm the correlation among the variables based on small datasets, copulas and Sklar's theorem are used to link the marginal nonparametric densities and create joint densities. By propagating the joint densities for dependent variables, researchers can prevent uncertainty in the outputs from being underestimated or overestimated. The effectiveness of the nonparametric density estimation methods is tested on selected test cases with different statistical characteristics. A complete uncertainty propagation test through a complex systems model is also conducted.
Finally, the nonparametric-based methods developed in this thesis are applied to a challenging problem in aviation environmental impact analysis.
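The first step described above, KDE combined with the bootstrap on a small sample, can be sketched generically. This assumes a 1-D Gaussian kernel with Silverman's bandwidth rule, which may differ from the thesis's exact estimator; the data values are invented.

```python
import numpy as np

def gaussian_kde(data, bandwidth=None):
    """Return a callable 1-D Gaussian kernel density estimate."""
    data = np.asarray(data, dtype=float)
    if bandwidth is None:
        # Silverman's rule of thumb for a Gaussian kernel.
        bandwidth = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)
    if bandwidth <= 0:
        bandwidth = 1e-3  # guard against degenerate (constant-valued) resamples
    def pdf(x):
        z = (np.atleast_1d(np.asarray(x, dtype=float))[:, None] - data[None, :]) / bandwidth
        return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(data) * bandwidth * np.sqrt(2 * np.pi))
    return pdf

def bootstrap_kdes(data, n_boot=200, rng=0):
    """Refit the KDE on bootstrap resamples; the ensemble spread reflects
    the extra uncertainty inherent in a small dataset."""
    gen = np.random.default_rng(rng)
    data = np.asarray(data, dtype=float)
    return [gaussian_kde(gen.choice(data, size=len(data), replace=True))
            for _ in range(n_boot)]
```

In the spirit of the abstract, the pointwise mean of the ensemble plays the role of a best-estimate density, while the most dispersed ensemble members indicate how badly a small sample could mislead the propagation.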
  • Item
    Compressor conceptual design optimization
    (Georgia Institute of Technology, 2015-04-27) Miller, Andrew Scott
    Gas turbine engines are conceptually designed using performance maps that describe the compressor’s effect on the cycle. During the traditional design process, the cycle designer selects a compressor design point based on criteria to meet cycle design point requirements, and performance maps are found or created for off-design analysis that match this design point selection. Although the maps always have a pedigree to an existing compressor design, oftentimes these maps are scaled to account for design or technology changes. Scaling practices disconnect the maps from the geometry and flow associated with the reference compressor, and from the design parameters needed for compressor preliminary design. A goal in gas turbine engine research is to bridge this disconnect in order to produce acceptable performance maps that are coupled with compressor design parameters. A new compressor conceptual design and performance prediction method has been developed to couple performance maps to conceptual design parameters. The method adapts and combines the key elements of compressor conceptual design with multiple-meanline analysis, allowing a map of optimal performance, tied to reasonable design parameters, to be defined for cycle design. This method is prompted by the development of multi-fidelity (zooming) analysis capabilities, which allow compressor analysis to be incorporated into cycle analysis. Integrating compressor conceptual design and map generation into cycle analysis will allow more realistic decisions to be made sooner, reducing the time and cost of design iterations.
  • Item
    A methodology for capturing the impacts of bleed flow extraction on compressor performance and operability in engine conceptual design
    (Georgia Institute of Technology, 2015-04-23) Brooks, Joshua Daniel
    The commercial aviation industry continually faces the challenge of reducing fuel consumption for the next generation of aircraft. This challenge rests largely on the shoulders of engine design teams, who push the boundaries of the traditional design paradigm in pursuit of more fuel-efficient, cost-effective, and environmentally clean engines. In order to realize these gains, engine system and subsystem level impacts from a wide range of variables must be accounted for earlier in the design process than ever before. One of these variables, bleed flow extraction, or simply bleed, plays an especially important role because of the approaches engine designers are taking to improve fuel efficiency. For this reason, this research examined the current state of bleed handling during the engine conceptual design process, questioned its adequacy with regard to properly capturing the impacts of this mechanism, and developed a bleed handling methodology designed to replace the existing method. The traditional method of handling bleed in the engine cycle design stage relies on a variety of engine level impacts stemming from zero-dimensional thermodynamic analysis, as well as a static performance characterization of the engine compression component, the axial flow compressor. The traditional method operates under the assumption that the introduction of additional bleed creates no additional compressor level impact. The methodology developed in this work challenges this assumption in two parts: first by creating a way to evaluate the compressor level impacts caused by the introduction of bleed, and second by implementing the knowledge gained from this compressor level evaluation in the engine cycle design, where the engine level impacts could be compared to those predicted by the traditional method of bleed handling.
The compressor level impacts from the addition of bleed were quantified using a low-fidelity, multi-stream, meanline analysis. Here, an innovative approach was developed that cross-pollinated existing methods used elsewhere in the analysis environment to account for the bleed impact in the object-oriented modeling environment. Implementation of this approach revealed that the addition of bleed significantly and negatively impacts compressor level performance and operability. With the completion of the above analyses, this newly acquired capability to quantify, or at least qualify, the compressor level bleed impacts was tied into the engine level cycle analysis. This form of component zooming allows the user to update the bleed flow rate at a number of locations along the compressor, as well as the compressor variable stator vane orientation, within the existing cycle analysis. Utilization of this ability provided engine level performance and operability analyses that revealed a disparity between the predictions of the traditional and the newly developed bleed handling methodologies. The results reveal a need for more stringent handling of bleed during engine conceptual design than the traditional method provides, and suggest that the developed methodology is a positive step toward meeting this need.
  • Item
    Aerothermodynamic cycle design and optimization method for aircraft engines
    (Georgia Institute of Technology, 2014-08-22) Ford, Sean T.
    This thesis addresses the need for an optimization method that can simultaneously optimize and balance an aerothermodynamic cycle. The method developed is able to control cycle design variables at all operating conditions to meet the performance requirements while controlling any additional variables that may be used to optimize the cycle, and while maintaining all operating limits and engine constraints. The additional variables represent degrees of freedom beyond what is needed for conservation of mass and energy in the engine system. The motivation for such a method is derived from variable cycle engines; however, the method is general enough to use with most engine architectures. The method is similar to many optimization algorithms but differs in its implementation for an aircraft engine by combining the cycle balance and optimization using a Newton-Raphson cycle solver, efficiently finding cycle designs for a wide range of engine architectures with extra degrees of freedom not needed to balance the cycle. Combining the optimization with the cycle solver greatly speeds up the design and optimization process. A detailed process description for implementation of the method is provided, as well as a proof of concept using several analytical test functions. Finally, the method is demonstrated on a separate-flow turbofan model. Limitations and applications of the method are further explored, including application to a multi-design-point methodology.
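The core mechanism named above, a Newton-Raphson solver driving a vector of cycle residuals to zero, can be sketched generically with a finite-difference Jacobian. The two-equation "cycle balance" in the example is an invented stand-in, not the thesis's turbofan model.

```python
import numpy as np

def newton_solve(residuals, x0, tol=1e-10, max_iter=50, h=1e-6):
    """Multivariate Newton-Raphson with a forward-difference Jacobian.

    residuals: callable mapping the solver state x to a residual vector
    (e.g., mass and energy imbalances); returns x with residuals ~ 0.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        r = np.asarray(residuals(x), dtype=float)
        if np.max(np.abs(r)) < tol:
            return x
        # Build the Jacobian column by column via forward differences.
        J = np.empty((len(r), len(x)))
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (np.asarray(residuals(xp), dtype=float) - r) / h
        x -= np.linalg.solve(J, r)  # Newton step
    raise RuntimeError("cycle failed to converge")
```

Extra optimization variables beyond those needed to close the balance would, in the spirit of the thesis, be folded into the same solver loop rather than wrapped around it.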
  • Item
    A combined global and local methodology for launch vehicle trajectory design-space exploration and optimization
    (Georgia Institute of Technology, 2014-04-09) Steffens, Michael J.
    Trajectory optimization is an important part of launch vehicle design and operation. With the high costs of launching payload into orbit, every pound that can be saved increases affordability. One way to save weight in launch vehicle design and operation is by optimizing the ascent trajectory. Launch vehicle trajectory optimization is a field that has been studied since the 1950s. Originally, analytic solutions were sought because computers were slow and inefficient. As computers advanced, different algorithms were developed for the purpose of trajectory optimization. Computer resources were still limited, however, and as such the algorithms were limited to local optimization methods, which can get stuck in specific regions of the design space. Local methods for trajectory optimization have been well studied and developed. Computer technology continues to advance, and in recent years global optimization has become available for application to a wide variety of problems, including trajectory optimization. The aim of this thesis is to create a methodology that applies global optimization to the trajectory optimization problem. Using information from a global search, the optimization design space can be reduced and a much smaller design space can be analyzed using existing local methods. This allows areas of interest in the design space to be identified and further studied, and helps overcome the fact that many local methods get stuck in local optima. The design space included in trajectory optimization is also considered in this thesis. The typical optimization variables are initial conditions and flight control variables. For direct optimization methods, the trajectory phase structure is currently chosen a priori. Including trajectory phase structure variables in the optimization process can yield better solutions.
The methodology and phase structure optimization are demonstrated using an Earth-to-orbit trajectory of a Delta IV Medium launch vehicle. Different methods of performing the global search and reducing the design space are compared. Local optimization is performed using the industry-standard trajectory optimization tool POST. Finally, methods for varying the trajectory phase structure are presented and the results are compared.
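The combined strategy described above, a global search that locates a promising region followed by local refinement, can be sketched without POST. The shrinking pattern search and the one-variable test function below are invented stand-ins for the thesis's local optimizer and trajectory objective.

```python
import numpy as np

def global_then_local(f, bounds, n_samples=500, rng=0):
    """Two-stage minimization: coarse random sampling to find a promising
    basin, then a shrinking pattern search for local refinement.

    bounds: list of (lo, hi) pairs, one per design variable.
    """
    gen = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    # Stage 1: global exploration by uniform random sampling.
    X = gen.uniform(lo, hi, size=(n_samples, len(lo)))
    x = X[np.argmin([f(xi) for xi in X])]
    # Stage 2: local pattern search around the best global sample.
    step = (hi - lo) / 10.0
    while np.max(step) > 1e-9:
        improved = False
        for j in range(len(x)):
            for d in (+1.0, -1.0):
                cand = x.copy()
                cand[j] = np.clip(cand[j] + d * step[j], lo[j], hi[j])
                if f(cand) < f(x):
                    x, improved = cand, True
        if not improved:
            step /= 2.0  # no better neighbor: refine the search radius
    return x

# Invented multimodal objective with two basins; the x/10 tilt makes the
# basin near x = -2 the global one.
objective = lambda x: (x[0] ** 2 - 4.0) ** 2 + x[0] / 10.0
```

The point of the two stages mirrors the abstract: the random sampling is cheap and keeps the search out of the wrong basin, while the local stage supplies the precision a global sampler lacks.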
  • Item
    Numerical simulation of ice accretion on 3-D rotor blades
    (Georgia Institute of Technology, 2014-03-12) Wing, Eliya
    Rotorcraft are highly sensitive to ice accretion. When ice forms on helicopter rotor blades, performance degradation ensues due to a loss of lift and a rise in drag. The presence of ice increases torque and required power and leads to rotor vibrations. Due to these undesirable changes in the vehicle's performance, the FAA requires intensive certification to determine a helicopter's airworthiness in icing conditions. Since flight tests and icing tunnel tests are very expensive and cannot simulate all conditions required for certification, it is becoming necessary to use computational solvers to model ice growth and the subsequent performance degradation. Currently, most solvers use the strip theory approach for 3D shapes. However, rotor blades can experience significant span-wise flow from separation or centrifugal forces. The goal of this work is to investigate the influence of span-wise flow on ice accretion. The classical strip theory approach is compared to a curved-surface streamline based approach to assess the relative differences in ice formation.
  • Item
    A parametric and physics-based approach to structural weight estimation of the hybrid wing body aircraft
    (Georgia Institute of Technology, 2012-08-28) Laughlin, Trevor William
    Estimating the structural weight of a Hybrid Wing Body (HWB) aircraft during conceptual design has proven to be a significant challenge due to its unconventional configuration. Aircraft structural weight estimation is critical during the early phases of design because inaccurate estimations could result in costly design changes or jeopardize the mission requirements and thus degrade the concept's overall viability. The tools and methods typically employed for this task are inadequate since they are derived from historical data generated by decades of tube-and-wing style construction. In addition to the limited applicability of these empirical models, the conceptual design phase requires that any new tools and methods be flexible enough to enable design space exploration without consuming a significant amount of time and computational resources. This thesis addresses these challenges by developing a parametric and physics-based modeling and simulation (M&S) environment for the purpose of HWB structural weight estimation. The tools in the M&S environment are selected based on their ability to represent the unique HWB geometry and model the physical phenomena present in the centerbody section. The new M&S environment is used to identify key design parameters that significantly contribute to the variability of the HWB centerbody structural weight and also used to generate surrogate models. These surrogate models can augment traditional aircraft sizing routines and provide improved structural weight estimations.