Series
Master of Science in Aerospace Engineering

Series Type
Degree Series

Publication Search Results

Now showing 1 - 10 of 125
  • Item
    Hypersonic shape parameterization using class-shape transformation with stagnation point heat flux
    (Georgia Institute of Technology, 2019-05-01) Fan, Justin
    In recent years, hypersonics has undergone a major resurgence, driven primarily by domestic and foreign militaries pursuing advanced and unchallenged weapon systems. China and Russia have tested hypersonic systems, and the United States is pushing to match and exceed adversarial capabilities. While the hypersonic vehicle is not a recently conceived concept, it has experienced turbulent progress throughout the decades. Hypersonic vehicles are inherently complex to design due to intricate couplings between design disciplines: aerodynamics, aerodynamic heating, trajectory, structures, and controls. As computational analysis tools in these disciplines have progressed, the geometries and vehicles must progress as well. For aerodynamic purposes, hypersonic vehicles often employ sharp leading edges to achieve high lift-to-drag properties. However, sharp leading edges at hypersonic velocities also experience severe aerodynamic heating, which can destroy materials and ultimately the entire vehicle, as was the case in the Space Shuttle Columbia accident. The aerodynamic heating, specifically the stagnation point heat flux, has been found to be directly tied to the leading-edge radius of a given shape, decreasing as the radius increases. The purpose of this thesis is to couple the shape parameterization method known as the class-shape transformation (CST) method with stagnation point heat flux. The CST method is well established in aerodynamic shape optimization for obtaining maximum lift-to-drag ratio (L/D). Instead of taking a shape and performing time-consuming analyses to determine the leading-edge heat flux, an initial geometry can be determined from approximate hypersonic operating conditions. The objective of this research is to 1) leverage a parametric shape modeling method to generate geometries, 2) incorporate hypersonic aerodynamic heating effects into the geometry, and 3) optimize the new geometry for maximum aerodynamic efficiency.
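    To make the shape-to-heating coupling concrete, here is a minimal sketch assuming Kulfan's CST formulation with class parameters N1 = 0.5, N2 = 1.0 (round nose), his relation R_LE/c = A0^2/2 for the leading-edge radius, and the Sutton-Graves correlation for Earth stagnation-point heat flux; the coefficient values and flight conditions are illustrative, not taken from the thesis.

    ```python
    import numpy as np
    from math import comb

    def cst_upper(psi, coeffs, n1=0.5, n2=1.0):
        """Kulfan CST: class function times a Bernstein-polynomial shape function."""
        n = len(coeffs) - 1
        cls = psi**n1 * (1.0 - psi)**n2
        shape = sum(a * comb(n, i) * psi**i * (1.0 - psi)**(n - i)
                    for i, a in enumerate(coeffs))
        return cls * shape

    psi = np.linspace(0.0, 1.0, 101)   # non-dimensional chord stations
    coeffs = [0.20, 0.18, 0.15]        # illustrative CST coefficients
    y_upper = cst_upper(psi, coeffs)   # upper-surface ordinates (y/c)

    # For N1 = 0.5, N2 = 1.0 the first coefficient sets the leading-edge radius:
    # R_LE / c = A0**2 / 2 (Kulfan's relation).
    chord = 1.0                        # m
    r_le = coeffs[0]**2 / 2.0 * chord

    # Sutton-Graves stagnation-point heat flux for Earth (q in W/m^2):
    # q = 1.7415e-4 * sqrt(rho / R_n) * V**3, rho in kg/m^3, R_n in m, V in m/s.
    rho, v = 1.0e-3, 5000.0            # assumed freestream density and velocity
    q_stag = 1.7415e-4 * np.sqrt(rho / r_le) * v**3
    print(f"R_LE = {r_le:.4f} m, q_stag = {q_stag / 1e4:.0f} W/cm^2")
    ```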
  • Item
    A methodology for sequential low thrust trajectory optimization using prediction models derived from machine learning techniques
    (Georgia Institute of Technology, 2019-04-30) Casey, John Alexander
    Spacecraft trajectory sequence optimization has been a well-known problem for many years. Difficulty in finding adequate solutions arises from the combinatorial explosion of possible sequences to evaluate, as well as from the complexity of the underlying physics. Since acceptable solutions to the problem are typically scarce, a large search of the solution space must be conducted to find good sequences. Low thrust trajectories are of particular interest in this field due to the significant increase in efficiency that low thrust propulsion methods offer. Unfortunately, in the case of low thrust trajectory problems, calculating the cost of these trajectories is computationally expensive, so estimates are used to restrict the search space before fully solving the trajectory during the mission planning process. However, these estimates, such as Lambert solvers, have been shown to be poor estimators of low thrust trajectories. Recent work has shown that machine learning regression techniques can be trained to accurately predict fuel consumption for low thrust trajectories between orbits. These prediction models provide an order of magnitude increase in accuracy over Lambert solvers while retaining a fast computational speed. In this work, a methodology is developed for integrating these machine learning techniques into a trajectory sequence optimization technique. First, a set of training data composed of low thrust trajectories is produced using a Sims-Flanagan solver. Next, this data is used to train regression and classification models that respectively predict the final mass of a spacecraft after a low thrust transfer and the feasibility of a transfer. Two machine learning techniques were used: gradient boosting and artificial neural networks. These predictors are then integrated into a sequence evaluation scheme that scores a sequence of targets to visit according to the prediction models. This serves as the objective function of the global optimizer. Finally, this objective function is integrated into a Genetic Algorithm that optimizes sequences of targets to visit. Since the objective function of this algorithm uses predictions to score sequences, the final sequence is evaluated by a Sims-Flanagan low thrust trajectory solver to assess the efficacy of the method. Additionally, a comparison is made between the global optimization results with two different objective functions: one that scores sequences using the machine learning predictors, and one that uses Lambert solvers to score sequences. This allows for a measurement of this method's improvement in the global optimization results. Results of this work demonstrate that the developed methodology provides a significant improvement in the quality of sequences produced by the Genetic Algorithm when paired with the machine learning predictor based objective function. Both gradient boosting and artificial neural networks are shown to be accurate predictors of both the fuel usage and feasibility of low thrust trajectories between orbits. However, gradient boosting is found to offer improved results when evaluating sequences of targets to visit. When paired with the Genetic Algorithm global optimizer, both the gradient boosting prediction model and the artificial neural network model produce similar results. Both are shown to offer a significant improvement over the Lambert solver based objective function while maintaining similar speeds.
    The positive results this methodology yields lend support to the notion that machine learning techniques have the potential to improve the optimization of sequences of low thrust trajectories. This work lays down a framework that can be applied to preliminary mission planning for space missions outfitted with low thrust propulsion methods. Such missions include, but are not limited to, multiple main-belt asteroid rendezvous, debris removal from Earth orbit, and interplanetary tours of the solar system.
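    As a rough illustration of the surrogate-scored objective described above, the sketch below trains scikit-learn gradient boosting models on synthetic transfer data and uses them to score a candidate visit sequence; the feature layout, toy targets, and penalty scheme are assumptions standing in for the thesis's Sims-Flanagan training data.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Toy stand-in for the training set: each row holds the orbital-element
    # differences of one leg; targets are propellant used on that leg
    # (regression) and whether the leg is feasible (classification).
    X = rng.uniform(-1, 1, size=(2000, 6))    # [da, de, di, dRAAN, dargp, dM]
    m_prop = 200.0 * np.linalg.norm(X, axis=1) + rng.normal(0.0, 5.0, 2000)
    feasible = (np.linalg.norm(X, axis=1) < 1.2).astype(int)

    reg = GradientBoostingRegressor().fit(X, m_prop)
    clf = GradientBoostingClassifier().fit(X, feasible)

    def score_sequence(elements, order):
        """Surrogate objective: negative predicted propellant over all legs,
        with an infinite penalty for any leg the classifier deems infeasible."""
        total = 0.0
        for a, b in zip(order[:-1], order[1:]):
            leg = (elements[b] - elements[a]).reshape(1, -1)
            if clf.predict(leg)[0] == 0:
                return -np.inf
            total += reg.predict(leg)[0]
        return -total

    elements = rng.uniform(-1, 1, size=(8, 6))  # toy element sets for 8 targets
    print(score_sequence(elements, order=[0, 3, 5, 2]))
    ```

    A permutation-encoded Genetic Algorithm would then maximize this score, with only the winning sequence re-solved by the full Sims-Flanagan solver.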
  • Item
    Ensuring pedestrian safety on campus through the use of computer vision
    (Georgia Institute of Technology, 2019-04-26) Commun, Domitille Marie, France
    In the United States alone, 5,987 pedestrians were killed in 2016 and 70,000 were injured in 2015. Those numbers are of particular concern to universities, where traffic accidents and incidents represent one of the main causes of injuries on campuses. On the Georgia Tech campus, the growth of the population-to-infrastructure ratio, the emergence of new transportation systems, and the increase in the number of distractions have been shown to have an impact on pedestrian safety. One means of ensuring safety and fast responses to incidents on campus is video surveillance. However, identifying risky situations for pedestrians from video feeds requires significant human effort. Computer vision and other image processing methods applied to videos may provide the means to reduce the cost and human error associated with processing images. Computer vision in particular provides techniques that enable artificial systems to obtain information from images. While many vendors provide computer vision and image recognition capabilities, additional efforts and tools are needed to support 1) the mission of the Georgia Tech Police Department and 2) the identification of solutions or practices that would lead to improved pedestrian safety on campus. Data from cameras can be systematically and automatically analyzed to provide improved situational awareness, help automate and better inform enforcement operations, identify conflict situations involving pedestrians, and provide calibration data to optimize traffic light control. In particular, this thesis aims at developing an intelligent system that automates data collection about incidents around campus and attempts to optimize traffic light control. This is achieved by: 1) Leveraging computer vision techniques such as object detection algorithms to identify and characterize conflict situations involving pedestrians. Computer vision techniques were implemented to detect and track pedestrians and vehicles in surveillance videos. Once trajectories were extracted from the videos, additional data such as speeds, collisions, and vehicle and pedestrian flows were determined. Such data can be used by the Georgia Tech Police Department to determine the need for agents to manage traffic at a given intersection. Speed information is used to detect speeding automatically, which can help automate law enforcement. Traffic and walking light color detection algorithms were implemented and combined with location data to detect jaywalking and red light running. The conflict situations detected were stored in a database that complements the Police record database. The data is structured so as to enable statistics and the detection of patterns with improved processing time. Hence, the tool built in this thesis provides structured information about violations and dangerous situations around campus. This data can be used by the Police Department to automate law enforcement, issue citations automatically, and determine the need for countermeasures to ensure pedestrian safety. 2) Implementing a simple optimized traffic light control system and setting up the inputs necessary for an improved optimization of traffic light control using reinforcement learning. It is expected that the improved situational awareness and information gained from developing these capabilities will contribute to reducing the number of collisions and the amount of dangerous jaywalking, and lead to new ways to ensure pedestrian safety on campus.
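    One piece of such a pipeline, automatic speed flagging, might be sketched as follows with OpenCV background subtraction and nearest-neighbour centroid matching; the video file, metres-per-pixel calibration, and speed threshold are all hypothetical, and a deployed system would use a trained detector and proper multi-object tracking instead.

    ```python
    import cv2
    import numpy as np

    METERS_PER_PIXEL = 0.05       # assumed camera calibration constant
    SPEED_LIMIT_MPS = 11.0        # roughly 25 mph

    cap = cv2.VideoCapture("intersection.mp4")   # hypothetical surveillance clip
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    subtractor = cv2.createBackgroundSubtractorMOG2()
    prev_centroids = []

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            if cv2.contourArea(c) < 500:         # ignore small blobs and noise
                continue
            x, y, w, h = cv2.boundingRect(c)
            centroids.append((x + w / 2, y + h / 2))
        # Nearest-neighbour association with the previous frame -> crude speed
        for cx, cy in centroids:
            if prev_centroids:
                px, py = min(prev_centroids,
                             key=lambda p: (p[0] - cx)**2 + (p[1] - cy)**2)
                speed = np.hypot(cx - px, cy - py) * METERS_PER_PIXEL * fps
                if speed > SPEED_LIMIT_MPS:
                    print(f"possible speeding: {speed:.1f} m/s at ({cx:.0f}, {cy:.0f})")
        prev_centroids = centroids
    cap.release()
    ```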
  • Item
    Machine learning regression for estimating characteristics of low-thrust transfers
    (Georgia Institute of Technology, 2019-04-26) Chen, Gene Lamar
    In this thesis, a methodology for training machine learning algorithms to predict the fuel and time costs of low-thrust trajectories between two objects is developed. In order to demonstrate the methodology, experiments and hypotheses were devised. The first experiment identified that a direct method was more efficient than an indirect method for solving low-thrust trajectories. The second experiment, an offshoot of the first, found that the Sims-Flanagan method as implemented in the Python library PyKEP would be the most efficient manner of creating the training data. The training data consisted of the orbital elements of both the departure and arrival bodies, as well as the fuel and time-of-flight associated with a transfer between those bodies. A total of 7,218 transfers made up the training data. After creating the training data, the third and final experiment could be conducted, to see if machine learning methods could accurately predict the fuel and time costs of low-thrust trajectories for a larger design space than had been investigated in previous literature. As such, the training data consisted of transfers, generated using a space-filling Latin Hypercube design of experiments, between bodies of highly varying orbital elements. The departure and arrival bodies’ semimajor axes and inclinations differ much more than in previous literature. It was found that all the machine learning regression methods analyzed greatly outperformed the Lambert predictor, a predictor based on the impulsive thrust assumption. The accuracy of the time-of-flight prediction was close to that of the mass prediction when considering the mean absolute error of the expended propellant mass.
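    The DOE-plus-regression pipeline can be sketched as below; the element ranges and the analytic stand-in for the Sims-Flanagan (PyKEP) propellant target are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import qmc
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Space-filling Latin Hypercube over departure/arrival elements (toy ranges):
    # columns are a1 [AU], e1, i1 [rad], a2 [AU], e2, i2 [rad].
    sampler = qmc.LatinHypercube(d=6, seed=1)
    X = qmc.scale(sampler.random(5000),
                  l_bounds=[0.8, 0.0, 0.0, 0.8, 0.0, 0.0],
                  u_bounds=[3.5, 0.4, 0.6, 3.5, 0.4, 0.6])

    # Stand-in target: in the thesis this is propellant mass from a
    # Sims-Flanagan solve of each transfer, not a formula.
    rng = np.random.default_rng(1)
    y = (50.0 * np.abs(X[:, 0] - X[:, 3]) + 80.0 * np.abs(X[:, 2] - X[:, 5])
         + rng.normal(0.0, 2.0, 5000))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor().fit(X_tr, y_tr)
    print("propellant-mass MAE:", mean_absolute_error(y_te, model.predict(X_te)))
    ```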
  • Item
    System identification of a general aviation aircraft using a personal electronic device
    (Georgia Institute of Technology, 2019-04-25) Nothem, Michael
    System Identification (SysID) is the process of obtaining a model of system dynamics by analyzing measurement data. SysID is often used in flight testing to obtain or refine estimates of aircraft stability and control derivatives and performance. Recent applications have shown that SysID can also be used to monitor and update models of dynamics and performance during routine operations. General Aviation (GA) continues to see higher accident rates than other aviation sectors. To combat this, research into accident mitigation strategies, especially for loss of control (LOC) accidents, has led to the development of energy-based and envelope-based safety metrics that can be used to monitor and improve the safety and efficiency of GA operations. However, these methods depend on the existence of an accurate aircraft model to predict the performance and dynamics of the aircraft. The diversity of the aging GA fleet has established the need to calibrate existing models using flight data. SysID therefore has the potential to improve these methods by monitoring and updating aircraft models for each individual GA aircraft. Any SysID process depends on the type and quality of measurement data available as well as the nature of the aircraft model (what parameters are being identified) and the method of SysID being used. As opposed to flight test SysID, the availability of flight data can be limited in GA. However, flight data recording using Personal Electronic Devices (PEDs) or low-cost Flight Data Recorders (FDRs) is becoming common. The capabilities of SysID methods using data from these devices have yet to be explored. This work demonstrates a process for evaluating SysID techniques for GA aircraft using data from a PED. A simulator environment was created that allowed testing of a variety of SysID and estimation methods. An observability condition was developed and used to inform decisions regarding model parameters and necessary assumptions. The results of this process provide a proof of existence and uniqueness of a solution to the minimization problem that SysID aims to solve. Local observability and global identifiability were also used to divide the “blind” SysID process into two estimations: an online estimation of aircraft states and unknown controls, and an offline identification of model parameters. Two SysID methods were then compared: the Output Error Method (OEM) and the Filter Method using an Extended Kalman Filter (EKF). It was shown that OEM outperformed EKF at the expense of increased computational burden. Potential improvements to both OEM and EKF SysID in this context are discussed. In addition, using OEM resulted in improved estimates of performance and dynamics over an assumed a priori model. These improvements were robust to both sensor quality and assumptions in the model, demonstrating the potential of SysID using PED data to improve GA safety and efficiency.
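    At its core, the Output Error Method fits model parameters by minimizing the mismatch between simulated and measured outputs. A toy sketch follows under an assumed short-period-like model; the dynamics, parameter values, and noise level are illustrative and not the thesis's aircraft model.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    def simulate(theta, t, q0=0.1):
        """Simulate pitch rate for candidate parameters theta = [M_alpha, M_q]."""
        m_alpha, m_q = theta
        def f(_, x):                  # x = [alpha, q]
            alpha, q = x
            return [q - 0.5 * alpha, m_alpha * alpha + m_q * q]
        sol = solve_ivp(f, (t[0], t[-1]), [0.0, q0], t_eval=t)
        return sol.y[1]               # pitch-rate output

    t = np.linspace(0.0, 10.0, 200)
    truth = np.array([-4.0, -1.2])
    rng = np.random.default_rng(2)
    z = simulate(truth, t) + rng.normal(0.0, 0.005, t.size)  # noisy "PED" data

    cost = lambda th: np.sum((z - simulate(th, t))**2)       # output-error cost
    fit = minimize(cost, x0=[-1.0, -0.5], method="Nelder-Mead")
    print("identified [M_alpha, M_q]:", fit.x)
    ```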
  • Item
    Predicting the occurrence of ground delay programs and their impact on airport and flight operations
    (Georgia Institute of Technology, 2019-04-25) Mangortey, Eugene
    A flight is delayed when it arrives 15 or more minutes later than scheduled. Delays attributed to the National Airspace System are among the most common and can be caused by the initiation of Traffic Management Initiatives (TMI) such as Ground Delay Programs (GDP). A Ground Delay Program is implemented to control air traffic volume to an airport over a lengthy period when traffic demand is projected to exceed the airport's acceptance rate due to conditions such as inclement weather, volume constraints, closed runways, or equipment failures. Ground Delay Programs cause flight delays which affect airlines, passengers, and airport operations. Consequently, various efforts have been made to reduce the impacts of Ground Delay Programs by predicting their occurrence or the optimal time for initiating them. However, a few research gaps exist. First, most previous efforts have focused only on weather-related Ground Delay Programs, ignoring other causes such as volume constraints and runway-related incidents. Second, there has been limited benchmarking of Machine Learning techniques for predicting the occurrence of Ground Delay Programs. Finally, little to no work has been conducted to predict the impact of Ground Delay Programs on flight and airport operations, such as their duration, flight delay times, and taxi-in time delays. This research addresses these gaps by 1) fusing data from a variety of datasets (Traffic Flow Management System (TFMS), Aviation System Performance Metrics (ASPM), and Automated Surface Observing Systems (ASOS)) and 2) leveraging and benchmarking Machine Learning techniques to develop prediction models aimed at reducing the impacts of Ground Delay Programs on flight and airport operations. These models predict 1) flight delay times due to a Ground Delay Program, 2) the duration of a Ground Delay Program, 3) the impact of a Ground Delay Program on taxi-in time delays, and 4) the occurrence of Ground Delay Programs. Evaluation metrics such as Mean Absolute Error, Root Mean Squared Error, Correlation, and R-squared revealed that Random Forests was the optimal Machine Learning technique for predicting flight delay times due to Ground Delay Programs, the duration of Ground Delay Programs, and taxi-in time delays during a Ground Delay Program. On the other hand, the Kappa Statistic revealed that Boosting Ensemble was the optimal Machine Learning technique for predicting the occurrence of Ground Delay Programs. The aforementioned prediction models may help airlines, passengers, and air traffic controllers make more informed decisions, which may lead to a reduction in Ground Delay Program-related delays and their impacts on airport and flight operations.
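    A compact sketch of the benchmarking setup could look like the following, with a scikit-learn Random Forest for a regression target and gradient boosting for occurrence; the fused feature columns and synthetic labels are placeholders for the TFMS/ASPM/ASOS data used in the thesis.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    # Toy fused features: [demand/AAR ratio, visibility, wind, runway config, hour]
    X = rng.uniform(0.0, 1.0, size=(3000, 5))
    gdp_occurs = (0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2]
                  + rng.normal(0.0, 0.05, 3000) > 0.35).astype(int)
    duration_hr = np.where(gdp_occurs,
                           2.0 + 6.0 * X[:, 0] - 3.0 * X[:, 1]
                           + rng.normal(0.0, 0.3, 3000),
                           0.0)

    clf = GradientBoostingClassifier()    # GDP occurrence (classification)
    reg = RandomForestRegressor()         # GDP duration (regression)
    print("occurrence accuracy:", cross_val_score(clf, X, gdp_occurs, cv=5).mean())
    print("duration R^2:       ", cross_val_score(reg, X, duration_hr, cv=5).mean())
    ```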
  • Item
    Application of data fusion and machine learning to the analysis of the relevancy of recommended flight reroutes
    (Georgia Institute of Technology, 2019-04-24) Dard, Ghislain
    One of the missions of the Federal Aviation Administration (FAA) is to maintain the safety and efficiency of the National Airspace System (NAS). One way to do so is through Traffic Management Initiatives (TMIs). TMIs, such as reroute advisories, are issued by Air Traffic Controllers whenever there is a need to balance demand with capacity in the National Airspace System. Indeed, rerouting flights ensures that aircraft comply with the air traffic flow, remain clear of special use airspace, and avoid saturated areas of the airspace and areas of inclement weather. Reroute advisories are defined by their level of urgency, i.e., Required, Recommended, or For Your Information (FYI). While pilots almost always comply with required reroutes, their decisions to follow recommended reroutes vary. Understanding the efficiency and relevance of recommended reroutes is key to the identification and definition of future reroute options. Similarly, being able to predict the issuance of volume-related reroute advisories would be of value to airlines and Air Traffic Controllers (ATC). Consequently, the objective of this work was two-fold: 1) assess the relevancy of existing recommended reroutes, and 2) predict the issuance and type of volume-related reroute advisories. The first objective was fulfilled by fusing relevant datasets and developing flight compliance metrics and algorithms to assess the compliance of flights with recommended reroutes. The second objective was fulfilled by fusing traffic data and reroute advisories and then benchmarking Machine Learning techniques to identify the one that performed best.
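    One simple form such a compliance metric could take is sketched below: the fraction of advised waypoints that the flown track passes within a lateral threshold; the 20 NM threshold and the haversine helper are assumptions, not the thesis's exact metric.

    ```python
    import numpy as np

    def compliance(track, route, threshold_nm=20.0):
        """Fraction of advised waypoints passed within threshold_nm.
        track: (N, 2) flown lat/lon points; route: (M, 2) advised waypoints."""
        def gc_nm(p, q):              # great-circle distance in nautical miles
            lat1, lon1 = np.radians(p[0]), np.radians(p[1])
            lat2, lon2 = np.radians(q[..., 0]), np.radians(q[..., 1])
            a = (np.sin((lat2 - lat1) / 2)**2
                 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2)**2)
            return 3440.1 * 2 * np.arcsin(np.sqrt(a))
        track = np.asarray(track)
        hits = sum(gc_nm(wp, track).min() <= threshold_nm
                   for wp in np.asarray(route))
        return hits / len(route)

    flown = [[33.64, -84.43], [34.10, -83.50], [35.00, -82.00]]   # toy track
    advised = [[34.05, -83.55], [35.10, -81.90]]                  # toy reroute
    print(f"compliance = {compliance(flown, advised):.2f}")
    ```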
  • Item
    A methodology to support relevant comparisons of Earth-Mars communication architectures
    (Georgia Institute of Technology, 2018-12-11) Duveiller, Florence B.
    Because of the human imperative for exploration, it is very likely that a manned mission to Mars will occur by the end of the century. Mars is one of the two closest planets to Earth. It is very similar to Earth and could be suitable to host a manned settlement. Sending humans to Mars is above all a technological challenge. Among the technologies needed, some of the most important relate to communications. Women and men on Mars need to be able to receive support from Earth, communicate with other human beings on Earth, and send back the data they collect. A reliable and continuous communication link has to be provided between Earth and Mars to ensure a safe journey. However, communication between Earth and Mars is challenging because of the distance between the two planets and because of the obstruction by the Sun that occurs for about 21 days every 780 days. Given the cost of communication systems and the number of exploration missions to Mars, it has been established that a permanent communication architecture between Earth and Mars is the most cost-effective option. From these observations, the research goal established for this thesis is to enable reliable and continuous communications between Earth and Mars through the design of a permanent communication architecture. A literature review of communication architectures between Earth and Mars revealed that many concepts have been proposed by different authors over the last thirty years. However, when investigating ways to compare the variety of existing architectures, it became apparent that there was no robust, traceable, and rigorous approach to do so. The comparisons made in the literature were incomplete. The requirements driving the design of the architectures were not defined or quantified. The assumptions on which the comparisons were based differed from one architecture to another, and from one comparative study to another. As a result, all the comparisons offered were inconsistent. This thesis addresses those gaps by developing a methodology that enables relevant and consistent comparisons of Earth-Mars communication architectures and supports gap analysis. The methodology is composed of three steps. The first step consists of defining the requirements and organizing them to emphasize their interactions with the different parts of the communication system (the architecture, the hardware, and the software). A study of the requirements for a deep-space communication architecture supporting manned missions is performed. A set of requirements is chosen for the present work. The requirements are mapped against the communication system. The second step consists of implementing and evaluating the architectures. To ensure the consistency, repeatability, and transparency of the methodology, a unique approach enabling the assessment of all the architectures based on the same assumptions has to be provided. A framework is designed in a modeling and simulation environment for this purpose. The environment chosen for this thesis is the software Systems Tool Kit (STK) because of its capabilities. A survey of the existing architectures is performed, the metrics to evaluate the architectures are defined, and the architectures are evaluated. The third step of the methodology consists of ranking the alternatives for different weighting scenarios. Four weighting scenarios are selected to illustrate some interesting trades.
    The ranking of the architectures is performed through a decision-making algorithm, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The results from the different weighting scenarios are discussed. They underline the incompleteness of the comparisons performed in past studies, the lack of design space exploration for Earth-Mars communication architectures, and the importance of the definition of the set of requirements when designing and comparing architectures. This research provides a transparent and repeatable methodology to rank and determine the best Earth-Mars communication architectures for a set of chosen requirements. It fills several gaps in the comparison of Earth-Mars communication architectures: the lack of definition of the requirements, the lack of a unique approach to implement and assess the architectures based on the same assumptions, and the lack of a process to compare all the architectures rigorously. Before the present research, there was no robust, consistent, and rigorous means to rank and quantitatively compare the architectures. The methodology not only ranks but also quantitatively compares the architectures; it can quantify the differences between architectures for an arbitrary number of scenarios. It has various capabilities, including ranking Earth-Mars architectures based on a chosen set of requirements, performing gap analyses and sensitivity analyses on communication technologies and protocols, and performing design space exploration on architectures. The methodology is demonstrated on a restricted scope and is intended to be extended.
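    For reference, TOPSIS itself is compact enough to sketch directly; the three-architecture score matrix, weights, and benefit flags below are invented for illustration and are not the thesis's evaluated metrics.

    ```python
    import numpy as np

    def topsis(scores, weights, benefit):
        """Rank alternatives with TOPSIS.
        scores: (n_alternatives, n_criteria); benefit: True where larger is better."""
        v = scores / np.linalg.norm(scores, axis=0) * weights   # normalize, weight
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_pos = np.linalg.norm(v - ideal, axis=1)
        d_neg = np.linalg.norm(v - anti, axis=1)
        closeness = d_neg / (d_pos + d_neg)
        return np.argsort(-closeness), closeness

    # Hypothetical example: 3 architectures scored on [coverage, data rate, mass],
    # where mass is a cost criterion (smaller is better).
    scores = np.array([[0.95, 2.0, 1200.0],
                       [0.99, 1.0, 900.0],
                       [0.90, 4.0, 2100.0]])
    order, c = topsis(scores,
                      weights=np.array([0.5, 0.3, 0.2]),
                      benefit=np.array([True, True, False]))
    print("ranking (best first):", order, "closeness:", np.round(c, 3))
    ```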
  • Item
    Development and validation of 3-D cloud fields using data fusion and machine learning techniques
    (Georgia Institute of Technology, 2018-12-11) Huguenin, Manon
    The impact of climate change is projected to increase significantly over the next decades. Consequently, gaining a better understanding of climate change and being able to accurately predict its effects are of the utmost importance. Climate change predictions are currently achieved using Global Climate Models (GCMs), which are complex representations of the major climate components and their interactions. However, these predictions present high levels of uncertainty, as illustrated by the very disparate results GCMs generate. According to the Intergovernmental Panel on Climate Change (IPCC), there is high confidence that such high levels of uncertainty are due to the way clouds are represented in climate models. Indeed, several cloud phenomena, such as cloud-radiative forcing, are not well modeled in GCMs because they rely on microscopic processes that, due to computational limitations, cannot be represented in GCMs. Such phenomena are instead represented through physically-motivated parameterizations, which lead to uncertainties in cloud representations. For these reasons, improving the parameterizations required for representing clouds in GCMs is a current focus of climate modeling research efforts. Integrating cloud satellite data into GCMs has proved essential to the development and assessment of cloud radiative transfer parameterizations. Cloud-related data is captured by a variety of satellites, such as those of NASA’s afternoon constellation (also known as the A-Train), which collect vertical and horizontal data on the same orbital track. Data from the A-Train has been useful to many studies on cloud prediction, but its coverage is limited. This is because the sensors that collect vertical data have very narrow swaths, with a width as small as one kilometer. As a result, the area where vertical data exists is very limited, equivalent to a 1-kilometer-wide track. Thus, in order for satellite cloud data to be compared to global representations of clouds in GCMs, additional vertical cloud data has to be generated to provide more global coverage. Consequently, the overall objective of this thesis is to support the validation of GCM cloud representations through the generation of 3D cloud fields using cloud vertical data from space-borne sensors. This has already been attempted by several studies through the implementation of physics-based and similarity-based approaches. However, such studies have a number of limitations, such as the inability to handle large amounts of data and high resolutions, or the inability to account for diverse vertical profiles. Such limitations motivate the need for novel approaches to the generation of 3D cloud fields. For this purpose, efforts have been initiated at ASDL to develop an approach that leverages data fusion and machine learning techniques to generate 3D cloud field domains. Several successive ASDL-led efforts have helped shape this approach and overcome some of its challenges. In particular, these efforts have led to the development of a cloud predictive classification model that is based on decision trees and integrates atmospheric data to predict vertical cloud fraction. This model was evaluated against “on-track” cloud vertical data and was found to have acceptable performance. However, several limitations were identified in this model and the approach that led to it. First, its performance was lower when predicting lower-altitude clouds, and its overall performance could still be greatly improved.
    Second, the model had only been assessed at “on-track” locations, while the construction of data at “off-track” locations is necessary for generating 3D cloud fields. Last, the model had not been validated in the context of GCM cloud representation, and no satisfactory level of model accuracy had been determined in this context. This work aims to overcome these limitations by taking the following approach. The model obtained from previous efforts is improved by integrating additional, higher-accuracy data, by investigating the correlation within atmospheric predictors, and by implementing additional classification machine learning techniques, such as Random Forests. Then, the predictive model is run at “off-track” locations, using predictors from NASA’s LAADS datasets. Horizontal validation of the computed profiles is performed against an existing dataset containing the Cloud Mask at the same locations. This leads to the generation of a coherent global 3D cloud fields dataset. Last, a methodology for validating this computed dataset in the context of GCM cloud-radiative forcing representation is developed. The Fu-Liou code is applied to sample vertical profiles from the computed dataset, and the output radiative fluxes are analyzed. This research significantly improves the model developed in previous efforts, as well as validates the computed global dataset against existing data. Such validation demonstrates the potential of a machine learning-based approach to generate 3D cloud fields. Additionally, this research provides a benchmarked methodology to further validate this machine learning-based approach in the context of study. Altogether, this thesis contributes to NASA’s ongoing efforts toward improving GCMs and climate change predictions as a whole.
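    The classification step can be sketched with scikit-learn as below; the five predictor columns and the synthetic cloudy/clear labels are stand-ins for the fused satellite and LAADS atmospheric data.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    # Toy stand-ins for atmospheric predictors at one grid cell and level,
    # e.g. [relative humidity, temperature, vertical velocity, pressure, ...].
    X = rng.uniform(0.0, 1.0, size=(10000, 5))
    cloudy = (0.8 * X[:, 0] - 0.4 * X[:, 1] + 0.2 * X[:, 2]
              + rng.normal(0.0, 0.1, 10000) > 0.3).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, cloudy, stratify=cloudy,
                                              random_state=0)
    rf = RandomForestClassifier(n_estimators=300, class_weight="balanced")
    rf.fit(X_tr, y_tr)
    print(classification_report(y_te, rf.predict(X_te)))
    ```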
  • Item
    Computational fluid dynamics simulation of three-dimensional parallel jets
    (Georgia Institute of Technology, 2018-12-11) Liu, Zhihang
    High-speed air jets are often used in industry for manufacturing thin fibers through a process known as melt blowing (MB). In melt blowing, high-velocity gas streams impinge upon molten strands of polymer to produce fine filaments. To produce a very high quantity of fibers, many small-scale jets placed side by side are needed. These jets draw air from the same compressed air storage tank, so fiber formation is critically dependent on the aerodynamics of the impinging jet flow field. However, real-world MB devices often have complicated internal structures, such as mixing chambers and air channels between the air tank and the die tip, which may cause instability and cross flow in the jet flow field and have a significant impact on the formation of fibers and non-woven webs with small-scale jets. This study was motivated by the need to understand the effect of the internal geometry on the jet flow field and to mitigate the flow instability with flow fluctuation reduction devices. The MB process in this study was modeled as a pair of jets placed at an angle of approximately 60 degrees to each other; many such jet pairs can be arranged in a row so that multiple streams of fibers may be produced simultaneously. All internal structures of the MB device were modeled based on US Patent 6,972,104 B2 by Haynes et al. The flow field resulting from the two similar converging-plane jet nozzles was investigated using a computational fluid dynamics approach. Cases with and without flow fluctuation reduction devices installed were studied. The k-ω turbulence model was used, and the model parameters were calculated according to the inlet conditions of the air flow. This study consists of three parts: (a) a baseline case without any flow fluctuation reduction devices was studied to understand the mechanism of the instability and to investigate the details of the internal flow field; (b) a wire mesh screen was placed between the air plates and the die tip to study the effect on both the velocity and pressure distribution across the screen; (c) a honeycomb was installed near the exit of the last mixing chamber to reduce the cross-flow velocity and turbulence intensity. Finally, the effects of the two different flow fluctuation reduction devices were compared in detail using time series measurements and time-averaged flow contours.
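    The time-series comparison reduces to a turbulence-intensity-style metric on velocity probe signals, Ti = u'_rms / U_mean. The sketch below evaluates it for three synthetic signals standing in for the baseline, screen, and honeycomb cases; all amplitudes and frequencies are invented.

    ```python
    import numpy as np

    def turbulence_intensity(u):
        """RMS of the velocity fluctuations over the mean: Ti = u'_rms / U_mean."""
        u = np.asarray(u, dtype=float)
        return np.std(u - u.mean()) / u.mean()

    rng = np.random.default_rng(5)
    t = np.linspace(0.0, 1.0, 2000)
    # Hypothetical probe signals at the die tip for the three configurations.
    baseline = 300 + 25 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 8, t.size)
    screen = 300 + 8 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 4, t.size)
    honeycomb = 300 + 5 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 3, t.size)

    for name, u in [("baseline", baseline), ("screen", screen),
                    ("honeycomb", honeycomb)]:
        print(f"{name:10s} Ti = {turbulence_intensity(u):.3f}")
    ```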