Organizational Unit:
Daniel Guggenheim School of Aerospace Engineering


Publication Search Results

Now showing 1 - 10 of 13

Architecting aircraft power distribution systems via redundancy allocation

2014-11-17, Campbell, Angela Mari

Recently, the environmental impact of aircraft and rising fuel prices have become an increasing concern in the aviation industry. To address these problems, organizations such as NASA have set demanding goals for reducing aircraft emissions, fuel burn, and noise. In an effort to reach these goals, a movement toward more-electric aircraft and electric propulsion has emerged. With this movement, the number of critical electrical loads on an aircraft is increasing, making power system reliability a point of concern. Currently, power system reliability is maintained through the use of back-up power supplies such as batteries and ram-air turbines (RATs). However, the increasing power requirements of critical loads will quickly outgrow the capacity of these emergency devices. Therefore, reliability needs to be addressed when designing the primary power distribution system. Power system reliability is a function of component reliability and redundancy. Component reliability is often not determined until detailed component design has occurred; however, the amount of redundancy in the system is often set during the system architecting phase. In order to meet the capacity and reliability requirements of future power distribution systems, a method for redundancy allocation during the system architecting phase is needed. This thesis presents an aircraft power system design methodology based upon the engineering decision process. The methodology provides a redundancy allocation strategy and a quantitative trade-off environment to compare architecture and technology combinations based upon system capacity, weight, and reliability criteria. The methodology is demonstrated by architecting the power distribution system of an aircraft using turboelectric propulsion. The first step in the process is determining the design criteria, which include a 40 MW capacity requirement, a 20 MW capacity requirement for an engine-out scenario, and a maximum catastrophic failure rate of one failure per billion flight hours. The next step is determining gaps between the performance of current power distribution systems and the requirements of the turboelectric system. A baseline architecture is analyzed by sizing the system using the turboelectric system power requirements and by calculating reliability using a stochastic flow network. To overcome the deficiencies discovered, new technologies and architectures are considered. Global optimization methods are used to find technology and architecture combinations that meet the system objectives and requirements. Lastly, a dynamic modeling environment is constructed to study the performance and stability of the candidate architectures. The combination of the optimization process and dynamic modeling facilitates the selection of a power system architecture that meets the system requirements and objectives.
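As a rough illustration of the series/parallel reliability trade-off that drives redundancy allocation, the sketch below computes per-mission reliability for a notional power channel with one to three redundant paths feeding a single bus. The failure rates and mission time are invented for illustration; the thesis itself uses a stochastic flow network rather than this simple closed-form model.

```python
import math

def component_reliability(failure_rate_per_hr, mission_hours):
    """Exponential reliability model: R = exp(-lambda * t)."""
    return math.exp(-failure_rate_per_hr * mission_hours)

def parallel(reliabilities):
    """Redundant channels: the function survives if any one channel works."""
    prob_all_fail = 1.0
    for r in reliabilities:
        prob_all_fail *= (1.0 - r)
    return 1.0 - prob_all_fail

def series(reliabilities):
    """Series elements: all must work."""
    result = 1.0
    for r in reliabilities:
        result *= r
    return result

# Hypothetical numbers: a generation channel with lambda = 1e-4 per flight hour
# and a bus with lambda = 1e-6 per flight hour, on a 5-hour mission.
r_channel = component_reliability(1e-4, 5.0)
r_bus = component_reliability(1e-6, 5.0)
for n in range(1, 4):
    r_system = series([parallel([r_channel] * n), r_bus])
    print(f"{n} channel(s): system reliability = {r_system:.9f}")
```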


A methodology for evaluating fleet implications of mission specification changes

2014-11-17, Brett, Paul S.

Civil aviation has matured to become a vital piece of the global economy, providing the rapid movement of goods and people to all regions. This has already led to significant growth, and further growth is expected at a rate of roughly 5% per year. Given the high projected rate of growth, the environmental consequences of commercial aviation are expected to rise. To mitigate the increase in noise and emissions, governing bodies such as ICAO and the FAA have established regulations on noise, NOₓ, and CO₂ and are considering additional ones, while the European Union has integrated aviation into its Emissions Trading Scheme. The traditional response to new regulation is to integrate technologies into the aircraft to reduce its environmental footprint. While these benefits are positive at the aircraft level, fleet growth is projected to outpace the benefits provided by technology alone. To further reduce the environmental footprint, a number of mitigation strategies are being explored to determine their impact. One such strategy involves changing the mission specifications of today's aircraft, reducing range, speed, or payload in an effort to reduce fuel consumption; to date, this work has focused predominantly on the vehicle level. This research proposes an approach that evaluates mission specification changes from the aircraft design level up to the fleet level, forecast into the future, to assess the impact across a number of metrics and fully understand the implications of mission specification changes. The methodology, the Mission Specifications and Fleet Implications Technique (MS-FIT), identifies stakeholder requirements to be tracked at either the vehicle or fleet level and leverages them to build an environment that allows joint evaluation, facilitating increased knowledge about the full implications of mission specification adoption. An approach is also laid out for selecting prospective routes for intermediate stops based on fuel burn and operating cost considerations. Guidance is provided on how to filter a list of candidate airports down to those most viable, as well as on the regions of the world most likely to benefit from intermediate stops. Three sample problems were used to demonstrate the viability of MS-FIT: cruise speed reduction, design mission range reduction, and the combination of speed and range reduction. Each problem demonstrated different implications of the specification changes. Speed reduction can negatively impact cost, while range reduction has consequences for noise at the intermediate airports. The combination of the two inherits negative implications from both, even though the environmental benefits are greater. Finally, an analysis of some of the assumptions was conducted to examine the sensitivity of the speed- and range-reduction results. These include variation in costs, reductions in annual utilization of aircraft, and variation in intermediate stop adoption. Speed reduction is strongly sensitive to increases in crew and maintenance rates, while landing fees significantly eat into the benefits of range reduction and intermediate stops. Minor utilization reductions can significantly reduce the viability of speed reduction, as the increase in capital costs offsets all the savings from fuel reduction, while range reduction is somewhat less sensitive.

Intermediate stop variation does not eliminate the benefits of range reduction and can even provide cost savings, depending on the design range of the reduced variant, but it can shift noise consequences to higher-traffic airports. With the proposed framework, additional information is available to fully understand the implications with respect to fuel burn, NOₓ emissions, operating cost, capital cost, noise, and safety. This can then inform decision makers as to whether pursuing a particular mission specification strategy is advantageous.
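As a rough sketch of why an intermediate stop can reduce mission fuel (one of the effects the fleet analysis above must weigh against landing fees and noise), the example below applies the standard jet Breguet range equation to a direct mission and to the same range flown as two legs. The aircraft parameters are notional assumptions, not values from the thesis.

```python
import math

def mission_fuel(range_km, zero_fuel_weight_kg, lift_to_drag, tsfc_per_hr, speed_kmh):
    """Cruise fuel from the jet Breguet range equation:
    R = (V / TSFC) * (L/D) * ln(W_initial / W_final)."""
    range_factor_km = speed_kmh * lift_to_drag / tsfc_per_hr
    weight_ratio = math.exp(range_km / range_factor_km)
    return zero_fuel_weight_kg * (weight_ratio - 1.0)

# Notional wide-body values (assumptions, not data from the thesis).
ZFW, LD, TSFC, V = 180_000.0, 17.0, 0.55, 850.0
direct = mission_fuel(9000.0, ZFW, LD, TSFC, V)
two_legs = 2.0 * mission_fuel(4500.0, ZFW, LD, TSFC, V)

print(f"Direct 9000 km mission:      {direct:,.0f} kg fuel")
print(f"Two 4500 km legs (one stop): {two_legs:,.0f} kg fuel")
# The split mission burns less cruise fuel because the first leg does not carry
# the second leg's fuel; a full analysis would add the extra climb and descent,
# landing fees, and time costs that MS-FIT accounts for.
```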


A reliability-based measurement of interoperability for conceptual-level systems of systems

2014-07-01, Jones Wyatt, Elizabeth Ann

The increasing complexity of net-centric warfare requires assets to cooperate to achieve mission success. Such cooperation requires the integration of many heterogeneous systems into an interoperable system-of-systems (SoS). Interoperability can be considered a metric of an architecture and must be understood as early as the conceptual design phase. This thesis approaches interoperability by first creating a general definition of interoperability, identifying factors that affect it, surveying existing models of interoperability, and identifying fields that can be leveraged to perform a measurement, including reliability theory and graph theory. The main contribution of this thesis is the development of the Architectural Resource Transfer and Exchange Measurement of Interoperability for Systems of Systems, or ARTEMIS, methodology. ARTEMIS first outlines a quantitative measurement of system-pair interoperability using reliability in series and in parallel. This step incorporates operational requirements and the capabilities of the system pair. Next, a matrix of interoperability values for each resource exchange in an operational process is constructed. These matrices can be used to calculate the interoperability of a single resource exchange, I_Resource, and layered to generate a weighted adjacency matrix of the entire SoS. This matrix can be plugged into a separate modeling and simulation (M&S) environment to link interoperability with the mission performance of the system of systems. One output of the M&S is a single value, I_SoS, that can be used to rank architecture alternatives based on their interoperability. This allows decision-makers to narrow down a large design space quickly, using interoperability alongside other criteria such as cost, complexity, or risk. A canonical problem was used to test the methodology: a discrete event simulation was constructed to model a small unmanned aircraft system performing a search and rescue mission. Experiments were performed to understand how changing the systems' interoperability affected the overall interoperability, how the resource transfer matrices were layered, and whether the outputs could be calculated without time- and computationally-intensive stochastic modeling. It was found that although a series model of reliability could predict a range of I_Resource, M&S is required to provide exact values useful for ranking. Overall interoperability I_SoS can be predicted using a weighted average of I_Resource, but the weights must be determined by M&S. Because a single interoperability value based on performance is not unique to an architecture configuration, network analysis was conducted to assess further properties of a system of systems that may affect cost or vulnerability of the network. The eigenvalue-based Coefficient of Networked Effects (CNE) was assessed and found to be an appropriate measure of network complexity. Using the outputs of the discrete event simulation, it was found that networks with higher interoperability tended to have more networked effects. However, there was not enough correlation between the two metrics to use them interchangeably; ARTEMIS therefore recommends that both metrics be used to assess a networked SoS. This methodology is valuable to decision-makers because it enables trade studies at the SoS level that were not previously possible. It can provide decision-makers with information about an architecture and allow them to compare existing and potential systems of systems during the early phases of acquisition.

The method is unique because it does not rely on qualitative assessments of technology maturity or adherence to standards. By providing a rigorous, objective mathematical measurement of interoperability, it enables decision-makers to select architecture alternatives that meet interoperability goals and fulfill future capability requirements.
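The sketch below illustrates the style of computation ARTEMIS describes: a series/parallel combination for one system pair's resource exchange, pairwise values layered into a weighted adjacency matrix, and the matrix's largest eigenvalue used as an eigenvalue-based network measure in the spirit of the CNE. All numbers, and the particular eigenvalue formula, are illustrative assumptions rather than the thesis's definitions.

```python
import numpy as np

def series(values):
    """All steps in a resource-exchange chain must succeed."""
    return float(np.prod(values))

def parallel(values):
    """The exchange succeeds if any redundant path succeeds."""
    return 1.0 - float(np.prod([1.0 - v for v in values]))

# Hypothetical system pair: two redundant data links in parallel,
# each modeled as link reliability in series with a format-compatibility factor.
link_a = series([0.98, 0.95])
link_b = series([0.90, 0.99])
i_pair = parallel([link_a, link_b])
print(f"Pair interoperability for one resource exchange: {i_pair:.4f}")

# Weighted adjacency matrix for a notional 4-system SoS (0 = no exchange).
A = np.array([
    [0.0,    i_pair, 0.85, 0.0 ],
    [i_pair, 0.0,    0.90, 0.70],
    [0.85,   0.90,   0.0,  0.60],
    [0.0,    0.70,   0.60, 0.0 ],
])
# Largest eigenvalue of the symmetric adjacency matrix, used here as a simple
# eigenvalue-based connectivity measure in the spirit of the CNE.
network_measure = max(np.linalg.eigvalsh(A))
print(f"Eigenvalue-based network measure: {network_measure:.3f}")
```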


A combined global and local methodology for launch vehicle trajectory design-space exploration and optimization

2014-04-09, Steffens, Michael J.

Trajectory optimization is an important part of launch vehicle design and operation. With the high cost of launching payload into orbit, every pound that can be saved increases affordability. One way to save weight in launch vehicle design and operation is by optimizing the ascent trajectory. Launch vehicle trajectory optimization is a field that has been studied since the 1950s. Originally, analytic solutions were sought because computers were slow and inefficient. As computing matured, however, different algorithms were developed for the purpose of trajectory optimization. Computer resources were still limited, and as such the algorithms were limited to local optimization methods, which can get stuck in specific regions of the design space. Local methods for trajectory optimization have been well studied and developed. Computer technology continues to advance, and in recent years global optimization has become available for application to a wide variety of problems, including trajectory optimization. The aim of this thesis is to create a methodology that applies global optimization to the trajectory optimization problem. Using information from a global search, the optimization design space can be reduced and a much smaller design space can be analyzed using existing local methods. This allows areas of interest in the design space to be identified and studied further, and helps overcome the fact that many local methods can get stuck in local optima. The design space included in trajectory optimization is also considered in this thesis. The typical optimization variables are initial conditions and flight control variables; for direct optimization methods, the trajectory phase structure is currently chosen a priori. Including trajectory phase structure variables in the optimization process can yield better solutions. The methodology and phase structure optimization are demonstrated using an Earth-to-orbit trajectory of a Delta IV Medium launch vehicle. Different methods of performing the global search and reducing the design space are compared. Local optimization is performed using the industry-standard trajectory optimization tool POST. Finally, methods for varying the trajectory phase structure are presented and the results are compared.
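The sketch below illustrates the global-then-local strategy on a toy multi-modal objective standing in for a trajectory figure of merit: a differential-evolution global search locates a promising region, the bounds are tightened around it, and a gradient-based local optimizer refines the solution. In the thesis the local step is performed by POST; SciPy is used here purely for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def toy_objective(x):
    """Multi-modal stand-in for a trajectory figure of merit vs. control knobs."""
    return np.sum(x**2) + 3.0 * np.sum(np.sin(3.0 * x)**2)

bounds = [(-5.0, 5.0)] * 4  # e.g., pitch-program parameters for 4 phases

# 1) Global search over the full design space.
global_result = differential_evolution(toy_objective, bounds, seed=1, tol=1e-6)

# 2) Shrink the design space around the best region found globally.
x_best = global_result.x
reduced_bounds = [(xi - 0.5, xi + 0.5) for xi in x_best]

# 3) Local refinement inside the reduced space (gradient-based).
local_result = minimize(toy_objective, x_best, method="L-BFGS-B", bounds=reduced_bounds)

print("global best :", global_result.fun)
print("local refine:", local_result.fun)
```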


Sustainability of multimodal intercity transportation using a hybrid system dynamics and agent-based modeling approach

2014-11-17, Hivin, Ludovic F.

Demand for intercity transportation has increased significantly in the past decades and is expected to continue to follow this trend in the future. Meanwhile, concern about the environmental impact and potential climate change associated with this demand has grown, increasing the importance of climate impact considerations within the overarching issue of sustainability. This has resulted in discussions of new regulations, policies, and technologies to reduce transportation's climate impact. Policies may affect the demand for the different transportation modes through increased travel costs, increased market share of more fuel-efficient vehicles, or even the introduction of new modes of transportation. However, the effect of policies and technologies on mobility, demand, fleet composition, and the resulting climate impact remains highly uncertain due to the many interdependencies. This motivates the creation of a parametric modeling and simulation environment to explore a wide variety of policy and technology scenarios and assess the sustainability of transportation. In order to capture total transportation demand and potential mode shifts, a multimodal approach is necessary. The complexity of the intercity transportation System-of-Systems calls for a hybrid Agent-Based Modeling and System Dynamics paradigm to better represent both micro-level and macro-level behaviors. Various techniques for combining these paradigms are explored and classified to serve as a hybrid modeling guide. A System Dynamics approach is developed that integrates socio-economic factors, mode performance, aggregated demand, and climate impact. It is used to explore different policy and technology scenarios and to better understand the dynamic behavior of the intercity transportation System-of-Systems. In order to generate the data needed to create and validate the System Dynamics model, an Agent-Based model is used, owing to its ability to better capture the behavior of a collection of sentient entities. Equivalency of the two models is ensured through a rigorous cross-calibration process. Through the use of fleet models, the fuel burn and life-cycle emissions of the different modes of transportation are quantified. The radiative forcing from the main gaseous and aerosol species is then obtained through radiative transfer calculations, and regional variations are discussed. This new simulation environment, called the environmental Ground and Air Mode Explorer (eGAME), is then used to explore different policy and technology scenarios and assess their effect on transportation demand, fleet efficiencies, and the resulting climate impact. The results obtained with this integrated assessment tool aim to support a scenario-based decision-making approach and provide insight into the future of the U.S. transportation system in a climate-constrained environment.
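The sketch below is a minimal, invented illustration of the hybrid coupling idea (not the eGAME implementation): an agent-based micro step in which travelers choose a mode by generalized cost, a System Dynamics-style macro step that integrates a cumulative-emissions stock, and a policy feedback from the stock to the agents' costs. All coefficients are assumptions chosen only to make the coupling visible.

```python
import random

random.seed(0)

N_TRAVELERS = 1_000
TRIP_KM = 500.0
EF_AIR, EF_CAR = 0.115, 0.170      # kg CO2 per passenger-km (illustrative)
BASE_AIR, BASE_CAR = 180.0, 60.0   # base trip costs, $ (illustrative)
T_AIR, T_CAR = 2.0, 6.0            # door-to-door travel times, hours
VOT = [random.uniform(10.0, 60.0) for _ in range(N_TRAVELERS)]  # value of time, $/hr

cumulative_co2 = 0.0               # System Dynamics stock (tonnes)
carbon_price = 0.0                 # $/tonne, policy feedback on the agents

for year in range(2015, 2025):
    # Agent-based micro step: each traveler picks the lower generalized cost.
    air_trips = 0
    for vot in VOT:
        cost_air = BASE_AIR + vot * T_AIR + carbon_price * EF_AIR * TRIP_KM / 1000.0
        cost_car = BASE_CAR + vot * T_CAR + carbon_price * EF_CAR * TRIP_KM / 1000.0
        air_trips += cost_air < cost_car
    car_trips = N_TRAVELERS - air_trips

    # System Dynamics macro step: Euler-integrate the emissions stock (dt = 1 yr).
    flow = (air_trips * EF_AIR + car_trips * EF_CAR) * TRIP_KM / 1000.0
    cumulative_co2 += flow

    # Simple policy feedback: carbon price ramps with the accumulated stock.
    carbon_price = 0.5 * cumulative_co2 / N_TRAVELERS * 1000.0

    print(f"{year}: air share {air_trips / N_TRAVELERS:5.1%}, "
          f"stock {cumulative_co2:8.1f} t CO2, price ${carbon_price:6.2f}/t")
```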


Aerothermodynamic cycle design and optimization method for aircraft engines

2014-08-22, Ford, Sean T.

This thesis addresses the need for an optimization method that can simultaneously optimize and balance an aerothermodynamic cycle. The method developed is able to control cycle design variables at all operating conditions to meet the performance requirements while controlling any additional variables used to optimize the cycle and maintaining all operating limits and engine constraints. The additional variables represent degrees of freedom above what is needed for conservation of mass and energy in the engine system. The motivation for such a method comes from variable cycle engines; however, it is general enough to use with most engine architectures. The method is similar to many optimization algorithms but differs in its application to an aircraft engine: it combines the cycle balance and the optimization within a Newton-Raphson cycle solver to efficiently find cycle designs for a wide range of engine architectures with extra degrees of freedom not needed to balance the cycle. Combining the optimization with the cycle solver greatly speeds up the design and optimization process. A detailed process description for implementing the method is provided, along with a proof of concept using several analytical test functions. Finally, the method is demonstrated on a separate-flow turbofan model. Limitations and applications of the method are further explored, including application to a multi-design-point methodology.
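The sketch below shows the core numerical machinery named above: a Newton-Raphson solver with a finite-difference Jacobian driving a residual vector to zero, applied to a toy two-equation "cycle balance". The residual functions and numbers are invented; in the actual method, additional conditions for the extra degrees of freedom would be appended so the system remains square.

```python
import numpy as np

def newton_solve(residuals, x0, tol=1e-10, max_iter=50, step=1e-6):
    """Newton-Raphson with a forward-difference Jacobian.

    `residuals(x)` returns a vector that is zero when the cycle is balanced;
    extra free variables can be handled by appending extra conditions
    (e.g., an optimality condition) so the system stays square.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = np.asarray(residuals(x), dtype=float)
        if np.max(np.abs(r)) < tol:
            return x
        # Build the Jacobian column by column with finite differences.
        J = np.zeros((len(r), len(x)))
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += step
            J[:, j] = (np.asarray(residuals(xp)) - r) / step
        x = x - np.linalg.solve(J, r)
    raise RuntimeError("Newton solver did not converge")

# Toy "cycle balance": find shaft-speed fraction n and fuel-air ratio f such that
# a notional power balance and a combustor temperature target are both satisfied.
def toy_cycle_residuals(x):
    n, f = x
    power_balance = 2.0 * n**2 - 30.0 * f          # turbine work - compressor work
    temp_target = 1500.0 * f + 300.0 - 600.0 * n   # combustor exit temp - target
    return [power_balance, temp_target]

solution = newton_solve(toy_cycle_residuals, x0=[1.0, 0.05])
print("balanced independents (n, f):", solution)
```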


STASE: set theory-influenced architecture space exploration

2014-07-01, Sharma, Jonathan

The first of NASA's high-level strategic goals is to extend and sustain human activities across the solar system. As the United States moves into the post-Shuttle era, meeting this goal is more challenging than ever. There are several desired outcomes for this goal, including development of an integrated architecture and capabilities for safe crewed and cargo missions beyond low Earth orbit. NASA's Flexible Path for the future human exploration of space provides the guidelines to achieve this outcome. Designing space system architectures to satisfy the Flexible Path starts early in design, when a downselection process reduces the broad spectrum of feasible system architectures to a refined set containing a handful of alternatives to be considered and studied further in the detailed design phases. This downselection process is supported by what is referred to as architecture space exploration (ASE). ASE is a systems engineering process that generates the design knowledge necessary to enable informed decision-making. The broad spectrum of potential system architectures can be impractical to evaluate: as the system architecture becomes more complex in its structure and decomposition, its space undergoes factorial growth in the number of alternatives to be considered, an effect known in the literature as combinatorial explosion. For the Flexible Path, the development of new space system architectures can occur over a period of a decade or more. During this time, a variety of changes can occur, leading to new requirements that necessitate the development of new technologies, or to changes in budget and schedule. Developing comprehensive and quantitative design knowledge early in design helps to address these challenges. Current methods focus on a small number of system architecture alternatives; from these alternatives, a series of 'one-off' trade studies are performed to refine and generate more design knowledge. These small-scale studies are unable to adequately capture the broad spectrum of possible architectures and typically rely on qualitative knowledge. The focus of this research is to develop a systems engineering method for system-level ASE during pre-Phase A design that is rapid, exhaustive, flexible, traceable, and quantitative. A review of the literature found that no current methods achieve this research objective, which led to the development of the Set Theory-Influenced Architecture Space Exploration (STASE) methodology. The downselection process is modeled as a decision-making process with STASE serving as a supporting systems engineering method. STASE comprises two main phases: system decomposition and system synthesis. During system decomposition, the problem is broken down into three system spaces. The architecture space consists of the categorical parameters and decisions that uniquely define an architecture, such as its physical and functional aspects. The design space contains the design parameters that uniquely define individual point designs for a given architecture. The objective space holds the objectives used in comparing alternatives. The application of set theory across the system spaces enables an alternative way of representing system alternatives, and this novel application allows the STASE method to mitigate the problem of combinatorial explosion. The fundamental definitions and theorems of set theory form the mathematical basis for the STASE method.

A series of hypotheses was formed to develop STASE in a scientific way. These hypotheses are confirmed by experiments using a proof of concept over a subset of the Flexible Path. The STASE method results are compared against baseline results found using the traditional process of representing individual architectures as the system alternatives. The comparisons highlight many advantages of the STASE method. The greatest advantage is that STASE comprehensively explores the architecture space more rapidly than the baseline, because the set theory-influenced representation of alternatives exhibits summation growth with system complexity in the architecture space rather than factorial growth. The resultant option subsets provide additional design knowledge that enables new ways of visualizing results and comparing alternatives during early design. The option subsets can also account for changes in some requirements and constraints, so that new analysis of system alternatives is not required. An example decision-making process was performed for the proof of concept. This notional example starts from the entire architecture space with the goal of minimizing total cost and the number of launches. Several decisions are made for different architecture parameters using the developed data visualization and manipulation techniques until a complete architecture is determined. The example serves as a use case that walks through the implementation of the STASE method, the techniques for analyzing the results, and the steps toward making meaningful architecture decisions.
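The sketch below makes the combinatorial-explosion point concrete: the number of complete architectures grows as the product of the option counts for each decision, while the number of per-decision option subsets grows only as their sum. The decision names and option counts are invented for illustration.

```python
from itertools import product
from math import prod

# Hypothetical architecture decisions and their option counts.
decisions = {
    "crew_vehicle":      4,
    "launch_vehicle":    5,
    "propulsion_stage":  3,
    "lander":            4,
    "destination_order": 6,
    "propellant_type":   3,
}

full_enumeration = prod(decisions.values())   # distinct complete architectures
per_decision_sets = sum(decisions.values())   # option subsets considered

print(f"Complete architectures (product growth): {full_enumeration:,}")  # 4,320
print(f"Option subsets (summation growth):       {per_decision_sets}")   # 25

# Enumerating every architecture is still possible at this scale...
all_architectures = list(product(*[range(n) for n in decisions.values()]))
assert len(all_architectures) == full_enumeration
# ...but adding one more 5-option decision multiplies the first number by 5
# while only adding 5 to the second, which is the growth argument STASE exploits.
```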


CONTRAST: A conceptual reliability growth approach for comparison of launch vehicle architectures

2014-11-17, Zwack, Mathew R.

In 2004, the NASA Astronaut Office produced a memo regarding the safety of next-generation launch vehicles. The memo requested that these vehicles have a probability of loss of crew (LOC) of at most 1 in 1,000 flights, which represents nearly an order-of-magnitude decrease from current vehicles. The goal of an LOC of 1 in 1,000 flights has since been adopted by the launch vehicle design community as a requirement for the safety of future vehicles. This research addresses the gap between current vehicles and future goals by improving the capture of vehicle architecture effects on reliability and safety. Vehicle architecture pertains to the physical description of the vehicle itself, which includes manned or unmanned, number of stages, number of engines per stage, engine cycle types, redundancy, etc. During the operations phase of the vehicle life-cycle, each of these parameters clearly has an inherent effect on the reliability and safety of the vehicle. However, the vehicle architecture is typically determined during the early conceptual design phase, when a baseline vehicle is selected. Unless a great amount of money and effort is spent, the architecture will remain relatively constant from conceptual design through operations. Because the vehicle architecture is essentially “locked in” during early design, much of the vehicle's reliability potential is expected to be locked in as well. This observation leads to the conclusion that improvement of vehicle reliability and safety through the vehicle architecture must be accomplished during early design. Evaluation of the effects of different architecture decisions must be performed prior to baseline selection, which helps to identify a vehicle that is most likely to meet the reliability and safety requirements when it reaches operations. Although methods exist for evaluating reliability and safety during early design, they have weaknesses when evaluating all architecture effects simultaneously. The goal of this research was therefore to formulate and implement a method capable of quantitatively evaluating vehicle architecture effects on reliability and safety during early conceptual design. The ConcepTual Reliability Growth Approach for CompariSon of Launch Vehicle ArchiTectures (CONTRAST) was developed to meet this goal. Drawing on the strengths of existing techniques, a hybrid approach was developed that utilizes a reliability growth projection to evaluate the vehicles. The growth models are first applied at the subsystem level, and a vehicle-level projection is then generated using a simple system-level fault tree. This approach allows for the capture of all trades of interest at the subsystem level as well as many possible trades at the assembly level. The CONTRAST method is first tested on an example problem that compares the method's output to actual data from the Space Transportation System (STS). This example problem illustrates the ability of the CONTRAST method to capture reliability growth trends seen during vehicle operations. It also serves as a validation of the reliability growth model assumptions for future applications of the method. The final chapter of the thesis applies the CONTRAST method to a relevant launch vehicle, the Space Launch System (SLS), which is currently under development. Within the application problem, the output of the method is first used to check that the primary research objective has been met.

Next, the output is compared to a state-of-the-art tool to demonstrate the ability of the CONTRAST method to alleviate one of the primary consequences of using existing techniques. The final section of this chapter presents an analysis of the booster and upper stage block upgrade options for the SLS vehicle. A study of the upgrade options was carried out because the CONTRAST method is uniquely suited to examining the effects of such strategies. The results of the SLS block upgrade study yield interesting observations regarding the desired development order and upgrade strategy. Ultimately, this application problem demonstrates the merits of applying the CONTRAST method during early design. The approach provides the designer with more information regarding the expected reliability of the vehicle, which ultimately enables the selection of a vehicle baseline that is most likely to meet future requirements.
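The sketch below illustrates the two ingredients named above with a power-law (Crow-AMSAA-style) reliability growth projection per subsystem rolled up through a simple series fault tree. The growth parameters and subsystem list are illustrative assumptions, not STS or SLS data, and the per-flight reliability conversion is a simplification.

```python
import math

def crow_amsaa_intensity(lam, beta, t):
    """Power-law failure intensity rho(t) = lam * beta * t**(beta - 1).
    beta < 1 means reliability grows as flights accumulate."""
    return lam * beta * t ** (beta - 1.0)

def per_flight_reliability(lam, beta, flight_number):
    """Probability of no failure on a given flight, treating one flight
    as one unit of exposure at the current intensity."""
    return math.exp(-crow_amsaa_intensity(lam, beta, flight_number))

# Hypothetical subsystems: (initial failure scale lam, growth exponent beta).
subsystems = {
    "booster":     (0.04, 0.60),
    "core_engine": (0.03, 0.55),
    "upper_stage": (0.02, 0.65),
    "avionics":    (0.01, 0.50),
}

for flight in (1, 10, 50, 100):
    # Simple series fault tree: loss of vehicle if any subsystem fails.
    r_vehicle = 1.0
    for lam, beta in subsystems.values():
        r_vehicle *= per_flight_reliability(lam, beta, flight)
    print(f"flight {flight:3d}: projected vehicle reliability = {r_vehicle:.5f}")
```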


Formulation of control strategies for requirement definition of multi-agent surveillance systems

2014-08-21, Aksaray, Derya

In a multi-agent system (MAS), the overall performance is greatly influenced by both the design and the control of the agents. The physical design determines the agent capabilities, and the control strategies drive the agents to pursue their objectives using the available capabilities. The objective of this thesis is to incorporate control strategies into the early conceptual design of an MAS. As such, this thesis proposes a methodology that explores the interdependency between the design variables of the agents and the control strategies used by the agents. The output of the proposed methodology, i.e., the interdependency between the design variables and the control strategies, can be utilized in requirement analysis as well as in later design stages to optimize the overall system through higher-fidelity analyses. In this thesis, the proposed methodology is applied to a persistent multi-UAV surveillance problem, whose objective is to increase the situational awareness of a base that receives instantaneous monitoring information from a group of UAVs. Each UAV has a limited energy capacity and a limited communication range. Accordingly, the connectivity of the communication network becomes essential for the information flow from the UAVs to the base. In long-duration missions, the UAVs need to return to the base for refueling at frequencies that depend on their endurance. Whenever a UAV leaves the surveillance area, the remaining UAVs may need to relocate to mitigate the impact of its absence. In the control part of this thesis, a set of energy-aware control strategies is developed for efficient multi-UAV surveillance operations. To this end, this thesis first proposes a decentralized strategy to recover the connectivity of the communication network. Second, it presents two return policies for UAVs to achieve energy-aware persistent surveillance. In the design part of this thesis, a design space exploration is performed to investigate the overall performance while varying a set of design variables and the candidate control strategies. Overall, it is shown that the control strategy used by an MAS affects the influence of the design variables on the mission performance. Furthermore, the proposed methodology identifies preferable pairs of design variables and control strategies through low-fidelity analysis in the early design stages.
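The sketch below shows one simple form an energy-aware return policy could take: a UAV turns back once its remaining energy only just covers the flight home plus a reserve. The energy model and numbers are invented; the thesis's actual return policies and connectivity-recovery strategy are more involved.

```python
import math

CRUISE_POWER_W   = 180.0   # assumed power draw while surveilling or transiting
CRUISE_SPEED_MS  = 15.0
RESERVE_FRACTION = 0.15    # keep 15% of battery capacity as reserve

def energy_to_reach(position, base, power_w=CRUISE_POWER_W, speed=CRUISE_SPEED_MS):
    """Energy (J) needed to fly straight back to the base."""
    distance = math.dist(position, base)
    return power_w * distance / speed

def should_return(position, base, energy_remaining_j, battery_capacity_j):
    """Return-to-base trigger: remaining energy barely covers the trip home
    plus the reserve margin."""
    needed = energy_to_reach(position, base) + RESERVE_FRACTION * battery_capacity_j
    return energy_remaining_j <= needed

# Example: UAV 3 km from base with 20% of a 400 kJ battery remaining.
base, uav = (0.0, 0.0), (3000.0, 0.0)
capacity = 400_000.0
print(should_return(uav, base, 0.20 * capacity, capacity))  # True: head home
```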


A methodology for the evaluation of training effectiveness during early phase defense acquisition

2014-06-27, Brown, Cynthia Chalese

Today's economic environment requires that greater emphasis be placed on the development of cost-effective solutions to meet military capability-based requirements. The Joint Capabilities Integration and Development System (JCIDS) process is designed to identify materiel and non-materiel solutions to fill defense department capability requirements and gaps. Non-materiel solutions include Doctrine, Organization, Training, Materiel, Leadership and Education, Personnel, Facilities, and Policy (DOTMLPF-P) changes. JCIDS specifies that all non-materiel solutions be analyzed and recommendations made accordingly following a capability-based assessment (CBA). Guidance for performing a CBA provides minimal information on how to predict training effectiveness; as a result, training investments are not properly assessed or considered as a viable alternative. The ability to predict, rather than merely evaluate, training performance and the ability to quantify uncertainty in training system design are two identified gaps in existing training evaluation methods. To address these issues, a Methodology to Predict and Evaluate the Effectiveness of Training (MPEET) has been developed. To address the gap in predictive capability, MPEET uses primary elements of learning theory and instructional design to predict the cost-effectiveness of a training program and recommends training alternatives based on decision-maker preferences for the cost and effectiveness criteria. The use of educational and instructional theory helps ensure that human performance requirements will be met after training. Utility theory is used to derive an overall criterion consisting of both cost and effectiveness attributes; MPEET uses this criterion as a key variable in determining how to allocate resources to gain maximum training effectiveness. To address the gap in quantifying uncertainty in training performance, probability theory is used within a modeling and simulation environment to represent and evaluate previously deterministic variables. Effectiveness and cost variables are assigned probability distributions that reflect the applicable range of uncertainty. MPEET is a systems-engineering-based decision-making tool. It enhances the instructional design process, which is rooted in the fields of education and psychology, by adding an objective verification step to determine how well instructional strategies are used in the design of a training program to meet the required learning objectives. A C-130J pilot case study is used to demonstrate the application of MPEET and to show the plausibility of the approach. For the case study, metrics are derived to quantify the requirements for knowledge, skills, and attitudes in the C-130J pilot training system design. Instructional strategies were defined specifically for the C-130J training program. Feasible training alternatives were generated and evaluated for cost and effectiveness. Using information collected from decision-maker preferences for the cost and effectiveness variables, a new training program is created and compared to the original. The case study allows tradeoffs to be performed quantitatively between the variable importance weightings and the mean values of the probabilistic variables.

Overall, it is demonstrated that MPEET provides the capability to assess the cost and effectiveness of a training system design and enables the inclusion of training as an independent non-materiel alternative during the CBA process. Although capability gaps in the defense acquisition process motivated the development of MPEET, its applicability extends to any training program that follows the instructional design process, provided the assumed constraints are not prohibitive.
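The sketch below illustrates the utility-plus-uncertainty idea: decision-maker weights combine an effectiveness attribute and a normalized cost attribute into an overall utility, and Monte Carlo sampling over assumed uniform distributions turns deterministic scores into a utility distribution for each notional training alternative. The weights, ranges, and alternatives are illustrative assumptions, not MPEET's actual values.

```python
import random
import statistics

random.seed(42)
N_SAMPLES = 5_000

# Decision-maker importance weights (sum to 1).
W_EFFECTIVENESS, W_COST = 0.6, 0.4

# Hypothetical alternatives: effectiveness range [0, 1] and cost range ($M).
alternatives = {
    "simulator_heavy": {"eff": (0.70, 0.90), "cost": (8.0, 11.0)},
    "aircraft_heavy":  {"eff": (0.80, 0.95), "cost": (14.0, 20.0)},
    "blended":         {"eff": (0.75, 0.92), "cost": (10.0, 14.0)},
}
COST_MAX = 20.0  # normalization bound so that lower cost maps to higher utility

def overall_utility(effectiveness, cost):
    """Additive utility: weighted effectiveness plus weighted normalized cheapness."""
    return W_EFFECTIVENESS * effectiveness + W_COST * (1.0 - cost / COST_MAX)

for name, attrs in alternatives.items():
    samples = []
    for _ in range(N_SAMPLES):
        eff = random.uniform(*attrs["eff"])    # uncertainty in training effectiveness
        cost = random.uniform(*attrs["cost"])  # uncertainty in program cost
        samples.append(overall_utility(eff, cost))
    print(f"{name:16s} mean utility = {statistics.mean(samples):.3f}  "
          f"stdev = {statistics.stdev(samples):.3f}")
```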