Organizational Unit:
Daniel Guggenheim School of Aerospace Engineering


Publication Search Results

Showing 10 of 29 publications
  • Item
    A robust multi-objective statistical improvement approach to electric power portfolio selection
    (Georgia Institute of Technology, 2012-11-13) Murphy, Jonathan Rodgers
    Motivated by an electric power portfolio selection problem, a sampling method is developed for simulation-based robust design that builds on existing multi-objective statistical improvement methods. It uses a Bayesian surrogate model regressed on both design and noise variables, and makes use of methods for estimating epistemic model uncertainty in environmental uncertainty metrics. Regions of the design space are sequentially sampled in a manner that balances exploration of unknown designs and exploitation of designs thought to be Pareto optimal, while regions of the noise space are sampled to improve knowledge of the environmental uncertainty. A scalable test problem is used to compare the method with design of experiments (DoE) and crossed array methods, and the method is found to be more efficient for restrictive sample budgets. Experiments with the same test problem are used to study the sensitivity of the methods to numbers of design and noise variables. Lastly, the method is demonstrated on an electric power portfolio simulation code.
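As a rough illustration of the sampling idea in this abstract, the sketch below fits a Gaussian-process surrogate over the joint design/noise space and sequentially adds the candidate with the highest expected improvement, trading off exploitation of low predicted objectives against exploration of high-variance regions. It is a single-objective, deterministic-noise stand-in for the multi-objective statistical improvement criterion of the thesis; the toy simulation, kernel choice, candidate grid, and budget are assumptions made only for illustration.

```python
# Minimal sketch: GP surrogate over (design, noise) variables + expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def simulation(x_design, x_noise):
    """Stand-in for an expensive simulation (e.g., a power-portfolio code)."""
    return (x_design - 0.6) ** 2 + 0.3 * np.sin(5 * x_noise) + 0.05 * rng.normal()

# Initial space-filling samples over the joint (design, noise) space.
X = rng.uniform(0, 1, size=(8, 2))
y = np.array([simulation(d, n) for d, n in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):                        # restrictive sample budget
    gp.fit(X, y)
    cand = rng.uniform(0, 1, size=(256, 2))
    mu, sd = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = cand[np.argmax(ei)]           # exploit low mean, explore high variance
    X = np.vstack([X, x_next])
    y = np.append(y, simulation(*x_next))

print("best observed objective:", y.min())
```

In the thesis the improvement criterion is multi-objective and the noise space is sampled separately to refine the environmental-uncertainty estimates; the loop above only shows the generic surrogate-update-and-infill structure.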
  • Item
    Optimal allocation of thermodynamic irreversibility for the integrated design of propulsion and thermal management systems
    (Georgia Institute of Technology, 2012-11-13) Maser, Adam Charles
    More electric aircraft systems, high power avionics, and a reduction in heat sink capacity have placed a larger emphasis on correctly satisfying aircraft thermal management requirements during conceptual design. Thermal management systems must be capable of dealing with these rising heat loads, while simultaneously meeting mission performance. Since all subsystem power and cooling requirements are ultimately traced back to the engine, the growing interactions between the propulsion and thermal management systems are becoming more significant. As a result, it is necessary to consider their integrated performance during the conceptual design of the aircraft gas turbine engine cycle to ensure that thermal requirements are met. This can be accomplished by using thermodynamic modeling and simulation to investigate the subsystem interactions while conducting the necessary design trades to establish the engine cycle. As the foundation for this research, a parsimonious, transparent thermodynamic model of propulsion and thermal management systems performance was created with a focus on capturing the physics that have the largest impact on propulsion design choices. A key aspect of this approach is the incorporation of physics-based formulations involving the concurrent usage of the first and second laws of thermodynamics to achieve a clearer view of the component-level losses. This is facilitated by the direct prediction of the exergy destruction distribution throughout the integrated system and the resulting quantification of available work losses over the time history of the mission. The characterization of the thermodynamic irreversibility distribution helps give the designer an absolute and consistent view of the tradeoffs associated with the design of the system. Consequently, this leads directly to the question of the optimal allocation of irreversibility across each of the components. An irreversibility allocation approach based on the economic concept of resource allocation is demonstrated for a canonical propulsion and thermal management systems architecture. By posing the problem in economic terms, exergy destruction is treated as a true common currency to barter for improved efficiency, cost, and performance. This then enables the propulsion systems designer to better fulfill system-level requirements and to create a system more robust to future requirements.
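The second-law bookkeeping described above can be illustrated with the Gouy-Stodola relation (exergy destruction rate equals the dead-state temperature times the entropy generation rate), applied component by component and integrated over a mission time history. The sketch below is a notional example of that accounting only; the component list and entropy-generation values are invented, not taken from the thesis model.

```python
# Minimal sketch: component-level exergy destruction integrated over a mission.
import numpy as np

T0 = 288.15  # dead-state (ambient) temperature, K

# Mission time points [s] and entropy generation rate [kW/K] per component
# (notional values for illustration only).
time = np.array([0.0, 600.0, 1200.0, 1800.0])
s_gen = {
    "compressor":     np.array([0.020, 0.024, 0.026, 0.022]),
    "combustor":      np.array([0.110, 0.130, 0.140, 0.120]),
    "turbine":        np.array([0.030, 0.034, 0.036, 0.031]),
    "fuel_heat_sink": np.array([0.008, 0.012, 0.015, 0.010]),
}

def trapezoid(rate, t):
    """Time integral of a rate history (trapezoidal rule)."""
    return float(np.sum(np.diff(t) * (rate[:-1] + rate[1:]) / 2.0))

# Gouy-Stodola: exergy destruction rate = T0 * entropy generation rate [kW];
# integrating over the mission gives the available-work loss per component [kJ].
x_dest = {name: trapezoid(T0 * sg, time) for name, sg in s_gen.items()}
total = sum(x_dest.values())

for name, loss in sorted(x_dest.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} {loss:10.1f} kJ  ({100 * loss / total:4.1f}% of total loss)")
```

The resulting breakdown is the kind of irreversibility distribution the abstract refers to; allocating it optimally across components is the separate, economics-flavored step the thesis develops.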
  • Item
    Design space exploration of stochastic system-of-systems simulations using adaptive sequential experiments
    (Georgia Institute of Technology, 2012-06-25) Kernstine, Kemp H.
    The complexities of our surrounding environments are becoming increasingly diverse, more integrated, and continuously more difficult to predict and characterize. These modeling complexities are ever more prevalent in System-of-Systems (SoS) simulations, where computational times can surpass real time and are often dictated by stochastic processes and non-continuous emergent behaviors. As the number of connections in modeling environments continues to increase and the number of external noise variables continues to multiply, these SoS simulations can no longer be explored with traditional means without significantly wasting computational resources. This research develops and tests an adaptive sequential design of experiments to reduce the computational expense of exploring these complex design spaces. Prior to developing the algorithm, the defining statistical attributes of these spaces are researched and identified. Following this identification, various techniques capable of capturing these features are compared and an algorithm is synthesized. The final algorithm will be shown to improve the exploration of stochastic simulations over existing methods by increasing the global accuracy and computational speed, while reducing the number of simulations required to learn these spaces.
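A minimal sketch of adaptive sequential sampling for a stochastic simulation is shown below: candidates are ranked by the surrogate's predictive uncertainty, and each selected point is replicated to average out simulation noise. This is a generic variance-driven scheme, not the specific algorithm synthesized in the thesis; the toy simulation, replication count, and budget are assumptions.

```python
# Minimal sketch: uncertainty-driven sequential sampling with replication.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def sos_simulation(x):
    """Stand-in for a stochastic system-of-systems run (noisy, non-smooth)."""
    return np.sign(x - 0.4) * x ** 2 + 0.1 * rng.normal()

X, y = [], []
def sample(x, reps=5):                        # replicate to separate noise from trend
    X.extend([[x]] * reps)
    y.extend(sos_simulation(x) for _ in range(reps))

for x0 in np.linspace(0, 1, 5):               # small initial design
    sample(x0)

gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(0.01), normalize_y=True)
for _ in range(15):                            # sequential refinement budget
    gp.fit(np.array(X), np.array(y))
    cand = np.linspace(0, 1, 201).reshape(-1, 1)
    _, sd = gp.predict(cand, return_std=True)
    sample(float(cand[np.argmax(sd), 0]))      # go where the model is least certain

print("final surrogate fit on", len(y), "simulation runs")
```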
  • Item
    A probabilistic technique for the assessment of complex dynamic system resilience
    (Georgia Institute of Technology, 2012-04-24) Balchanos, Michael Gregory
    In the presence of operational uncertainty, one of the greatest challenges in systems engineering is to ensure system effectiveness, mission capability, and survivability. Safety management is shifting from passive, reactive, and diagnosis-based approaches to autonomous architectures that will manage safety and survivability through active, proactive, and prognosis-based solutions. Resilience engineering is an emerging discipline, with alternative recommendations on safer and more survivable system architectures. A resilient system can "absorb" the impact of change due to unexpected disturbances, while it "adapts" to change in order to maintain its physical integrity and mission capability. A framework of proposed resilience estimations is the basis for a scenario-based assessment technique, driven by modeling and simulation (M&S) based analysis, for obtaining system performance, health monitoring, damage propagation, and overall mission capability responses. For technique development and testing, a small-scale canonical problem has been formulated, involving a reconfigurable spring-mass-damper system in a multi-spring configuration. Operational uncertainty is introduced through disturbance factors, such as external forces with varying magnitude, input frequency, event duration, and occurrence time. Case studies with varying levels of damping and alternative reconfiguration strategies quantify the effects of operational uncertainty on system performance, mission capability, and survivability, as well as on the "restore", "absorb", and "adapt" resilience capacities. The Topological Investigation for Resilient and Effective Systems, through Increased Architecture Survivability (TIRESIAS) technique is demonstrated for a reduced-scale, reconfigurable naval cooling network application. With uncertainty effects modeled through network leak combinations, TIRESIAS provides insight into the effects of leaks on survival times, mission capability degradation, and resilience function capacities for the baseline configuration. Comparative case studies were conducted for different architecture configurations, generated for different total numbers of control valves and valve locations on the topology.
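The kind of resilience quantification the abstract alludes to can be illustrated from a capability time history: an integral resilience ratio plus simple "absorb" and "restore" indicators. These are common textbook-style estimators offered as an assumption about the flavor of the metrics, not the exact TIRESIAS formulations, and the disturbance/recovery profile below is synthetic.

```python
# Minimal sketch: resilience indicators computed from a capability time history.
import numpy as np

# Mission time [s] and a normalized capability target of 1.0 throughout.
t = np.linspace(0.0, 100.0, 501)
perf = np.ones_like(t)

# Synthetic response: a disturbance at t = 30 s degrades capability to 0.45;
# at t = 45 s the system reconfigures and recovers exponentially.
perf[(t >= 30) & (t < 45)] = 0.45
rec = t >= 45
perf[rec] = 1.0 - 0.55 * np.exp(-(t[rec] - 45.0) / 10.0)

dt = t[1] - t[0]
resilience = perf.sum() * dt / t[-1]              # area under capability / nominal area
absorb = perf.min()                               # worst retained capability ("absorb")
back_up = np.flatnonzero((t > 30) & (perf >= 0.95))
restore_time = t[back_up[0]] - 30.0 if back_up.size else np.inf   # time to "restore"

print(f"integral resilience ratio = {resilience:.3f}")
print(f"absorb capacity (minimum capability) = {absorb:.2f}")
print(f"restore time after disturbance = {restore_time:.1f} s")
```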
  • Item
    A neural network construction method for surrogate modeling of physics-based analysis
    (Georgia Institute of Technology, 2012-04-04) Sung, Woong Je
    A connectivity-adjusting learning algorithm, Optimal Brain Growth (OBG), was proposed. In contrast to conventional training methods for Artificial Neural Networks (ANNs), which focus on weight-only optimization, the OBG method trains both the weights and the connectivity of a network in a single training process. The standard Back-Propagation (BP) algorithm was extended to exploit the error gradient information of latent connections whose current weight is zero. Based on this, the OBG algorithm makes a rational decision between further adjustment of an existing connection weight and creation of a new connection with zero weight. The training efficiency of a growing network is maintained by freezing stabilized connections during further optimization. A stabilized computational unit is also decomposed into two units, and a particular set of decomposition rules guarantees a seamless local re-initialization of the training trajectory. The OBG method was tested on multiple canonical regression and classification problems and on surrogate modeling of the pressure distribution on transonic airfoils. The OBG method showed improved learning capability in a computationally efficient manner compared to conventional weight-only training of fixed-connectivity Multilayer Perceptrons (MLPs).
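The central idea, evaluating gradients for latent (zero-weight) connections and growing the network where that pays off most, can be sketched on a toy single-hidden-layer regressor. The sketch below is an illustration of the concept, not the OBG algorithm as implemented in the thesis; the target function, growth schedule, and activation rule are assumptions.

```python
# Minimal sketch: backprop with a connectivity mask and gradient-driven growth.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 4))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]   # target function

n_hid, lr = 6, 0.05
W1 = rng.normal(0.0, 0.3, (n_hid, 4))
W2 = rng.normal(0.0, 0.3, (1, n_hid))
mask = np.zeros_like(W1, dtype=bool)
mask[:, 0] = True                        # start sparsely connected to input 0 only

for step in range(2000):
    h = np.tanh(X @ (W1 * mask).T)                 # forward pass
    pred = (h @ W2.T).ravel()
    err = pred - y                                 # loss = 0.5 * mean(err**2)
    # Backprop: gradients are evaluated for EVERY W1 entry, active or latent.
    g2 = err[None, :] @ h / len(X)
    gh = (err[:, None] @ W2) * (1.0 - h ** 2)
    g1 = gh.T @ X / len(X)
    W2 -= lr * g2
    W1 -= lr * g1 * mask                           # only active weights are updated
    # Every 200 steps, consider activating the most promising latent connection.
    if step % 200 == 0 and not mask.all():
        latent = np.abs(g1) * ~mask
        if latent.max() > np.abs(g1 * mask).max():
            mask[np.unravel_index(latent.argmax(), mask.shape)] = True

print(f"active connections: {int(mask.sum())}/{mask.size}, "
      f"final MSE: {float(np.mean((pred - y) ** 2)):.4f}")
```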
  • Item
    Rapid Architecture Alternative Modeling (RAAM): a framework for capability-based analysis of system of systems architectures
    (Georgia Institute of Technology, 2012-04-04) Iacobucci, Joseph Vincent
    The current national security environment and fiscal tightening make it necessary for the Department of Defense to transition away from a threat-based acquisition mindset towards a capability-based approach to acquiring portfolios of systems. This requires that groups of interdependent systems regularly interact and work together as systems of systems to deliver desired capabilities. Technological advances, especially in the areas of electronics, computing, and communications, also mean that these systems of systems are tightly integrated and more complex to acquire, operate, and manage. In response to this, the Department of Defense has turned to system architecting principles along with capability-based analysis. However, because of the diversity of the systems, technologies, and organizations involved in creating a system of systems, the design space of architecture alternatives is discrete and highly non-linear. The design space is also very large due to the hundreds of systems that can be used, the numerous variations in the way systems can be employed and operated, and the thousands of tasks that are often required to fulfill a capability. This makes it very difficult to fully explore the design space. As a result, capability-based analysis of system of systems architectures often only considers a small number of alternatives. This places a severe limitation on the development of capabilities that are necessary to address the needs of the war fighter. The research objective for this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis, which tends to increase the complexity of the decision making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. RAAM draws upon principles from computer science, system architecting, and domain specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission dependent and mission independent metrics are considered. Mission dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed. Streaming algorithms and recursive architecture alternative evaluation algorithms are used to reduce computer memory requirements. Lastly, a domain specific language is created to reduce the computational time of executing the system of systems models.
A domain specific language is a small, usually declarative language that offers expressive power focused on a particular problem domain by establishing an effective means to communicate the semantics of the RAAM framework. These techniques make it possible to include diverse multi-metric models within the RAAM framework in addition to system and operational level trades. A canonical example was used to explore the uses of the methodology. The canonical example contains all of the features of a full system of systems architecture analysis study but uses fewer tasks and systems. Using RAAM with the canonical example, it was possible to consider both system and operational level trades in the same analysis. Once the methodology had been tested with the canonical example, a Suppression of Enemy Air Defenses (SEAD) capability model was developed. Due to the sensitive nature of analyses on that subject, notional data was developed. The notional data has similar trends and properties to realistic Suppression of Enemy Air Defenses data. RAAM was shown to be traceable and provided a mechanism for a unified treatment of a variety of metrics. The SEAD capability model demonstrated lower computer runtimes and reduced model creation complexity as compared to methods currently in use. To determine the usefulness of the implementation of the methodology on current computing hardware, RAAM was tested with system of systems architecture studies of different sizes. This was necessary since a system of systems may be called upon to accomplish thousands of tasks. It has been clearly demonstrated that RAAM is able to enumerate and evaluate the types of large, complex design spaces usually encountered in capability-based design, oftentimes providing the ability to efficiently search the entire decision space. The core algorithms for generation and evaluation of alternatives scale linearly with expected problem sizes. The SEAD capability model outputs prompted the discovery of a new issue: the data storage and manipulation requirements for an analysis. Two strategies were developed to counter large data sizes: the use of portfolio views and top-n analysis. This proved the usefulness of the RAAM framework and methodology during Pre-Milestone A capability-based analysis.
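The enumeration-and-evaluation idea can be sketched compactly: each task in a capability is assigned one of several candidate systems, a mission-dependent metric (probability of success) multiplies task-level probabilities, and a mission-independent metric (acquisition cost) depends only on which systems appear in the portfolio. The task list, systems, and numbers below are notional assumptions for illustration, and the exhaustive product enumeration stands in for RAAM's much larger, parallelized searches.

```python
# Minimal sketch: enumerate and score architecture alternatives for a capability.
from itertools import product

# task -> {candidate system: probability of accomplishing the task}
tasks = {
    "find":   {"uav": 0.90, "satellite": 0.80},
    "track":  {"uav": 0.85, "fighter": 0.75},
    "engage": {"fighter": 0.95, "missile_battery": 0.90},
}
acquisition_cost = {"uav": 30.0, "satellite": 120.0,
                    "fighter": 90.0, "missile_battery": 45.0}

alternatives = []
for assignment in product(*[opts.items() for opts in tasks.values()]):
    systems_used = {sys_name for sys_name, _ in assignment}
    p_success = 1.0
    for _, p in assignment:
        p_success *= p                                        # mission-dependent metric
    cost = sum(acquisition_cost[s] for s in systems_used)     # mission-independent metric
    alternatives.append((p_success, cost,
                         dict(zip(tasks, (s for s, _ in assignment)))))

# Rank by success, break ties by cost; a full study would search far larger spaces.
for p, c, arch in sorted(alternatives, key=lambda a: (-a[0], a[1]))[:3]:
    print(f"P(success)={p:.3f}  cost={c:6.1f}  {arch}")
```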
  • Item
    Framework for robust design: a forecast environment using intelligent discrete event simulation
    (Georgia Institute of Technology, 2012-03-29) Beisecker, Elise K.
    The US Navy is shifting to power projection from the sea, which stresses the capabilities of its current fleet and exposes a need for a new surface connector. The design of complex systems in the presence of changing requirements, rapidly evolving technologies, and operational uncertainty continues to be a challenge. Furthermore, the design of future naval platforms must take into account the interoperability of a variety of heterogeneous systems and their role in a larger system-of-systems context. To date, methodologies to address these complex interactions and optimize the system at the macro level have lacked a clear direction and structure and have largely been conducted in an ad hoc fashion. Traditional optimization has centered around individual vehicles with little regard for the impact on the overall system. A key enabler in designing a future connector is the ability to rapidly analyze technologies and perform trade studies using a system-of-systems level approach. The objective of this work is a process that can quantitatively assess the impacts of new capabilities and vessels at the system-of-systems level. This new methodology must be able to investigate diverse, disruptive technologies acting on multiple elements within the system-of-systems architecture. Illustrated through a test case for a Medium Exploratory Connector (MEC), the method must be capable of capturing the complex interactions between elements and the architecture and must be able to assess the impacts of new systems. Following a review of current methods, six gaps were identified, including the need to break the problem into subproblems in order to incorporate a heterogeneous, interacting fleet, dynamic loading, and dynamic routing. For the robust selection of design requirements, analysis must be performed across multiple scenarios, which requires the method to include parametric scenario definition. The identified gaps are investigated and methods are recommended to address them, enabling overall operational analysis across scenarios. Scenarios are fully defined by a scheduled set of demands, distances between locations, and physical characteristics that can be treated as input variables. Introducing matrix manipulation into discrete event simulations enables the abstraction of sub-processes at an object level and reduces the effort required to integrate new assets. Incorporating these linear algebra principles enables resource management for individual elements and abstraction of decision processes. Although the run time is slightly greater than that of traditional if-then formulations, the gain in data handling abilities enables the abstraction of loading and routing algorithms. The loading and routing problems are abstracted, and solution options are developed and compared. Realistic loading of vessels and other assets is needed to capture the cargo delivery capability of the modeled mission. The dynamic loading algorithm is based on the traditional knapsack formulation, in which a linear program is formulated using the lift and area of the connector as constraints. The schedule of demands from the scenarios represents additional constraints and the reward equation. Available cargo is distributed among cargo sources; thus, an assignment problem formulation is added to the linear program, requiring the cargo selected to load on a single connector to be available from a single load point.
Dynamic routing allows a reconfigurable supply chain to maintain robust and flexible operation in response to changing customer demands and operating environment. Algorithms based on vehicle routing and computer packet routing are compared across five operational scenarios, testing the algorithms' ability to route connectors without introducing additional wait time. Predicting the wait times of interfaces based on connectors en route, combined with reconsidering the interface to use upon arrival, performed consistently, especially when stochastic load times were introduced, and is expandable to large-scale applications. This algorithm selects the quickest load-unload location pairing based on the connectors routed to those locations and the interfaces selected for those connectors. A future connector could have the ability to unload at multiple locations if a single load exceeds the demand at an unload location. The capability for multiple unload locations is considered a special case in the calculation of the unload location in the routing. To determine the unload locations to visit, a traveling salesman formulation is added to the dynamic loading algorithm. Using the cost to travel and unload at locations, balanced against the additional cargo that could be delivered, the order and locations to visit are selected. Predicting the workload at load and unload locations to route vessels, with reconsideration to handle disturbances, can include multiple unload locations and creates a robust and flexible routing algorithm. The incorporation of matrix manipulation, dynamic loading, and dynamic routing enables the robust investigation of the design requirements for a new connector. The robust process will use shortfall, capturing the delay and lack of cargo delivered, and fuel usage as measures of performance. The design parameters for the MEC, including the number available and vessel characteristics such as speed and size, were analyzed across four ways of testing the noise space. The four testing methods are: a single scenario, a selected number of scenarios, full coverage of the noise space, and the feasible noise space. The feasible noise space is defined using uncertainty around scenarios of interest. The number available, maximum lift, maximum area, and SES speed were consistently design drivers. There was a trade-off between the number available and size, along with speed. When looking at the feasible space, the relationship between size and number available was strong enough to reverse the preference in the number available, toward fewer and larger ships. The secondary design impacts came from factors that directly impacted the time per trip, such as the time between repairs and time to repair. As the noise sampling moved from four scenarios to full coverage to the feasible space, the option to use interfaces was replaced in importance by the time to load at these locations, and the time to unload at the beach gained importance. The change in impact can be attributed to the reduction in the number of needed trips with the feasible space. The four scenarios had higher average demand than the feasible space sampling, leading to loading options being more important. The selection of the noise sampling had an impact on the design requirements selected for the MEC, indicating the importance of developing a method to investigate future naval assets across multiple scenarios at a system-of-systems level.
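The dynamic-loading step described above, selecting cargo for a single connector subject to its lift and deck-area limits, can be posed as a small integer linear program. The sketch below is a notional knapsack with invented cargo values; the single-load-point assignment constraint and the scenario-driven reward terms from the thesis are omitted for brevity.

```python
# Minimal sketch: cargo selection as a two-constraint knapsack ILP.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Candidate cargo items staged at a load point (notional values).
reward = np.array([10.0, 7.0, 6.0, 4.0, 3.0])      # value of delivering each item
lift   = np.array([35.0, 25.0, 20.0, 15.0, 10.0])  # weight, short tons
area   = np.array([40.0, 30.0, 28.0, 18.0, 12.0])  # deck footprint, m^2

max_lift, max_area = 60.0, 70.0                    # connector capacity

# Two knapsack constraints: total weight within lift, total footprint within deck area.
capacity = LinearConstraint(np.vstack([lift, area]), ub=[max_lift, max_area])

res = milp(
    c=-reward,                          # milp minimizes, so negate the delivery reward
    constraints=capacity,
    integrality=np.ones_like(reward),   # binary load / don't-load decisions
    bounds=Bounds(0, 1),
)

selected = np.flatnonzero(res.x > 0.5)
print("load items:", selected.tolist(), "delivered value:", reward[selected].sum())
```

Extending this toward the thesis's formulation would add binary load-point indicators so that all selected cargo comes from one source, plus the demand-schedule terms in the objective.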
  • Item
    A methodology for uncertainty quantification in quantitative technology valuation based on expert elicitation
    (Georgia Institute of Technology, 2012-03-28) Akram, Muhammad Farooq
    The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods, and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation are critical. The uncertainty in defining the impact of an input on the performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that, in the case of a lack of knowledge about the problem, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions made during the elicitation process, when experts would otherwise be forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for quantification of epistemic uncertainty in the technology valuation process. The proposed technique seeks to offset some of the problems faced when using deterministic or traditional probabilistic approaches for uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSMs). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher-order technology interactions. A test case for the quantification of epistemic uncertainty was selected: a large-scale combined cycle power generation system. A detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results have shown that the evidence-theory-based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge than deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is discussed. Various combination methods are also proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled the capture of higher-order technology interactions and the resulting improvement in predicted system performance.
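The Dempster-Shafer machinery the abstract refers to can be illustrated with the standard rule of combination: two experts assign belief mass to sets of possible technology-impact levels, and the combined structure yields belief and plausibility bounds instead of a single probability. The frame of discernment and the expert masses below are invented for illustration and are not tied to the thesis's elicitation data.

```python
# Minimal sketch: Dempster's rule of combination for two expert mass functions.
from itertools import product

# Frame of discernment: coarse bins for a technology's performance impact.
LOW, MED, HIGH = "low", "medium", "high"

expert_1 = {frozenset({MED, HIGH}): 0.7, frozenset({LOW, MED, HIGH}): 0.3}
expert_2 = {frozenset({HIGH}): 0.5, frozenset({MED}): 0.2,
            frozenset({LOW, MED, HIGH}): 0.3}

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb              # mass assigned to incompatible evidence
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

m = dempster_combine(expert_1, expert_2)
belief_high = sum(v for s, v in m.items() if s <= frozenset({HIGH}))
plausibility_high = sum(v for s, v in m.items() if s & frozenset({HIGH}))

print({tuple(sorted(s)): round(v, 3) for s, v in m.items()})
print(f"Bel(high) = {belief_high:.3f}   Pl(high) = {plausibility_high:.3f}")
```

The gap between belief and plausibility is exactly the "insight on uncertainties arising from incomplete information" the abstract contrasts with single-valued probabilistic results.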
  • Item
    A methodology for the valuation and selection of adaptable technology portfolios and its application to small and medium airports
    (Georgia Institute of Technology, 2012-03-27) Pinon, Olivia Julie
    The increase in the types of airspace users (large aircraft, small and regional jets, very light jets, unmanned aerial vehicles, etc.), as well as the very limited number of future new airport development projects, are some of the factors that will characterize the next decades in air transportation. These factors, associated with persistent growth in air traffic, will worsen the current gridlock situation experienced at some major airports. As airports are becoming the major capacity bottleneck to continued growth in air traffic, it is therefore essential to make the most efficient use of the current, and very often underutilized, airport infrastructure. This research thus proposes to address the increase in air traffic demand and the resulting capacity issues by considering the implementation of operational concepts and technologies at underutilized airports. However, there are many challenges associated with sustaining the development of this type of airport. First, the need to synchronize evolving technologies with airports' needs and investment capabilities is paramount. Additionally, it was observed that the evolution of secondary airports, and their needs, is tightly linked to the environment in which they operate. In particular, the sensitivity of airports to changes in the dynamics of their environment is important, requiring that the factors that drive the need for capacity expansion be identified and characterized. Finally, the difficulty of evaluating risk and making financially viable decisions, particularly when investing in new technologies, cannot be ignored. This work thus focuses on the development of a methodology to address these challenges and ensure the sustainability of airport capacity-enhancement investments in a continuously changing environment. The four-step process developed in this research leverages the benefits yielded by impact assessment techniques, system dynamics modeling, and real options analysis to 1) provide the decision maker with a rigorous, structured, and traceable process for technology selection, 2) assess the combined impact of interrelated technologies, 3) support the translation of technology impact factors into airport performance indicators and help identify the factors that drive the need for capacity expansion, and finally 4) enable the quantitative assessment of the strategic value of embedding flexibility in the formulation of technology portfolios and investment options. The proposed methodology demonstrates, through a change in demand at the airport modeled, the importance of being able to weigh both the technological and strategic performance of the technology portfolios considered. Hence, by capturing the time dimension and technology causality impacts in technology portfolio selection, this work helps identify key technologies or technology groupings and assess their performance on airport metrics. By embedding flexibility in the formulation of investment scenarios, it provides the decision maker with a more accurate picture of the options available, as well as the time and sequence under which these should be exercised.
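The real-options piece can be illustrated with a standard binomial lattice: the flexibility to defer a capacity-enhancing technology investment is valued like an American call on the project value, and its worth is compared with the invest-now NPV. All parameters below (project value, cost, volatility, horizon) are notional assumptions for illustration, not airport data or the thesis's specific valuation model.

```python
# Minimal sketch: binomial-lattice value of an option to defer an investment.
import math

def defer_option_value(V0, invest_cost, sigma, r, T, steps=200):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))        # up factor per period
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Payoff at the final decision date for each terminal node.
    values = [max(V0 * u**j * d**(steps - j) - invest_cost, 0.0)
              for j in range(steps + 1)]
    # Backward induction, allowing early exercise (investing) at every node.
    for i in range(steps - 1, -1, -1):
        values = [max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                      V0 * u**j * d**(i - j) - invest_cost)
                  for j in range(i + 1)]
    return values[0]

npv_now = 105.0 - 100.0                         # invest-immediately NPV ($M)
flex = defer_option_value(V0=105.0, invest_cost=100.0, sigma=0.35, r=0.04, T=3.0)
print(f"static NPV = {npv_now:.1f} $M, value with option to defer = {flex:.1f} $M")
```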
  • Item
    Integrating dependencies into the technology portfolio: a feed-forward case study for near-earth asteroids
    (Georgia Institute of Technology, 2011-11-15) Taylor, Christianna Elizabeth
    Technology Portfolios are essential to the evolution of large complex systems. In an effort to harness the power of new technologies, technology portfolios are used to predict the value of integrating them into the project. This optimization of the technology portfolio creates large complex design spaces; however, many processes operate on the assumption that their technology elements have no dependency on each other, because dependencies are not well defined. This independence assumption simplifies the process, but suggests that these environments are missing out on decision power and fidelity. Therefore, this thesis proposed a way to explain the variations in Portfolio recommendations as a function of adding dependencies. Dependencies were defined in accordance with their development effort figures of merit and possible relationships. The thesis then went on to design a method to integrate two dependency classes into the technology portfolio framework to showcase the effect of incorporating dependencies. Results indicated that Constraint Dependencies reduced the portfolio or stayed the same, while Value Dependencies changed the portfolio optimization completely; making the user compare two different optimization results. Both indicated that they provided higher fidelity with the inclusion of the information added. Furthermore, the upcoming NASA Near-Earth Asteroid Campaign was studied as a case study. This campaign is the plan to send humans to an asteroid by 2025 announced by President Obama in April 2010. The campaign involves multiple missions, capabilities, and technologies that must be demonstrated to enable deep-space human exploration. Therefore, this thesis capitalized on that intention to show how adopting technology in earlier missions can act as a feed-forward method to demonstrate technology for future missions. The thesis showed the baseline technology portfolio, integrated dependencies into the process, compared its findings to the baseline case, and ultimately showed how adding higher fidelity into the process changes the user's decisions. Findings concerning the Near-Earth Asteroid Campaign, the use of dependencies to add fidelity and implications for future work are discussed.