Organizational Unit:
Daniel Guggenheim School of Aerospace Engineering


Publication Search Results

Now showing 1 - 3 of 3

A methodology for risk-informed launch vehicle architecture selection

2017-11-13, Edwards, Stephen James

Modern society in the 21st century has become inseparably dependent on human mastery of the near-Earth regions of space. Billions of dollars in on-orbit assets provide a set of fundamental, requisite services to domains as diverse as telecommunications, the military, banking, and transportation. While orbiting satellites provide these services, launch vehicles (LVs) are unquestionably the most critical piece of infrastructure in the space economy value chain. The past decade has seen a significant level of activity in LV development, including some fundamental changes to the industry landscape. Every space-faring nation is engaged in new program developments; most notable, however, is the surge in commercial investment and development effort, spurred by a combination of private investments by wealthy individuals, new government policies and acquisition strategies, and the increased competition that has resulted from both. In all of today's LV programs, affordability is acknowledged as the single biggest objective. Governments seek assured access to space that can be realized within constrained budgets, and commercial entities vie for survival, profitability, and market share. The literature makes clear that the biggest opportunity for affecting affordability lies in improving decision-making early in the design process. However, a review of historical LV architecture studies shows that very little has changed over the past 50 years in how early architecting decisions are analyzed. In particular, architecture analyses of alternatives are still conducted deterministically, despite uncertainty being at its highest in the very early stages of design. This thesis argues that the "design freedom" that exists early on manifests itself as volitional uncertainty during the LV architect's deliberation, motivating the objective statement "to develop a methodology for enabling risk-informed decision making during the architecture selection phase of LV programs." NASA's Risk-Informed Decision Making process is analyzed with respect to the particulars of the LV architecture selection problem. The most significant challenge is found to be LV performance modeling via trajectory optimization, which is not well suited to probabilistic analysis. To overcome this challenge, an empirical modeling approach is proposed. This in turn introduces the challenge of generalizing the empirical model, since creating a distinct performance model for every architecture concept under consideration is infeasible. A review of the main drivers of LV trajectory performance identifies the thrust-to-weight ratio (T/W) not only as one of the most sensitive parameters, but also as a functional (a profile over the trajectory) in its true form. Because the T/W profile drives performance, and because in its infinite-dimensional form it offers a common basis for representing diverse architectures, functional regression techniques are proposed as a means of constructing an architecture-spanning empirical performance model. A number of techniques are formulated and tested, and prove capable of providing the LV performance modeling needed for risk-informed architecture selection.
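
As one illustration of that final step, the sketch below shows a generic way to set up such a functional regression in Python: a discretized T/W history is projected onto a principal-component basis and an ordinary least-squares model is fit on the coefficients. It is a minimal sketch on assumed names, shapes, and synthetic data, not the formulation developed in the thesis.

    import numpy as np

    # Minimal sketch, not the thesis implementation: a functional
    # regression mapping a discretized thrust-to-weight (T/W) history to
    # a scalar performance metric. Each profile is projected onto a small
    # basis (principal components of the training profiles) and an
    # ordinary least-squares model is fit on the basis coefficients.
    # Names, shapes, and the synthetic data are illustrative assumptions.

    rng = np.random.default_rng(0)
    n_train, n_time = 200, 50                  # training profiles x samples
    t = np.linspace(0.0, 1.0, n_time)          # normalized flight time

    # Synthetic T/W profiles and a synthetic response standing in for the
    # performance figure a trajectory-optimization run would return.
    profiles = 1.2 + 0.5 * rng.standard_normal((n_train, 1)) * np.sin(np.pi * t)
    response = profiles.mean(axis=1) + 0.1 * rng.standard_normal(n_train)

    # Functional principal components as the regression basis.
    mean_profile = profiles.mean(axis=0)
    centered = profiles - mean_profile
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:5]                             # first 5 components
    scores = centered @ basis.T                # coefficients per profile

    # Linear model on the coefficients: y ~ beta0 + beta . scores
    X = np.column_stack([np.ones(n_train), scores])
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)

    def predict(profile):
        """Predict performance for a new T/W history sampled on t."""
        s = (profile - mean_profile) @ basis.T
        return beta[0] + s @ beta[1:]

    print(predict(1.2 + 0.3 * np.sin(np.pi * t)))

In the thesis setting, the synthetic response would be replaced by performance figures obtained from trajectory-optimization runs over a set of candidate architectures.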


A risk-value-based methodology for enterprise-level decision-making

2017-07-31, Burgaud, Frederic

Despite its long history, aerospace remains a non-commoditized field. To sustain their market dominance, the major companies must commit to large capital investments and constant innovation, in spite of multiple sources of risk and uncertainty and significant chances of failure. This makes aerospace programs particularly risky. However, successful programs more than compensate for the costs of disappointing ones. To maximize the chances of a favorable outcome, a business-driven, multi-objective, and multi-risk approach is needed, with particular attention to financial aspects. Additionally, aerospace programs involve multiple divisions within a company. Besides vehicle design, finance, sales, and production are crucial disciplines with decision power and influence over the outcome of the program. They are also tightly coupled, and the interdependencies among these disciplines should be exploited to unlock as much program-level value potential as possible. An enterprise-level approach should therefore be used. Finally, suborbital tourism programs are well suited as a case study for this research: they are usually run by small companies starting their projects from scratch, so a full enterprise-level analysis is not only necessary but also more easily feasible than for larger groups. These motivations lead to the formulation of the research objective: to establish a methodology that enables informed enterprise-level decision-making under uncertainty and provides higher-value compromise solutions. The research objective can be decomposed into two main directions of study. First, current approaches are usually limited to the design aspect of the program and do not optimize the other disciplines. This ultimately results in a de facto sequential optimization, in which principal-agent problems arise. Instead, a holistic implementation is proposed, enabling an integrated enterprise-level optimization. The second part of the problem deals with decision-making under multiple objectives and multiple risks. Current methods of design under uncertainty are insufficient for this problem. First, they do not provide compelling results when several metrics are targeted. Additionally, variance does not properly fit the definition of risk, as it captures both upside and downside uncertainty. Instead, the deviation of the Conditional Value at Risk (called here the downside deviation) is used as a measure of value risk. Furthermore, objectives are categorized and aggregated into risk and value scores to facilitate convergence, visualization, and decision-making. As suborbital vehicles are complex nonlinear systems, with many infeasible concepts and computationally expensive modeling and simulation (M&S) environments, a time-efficient way to estimate the downside deviation is needed. A new uncertainty propagation structure is therefore used, involving regression and classification neural networks as well as a Second-Order Third-Moment (SOTM) technique to compute statistical moments. The proposed process elements are combined and integrated into a method following a modified Integrated Product and Process Development (IPPD) approach, using four main steps: establishing value, generating alternatives, evaluating alternatives, and making decisions. A new M&S environment is implemented, involving a design framework to which several business disciplines are added. A bottom-up approach is used to study the four research questions of this dissertation.
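
To make the risk measure concrete before the implementation is described, the following is a minimal sketch of one plausible reading of a CVaR-based downside deviation, computed from Monte Carlo samples of a value metric such as NPV; the alpha level and the exact definition are illustrative assumptions and may differ from the thesis's formulation.

    import numpy as np

    # Minimal sketch, not the thesis formulation: one plausible reading
    # of a CVaR-based "downside deviation" for a value metric such as
    # NPV, computed from Monte Carlo samples. The alpha level and the
    # exact definition (mean minus tail expectation) are assumptions.

    def downside_deviation(samples, alpha=0.05):
        """Mean value minus the Conditional Value at Risk, i.e. minus
        the average of the worst alpha fraction of outcomes."""
        samples = np.sort(np.asarray(samples, dtype=float))
        k = max(1, int(np.ceil(alpha * samples.size)))
        cvar = samples[:k].mean()        # expectation over the left tail
        return samples.mean() - cvar     # larger value = more downside risk

    npv_samples = np.random.default_rng(1).normal(100.0, 30.0, 10_000)
    print(downside_deviation(npv_samples))
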
At the lowest level of the implementation, an enhanced financial analysis is evaluated. Common financial valuation methods used in aerospace have heavy limitations: all of them rely on a largely arbitrary discount rate despite its critical impact on the resulting net present value (NPV). The proposed method provides detailed analysis capabilities and helps capture more value by enabling the optimization of the company's capital structure. A sensitivity analysis also verifies the importance of the factors added in the proposed method.

The second implementation step is to evaluate the downside deviation time-efficiently. Regression and classification neural networks are implemented to estimate the base costs of the vehicle and to speed up the vehicle sizing process. The business analyses are already time-efficient and are therefore retained as-is. The neural networks show low validation root-mean-square error (RMSE), confirming their accuracy. The SOTM method is also checked and shows a downside deviation prediction accuracy equivalent to a 750-point Monte Carlo simulation. From a computation-time standpoint, the use of neural networks is required for a reasonable convergence time, and the SOTM technique used jointly with neural networks brings the optimization time below one hour.

The proposed approach for making risk/value trade-offs in the presence of multiple risks and objectives is then tested. First, the importance of using the downside deviation is demonstrated by quantifying the risk estimation error made when the standard deviation is used in its place. The use of risk and value scores also helps decision-making, both qualitatively and quantitatively: it facilitates visualization by supplying a two-dimensional Pareto frontier that can still be colored to reveal program features and cluster patterns, and the formulation with risk and value scores yields better solutions than the non-aggregated case unless very large weighting errors are committed. Finally, the proposed method provides good capabilities for identifying, ranking, and selecting optimal concepts.

The last research question asks whether an enterprise-level approach improves the optimality of the overall program and whether it results in significantly different decision-making. Two elements of the enterprise-level approach are tested: the integrated optimization and the use of additional enterprise-level objectives. In both cases, the resulting Pareto frontiers significantly dominate their counterparts, demonstrating the usefulness of the enterprise-level approach from a quantitative point of view. The results also show that the enterprise-level approach leads to significantly different decisions and should therefore be applied early in the design process. Hence, the method provides the capabilities sought in the research objective. This research contributes to the financial analysis of aerospace programs, to design under multiple sources of uncertainty with multiple objectives, and to design optimization through the adoption of an enterprise-level approach.
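
The moment-computation step above can be illustrated with a generic second-order Taylor propagation that retains input third moments, in the spirit of an SOTM technique. The sketch below assumes independent inputs, estimates derivatives by finite differences, and neglects cross and fourth-moment terms; the thesis's exact formulation may differ.

    import numpy as np

    # Minimal sketch, in the spirit of a Second-Order Third-Moment (SOTM)
    # technique: propagate the mean and variance of Y = f(X) through a
    # second-order Taylor expansion of f, retaining the third moments of
    # the (assumed independent) inputs. Derivatives come from finite
    # differences; cross and fourth-moment terms are neglected.

    def sotm_moments(f, mu, sigma, skew, h=1e-3):
        mu, sigma, skew = (np.asarray(v, dtype=float) for v in (mu, sigma, skew))
        n = mu.size
        f0 = f(mu)
        grad, hess = np.zeros(n), np.zeros(n)  # first and second derivatives
        for i in range(n):
            e = np.zeros(n)
            e[i] = h
            fp, fm = f(mu + e), f(mu - e)
            grad[i] = (fp - fm) / (2.0 * h)
            hess[i] = (fp - 2.0 * f0 + fm) / h**2
        # E[Y] ~ f(mu) + (1/2) sum_i f_ii sigma_i^2
        mean = f0 + 0.5 * np.sum(hess * sigma**2)
        # Var[Y] ~ sum_i f_i^2 sigma_i^2 + sum_i f_i f_ii E[(X_i - mu_i)^3]
        third = skew * sigma**3                # central third moments
        var = np.sum(grad**2 * sigma**2) + np.sum(grad * hess * third)
        return mean, var

    # Hypothetical nonlinear "program value" function of two uncertain inputs.
    value = lambda x: x[0] * x[1] - 0.1 * x[0] ** 2
    print(sotm_moments(value, mu=[10.0, 5.0], sigma=[1.0, 0.5], skew=[0.5, 0.0]))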


Ensemble-averaged dynamics of premixed, turbulent, harmonically excited flames

2017-04-07, Humphrey, Luke

Increasing awareness of the negative impacts of pollutant emissions associated with combustion is driving increasingly stringent regulatory limits. In particular, oxides of nitrogen, generally referred to as NOx, now face strict limits. These restrictions have driven the development of cleaner-burning combustion systems. Because NOx formation increases significantly at elevated temperatures, one way to reduce NOx emissions is to burn the fuel at lower temperatures. By premixing the fuel and oxidizer prior to combustion, significantly lower flame temperatures can be achieved, with corresponding reductions in NOx emissions. Unfortunately, premixed combustion systems are generally more prone to potentially problematic feedback between the unsteady heat release of the flame and unsteady pressure oscillations. This self-excited feedback loop is known as combustion instability. Because these oscillations are associated with unsteady pressure fluctuations, they can degrade system performance, limit operability, and even lead to catastrophic failure. Understanding combustion instability is the primary motivation for the work presented in this thesis. The interaction of quasi-coherent and turbulent flame disturbances changes the spatio-temporal flame dynamics and the turbulent flame speed, yet this interaction is not fully understood. This thesis therefore concentrates on identifying, understanding, and modeling these interactions. Two primary avenues of research are followed: development and validation of a flame position model, and experimental investigation of the predicted sensitivity of the ensemble-averaged flame speed to flame curvature.

First, a reduced order modeling approach for turbulent premixed flames is presented, based on the ensemble-averaged flame governing equation proposed by Shin and Lieuwen (2013). The turbulent modeling method is based on the G-equation approach used in laminar flame position and heat release studies. To capture the dependence of the ensemble-averaged turbulent flame speed on the ensemble-averaged flame curvature, the turbulent flame model incorporates a flame speed closure proposed by Shin and Lieuwen (2013). Application of the G-equation approach in different coordinate systems requires the inclusion of time-varying integration limits when calculating the global flame area; this issue is discussed and the necessary corrections are derived. The reduced order turbulent modeling approach is then validated by comparison with three-dimensional simulations of premixed flames, for both the flame position and the heat release response. The reduced order model is then linearized, allowing the development of fully analytical flame position and heat release expressions. The flame speed closure is shown to capture nonlinear effects associated with kinematic restoration.

Second, the development of and results from a novel experimental facility are described. The facility can subject premixed flames to simultaneous broadband turbulent fluctuations and narrowband coherent fluctuations, the latter introduced on the flame by an oscillating flame holder. Mie scattering images are used to identify the instantaneous flame edge position, while simultaneous high-speed particle image velocimetry (PIV) measurements provide flow field information.
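
Before turning to the results, a minimal sketch may help fix ideas about the kind of model described above: a kinematic, G-equation-style evolution of a flame front anchored at a harmonically oscillating holder, with a curvature-sensitive flame speed closure. The one-dimensional front representation, the closure form, and all parameter values are illustrative assumptions, not the thesis implementation.

    import numpy as np

    # Minimal sketch, not the thesis model: kinematic evolution of a
    # flame front x = f(y, t) anchored at a harmonically oscillating
    # holder, in the spirit of a G-equation / level-set description. The
    # front moves normal to itself at a speed S_T with a
    # curvature-sensitive closure
    #   S_T = S_T0 * (1 - delta_M * kappa),
    # where delta_M plays the role of a Markstein-like length. All
    # parameter values and the forcing form are illustrative assumptions.

    ny = 200
    y = np.linspace(0.0, 1.0, ny)
    dy = y[1] - y[0]
    u_x, S_T0, delta_M = 10.0, 2.0, 0.01    # axial flow, base speed, length
    a, omega = 0.02, 50.0                   # holder amplitude and frequency
    f = np.zeros(ny)                        # initially flat front
    dt, nsteps = 2e-5, 20000

    for n in range(nsteps):
        fy = np.gradient(f, dy)             # front slope df/dy
        fyy = np.gradient(fy, dy)
        kappa = fyy / (1.0 + fy**2) ** 1.5  # curvature of x = f(y)
        S_T = S_T0 * (1.0 - delta_M * kappa)
        # Kinematic relation: df/dt = u_x - S_T * sqrt(1 + (df/dy)^2)
        f = f + dt * (u_x - S_T * np.sqrt(1.0 + fy**2))
        f[0] = a * np.sin(omega * (n + 1) * dt)  # excited flame holder
    # f now holds one realization of the modeled front shape.
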
Results from this experimental investigation include analysis of the ensemble-averaged flame dynamics, the ensemble-averaged turbulent displacement speed, the local ensemble-averaged area and consumption speed, and the dependence of both the displacement and consumption speeds on the ensemble-averaged flame curvature. Finally, the flame speed sensitivity to curvature is quantified through calculation of the normalized turbulent Markstein displacement and consumption numbers. The results show that the amplitude of coherent flame wrinkles generally decreases with both downstream distance and increasing turbulence intensity, providing the first experimental validation of previous isothermal results. The average displacement and consumption speeds increase with downstream distance and turbulence intensity, reflecting the increasingly wrinkled flame surface. The ensemble-averaged, phase-dependent displacement and consumption speeds show clear modulation with the shape of the ensemble-averaged flame; specifically, these turbulent flame speeds increase in regions of negative curvature. For both the displacement and consumption speeds, the magnitude of the normalized turbulent Markstein length increases with the ratio of the turbulent flame wrinkling length to the coherent wrinkling length when u'/S_L0 > 2.5. For u'/S_L0 < 2.5, the trends are less clear due to the presence of convecting disturbances, which introduce additional fine-scale wrinkles on the flame. Together, the results presented in this thesis provide a foundation for modeling turbulent flames in the presence of quasi-coherent disturbances. The flame position can be modeled using the ensemble-averaged governing equation with the dynamical flame speed closure, and the corresponding heat release can be calculated from the turbulent consumption speed closure. The turbulent Markstein numbers and the uncurved flame speed may be extracted from experimental or numerical data.
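
As an illustration of that extraction, the sketch below fits a linear curvature sensitivity to synthetic ensemble-averaged flame speed and curvature samples and normalizes the resulting Markstein-like length; all names, values, and the normalization scale are assumptions rather than the thesis procedure.

    import numpy as np

    # Minimal sketch of the quantification step: fit a linear curvature
    # sensitivity s_T = s_T0 * (1 - L_M * kappa) to ensemble-averaged
    # flame speed and curvature samples (synthetic placeholders here for
    # the experimental data), then normalize the Markstein-like length
    # L_M by an assumed flame length scale.

    rng = np.random.default_rng(2)
    kappa = rng.uniform(-40.0, 40.0, 500)       # 1/m, ensemble-averaged
    s0_true, L_true = 2.0, 0.004                # m/s, m (synthetic truth)
    s_T = s0_true * (1.0 - L_true * kappa) + 0.05 * rng.standard_normal(500)

    slope, s_T0 = np.polyfit(kappa, s_T, 1)     # s_T ~ s_T0 + slope * kappa
    L_M = -slope / s_T0                         # Markstein-like length, m
    delta_f = 1e-3                              # assumed flame scale, m
    print(f"s_T0 = {s_T0:.2f} m/s, normalized Markstein number = {L_M / delta_f:.2f}")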