Organizational Unit:
Daniel Guggenheim School of Aerospace Engineering


Publication Search Results

Showing 1–10 of 14

A methodology for risk-informed launch vehicle architecture selection

2017-11-13, Edwards, Stephen James

Modern society in the 21st century has become inseparably dependent on human mastery of the near-Earth regions of space. Billions of dollars in on-orbit assets provide a set of fundamental, requisite services to domains as diverse as telecom, the military, banking, and transportation. While orbiting satellites provide these services, launch vehicles (LVs) are unquestionably the most critical piece of infrastructure in the space economy value chain. The past decade has seen a significant level of activity in LV development, including some fundamental changes to the industry landscape. Every space-faring nation is engaged in new program developments; most notable, however, is the surge in commercial investment and development efforts, spurred by a combination of private investments by wealthy individuals, new government policies and acquisition strategies, and the increased competition that has resulted from both. In all of today's LV programs, affordability is acknowledged as the single biggest objective. Governments seek assured access to space that can be realized within constrained budgets, and commercial entities vie for survival, profitability, and market share. The literature makes clear that the biggest opportunity for affecting affordability resides in improving decision-making early in the design process. However, a review of historical LV architecture studies shows that very little has changed over the past 50 years in how early architecting decisions are analyzed. In particular, architecture analyses of alternatives are still conducted deterministically, despite uncertainty being at its highest in the very early stages of design. This thesis argues that the "design freedom" that exists early on manifests itself as volitional uncertainty during the LV architect's deliberation, motivating the objective statement "to develop a methodology for enabling risk-informed decision making during the architecture selection phase of LV programs." NASA's Risk-Informed Decision Making process is analyzed with respect to the particulars of the LV architecture selection problem. The most significant challenge is found to be LV performance modeling via trajectory optimization, which is not well suited to probabilistic analysis. To overcome this challenge, an empirical modeling approach is proposed. This in turn introduces the challenge of generalizing the empirical model, since creating a distinct performance model for every architecture concept under consideration is infeasible. A review of the main drivers of LV trajectory performance shows that the thrust-to-weight (T/W) profile is not only among the most sensitive parameters but is, in its true form, a functional. Because the T/W profile drives performance and, in its infinite-dimensional form, offers a common basis for representing diverse architectures, functional regression techniques are proposed as a means of constructing an architecture-spanning empirical performance model. A number of techniques are formulated and tested, and prove capable of supporting LV performance modeling for risk-informed architecture selection.
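The functional-regression idea above can be sketched concretely. Below is a minimal functional principal components regression on a discretized T/W profile; the data, basis size, and response model are hypothetical stand-ins, not the thesis's formulation.

```python
# Sketch: functional principal components regression (FPCR) of launch-vehicle
# performance on a discretized thrust-to-weight (T/W) time profile.
# All data are synthetic; the thesis's actual technique may differ.
import numpy as np

rng = np.random.default_rng(0)
n_vehicles, n_times = 200, 50
t = np.linspace(0.0, 1.0, n_times)              # normalized flight time

# Synthetic T/W profiles: initial T/W plus a ramp, per architecture sample
tw0 = rng.uniform(1.2, 1.6, n_vehicles)
ramp = rng.uniform(0.5, 2.0, n_vehicles)
X = tw0[:, None] + ramp[:, None] * t[None, :]   # (n_vehicles, n_times)

# Toy performance response driven by the whole profile, plus noise
y = 0.03 * X.mean(axis=1) - 0.01 * X[:, 0] + rng.normal(0, 1e-3, n_vehicles)

# FPCR: project centered profiles onto leading principal components,
# then fit ordinary least squares on the scores.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                           # retained functional modes
scores = Xc @ Vt[:k].T                          # (n_vehicles, k)
A = np.column_stack([np.ones(n_vehicles), scores])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

y_hat = A @ beta
print("R^2 =", 1 - np.var(y - y_hat) / np.var(y))
```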


A risk-value-based methodology for enterprise-level decision-making

2017-07-31, Burgaud, Frederic

Despite its long history, aerospace remains a non-commoditized field. To sustain their market dominance, the major companies must commit to large capital investments and constant innovation, in spite of multiple sources of risk and uncertainty and significant chances of failure. This makes aerospace programs particularly risky; however, successful programs more than compensate for the costs of disappointing ones. To maximize the chances of a favorable outcome, a business-driven, multi-objective, multi-risk approach is needed, with particular attention to financial aspects. Additionally, aerospace programs involve multiple divisions within a company. Besides vehicle design, finance, sales, and production are crucial disciplines with decision power and influence on the outcome of the program. They are also tightly coupled, and the interdependencies among these disciplines should be exploited to unlock as much program-level value as possible. An enterprise-level approach should therefore be used. Finally, suborbital tourism programs are well suited as a case study for this research: they are usually undertaken by small companies starting their projects from scratch, so a full enterprise-level analysis is not only necessary but also more easily feasible than for larger groups. These motivations lead to the formulation of the research objective: to establish a methodology that enables informed enterprise-level decision-making under uncertainty and provides higher-value compromise solutions. The research objective can be decomposed into two main directions of study. First, current approaches are usually limited to the design aspect of the program and do not optimize the other disciplines. This ultimately results in a de facto sequential optimization in which principal-agent problems arise. Instead, a holistic implementation is proposed, enabling an integrated enterprise-level optimization. The second part of the problem deals with decision-making under multiple objectives and multiple risks. Current methods of design under uncertainty are insufficient for this problem. First, they do not provide compelling results when several metrics are targeted. Additionally, variance does not properly fit the definition of risk, as it captures both upside and downside uncertainty. Instead, the deviation of the Conditional Value at Risk (called here the downside deviation) is used as the measure of value risk. Furthermore, objectives are categorized and aggregated into risk and value scores to facilitate convergence, visualization, and decision-making. Because suborbital vehicles are complex, non-linear systems with many infeasible concepts and computationally expensive M&S environments, a time-efficient way to estimate the downside deviation is needed. As such, a new uncertainty propagation structure is used that involves regression and classification neural networks, as well as a Second-Order Third-Moment (SOTM) technique to compute statistical moments. The proposed process elements are combined and integrated into a method following a modified Integrated Product and Process Development (IPPD) approach with four main steps: establishing value, generating alternatives, evaluating alternatives, and making decisions. A new M&S environment is implemented, consisting of a design framework to which several business disciplines are added. A bottom-up approach is used to study the four research questions of this dissertation.
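The downside-deviation risk measure defined above (the deviation of the Conditional Value at Risk) admits a compact sample-based sketch. The 95% tail level and the NPV distribution below are assumptions for illustration only:

```python
# Sketch: downside deviation as the gap between expected value and CVaR,
# contrasted with the symmetric standard deviation. Sample-based estimate;
# the alpha level and NPV samples are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
npv = rng.normal(100.0, 30.0, 100_000)        # hypothetical program NPV samples

alpha = 0.95                                  # tail probability level (assumed)
var_level = np.quantile(npv, 1 - alpha)       # Value at Risk threshold
cvar = npv[npv <= var_level].mean()           # mean of the worst (1-alpha) tail
downside_dev = npv.mean() - cvar              # the text's "downside deviation"

print(f"std dev           : {npv.std():.2f}")      # penalizes upside too
print(f"CVaR_{alpha:.0%}          : {cvar:.2f}")
print(f"downside deviation: {downside_dev:.2f}")   # penalizes only the tail
```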
At the lowest level of the implementation, an enhanced financial analysis is evaluated. Common financial valuation methods used in aerospace have significant limitations: all of them rely on an essentially arbitrary discount rate despite its critical impact on the resulting NPV. The proposed method provides detailed analysis capabilities and helps capture more value by enabling the optimization of the company's capital structure. A sensitivity analysis also verifies the importance of the factors added in the proposed method. The second implementation step is to evaluate the downside deviation time-efficiently. To this end, regression and classification neural networks are implemented to estimate the base costs of the vehicle and speed up the vehicle sizing process. Business analyses are already time-efficient and are therefore retained as-is. The neural networks ultimately show low validation root-mean-square error (RMSE), confirming their accuracy. The SOTM method is also checked and shows a downside deviation prediction accuracy equivalent to a 750-point Monte Carlo method. From a computation time standpoint, the use of neural networks is required for a reasonable convergence time, and the SOTM technique used jointly with neural networks yields an optimization time below one hour. The proposed approach for making risk/value trade-offs in the presence of multiple risks and objectives is then tested. First, the importance of using the downside deviation is demonstrated by quantifying the risk estimation error made when using the standard deviation rather than the actual downside deviation. Additionally, the use of risk and value scores aids decision-making both qualitatively and quantitatively: it facilitates visualization by supplying a two-dimensional Pareto frontier, which can still be colored to observe program features and cluster patterns. Furthermore, the formulation with risk and value scores yields more optimal solutions than the non-aggregated case unless very large weighting errors are committed. Finally, the proposed method provides good capabilities for identifying, ranking, and selecting optimal concepts. The last research question asks whether an enterprise-level approach improves the optimality of the overall program and whether it results in significantly different decision-making. Two elements of the enterprise-level approach are tested: the integrated optimization and the use of additional enterprise-level objectives. In both cases, the resulting Pareto frontiers significantly dominate their counterparts, demonstrating the usefulness of the enterprise-level approach from a quantitative point of view. The analysis also shows that the enterprise-level approach results in significantly different decisions and should therefore be applied early in the design process. Hence, the method provides the capabilities sought in the research objective. This research contributes to the financial analysis of aerospace programs, to design under multiple sources of uncertainty with multiple objectives, and to design optimization through the adoption of an enterprise-level approach.
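As a minimal illustration of the discount-rate sensitivity criticized above, the sketch below discounts a hypothetical program cash-flow profile at three flat rates; the thesis's financial model, with capital-structure optimization, is far richer:

```python
# Sketch: NPV sensitivity to the discount rate for a hypothetical program
# with heavy up-front development cost and later positive cash flows.
import numpy as np

def npv(cash_flows, rate):
    """Discount yearly cash flows (year 0 first) at a flat rate."""
    years = np.arange(len(cash_flows))
    return float(np.sum(np.asarray(cash_flows) / (1.0 + rate) ** years))

flows = [-500.0, -200.0, 50.0, 150.0, 250.0, 300.0, 300.0]  # $M, hypothetical
for r in (0.05, 0.10, 0.15):
    print(f"rate {r:.0%}: NPV = {npv(flows, r):7.1f} $M")
# A few points of discount-rate swing move the program between viable and
# not, which is why an arbitrary rate choice is problematic.
```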


Hierarchical finite element method for the prognostic analysis of structural health monitoring

2017-05-23, Park, Youngchul

The structural design of vehicles has become lighter yet stronger thanks to new materials and more precise analysis of structural safety. For novel lightweight aerospace structures, manufacturers design their aircraft to damage-tolerance regulations, which assume the existence of cracks and require the structure to sustain defects until periodic maintenance. However, design regulations address only the likelihood of cracks under a standard operating history, regardless of the current condition of an individual aircraft. Since each aircraft operates under unique conditions, structural health monitoring seeks to identify defects as early as possible and to take corrective action early, minimizing operating and maintenance costs. Structural health monitoring examines the current state of an individual aircraft and simulates crack propagation under various conditions through a computational model referred to as the digital twin, which represents the actual aircraft and plays a key role in reliable simulation of crack propagation. Since defects and cracks are minuscule, the digital twin's finite element model requires an extremely fine mesh and hence enormous computation time, so a general finite element method is not suitable for analyzing the behavior of micro cracks. Therefore, this study proposes a new methodology, the hierarchical finite element method (HFEM), to solve the problems of adaptable mesh size and computation time. The HFEM first builds connections among hierarchical models in a pre-processing stage and then transfers forces to those models in a post-processing stage. In the pre-processing stage, finite element models are categorized into three levels: a micro-level, a base-level, and a system-level model. The system-level model is the entire structural system at an appropriate element size. The HFEM partitions the component elements of the system-level model, of various shapes and sizes, into k clusters using a K-means clustering algorithm. Each cluster center is a candidate for the base-level model, which conducts the crack simulation with fine meshes. Each base-level candidate has a stress distribution map generated by six components of unit loads. In the case of composite materials, the stress map contains additional information, amplification factors, obtained from the micro-level model by applying six components of unit loads to the candidate unit cells, which depend on the fiber array type. Post-processing is the prognostic simulation of the digital twin under actual aerodynamic loads to predict the remaining life based on crack propagation analysis. This study focuses on the prognosis of crack propagation in the base-level model of the HFEM. Each constituent element in the base-level model behaves individually according to cellular automata rules, an effective way of simulating complex systems without central control. The simulation evolves to a stable or unstable state according to simple local operating rules and neighbor effects. This study investigates local rules and neighbor effects for crack propagation and employs them in the HFEM crack simulation. The crack simulation advances by treating an element as a dead cell once its residual strength, calculated from fracture mechanics, reaches a critical factor.
A mathematical crack-closure model determines the critical factors from the size of the influence region and the effect of neighboring elements; the former also determines an element's life cycle. In the virtual simulation, the digital twin, fed cyclic loads from virtual sensors, analyzes the remaining life of the aircraft. However, the remaining life cannot be predicted from currently acquired cyclic loads alone, because the prediction requires future cyclic loads. Therefore, this study proposes an inverse Transport Wing Standard (TWIST) method to forecast the future time series of cyclic loads from current loads. TWIST is a spectrum-generation method developed to standardize the stress spectrum of transport aircraft using historical fatigue-life data. In this study, TWIST is applied in reverse: data acquired from sensors, treated as samples of the standard spectrum, are used to generate predicted historical data. The digital twin, equipped with the stress spectrum of this predicted historical data, thus enables the prognosis of remaining life based on accumulated data. The affordability of the HFEM for remaining-life prediction marks a turning point from schedule-based inspections to demand-based inspections. Further, the HFEM enables complex decision-making for a damaged aircraft component with reliable information about its current condition.
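The K-means partitioning step above lends itself to a short sketch. Below, system-level elements are grouped by hypothetical geometric features (thickness, aspect ratio, and area are stand-ins; the thesis's feature set is not specified), and the element nearest each cluster center becomes a base-level candidate:

```python
# Sketch: cluster system-level finite elements by geometric features and
# take the element nearest each cluster center as a base-level candidate.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
features = np.column_stack([
    rng.uniform(1.0, 6.0, 5000),     # element thickness [mm]
    rng.uniform(0.5, 4.0, 5000),     # aspect ratio
    rng.uniform(10.0, 400.0, 5000),  # area [mm^2]
])

k = 8                                # number of base-level candidates (assumed)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)

# Nearest real element to each center becomes a base-level model candidate.
dists = np.linalg.norm(features[:, None, :] - km.cluster_centers_[None], axis=2)
candidates = dists.argmin(axis=0)
print("base-level candidate element ids:", candidates)
```

The dead-cell rule can likewise be sketched as a toy cellular automaton: a cell fails when local stress, amplified by dead neighbors, reaches its decaying residual strength. All constants are illustrative, not the thesis's calibrated crack-closure model:

```python
# Sketch: cellular-automaton crack growth on an element grid. A cell dies
# when the stress-to-residual-strength ratio reaches a critical factor;
# each dead neighbor amplifies local stress. Periodic edges via np.roll
# are a simplification.
import numpy as np

rng = np.random.default_rng(3)
n = 40
strength = rng.uniform(0.8, 1.2, (n, n))     # residual strength per element
alive = np.ones((n, n), dtype=bool)
alive[n // 2, n // 2] = False                # initial flaw (seed crack)

base_stress, amp, critical = 0.6, 0.25, 1.0  # illustrative parameters
for cycle in range(100):
    dead = ~alive
    nbrs = (np.roll(dead, 1, 0) + np.roll(dead, -1, 0) +
            np.roll(dead, 1, 1) + np.roll(dead, -1, 1)).astype(float)
    stress = base_stress * (1.0 + amp * nbrs)   # dead neighbors raise stress
    strength -= 0.003 * stress                  # fatigue-like strength decay
    alive &= (stress / strength) < critical     # kill cells at the criterion

print("failed elements after 100 cycles:", int((~alive).sum()))
```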


Formulation of an uncertainty based methodology for advanced technology performance prediction

2017-04-06, Schwartz, Henry D.

Challenges within the aviation industry stem from interdependencies between environmental goals that force engineers to make trade-offs among them. When faced with multi-objective problems like these, engineers and decision makers need the ability to rapidly understand how changing one variable affects all the objectives simultaneously. A key enabler is a credible performance estimation tool that can parametrically explore large areas of the design space. To be credible, the tool must include a traceable and transparent prediction of the uncertainty throughout the space. This enables engineers and decision makers to explore the design space parametrically while understanding the confidence level of each prediction. Additionally, by quantifying the level of uncertainty throughout the design space, decision makers can allocate resources for experimentation more efficiently, applying them where uncertainty is high. Creating a modeling environment for an advanced concept is challenging because a large amount of data is needed, and such data is difficult to obtain for advanced concepts. High-order computational models and physical experiments are used sparingly in the early phases of design; lower-order methods are fast and inexpensive but lack credibility. One way of decreasing the computational effort and time associated with high-fidelity simulations is to use multifidelity methods, which combine information from disparate data sources at multiple fidelity levels: low-fidelity methods are run throughout large areas of the design space and then augmented with sparse high-fidelity data to create a more accurate model. Therefore, the research objective of this thesis is to develop a methodology that characterizes the uncertainty throughout the design space based on the relative location of the desired design to the high-fidelity designs, given uncertainty distributions from multiple data sources. Bayesian model averaging is a common multifidelity method for synthesizing probabilistic data sets. However, it does not work well with sparse data sets, because a correction surrogate and a likelihood surrogate must both be generated, which requires large amounts of high-fidelity data. The method presented in this research instead uses a proximity-based biasing process to combine the data sets, which does not require two separate surrogates. A Monte Carlo method is then used to propagate the uncertainty throughout the entire design space. Comparisons are made between the method presented here and Bayesian model averaging for the prediction of the lift coefficient of a wing section. The results show that the level of inferred uncertainty from Bayesian model averaging is approximately 20% higher than that of the proposed method. In addition, the method is applied to the performance of Hamilton Standard propellers to demonstrate it on a representative real-world problem.
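The proximity-based biasing process is not detailed in the abstract; one plausible reading, sketched below, corrects a low-fidelity model by distance-weighting the discrepancies observed at sparse high-fidelity points. The Gaussian kernel, bandwidth, and both models are assumptions:

```python
# Sketch: distance-weighted correction of a cheap low-fidelity model using
# sparse high-fidelity samples. Kernel and bandwidth are assumed; the
# thesis's proximity-biasing scheme may differ.
import numpy as np

def lofi(x):                                 # cheap, biased model (hypothetical)
    return np.sin(6 * x)

def hifi(x):                                 # expensive truth (hypothetical)
    return np.sin(6 * x) + 0.3 * x

x_hi = np.array([0.1, 0.5, 0.9])             # sparse high-fidelity sites
delta = hifi(x_hi) - lofi(x_hi)              # observed corrections

def corrected(x, h=0.2):
    """Blend low-fi prediction with proximity-weighted corrections."""
    w = np.exp(-((x[:, None] - x_hi[None, :]) / h) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return lofi(x) + w @ delta

xs = np.linspace(0, 1, 5)
print(np.abs(corrected(xs) - hifi(xs)))      # error shrinks near x_hi
```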


Integrated architecture analysis and technology evaluation for systems of systems modeled at the subsystem level

2017-11-13, Trent, Douglas James

A lack of knowledge during conceptual design results in two primary challenges: overruns in cost and schedule due to frequent design changes and combinatorial explosion of alternatives due to large, discrete categorical design spaces. Due to the significant impact subsystem-level technologies have on the cost and schedule of a design, they should be considered during the conceptual design of systems of systems in an effort to reduce this lack of knowledge. To integrate architecture analysis and technology evaluation at the subsystem level, several questions and hypotheses are posed during a discussion of a general concept exploration process to guide the development of a new framework. The Dynamic Rocket Equation Tool (DYREQT) and a collection of subsystem-level in-space transportation models were developed to provide a modeling and simulation environment capable of producing the necessary data for experimentation. DYREQT provides the capability to integrate user-developed subsystem models for space transportation architecture analysis and design. Results from the experiments led to conclusions which guided the definition of the Integrated Architecture and Technology Exploration (IntegrATE) framework. This new framework enables integrated architecture analysis and technology evaluation at the subsystem level in an effort to increase design knowledge during the conceptual design process. IntegrATE provides flexibility such that it can be tailored to a wide range of problems. It also provides a high degree of transparency throughout to help reduce the likelihood of bias towards individual architectures or technologies. Finally, the IntegrATE framework and DYREQT were demonstrated on a notional manned Mars 2033 design study to highlight the utility of these new developments.
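DYREQT's internals are not given here, but the computation at the core of any rocket-equation-based architecture tool can be sketched: a staged Tsiolkovsky evaluation, sized from the top stage down. Stage data below are hypothetical:

```python
# Sketch: staged Tsiolkovsky rocket equation, the kind of evaluation a
# rocket-equation-based architecture tool performs. Stage data hypothetical.
import math

G0 = 9.80665  # standard gravity [m/s^2]

def stage_dv(isp_s, m0_kg, mf_kg):
    """Delta-v of one stage: Isp * g0 * ln(m0 / mf)."""
    return isp_s * G0 * math.log(m0_kg / mf_kg)

# (Isp [s], propellant mass [kg], dry mass [kg]) per stage, bottom-up
stages = [(310.0, 120_000.0, 9_000.0), (452.0, 28_000.0, 3_500.0)]
payload = 4_000.0

dv_total, m_above = 0.0, payload
for isp, m_prop, m_dry in reversed(stages):  # size from the top stage down
    mf = m_above + m_dry                     # burnout mass of this stage
    m0 = mf + m_prop                         # ignition mass of this stage
    dv_total += stage_dv(isp, m0, mf)
    m_above = m0                             # becomes payload of stage below

print(f"architecture delta-v: {dv_total:.0f} m/s")
```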


Methodology for interoperability-enabled adaptable strategic fleet mix planning

2017-07-28, Bernstein, Shai

Fleet sizing and mix problems, together with fleet mix scheduling problems, are frequently used to plan acquisitions over multiple time periods. However, no fleet sizing and mix study has assessed the impact of interoperability between assets on the fleet purchasing decision. A challenge to investigating this question is the inability of many such methods to address a more generalized fleet planning problem: one in which fleets contain multi-mission assets, and in which mission modeling at the operational level involves asset numbers orders of magnitude lower than at the strategic acquisition level. Furthermore, in strategic decision-making environments characterized as volatile, uncertain, complex, and ambiguous, prior approaches frequently do not consider whether a decision set is adaptable to changing mission or budget priorities. In this work, a methodology is created to enable investigation of the effects of interoperability on the fleet purchasing decision by first addressing the gaps in prior methods. A fleet scaling method is developed to bridge the gap between operational-level missions and strategic-level fleets in a computationally inexpensive way. Next, a discussion of how best to capture the trade-offs associated with the adaptability of fleet plans leads to the adaptation of a method from the decision-theory literature. Finally, requirements and criteria for capturing the effects of interoperability modeling are created. This methodology, which serves as an initial framework for assessing this large problem, is instantiated with existing methods where possible to show that it does indeed enable the desired investigation. A sample case study based on two World War II operations forms the basis of the multi-mission analysis and walks through each step of the methodology. The sample study weighs the effect of interoperability on fleet plan adaptability against the effects of other asset design variables, the number of assets, and fleet cost. Interoperability is shown to have a large enough effect in this simple example to justify including it in future assessments of fleet plan adaptability. Furthermore, the usefulness of a unified fleet plan adaptability methodology that can account for asset requirements, mission capability, and budget and mission preference uncertainty is demonstrated.
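The abstract does not state the underlying optimization, but a common core of fleet sizing-and-mix formulations is a cost-minimizing program under mission-capability coverage, sketched below as a continuous relaxation; all asset data are hypothetical, and interoperability effects would enter through the capability matrix:

```python
# Sketch: a core fleet sizing-and-mix LP: choose asset counts minimizing
# cost while covering required capability per mission type. All numbers
# hypothetical; interoperability would modify the capability matrix.
import numpy as np
from scipy.optimize import linprog

cost = np.array([50.0, 80.0, 30.0])          # unit cost per asset type [$M]
# capability[i, j]: capability of asset type j toward mission i
capability = np.array([[2.0, 5.0, 0.0],      # strike
                       [1.0, 2.0, 4.0],      # transport
                       [3.0, 1.0, 1.0]])     # patrol
required = np.array([40.0, 60.0, 50.0])      # per-mission requirement

# linprog minimizes c @ x s.t. A_ub @ x <= b_ub, so coverage >= required
# is written as -capability @ x <= -required.
res = linprog(c=cost, A_ub=-capability, b_ub=-required,
              bounds=[(0, None)] * 3, method="highs")
print("fleet mix (continuous relaxation):", res.x.round(2))
print(f"cost: {res.fun:.1f} $M")
```

An integer-programming variant would round these counts properly; the relaxation is kept here for brevity.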


An uncertainty quantification and management methodology to support rework decisions in multifidelity aeroelastic load cycles

2017-04-07, Johnson, Brandon James

Cost overruns and schedule delays have plagued almost all major aerospace development programs and have resulted in billions of dollars lost. Design rework has contributed to these problems, and one approach to mitigating this risk is reducing uncertainty. Failing to meet requirements during flight test triggers one of the most significant and costly rework efforts. This type of rework is referred to as major rework, and the main purpose of this thesis is to reduce its risk by improving the loads analysis process. Loads analysis is a crucial part of the design process for aerospace vehicles. Its main objective is to determine the worst-case loading conditions that will realistically be experienced in normal and abnormal flight operations; these conditions are called critical loads. With this information, a structure is designed and optimized to withstand such loads and to certify the design. A review of the current approach to loads analysis reveals shortcomings related to uncertainty and the allocation of load and structural margins. The fields of uncertainty quantification and uncertainty management were chosen to address these limitations, and a framework is proposed to support rework decisions in loads analysis. Key aspects of the framework include a Bayesian network for modeling the loads process and for propagating various uncertainty sources to the system response. Bayesian-based resource allocation optimization is another key aspect, used to reduce and manage uncertainty. The goal of the framework is to determine the optimal trade-offs between aerodynamic fidelity and margin allocation that minimize the risk of major rework while considering their respective costs under a finite budget. The costs assigned to fidelity and margins are intended to reflect the user's prioritization of uncertainty, computational cost, and performance degradation through weight penalties. The demonstration model is the undeformed Common Research Model (uCRM) wing, which is representative of a transonic wide-body commercial transport. The multidisciplinary modeling and simulation environment is anchored in three software programs: NASCART-GT for computational fluid dynamics; NASTRAN for doublet-lattice-method aerodynamics, structural analysis, and aeroelastic analysis; and HyperSizer for failure analysis and structural optimization. Four experiments were conducted: epistemic uncertainty quantification; uncertainty propagation and sensitivity analysis via the Bayesian network; development of a resource-allocation-based uncertainty management system for loads analysis; and optimization and evaluation of the overall framework against seven design scenarios, exploring a decision maker's varying priorities, and against a baseline model representing the current approach. Key findings reveal that the required structural margins are the dominant factor in reducing the risk of rework, but that aerodynamic fidelity and load margin are important for balancing performance and uncertainty when considering financial implications within a finite budget. The contributions of this thesis to the aerospace engineering community include: an integrated modeling and simulation environment for the loads analysis process and structural design; a novel application and development of a Bayesian network for efficient uncertainty modeling and propagation; and a viable cost-based uncertainty management system for loads analysis, among others.
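As a toy illustration of the kind of forward propagation the Bayesian network performs, the sketch below pushes aero and material uncertainty through a one-path loads chain to a rework-trigger probability. The distributions, load-to-stress mapping, and margin policy are all hypothetical:

```python
# Sketch: Monte Carlo propagation through a toy loads chain
# (aero uncertainty -> load -> stress -> margin) and the resulting
# probability of exceeding the allowable, i.e. a rework trigger.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

cl = rng.normal(0.55, 0.03, n)        # lift coefficient (aero fidelity sets sd)
q = rng.normal(24_000.0, 1_500.0, n)  # dynamic pressure [Pa]
s_ref = 383.7                         # reference area [m^2], hypothetical
load = 2.5 * cl * q * s_ref           # 2.5 g maneuver load [N]

stress = load / 2.2e6                 # toy load-to-stress mapping
allowable = rng.normal(6.6, 0.25, n)  # allowable with material scatter
margin_factor = 1.05                  # structural margin policy (assumed)

fails = stress * margin_factor > allowable
print(f"P(rework trigger) = {fails.mean():.4f}")
# Raising margin_factor cuts risk but adds weight; raising aero fidelity
# shrinks the cl spread -- the trade such a framework optimizes.
```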


A deep learning and parallel simulation methodology for air traffic management

2017-08-24, Kim, Young Jin

Air traffic management is widely studied in several different fields because of its complexity and its criticality to a variety of stakeholders, including passengers, airlines, regulatory agencies, and air traffic controllers. However, the exploding amount of air traffic in recent years has created new challenges for effective management of the airspace. A fast-time simulation capability with high accuracy is essential for exploring the consequences of decisions from the airspace design phase through the air traffic management phase. In this thesis, two key components for enabling intelligent decision support are proposed and studied. First, to accelerate fast-time simulations, a time-parallel simulation approach is studied and applied to air traffic network simulation, in addition to exploiting spatial parallel simulation. This approach splits the simulation time axis into intervals and simulates the intervals concurrently, potentially achieving a high level of parallelism. It requires a way to ensure that the distributed simulation accounts for dependencies across time periods, and a methodology to address this issue is proposed. The proposed time-parallel algorithm works seamlessly with the spatial parallel approach: the synchronization algorithm used for the spatial parallel simulation is integrated with the time-parallel simulation algorithm. An efficient algorithm spanning these aspects of the distributed simulation is proposed and implemented, tested in a variety of scenarios, and shown to balance temporal and spatial parallelism to improve speedup. Second, to predict future scenarios more accurately, appropriate input values must be fed to the simulation program. These inputs can be acquired by statistically learning patterns in historical data. Recent improvements in machine learning and artificial intelligence research enable accurate prediction of the future state variables of the air traffic network system. A recurrent neural network can effectively model sequential state variables; accordingly, a recurrent neural network approach is proposed for modeling the input of each simulation scenario. Using a large amount of historical flight and weather data, the proposed recurrent neural network learns parameters that predict the future status of airports in the National Airspace System (NAS). In particular, future daily airport capacity is a key input variable for the NAS simulation model, and the proposed model is trained to predict it accurately. Based on real-world air traffic data, the improvements in the performance and accuracy of both techniques are investigated and presented. The proposed approaches show significant improvements in supporting air traffic management decision-making.
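The time-parallel idea can be sketched with a Parareal-style fix-up loop: windows are simulated concurrently from guessed start states, and boundary mismatches are repaired over sweeps. The toy recursion and thread-based parallelism below are illustrative; the thesis's synchronization algorithm is more involved:

```python
# Sketch: time-parallel simulation of a toy state recursion. Windows run
# concurrently from guessed start states; boundary mismatches are repaired
# iteratively until consecutive windows agree. Illustrative only.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def simulate(window_id, x0, steps=250):
    """Serial simulation of one time window from initial state x0.
    (window_id would select that window's inputs in a real model.)"""
    x = x0
    for _ in range(steps):
        x = 0.995 * x + 1.0          # toy airport-queue-like recursion
    return x

n_windows = 8
guesses = np.zeros(n_windows)        # guessed initial state per window

for sweep in range(10):
    with ThreadPoolExecutor() as pool:
        finals = list(pool.map(simulate, range(n_windows), guesses))
    # Window i should start where window i-1 ended; window 0 starts at 0.
    new = np.concatenate(([0.0], finals[:-1]))
    if np.allclose(new, guesses):
        print(f"boundary states consistent after {sweep + 1} sweeps")
        break
    guesses = new

print("state at end of horizon:", finals[-1])
```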


Quantifying the impacts of vehicle technologies and operational improvements on air transportation system performance

2017-05-31, Hassan, Mohammed

Over the past decades, passenger demand for air transportation has grown steadily, and aviation forecasts predict continued growth, possibly at higher rates. The consequent rise in the number of flights would undoubtedly increase fuel consumption, emissions, and airport noise levels, environmental effects that regulatory bodies have been striving to limit. Among the solutions considered by the aviation industry to mitigate the adverse environmental impacts of demand growth are vehicle technologies and operational improvements: the former enhance aircraft vehicle-level performance, while the latter seek both vehicle-level and system-level enhancements. The primary research objective of this thesis is to provide a methodological framework that incorporates both vehicle technologies and operational improvements in order to evaluate their projected impacts on air transportation system performance. Both technological and operational solutions have been investigated in the past, but only independently: the interdependencies between the two have largely been considered insignificant and thus disregarded. Consequently, to compute the total impact of implementing both solutions, current assessments analyze them independently and simply sum the individual contributions. This thesis focuses on the interdependencies between vehicle technologies and operational improvements and argues that 1) they should not generally be disregarded in performance evaluations of the aviation system, and 2) they can be exploited to further enhance system performance. These two arguments are posed as a single hypothesis, which is tested using the methodological framework. The thesis makes two main contributions. The first is an all-encompassing capability that evaluates system-level performance with reasonable accuracy and manageable uncertainty, better informing stakeholders and policy makers about the potential system-level impacts of various technological and operational solutions and potentially influencing future investment and resource allocation decisions. The second is a test of the commonly accepted assumption that technologies and operations are independent. The thesis argues that independence should not generally be assumed; technologies and operations need to be considered simultaneously in order to account for their interdependencies.
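The independence assumption under test reduces to a one-line check: additivity holds only if the interaction term vanishes. The sketch below shows how multiplicative physics breaks additivity; the fuel-burn model and numbers are hypothetical:

```python
# Sketch: testing additivity of a vehicle technology (tech) and an
# operational improvement (ops). Independence holds only if the
# interaction term is ~0. Model and numbers are hypothetical.

def fuel_burn(tech=False, ops=False):
    base = 100.0                           # baseline mission fuel [arb. units]
    drag_factor = 0.92 if tech else 1.0    # tech cuts drag-related burn 8%
    route_factor = 0.95 if ops else 1.0    # ops shortens the route 5%
    # Multiplicative physics: savings compound, so effects are not additive.
    return base * drag_factor * route_factor

d_tech = fuel_burn() - fuel_burn(tech=True)            # 8.0
d_ops = fuel_burn() - fuel_burn(ops=True)              # 5.0
d_both = fuel_burn() - fuel_burn(tech=True, ops=True)  # 12.6

print(f"sum of individual savings: {d_tech + d_ops:.1f}")
print(f"actual combined savings  : {d_both:.1f}")
print(f"interaction term         : {d_both - (d_tech + d_ops):.1f}")  # -0.4
```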


A methodology for structural technology performance characterization to enable reduction of structural uncertainty

2017-04-06, Corman, Jason A.

Government programs have been established to identify solutions that will significantly reduce the impact of aviation on the environment in the coming generations. Airframe structural technologies were identified as a category of potential solutions for meeting environmental goals in the N+2 timeframe, and these technologies are assessed and compared by their ability to reduce airframe structural weight. The benchmark approach for characterizing structural technology weight reduction, i.e., performance, uses a medium-to-high-fidelity, physics-based structural weight estimation approach. However, that approach considered only a single conceptual design point, or outer mold line (OML), and a single structural layout when comparing the structural technology to baseline, state-of-the-art structure. It was hypothesized that treating weight-reduction performance as a scalar, neglecting its functional relationship with the OML and structural layout, is a significant source of epistemic uncertainty. This uncertainty introduces risk into technology selection and implementation in conceptual design, as well as into experiment design for structural technology development and demonstration. A significant effort is required to estimate structural technology performance as a function of design spaces rather than at a single design point. This thesis presents a repeatable, traceable methodology to characterize the functional performance relationship within a tollgate framework that mitigates the effort expended. Experiments demonstrate the approach on a test case, the Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) technology, examining performance across the technology-level, structural-layout-level, and OML-level design spaces. The impact of the performance relationship on 1) technology selection, 2) technology implementation in conceptual design, and 3) experiment design for technology development was also assessed against the benchmark scalar approach. The ability to quantify the performance function using this methodology for the PRSEUS test case presents a significant advantage over the benchmark for these applications: error distributions for treating weight reduction as a scalar rather than a function were on the same order as uncertainty distributions representative of a TRL 3 structural technology.
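The scalar-versus-function distinction can be sketched directly: sample the OML and layout design spaces, evaluate a weight-reduction function over them, and measure the error committed by freezing the value at one design point. The function and variables below are hypothetical:

```python
# Sketch: error from treating technology weight reduction as a scalar.
# The weight-reduction "function" over the OML/layout space is hypothetical.
import numpy as np

rng = np.random.default_rng(5)
span = rng.uniform(30.0, 45.0, 2000)         # OML variable: wing span [m]
rib_pitch = rng.uniform(0.5, 0.9, 2000)      # layout variable [m]

def weight_reduction(span, pitch):
    """Hypothetical % weight saved by the technology across the space."""
    return 12.0 + 0.15 * (span - 37.5) - 6.0 * (pitch - 0.7)

f = weight_reduction(span, rib_pitch)        # functional treatment
scalar = weight_reduction(37.5, 0.7)         # single benchmark design point

err = scalar - f                             # error of the scalar assumption
print(f"scalar estimate : {scalar:.1f}%")
print(f"function range  : {f.min():.1f}% to {f.max():.1f}%")
print(f"error sd        : {err.std():.2f} pts")
```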