Organizational Unit:
Daniel Guggenheim School of Aerospace Engineering

Publication Search Results

Now showing 1 - 10 of 108

A data analytics approach to gas turbine prognostics and health management

2010-11-19 , Diallo, Ousmane Nasr

As a consequence of the recent deregulation of the electrical power production industry, there has been a shift in the traditional ownership of power plants and the way they are operated. To hedge their business risks, many new private entrepreneurs enter into long-term service agreements (LTSAs) with third parties for their operation and maintenance activities. As the major LTSA providers, original equipment manufacturers have invested heavily in preventive maintenance strategies to minimize the occurrence of costly unplanned outages resulting from failures of the equipment covered under LTSA contracts. A recent study by the Electric Power Research Institute estimates the cost benefit of preventing a failure of a General Electric 7FA or 9FA technology compressor at $10 to $20 million. In this dissertation, a two-phase data analytics approach is therefore proposed that uses existing gas path and vibration monitoring sensor data to, first, develop a proactive strategy that systematically detects and validates catastrophic failure precursors so that the failure can be avoided and, second, estimate the residual time to failure of the unhealthy items. For the first part of this work, the wavelet packet transform, a time-frequency technique, is used to de-noise the sensor data. Next, the time-series signal of each sensor is decomposed in a multi-resolution analysis to extract its features. Probabilistic principal component analysis is then applied as a data fusion technique to reduce the potentially correlated multi-sensor measurements to a few uncorrelated principal components. The last step of the failure precursor detection methodology, the anomaly detection decision, is itself a multi-stage process. The principal components obtained from the data fusion step are first combined into a one-dimensional reconstructed signal representing the overall health of the monitored system. Two damage indicators of the reconstructed signal are then defined and monitored for defects using a statistical process control approach. Finally, a Bayesian hypothesis testing method is applied against a computed threshold to test for deviations from the healthy band. To model the residual time to failure, the anomaly severity index and the anomaly duration index are defined as defect characteristics. Two modeling techniques are investigated for the prognostication of the survival time after an anomaly is detected: a deterministic regression approach and a parametric approximation of the non-parametric Kaplan-Meier estimator. It is established that the deterministic regression provides poor prediction accuracy. The non-parametric Kaplan-Meier estimator provides the empirical survivor function of a data set comprising both non-censored and right-censored data. Though powerful because no lifetime distribution is assumed a priori, the Kaplan-Meier result lacks the flexibility to be transplanted to other units of a given fleet. The parametric analysis of the survival data is therefore performed with two popular failure analysis distributions: the exponential distribution and the Weibull distribution. The conclusion from the parametric analysis of the Kaplan-Meier plot is that the larger the data set, the more accurate the prognostication of the residual-time-to-failure model.
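
The survival-modeling step above rests on the Kaplan-Meier product-limit estimator for right-censored failure data. As a minimal illustration (not the dissertation's code; the durations and censoring flags below are made-up values), the following Python sketch computes the empirical survivor function from a mix of observed failures and right-censored units:

```python
import numpy as np

def kaplan_meier(durations, event_observed):
    """Empirical survivor function for right-censored survival data.

    durations      : time to failure or to censoring for each unit
    event_observed : 1 if the unit failed, 0 if it was right-censored
    """
    durations = np.asarray(durations, dtype=float)
    event_observed = np.asarray(event_observed, dtype=int)
    order = np.argsort(durations)
    durations, event_observed = durations[order], event_observed[order]

    times, survival = [], []
    s = 1.0
    n_at_risk = len(durations)
    for t in np.unique(durations):
        mask = durations == t
        d = event_observed[mask].sum()       # failures at time t
        if d > 0:
            s *= 1.0 - d / n_at_risk         # product-limit step
            times.append(t)
            survival.append(s)
        n_at_risk -= mask.sum()              # drop failed and censored units
    return np.array(times), np.array(survival)

# Example: three failures and two right-censored units (illustrative values only)
t, s = kaplan_meier([120, 340, 400, 520, 610], [1, 1, 0, 1, 0])
print(dict(zip(t, np.round(s, 3))))
```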


Methods for collaborative conceptual design of aircraft power architectures

2010-07-14 , de Tenorio, Cyril

This thesis proposes an advanced architecting methodology that allows for the sizing and optimization of aircraft system architecture concepts and the establishment of subsystem development strategies. The process is implemented by an architecting team composed of subsystem experts and architects. The methodology organizes the architecture definition using the SysML language. Using meta-modeling techniques, this definition is translated into an analysis model that automatically integrates subsystem analyses in a fashion that represents the specific architecture concept described by the team. The resulting analysis automatically sizes the constituent subsystems, synthesizes their information to derive architecture-level performance, and explores the architecture's internal trade-offs. This process is facilitated by the Coordinated Optimization method proposed in this dissertation, a multi-level optimization setup in which an architecture-level optimizer orchestrates the subsystem sizing optimizations in order to optimize the aircraft as a whole. The methodologies proposed in this thesis are tested and demonstrated on a proof of concept based on the exploration of turbo-electric propulsion aircraft concepts.
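
The multi-level setup described above can be sketched in a few lines. In this illustrative Python snippet (the subsystem objective functions, bounds, and the shared "power" variable are placeholders, not the dissertation's models), an architecture-level optimizer varies a shared variable while each subsystem runs its own sizing optimization:

```python
from scipy.optimize import minimize, minimize_scalar

def size_subsystem_a(shared_power):
    """Subsystem-level sizing: pick a local design variable to minimize a notional mass."""
    res = minimize_scalar(lambda x: (x - 2.0) ** 2 + 0.05 * shared_power,
                          bounds=(0.1, 10.0), method="bounded")
    return res.fun

def size_subsystem_b(shared_power):
    res = minimize_scalar(lambda x: (x - 1.0) ** 2 + 100.0 / shared_power,
                          bounds=(0.1, 10.0), method="bounded")
    return res.fun

def architecture_objective(z):
    """Architecture-level objective: total mass as a function of the shared variable."""
    shared_power = z[0]
    return size_subsystem_a(shared_power) + size_subsystem_b(shared_power)

result = minimize(architecture_objective, x0=[5.0], bounds=[(1.0, 50.0)])
print("optimal shared power:", result.x[0], "total mass:", result.fun)
```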


Investigation of probabilistic principal component analysis compared to proper orthogonal decomposition methods for basis extraction and missing data estimation

2010-05-21 , Lee, Kyunghoon

The identification of flow characteristics and the reduction of high-dimensional simulation data have capitalized on an orthogonal basis achieved by proper orthogonal decomposition (POD), also known as principal component analysis (PCA) or the Karhunen-Loeve transform (KLT). In aerospace engineering, an orthogonal basis is versatile for diverse applications, especially those associated with reduced-order modeling (ROM): a low-dimensional turbulence model, an unsteady aerodynamic model for aeroelasticity and flow control, and a steady aerodynamic model for airfoil shape design. When a given data set is missing part of its data, POD must adopt a least-squares formulation, leading to gappy POD, which uses a gappy norm, a variant of the L2 norm that deals only with known data. Although gappy POD was originally devised to restore marred images, its application has spread to aerospace engineering because various engineering problems can be reformulated as missing data estimation to exploit gappy POD. Like POD, gappy POD has a broad range of applications, such as optimal flow sensor placement, experimental and numerical flow data assimilation, and impaired particle image velocimetry (PIV) data restoration. Apart from POD and gappy POD, both of which are deterministic formulations, probabilistic principal component analysis (PPCA), a probabilistic generalization of PCA, has been used in the pattern recognition field for speech recognition and in oceanography for empirical orthogonal functions in the presence of missing data. In formulation, PPCA presumes a linear latent variable model relating an observed variable to a latent variable, which is inferred from the observed variable through a linear mapping called the factor loading. To evaluate the maximum likelihood estimates (MLEs) of PPCA parameters such as the factor loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). By virtue of the EM algorithm, the EM-PCA is capable of not only extracting a basis but also restoring missing data through iterations, whether the given data are intact or not. Therefore, the EM-PCA can potentially substitute for both POD and gappy POD provided its accuracy and efficiency are comparable. To examine the benefits of the EM-PCA for aerospace engineering applications, this thesis qualitatively and quantitatively scrutinizes the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data. In the qualitative investigation, the theoretical relationship between POD and PPCA is transparent: the factor-loading MLE of PPCA, evaluated by the EM-PCA, corresponds to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is less clear because the two approximate missing data differently, owing to their contrasting formulation perspectives: gappy POD solves a least-squares problem, whereas the EM-PCA relies on the expectation of the observation probability model. To compare gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. The unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms.
Furthermore, this research delves into the ramifications of the different bases and norms that ultimately characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. A norm reflecting a curve-fitting method is found to affect estimation error reduction more significantly than a basis for two example test data sets: one missing data at only a single snapshot and the other missing data across all snapshots. From a numerical performance standpoint, the EM-PCA is computationally less efficient than POD for intact data since it suffers from the slow convergence inherited from the EM algorithm. For incomplete data, this thesis finds quantitatively that the number of data-missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other, because of the computational cost of the coefficient evaluation that results from the norm selection. Gappy POD demands computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm, whereas the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational costs of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended for selecting between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots, and the EM-PCA for an incomplete data set involving many data-missing snapshots. Last, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data. The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model agree well with those obtained directly from NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost effective than gappy POD at repairing spurious PIV measurements obtained from acoustically excited, bluff-body jet flow experiments. The EM-PCA reduces computational cost by factors of 8 to 19 compared to gappy POD while generating the same restoration results as gappy POD. Through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set with missing data spread across the entire data set.
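
To make the iterative character of the EM-PCA concrete, the sketch below (a simplified stand-in written for this summary, not the thesis implementation) alternates between re-fitting a low-rank basis and re-estimating the missing entries of a snapshot matrix; note that the per-iteration cost does not depend on how many snapshots contain missing data, which is the property discussed above:

```python
import numpy as np

def empca_impute(X, n_components=2, n_iter=200, tol=1e-8):
    """Iteratively fit a low-rank basis to X (NaN marks missing entries) and
    re-estimate the missing entries from the current reconstruction, in the
    spirit of the EM-PCA discussed above."""
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = col_means[np.where(missing)[1]]   # crude initial fill

    filled_prev = X[missing].copy()
    basis = None
    for _ in range(n_iter):
        mean = X.mean(axis=0)
        Xc = X - mean
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        basis = Vt[:n_components]                   # current principal directions
        recon = (Xc @ basis.T) @ basis + mean       # low-rank reconstruction
        X[missing] = recon[missing]                 # update missing entries only
        if np.linalg.norm(X[missing] - filled_prev) < tol:
            break
        filled_prev = X[missing].copy()
    return basis, X

# Example: 6 snapshots of a 4-dimensional response with two missing entries
rng = np.random.default_rng(0)
snapshots = rng.normal(size=(6, 4))
snapshots[2, 1] = np.nan
snapshots[5, 3] = np.nan
basis, restored = empca_impute(snapshots, n_components=2)
print(restored[2, 1], restored[5, 3])
```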


A computational approach to achieve situational awareness from limited observations of a complex system

2010-04-06 , Sherwin, Jason

At the start of the 21st century, the topic of complexity remains a formidable challenge in engineering, science and other aspects of our world. It seems that when disaster strikes it is because some complex and unforeseen interaction causes the unfortunate outcome. Why did the financial system of the world melt down in 2008-2009? Why are global temperatures on the rise? These questions and others like them are difficult to answer because they pertain to contexts that require lengthy descriptions. In other words, these contexts are complex. But we as human beings are able to observe and recognize this thing we call 'complexity'. Furthermore, we recognize that there are certain elements of a context that form a system of complex interactions - i.e., a complex system. Many researchers have even noted similarities between seemingly disparate complex systems. Do sub-atomic systems bear resemblance to weather patterns? Do human-based economic systems bear resemblance to macroscopic flows? Where do we draw the line in their resemblance? These are the kinds of questions that are asked in complex systems research. And the ability to recognize complexity is not limited to analytic research. There are many known examples of humans who not only observe and recognize complex systems but also operate them. How do they do it? Is there something superhuman about these people, or is there something common to human anatomy that makes it possible to fly a plane, drive a bus, operate a nuclear power plant, or play Chopin's etudes on the piano? In each of these examples, a human being operates a complex system of machinery, whether it is a plane, a bus, a nuclear power plant or a piano. What is the common thread running through these abilities? The study of situational awareness (SA) examines how people accomplish these types of remarkable feats. It is not a bottom-up science, though, because it relies on finding general principles running through a host of varied human activities. Nevertheless, since it is not constrained by computational details, the study of situational awareness provides a unique opportunity to approach complex tasks of operation from an analytical perspective. In other words, with SA, we get to see how humans observe, recognize and react to complex systems over which they exert some control. Reconciling this perspective on complexity with complex systems research, it might be possible to further our understanding of complex phenomena if we can probe the anatomical mechanisms by which we, as humans, do it naturally. At this unique intersection of two disciplines, a hybrid approach is needed, and this work proposes such an approach. In particular, this research proposes a computational approach to the situational awareness (SA) of complex systems, implementing certain aspects of situational awareness via a biologically inspired machine-learning technique called Hierarchical Temporal Memory (HTM). Either simulated or actual data are used to create and to test computational implementations of situational awareness in two example contexts, one more complex than the other. The ultimate goal of this research is to demonstrate a possible approach to analyzing and understanding complex systems. By using HTM and carefully developing techniques to analyze the SA formed from data, it is believed that this goal can be achieved.


UH-1 corrosion monitoring

2010-11-19 , Kersten, Stephanie M.

As the UH-1 aircraft continue to age, there is growing concern for their structural integrity. With corrosion damage becoming a larger part of the sustainment picture, bringing increasing maintenance burden and cost, it is increasingly important for corrosion management to be updated with more advanced techniques. The current find-and-fix approach to handling corrosion has many shortfalls, spurring recent interest in early detection through structural health monitoring. This condition-based technique is becoming more prevalent and is recognized for its potential to greatly reduce maintenance cost. Through corrosion monitoring, structural and environmental conditions can be closely observed, preventing excessive maintenance action and saving cost. A search for corrosion monitoring system designs revealed several commercial companies with prototype systems installed on commercial aircraft; however, details on system design and data analysis were scarce. This study attempted to bridge that gap in the literature by providing insight into the development of a corrosion damage prediction model and the design of a corrosion monitoring system. Aircraft maintenance data were used to build prediction models for the corrosion damage an aircraft can expect under varying operating conditions. Although a reliable prediction model could not be created, the trends observed in the data were still valuable for identifying problematic areas of the aircraft. In order to create reliable models, more accurate corrosion data are needed, which can be obtained through the implementation of a corrosion monitoring system. A custom corrosion monitoring system was therefore designed for the UH-1 aircraft. Commercial off-the-shelf products were fit to the design, and a benefits-to-cost analysis was performed for the monitoring system, evaluating it against criteria developed from user requirements. The system proved to meet and exceed expectations, making it an ideal choice for the UH-1 aircraft.


A novel numerical analysis of Hall Effect Thruster and its application in simultaneous design of thruster and optimal low-thrust trajectory

2010-07-07 , Kwon, Kybeom

Hall Effect Thrusters (HETs) are a form of electric propulsion device that uses external electrical energy to produce thrust. Compared to various other electric propulsion devices, HETs are excellent candidates for future orbit transfer and interplanetary missions due to their relatively simple configuration, moderate thrust capability, higher thrust-to-power ratio, and lower thruster-mass-to-power ratio. Because of the short history of HETs, the current design process for a new HET is largely empirical and experimental, and previous designs have consequently been developed in a narrow design space based on experimental data, without systematic investigation of parameter correlations. In addition, current preliminary low-thrust trajectory optimizations, due to inherent difficulties in the solution procedure, often assume constant or linear performance with available power when modeling electric thrusters. The main obstacles are the complex physics involved in HET technology and the relatively small amount of experimental data. Although physical theories and numerical simulation can provide a valuable tool for design space exploration at the inception of a new HET design and for preliminary low-thrust trajectory optimization, the complex physics makes theoretical and numerical solutions difficult to obtain. Numerical implementations have been pursued quite extensively over the last two decades, but an investigation of current methodologies reveals that, to date, none provides a suitable methodology for new HET design at the conceptual design stage or for coupled low-thrust trajectory optimization. Thus, in the first half of this work, an efficient, robust, and self-consistent numerical method for the analysis of HETs is developed with a new approach. The key idea is to divide the analysis region into two regions in terms of electron dynamics, based on physical intuition. Intensive validations are conducted for existing HETs from the 1 kW to 50 kW classes. The second half of this work constructs a simultaneous design optimization environment, through collaboration with experts in low-thrust trajectory optimization, in which a new HET and the associated optimal low-thrust trajectory can be designed simultaneously. A demonstration for an orbit-raising mission shows that the constructed simultaneous design optimization environment can be used effectively and synergistically for space missions involving HETs. It is expected that the present work will aid and ease the currently expensive experimental HET design process and reduce preliminary space mission design cycles involving HETs.
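
The thrust-to-power trade mentioned above follows from standard first-order electric propulsion relations. The sketch below uses only those textbook expressions with illustrative numbers; it is not the thesis's numerical HET model:

```python
# Illustrative first-order performance relations for an electric thruster.
G0 = 9.80665  # standard gravity, m/s^2

def thrust_to_power(isp_s, efficiency):
    """Ideal thrust-to-power ratio T/P = 2*eta / (g0 * Isp), in N per W."""
    return 2.0 * efficiency / (G0 * isp_s)

def thrust(power_w, isp_s, efficiency):
    """Thrust for a given input power, specific impulse, and total efficiency."""
    return thrust_to_power(isp_s, efficiency) * power_w

# Example: a 1.35 kW thruster at Isp = 1600 s and 50% efficiency (illustrative values)
print(thrust(1350.0, 1600.0, 0.5), "N")   # roughly 0.086 N
```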


Self-reconfigurable ship fluid-network modeling for simulation-based design

2010-05-21 , Moon, Kyungjin

Our world is filled with large-scale engineering systems that provide various services and conveniences in our daily life. A distinctive trend in the development of today's large-scale engineering systems is the extensive and aggressive adoption of automation and autonomy, which enables significant improvement in robustness, efficiency, and performance with considerably reduced manning and maintenance costs; the U.S. Navy's DD(X), the next-generation destroyer program, is considered an extreme example of this trend. This thesis pursues a modeling solution for performing simulation-based analysis in the conceptual or preliminary design stage of an intelligent, self-reconfigurable ship fluid system, one of the concepts of DD(X) engineering plant development. Investigation of the Navy's approach to designing a more survivable ship system shows that the current naval simulation-based analysis environment is limited by capability gaps in damage modeling, dynamic model reconfiguration, and the simulation speed of domain-specific models, especially fluid network models. Two essential elements were identified as enablers for filling these gaps in the formulation of the modeling method. The first is a graph-based topological modeling method, employed for rapid model reconstruction and damage modeling; the second is a recurrent neural network-based, component-level surrogate modeling method, used to improve the affordability and efficiency of the modeling and simulation (M&S) computations. The integration of the two methods delivers computationally efficient, flexible, and automation-friendly M&S, creating an environment for more rigorous damage analysis and exploration of design alternatives. As a demonstration for evaluating the developed method, a simulation model of a notional ship fluid system was created and a damage analysis was performed. Next, models representing different design configurations of the fluid system were created, and damage analyses were performed with them in order to find an optimal design configuration for system survivability. Finally, the benefits and drawbacks of the developed method were discussed based on the results of the demonstration.
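
The graph-based topological view of the fluid network lends itself to a very small sketch. The snippet below (node names and topology are invented for illustration, not taken from the thesis) represents a notional fluid network as a graph, models damage as node removal, and checks whether any source can still reach a load, which is the kind of reconfiguration question the damage analysis above asks:

```python
import networkx as nx

# Notional fluid network: two pumps feeding a cooling load through a header
# and a cross-connect (assumed node names, not from the thesis).
plant = nx.Graph()
plant.add_edges_from([
    ("pump_fwd", "valve_1"), ("valve_1", "main_header"),
    ("pump_aft", "valve_2"), ("valve_2", "main_header"),
    ("main_header", "cooling_load"),
    ("pump_aft", "cross_connect"), ("cross_connect", "cooling_load"),
])

def serviceable(graph, sources, load):
    """True if any surviving source can still reach the load."""
    return any(graph.has_node(s) and nx.has_path(graph, s, load) for s in sources)

print(serviceable(plant, ["pump_fwd", "pump_aft"], "cooling_load"))   # True

damaged = plant.copy()
damaged.remove_node("main_header")   # damage event removes a header section
print(serviceable(damaged, ["pump_fwd", "pump_aft"], "cooling_load"))  # True, via cross_connect
```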


A conceptual methodology for the prediction of engine emissions

2010-11-15 , Rezvani, Reza

Current emission prediction models in the conceptual design phase are based on historical data and empirical correlations. Two main reasons for this state of affairs are the complexity of the phenomena involved in the combustor and the relatively low priority given to a more detailed emissions model at the conceptual design phase. However, global environmental concerns and aviation industry growth highlight the importance of improving current emission prediction approaches. An emission prediction model is needed in the conceptual design phase to reduce prediction uncertainties and to perform parametric studies for different combustor types and operating conditions. The research objective of this thesis is to develop a methodology that provides an initial estimate of gas turbine emissions, captures their trends, and brings more information about emission levels forward to the conceptual design phase. The methodology is based on an initial sizing of the combustor and a determination of its flow fractions at each section using a 1D flow analysis. A network of elementary chemical reactors is then considered, and its elements are sized from the results of the 1D flow analysis to determine the emission levels at the design and operating conditions. Two additional phenomena with significant effects on emission prediction are also considered: 1) droplet evaporation and diffusion burning, and 2) fuel-air mixture non-uniformity. A simplified transient model is developed to determine the evaporation rate for a given droplet size distribution and to obtain the amount of fuel vaporized before ignition. A probabilistic unmixedness model is also employed to account for the distribution of equivalence ratio in the fraction of the fuel that is vaporized and mixed with air. An emission model is created for the single annular combustor (SAC) configuration and applied to two combustors to test the predictive and parametric capabilities of the model. Both uncertainty and sensitivity analyses are performed to assess the model's ability to reduce prediction uncertainty relative to simpler models that neglect droplet evaporation and mixture non-uniformity. The versatility of the model is tested by creating an emission model for a Rich-Quench-Lean (RQL) combustor, and the results are compared to limited actual data. In general, the approach predicts the NOx emission level and its trends better than the CO emission level; in the RQL combustor case especially, a more detailed model is required to improve the prediction of the CO emission level.
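
The probabilistic unmixedness idea can be illustrated with a short sketch. In the Python snippet below the local emission-index function is only a placeholder shape peaked near stoichiometric conditions (it is not a validated correlation, and all numbers are illustrative); the point is the averaging of a local emission index over an assumed equivalence-ratio distribution rather than evaluating it at the mean equivalence ratio alone:

```python
import numpy as np

def mean_emission_index(phi_mean, unmixedness, ei_of_phi, n_samples=100_000, seed=0):
    """Average a local emission-index function over a distribution of equivalence ratio.

    phi_mean    : mean equivalence ratio of the vaporized, mixed fuel-air
    unmixedness : standard deviation of the assumed (clipped) normal phi distribution
    ei_of_phi   : local emission index as a function of equivalence ratio
    """
    rng = np.random.default_rng(seed)
    phi = rng.normal(phi_mean, unmixedness, n_samples)
    phi = np.clip(phi, 0.0, None)           # equivalence ratio cannot be negative
    return ei_of_phi(phi).mean()

# Placeholder local emission-index shape, peaked near stoichiometric conditions.
# NOT a validated NOx correlation; it only illustrates the averaging step.
ei_placeholder = lambda phi: np.exp(-((phi - 1.0) / 0.15) ** 2)

perfectly_mixed = ei_placeholder(np.array([0.6])).mean()
with_unmixedness = mean_emission_index(0.6, 0.2, ei_placeholder)
print(perfectly_mixed, with_unmixedness)    # unmixedness raises the lean-zone average
```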


A methodology for the efficient integration of transient constraints in the design of aircraft dynamic systems

2010-05-21 , Phan, Leon L.

Transient regimes experienced by dynamic systems may have severe impacts on the operation of the aircraft. They are often regulated by dynamic constraints, which require the dynamic signals to remain within bounds whose values vary with time. The verification of these peculiar types of constraints, which generally requires high-fidelity time-domain simulation, occurs late in the system development process, potentially causing costly design iterations. The research objective of this thesis is to develop a methodology that integrates the verification of dynamic constraints into the early specification of dynamic systems. In order to circumvent the inefficiencies of time-domain simulation, multivariate dynamic surrogate models of the original time-domain simulation models are generated using wavelet neural networks (or wavenets). Concurrently, an alternative approach is formulated in which the envelope of the dynamic response, extracted via a wavelet-based multiresolution analysis scheme, is subjected to the transient constraints. Dynamic surrogate models using sigmoid-based neural networks are generated to emulate the transient behavior of the envelope of the time-domain response. The run-time efficiency of the resulting dynamic surrogate models enables a data farming approach in which the full design space is sampled through a Monte Carlo simulation. An interactive visualization environment enabling what-if analyses is developed; the user can thereby instantaneously comprehend the transient response of the system (or its envelope) and its sensitivities to design and operation variables, as well as filter the design space to show only the design scenarios satisfying the dynamic constraints. The proposed methodology, along with its foundational hypotheses, is tested on the design and optimization of a 350 VDC network, in which a generator and its control system are concurrently designed to minimize electrical losses while ensuring that the transient undervoltage induced by peak demands in the consumption of a motor does not violate transient power quality constraints.
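
As a rough stand-in for the wavelet-based extraction step (a sketch only; the thesis's multiresolution envelope scheme is more elaborate, and the signal, bound, and wavelet choice below are assumptions), the snippet keeps only the approximation coefficients of a discrete wavelet decomposition to smooth a notional undervoltage transient and then checks it against a time-varying lower bound:

```python
import numpy as np
import pywt

def smoothed_response(signal, wavelet="db4", level=4):
    """Keep only the approximation coefficients of a wavelet multiresolution
    decomposition to obtain a smoothed version of a transient response."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]   # drop detail scales
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def violates_transient_bound(signal, t, bound_of_t):
    """True if the signal drops below a time-varying lower bound at any instant."""
    return bool(np.any(signal < bound_of_t(t)))

# Illustrative undervoltage transient on a notional 350 VDC bus (not thesis data)
t = np.linspace(0.0, 1.0, 1024)
voltage = 350.0 - 60.0 * np.exp(-t / 0.05) + 5.0 * np.sin(2 * np.pi * 120 * t)
smooth = smoothed_response(voltage)
bound = lambda t: np.where(t < 0.1, 270.0, 330.0)   # assumed time-varying limit
print(violates_transient_bound(smooth, t, bound))
```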


Dynamic cutback optimization

2010-04-15 , Jayaraman, Shankar

The focus of this thesis is to develop and evaluate a cutback noise minimization process - also known as dynamic cutback optimization - that considers engine spool-down during thrust cutback and is consistent with ICAO and FAR Part 36 noise certification procedures. Simplified methods for flyover EPNL prediction used by propulsion designers assume instantaneous thrust reduction and do not take into account the spooling down of the engine during the cutback procedure. The thesis investigates whether an additional noise benefit can be gained by modeling the engine spool-down behavior, which in turn would improve the margin between predicted EPNL and Stage 4 noise regulations. Modeling dynamic cutback also impacts engine design during the preliminary and detailed design stages: reduced noise levels due to cutback may be traded for a smaller engine fan diameter, which in turn reduces weight, fuel burn, and cost.
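
A minimal sketch of the spool-down behavior contrasted above with the instantaneous-thrust-reduction assumption is shown below. The first-order exponential lag, the time constant, and the thrust levels are assumptions chosen for illustration, not values from the thesis:

```python
import numpy as np

def thrust_during_cutback(t, thrust_initial, thrust_cutback, tau, t_cutback=0.0):
    """First-order engine spool-down model: thrust decays exponentially from the
    takeoff setting to the cutback setting with time constant tau (seconds),
    instead of dropping instantaneously at t_cutback."""
    t = np.asarray(t, dtype=float)
    decayed = thrust_cutback + (thrust_initial - thrust_cutback) * np.exp(-(t - t_cutback) / tau)
    return np.where(t < t_cutback, thrust_initial, decayed)

# Illustrative numbers only (thrust in lbf, time in seconds; not certification data)
t = np.linspace(-2.0, 10.0, 7)
print(thrust_during_cutback(t, 25000.0, 16000.0, tau=1.5))
```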