Organizational Unit:
Daniel Guggenheim School of Aerospace Engineering


Publication Search Results

Now showing 1–10 of 50
  • Item
    Using sample-based continuation techniques to efficiently compute subspace reachable sets and Pareto surfaces
    (Georgia Institute of Technology, 2019-11-11) Brew, Julian
    For a given continuous-time dynamical system with control input constraints and prescribed state boundary conditions, one can compute the reachable set at a specified time horizon. Forward reachable sets contain all states that can be reached at the specified time horizon using a feasible control policy. Conversely, backward reachable sets contain all initial states that can reach the prescribed state boundary condition at the specified time horizon using a feasible control policy. The computation of reachable sets has been applied to many problems, such as vehicle collision avoidance, operational safety planning, system capability demonstration, and even economic modeling and weather forecasting. However, computing reachable volumes for general nonlinear systems is very difficult to do both accurately and efficiently. The first contribution of this thesis investigates computational techniques for alleviating the curse of dimensionality by computing reachable sets on subspaces of the full state dimension and computing point solutions for the reachable set boundary. To compute these point solutions, optimal control problems are reduced to initial value problems using continuation methods and then solved. The sample-based continuation techniques are computationally efficient in that they are easily parallelizable. However, the distribution of samples on the reachable set boundary is not directly controlled. The second contribution presents necessary conditions for distributed computation convergence, as well as necessary conditions for curvature- or uniform coverage-based sampling methods. Solutions to multi-objective optimization problems are generally defined as a set of feasible solutions such that, for any one objective to improve, other objectives must degrade. This suggests a connection between reachability theory and multi-objective optimization, with the potential for cross-fertilization of computational techniques and theory. The third contribution explores analytical connections between the two fields, investigating properties, constraints, and special cases.
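    For reference, the forward and backward reachable sets described above can be written formally; this is a standard formulation, not necessarily the thesis's exact notation:

      \[
      \mathcal{R}_f(t_f) = \{\, x(t_f) : \dot{x} = f(x,u),\ u(t) \in \mathcal{U},\ x(t_0) \in \mathcal{X}_0 \,\}
      \]
      \[
      \mathcal{R}_b(t_0) = \{\, x(t_0) : \dot{x} = f(x,u),\ u(t) \in \mathcal{U},\ x(t_f) \in \mathcal{X}_f \,\}
      \]

    Here \(\mathcal{U}\) is the set of admissible controls, \(\mathcal{X}_0\) the set of admissible initial states, and \(\mathcal{X}_f\) the prescribed terminal set.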
  • Item
    Experimental investigation of nitrogen oxide production in premixed reacting jets in a vitiated crossflow
    (Georgia Institute of Technology, 2019-09-03) Sirignano, Matthew Davis
    The presented work describes an experimental investigation of nitrogen oxide (NOx) emissions from reacting jets in a vitiated crossflow (RJICF). It is motivated by interest in axial staging of combustion as an approach to reduce undesirable NOx emissions from gas turbine combustors operating at high flame temperatures (>1900 K). In lean-premixed combustion, NOx levels are exponential functions of temperature and linear functions of residence time. Consequently, NOx production rates are high at such temperatures, and conventional combustor architectures are unable to simultaneously deliver low NOx and part-load operability. An RJICF is a natural means of implementing axial staging; a fuller understanding of the governing processes and parameters behind pollutant formation within this complex flow field is therefore critical to the advancement of the next generation of gas turbine technology. RJICF NOx production is a highly coupled process, and a key challenge was decoupling the interdependent jet parameters in order to observe fundamental NOx production sensitivities. Data is presented for premixed jets injected into a vitiated crossflow of lean combustion products. The jets varied in fuel selection (methane, ethane, or a combination), equivalence ratio (0.8 ≤ ϕjet ≤ 9.0), momentum flux ratio (2 ≤ J ≤ 40), and exit geometry (pipe or nozzle). The crossflow temperatures ranged from 1350 K to 1810 K, and the reacting jets induced a bulk-averaged temperature rise in the flow (ΔT) ranging from 75 K to 350 K. In addition, several data series were replicated with varied ethane/methane ratios at constant ϕjet to influence flame lifting independently of other parameters. Similarly, the jet exit geometry was varied to influence shear layer vortex growth rates. Overall, these data indicate that NOx emissions are largely determined by ΔT. However, significant variation was observed at constant ΔT levels. The data is consistent with the idea that this variation is controlled by the stoichiometry at which combustion actually occurs, referred to as ϕFlame. ϕFlame is influenced by ϕjet and by pre-flame mixing of the jet and crossflow, which in turn is a function of flame lift-off distance (LO), nozzle geometry, and crossflow temperature. The data highlights the importance of flame lifting as well as the potential importance of post-flame mixing effects. Both are complex problems and are not directly addressed in this work; further work in these areas would significantly deepen understanding of the relevant phenomena in RJICF NOx production.
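    For orientation, the momentum flux ratio J quoted above is conventionally defined for jets in crossflow as

      \[
      J = \frac{\rho_j u_j^2}{\rho_\infty u_\infty^2}
      \]

    where \(\rho_j, u_j\) are the jet density and velocity and \(\rho_\infty, u_\infty\) the corresponding crossflow values. This is the standard JICF definition; the thesis's exact convention may differ in detail.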
  • Item
    Magnetohydrodynamic energy generation and flow control for planetary entry vehicles
    (Georgia Institute of Technology, 2019-05-03) Ali, Hisham K.
    Proposed missions such as a Mars sample return mission and a human mission to Mars require landed payload masses in excess of any previous Mars mission. Whether human or robotic, these missions present numerous engineering challenges due to their increased mass and complexity. To overcome these challenges, new technologies must be developed and existing technologies advanced. Resource utilization technologies are particularly critical in this effort. This thesis studies the reclamation and harnessing of vehicle kinetic energy through magnetohydrodynamic (MHD) interaction with the high-temperature entry plasma. Potential mission designs and power generation and storage configurations are explored, as well as uses for the reclaimed energy. Furthermore, the impact and utility of MHD flow interaction for vehicle control is assessed. The state of the art in the analysis of MHD-equipped planetary entry systems is advanced, with specific goals including: development of performance analysis capabilities for potential MHD-equipped systems; identification of systems or configurations that show promise as effective uses of MHD power generation; experimental designs for developing technologies applicable to MHD power generation systems; assessment of MHD flow interaction and its beneficial use for entry vehicle control through drag modulation; and increasing the technology readiness level of MHD power generation architectures for entry, descent, and landing.
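    To give a sense of scale for the energy-reclamation concept, the classical relation for the power density extracted by a continuous-electrode MHD generator is a useful reference (a textbook result quoted for orientation, not a result of this thesis):

      \[
      P = \sigma u^2 B^2 k (1 - k)
      \]

    where \(\sigma\) is the plasma electrical conductivity, \(u\) the flow velocity, \(B\) the applied magnetic field, and \(k\) the load factor; extraction peaks at \(k = 1/2\).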
  • Item
    A robust methodology for strategically designing environments subject to unpredictable and evolving conditions
    (Georgia Institute of Technology, 2019-01-15) Minier, Ethan T.
    The layout design process, a lean technique, has the potential to provide a manufacturer with significant cost reductions. The major challenge for layout designers is ensuring that this reduction is not only maximized but actually realized when implemented in practice. Guaranteeing this realization requires adequately capturing both the real-life behavior and characteristics of the environment and the market and business-model conditions. Unfortunately, current methods fail to accurately capture real-life considerations such as flow path feasibility, they neglect continuous, detailed representations of evolving layouts subject to financial restrictions and uncertainty, and they tend to provide insufficient insight into the problem. The objective of this dissertation is therefore to establish an improved methodology for exploring the design space of a detailed evolving environment, enabling more informed and collaborative design decisions in the presence of evolving and uncertain conditions. In pursuit of this goal, a three-step methodology (problem initialization, solution, analysis), titled LIVE, is formed. Alongside it, an extensive array of novel methods, optimization techniques, and a detailed performance model are developed, all to facilitate effective solution of the uniquely complex and arduous layout problem formulation considered in this dissertation. It is then postulated that if the problem of designing an environment subject to evolving and uncertain conditions were solved with the LIVE methodology, designers would be capable of making more informed and collaborative design decisions. This claim is substantiated by systematically testing the methodology and the various models, methods, and solution approaches it deploys, through a series of compounding experiments. During this testing, the developed methods are shown to outperform existing approaches, consideration of flow path feasibility is shown to be imperative, and the novel bimodel multi-stage solution approach deployed by the LIVE methodology is exercised thoroughly, identifying the optimization parameter settings that best ensure effective solution. Finally, application of the LIVE methodology to a real-world layout design problem completes the substantiation of the postulated hypothesis: the methodology effectively facilitates improved insight and collaboration in the layout design process. The developed performance model proves significant in enabling new insights to be drawn and a richer understanding of the operations and layout design to be gained. Overall, the methodology demonstrates its ability to provide an improved layout design process that can handle design problems subject to uncertain and evolving conditions, enabling strategic business decisions to be considered in parallel with the design of the layout.
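    As a minimal illustration of the material-flow considerations underlying layout evaluation, the sketch below scores a candidate layout by flow-weighted rectilinear distance. It is a generic textbook-style model with hypothetical data, not the LIVE methodology's actual performance model:

      import numpy as np

      def material_flow_cost(positions, flow):
          """Total material-handling cost: flow volume between each pair of
          departments times the rectilinear distance between them."""
          cost = 0.0
          for i in range(len(positions)):
              for j in range(len(positions)):
                  if i != j:
                      dist = (abs(positions[i][0] - positions[j][0])
                              + abs(positions[i][1] - positions[j][1]))
                      cost += flow[i][j] * dist
          return cost

      # Hypothetical three-department layout with asymmetric flows.
      positions = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0)]
      flow = np.array([[0, 5, 1],
                       [0, 0, 4],
                       [2, 0, 0]])
      print(material_flow_cost(positions, flow))  # lower is better

    A full evaluation along the lines of the dissertation would add flow path feasibility checks and financial and uncertainty terms on top of a cost model of this kind.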
  • Item
    A methodology for risk-informed launch vehicle architecture selection
    (Georgia Institute of Technology, 2017-11-13) Edwards, Stephen James
    Modern society in the 21st century has become inseparably dependent on human mastery of the near-Earth regions of space. Billions of dollars in on-orbit assets provide a set of fundamental, requisite services to domains as diverse as telecom, military, banking, and transportation. While orbiting satellites provide these services, launch vehicles (LVs) are unquestionably the most critical piece of infrastructure in the space economy value chain. The past decade has seen a significant level of activity in LV development, including some fundamental changes to the industry landscape. Every space-faring nation is engaged in new program developments; most notable, however, is the surge in commercial investments and development efforts, spurred by a combination of private investments by wealthy individuals, new government policies and acquisition strategies, and the increased competition that has resulted from both. In all the LV programs of today, affordability is acknowledged as the single biggest objective. Governments seek assured access to space that can be realized within constrained budgets, and commercial entities vie for survival, profitability, and market share. From the literature, it is clear that the biggest opportunity for affecting affordability resides in improving decision-making early in the design process. However, a review of historical LV architecture studies shows that very little has changed over the past 50 years in how early architecting decisions are analyzed. In particular, architecture analyses of alternatives are still conducted deterministically, despite uncertainty being at its highest in the very early stages of design. This thesis argues that the “design freedom” that exists early on manifests itself as volitional uncertainty during the LV architect's deliberation, motivating the objective statement “to develop a methodology for enabling risk-informed decision making during the architecture selection phase of LV programs.” NASA's Risk-Informed Decision Making process is analyzed with respect to the particulars of the LV architecture selection problem. The most significant challenge is found to be LV performance modeling via trajectory optimization, which is not well suited to probabilistic analysis. To overcome this challenge, an empirical modeling approach is proposed. However, this in turn introduces the challenge of generalizing the empirical model, as creating distinct performance models for every architecture concept under consideration is infeasible. A review of the main drivers of LV trajectory performance shows that T/W is not only one of the most sensitive parameters but also, in its true form, a functional. Because the T/W profile drives performance, and because in its infinite-dimensional form it offers a common basis for representing diverse architectures, functional regression techniques are proposed as a means of constructing an architecture-spanning empirical performance model. A number of techniques are formulated and tested, and prove capable of supporting the performance modeling needed for risk-informed LV architecture selection.
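    A minimal sketch of the functional-regression idea described above: each architecture's T/W history is reduced to a handful of basis coefficients, which then act as regressors for a performance metric. The basis choice, data, and response below are illustrative assumptions, not the thesis's actual formulation:

      import numpy as np
      from numpy.polynomial import legendre

      rng = np.random.default_rng(0)
      t = np.linspace(-1.0, 1.0, 100)   # normalized flight time

      # Hypothetical training set: T/W profiles and a stand-in
      # performance response for each.
      profiles = [1.3 + 0.5 * rng.random() * t + 0.3 * rng.random() * t**2
                  for _ in range(30)]
      response = np.array([p.mean() + 0.4 * p[-1] for p in profiles])

      # Functional reduction: project each profile onto a Legendre basis.
      coeffs = np.array([legendre.legfit(t, p, deg=3) for p in profiles])

      # Ordinary least squares on the basis coefficients.
      A = np.c_[np.ones(len(coeffs)), coeffs]
      beta, *_ = np.linalg.lstsq(A, response, rcond=None)

      def predict(profile):
          """Predict the response for a new T/W profile."""
          c = legendre.legfit(t, profile, deg=3)
          return beta[0] + c @ beta[1:]

      print(predict(1.3 + 0.4 * t))

    Because the basis coefficients live in a common low-dimensional space regardless of architecture, one surrogate can span concepts whose raw T/W histories differ in shape.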
  • Item
    A risk-value-based methodology for enterprise-level decision-making
    (Georgia Institute of Technology, 2017-07-31) Burgaud, Frederic
    Despite its long history, aerospace remains a non-commoditized field. To sustain their market domination, the major companies must commit to large capital investments and constant innovation, in spite of multiple sources of risk and uncertainty and significant chances of failure. This makes aerospace programs particularly risky; however, successful programs more than compensate for the costs of disappointing ones. In order to maximize the chances of a favorable outcome, a business-driven, multi-objective, and multi-risk approach is needed to ensure success, with particular attention to financial aspects. Additionally, aerospace programs involve multiple divisions within a company. Besides vehicle design, finance, sales, and production are crucial disciplines with decision power and influence on the outcome of the program. They are also tightly coupled, and the interdependencies between these disciplines should be exploited to unlock as much program-level value potential as possible. An enterprise-level approach should therefore be used. Finally, suborbital tourism programs are well suited as a case study for this research: they are usually small companies starting their projects from scratch, so a full enterprise-level analysis is not only necessary but also more easily feasible than for larger groups. These motivations lead to the formulation of the research objective: to establish a methodology that enables informed enterprise-level decision-making under uncertainty and provides higher-value compromise solutions. The research objective can be decomposed into two main directions of study. First, current approaches are usually limited to the design aspect of the program and do not provide for the optimization of other disciplines. This ultimately results in a de facto sequential optimization, where principal-agent problems arise. Instead, a holistic implementation is proposed, which enables an integrated enterprise-level optimization. The second part of the problem deals with decision-making under multiple objectives and multiple risks. Current methods of design under uncertainty are insufficient for this problem. First, they do not provide compelling results when several metrics are targeted. Additionally, variance does not properly fit the definition of risk, as it captures both upside and downside uncertainty. Instead, the deviation of the Conditional Value at Risk (called here the downside deviation) is used as a measure of value risk. Furthermore, objectives are categorized and aggregated into risk and value scores to facilitate convergence, visualization, and decision-making. As suborbital vehicles are complex nonlinear systems, with many infeasible concepts and computationally expensive M&S environments, a time-efficient way to estimate the downside deviation is needed. As such, a new uncertainty propagation structure is used that involves regression and classification neural networks, as well as a Second-Order Third-Moment (SOTM) technique to compute statistical moments. The proposed process elements are combined and integrated into a method following a modified Integrated Product and Process Development (IPPD) approach, with main steps of establishing value, generating alternatives, evaluating alternatives, and making decisions. A new M&S environment is implemented, consisting of a design framework to which several business disciplines are added. A bottom-up approach is used to study the four research questions of this dissertation.
At the lowest level of the implementation, an enhanced financial analysis is evaluated. Common financial valuation methods used in aerospace have severe limitations: all of them rely on a largely arbitrary discount rate despite its critical impact on the final value of the NPV. The proposed method provides detailed analysis capabilities and helps capture more value by enabling the optimization of the company's capital structure. A sensitivity analysis also verifies the importance of the factors added in the proposed method. The second implementation step is to evaluate the downside deviation time-efficiently. Regression and classification neural networks are implemented to estimate the base costs of the vehicle and speed up the vehicle sizing process; the business analyses are already time-efficient and are therefore retained as-is. The neural networks show low validation root-mean-square error (RMSE), confirming their accuracy. The SOTM method is also checked and shows a downside deviation prediction accuracy equivalent to a 750-point Monte Carlo method. From a computation time standpoint, the use of neural networks is required for a reasonable convergence time, and the SOTM technique used jointly with neural networks results in an optimization time below one hour. The proposed approach for making risk/value trade-offs in the presence of multiple risks and objectives is then tested. First, the importance of using the downside deviation is demonstrated by showing the risk estimation error made when using the standard deviation instead. Additionally, the use of risk and value scores helps decision-making both qualitatively and quantitatively: it facilitates visualization by supplying a two-dimensional Pareto frontier, which can still be colored to reveal program features and cluster patterns. Furthermore, the formulation with risk and value scores yields better solutions than the non-aggregated case, unless very large errors in the weightings are committed. Finally, the proposed method provides good capabilities for identifying, ranking, and selecting optimal concepts. The last research question asks whether an enterprise-level approach helps improve the optimality of the overall program, and whether it results in significantly different decision-making. Two elements of the enterprise-level approach are tested: the integrated optimization and the use of additional enterprise-level objectives. In both cases, the resulting Pareto frontiers significantly dominate their counterparts, demonstrating the usefulness of the enterprise-level approach from a quantitative point of view. The enterprise-level approach also results in significantly different decisions and should therefore be applied early in the design process. Hence, the method provided the capabilities sought in the research objective. This research resulted in contributions to the financial analysis of aerospace programs, to design under multiple sources of uncertainty with multiple objectives, and to design optimization through the adoption of an enterprise-level approach.
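    A minimal sketch of the downside deviation described above: for a value metric where higher is better, it measures how far the expected value of the worst outcomes (the Conditional Value at Risk) falls below the mean, so only unfavorable dispersion is penalized. The tail level and sample data are illustrative assumptions:

      import numpy as np

      def downside_deviation(samples, alpha=0.05):
          """Mean minus CVaR: the gap between the expected outcome and the
          expected value of the worst alpha-fraction of outcomes."""
          s = np.sort(np.asarray(samples))
          k = max(1, int(np.ceil(alpha * s.size)))
          cvar = s[:k].mean()            # mean of the worst alpha tail
          return s.mean() - cvar

      rng = np.random.default_rng(1)
      # Hypothetical NPV samples with a heavy downside tail.
      npv = rng.normal(100.0, 20.0, 10_000) - rng.exponential(15.0, 10_000)
      print(downside_deviation(npv))     # penalizes only the left tail
      print(np.std(npv))                 # mixes upside and downside

    On a skewed distribution like this one, the standard deviation charges the program for favorable surprises as well, which is exactly the mismatch with the definition of risk that the abstract points out.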
  • Item
    Ensemble-averaged dynamics of premixed, turbulent, harmonically excited flames
    (Georgia Institute of Technology, 2017-04-07) Humphrey, Luke
    Increasing awareness of the negative impacts of pollutant emissions associated with combustion is driving increasingly stringent regulatory limits. In particular, oxides of nitrogen, generally referred to as NOx, now face strict limits. These restrictions have driven the development of cleaner-burning combustion systems. Because NOx formation increases significantly at elevated temperatures, one method to reduce NOx emissions is to burn the fuel at lower temperatures. By premixing the fuel and oxidizer prior to combustion, significantly lower flame temperatures can be achieved, with corresponding reductions in NOx emissions. Unfortunately, premixed combustion systems are generally more prone to potentially problematic feedback between the unsteady heat release from the flame and unsteady pressure oscillations. This self-excited feedback loop is known as combustion instability. Because these oscillations are associated with unsteady pressure fluctuations, they can degrade system performance, limit operability, and even lead to catastrophic failure. Understanding combustion instability is the primary motivation for the work presented in this thesis. The interaction of quasi-coherent and turbulent flame disturbances changes the spatio-temporal flame dynamics and the turbulent flame speed, yet this interaction is not fully understood. This thesis therefore concentrates on identifying, understanding, and modeling these interactions. Two primary avenues of research are followed: development and validation of a flame position model, and experimental investigation of the predicted sensitivity of the ensemble-averaged flame speed to flame curvature. First, a reduced order modeling approach for turbulent premixed flames is presented, based on the ensemble-averaged flame governing equation proposed by Shin and Lieuwen (2013). The turbulent modeling method is based on the G-equation approach used in laminar flame position and heat release studies. In order to capture the dependence of the ensemble-averaged turbulent flame speed on the ensemble-averaged flame curvature, the turbulent flame model incorporates a flame speed closure proposed by Shin and Lieuwen (2013). Application of the G-equation approach in different coordinate systems requires the inclusion of time-varying integration limits when calculating global flame area; this issue is discussed and the necessary corrections are derived. Next, the reduced order turbulent modeling approach is validated by comparison with three-dimensional simulations of premixed flames, for both flame position and heat release response. The reduced order model is then linearized, allowing development of fully analytical flame position and heat release expressions. The flame speed closure is shown to capture nonlinear effects associated with kinematic restoration. Second, the development of and results from a novel experimental facility are described. This facility can subject premixed flames to simultaneous broadband turbulent fluctuations and narrowband coherent fluctuations, the latter introduced on the flame by an oscillating flame holder. Mie scattering images are used to identify the instantaneous flame edge position, while simultaneous high-speed PIV measurements provide flow field information.
Results from this experimental investigation include analysis of the ensemble-averaged flame dynamics, the ensemble-averaged turbulent displacement speed, the local ensemble-averaged area and consumption speed, and the dependence of both the displacement speed and the consumption speed on the ensemble-averaged flame curvature. Finally, the flame speed sensitivity to curvature is quantified through calculation of the normalized turbulent Markstein displacement and consumption numbers. The results show that the amplitude of coherent flame wrinkles generally decreases with both downstream distance and increasing turbulence intensity, providing the first experimental validation of previous isothermal results. The average displacement and consumption speeds increase with downstream distance and turbulence intensity, reflecting the increasingly wrinkled flame surface. The ensemble-averaged, phase-dependent displacement and consumption speeds demonstrate clear modulation with the shape of the ensemble-averaged flame. Specifically, these turbulent flame speeds increase in regions of negative curvature. For both the displacement and consumption speeds, the magnitude of the normalized turbulent Markstein length increases with the ratio of the turbulent flame wrinkling length to the coherent wrinkling length when u'/S_L,0 > 2.5. For u'/S_L,0 < 2.5, the trends are less clear due to the presence of convecting disturbances, which introduce additional fine-scale wrinkles on the flame. Together, the results presented in this thesis provide a foundation for modeling turbulent flames in the presence of quasi-coherent disturbances. The flame position can be modeled using the ensemble-averaged governing equation with the dynamical flame speed closure, and the corresponding heat release can be calculated from the turbulent consumption speed closure. The turbulent Markstein numbers and the uncurved flame speed may be extracted from experimental or numerical data.
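    For reference, the level-set formulation underlying the modeling approach is the G-equation; in the ensemble-averaged variant of Shin and Lieuwen (2013), the quantities below are ensemble-averaged and the flame speed carries a curvature-dependent closure. In its standard form:

      \[
      \frac{\partial G}{\partial t} + \vec{u} \cdot \nabla G = S_d \, |\nabla G|
      \]

    where the flame sheet is the \(G = 0\) level set, \(\vec{u}\) is the flow velocity at the flame, and \(S_d\) the displacement flame speed. A Markstein-type curvature sensitivity of the schematic form \(S_d = S_{d,0}(1 - \delta_M \kappa)\) is the kind of dependence the turbulent Markstein numbers above quantify.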
  • Item
    Safety supervisory control, model-based hazard monitoring, and temporal logic: Dynamic risk-informed safety interventions and accident prevention
    (Georgia Institute of Technology, 2016-03-14) Favaro, Francesca Margherita M.
    Accident prevention and system safety are important considerations for many industries, especially large-scale hazardous ones such as the nuclear, chemical, and aerospace industries. Limitations in the current tools and approaches to risk assessment and accident prevention are broadly recognized in the risk research community. Furthermore, as new technologies and systems are developed, new failure modes and new patterns by which accidents unfold can emerge. A safety gap is growing between the software-intensive technological capabilities of present systems and current approaches to risk assessment and safety, which remain “too much hardware oriented.” To overcome these limitations, a novel framework and analytical tools for model-based system safety, or safety supervisory control, are developed to guide safety interventions and support a dynamic approach to risk assessment and accident prevention. This integrated approach rests on two basic pillars: (i) the use of state-space models and state variables (from Control Theory) to capture the dynamics of hazard escalation, and to both model and monitor “danger indices” in a system; and (ii) the adoption of Temporal Logic (TL, from Software Engineering) to model and verify system safety properties (or their violations, hence identifying vulnerabilities in a system). The verification of whether the system satisfies or violates the TL safety properties, along with the monitoring of emerging hazards, provides important feedback for designers and operators to recognize the need for, rank, and trigger safety interventions. In so doing, the proposed approach augments the current perspective of traditional risk assessment, with its reliance on probabilities as the basic modeling ingredient, with the notion of temporal contingency: a novel dimension proposed here by which hazards are dynamically prioritized and ranked based on how close their associated accident(s) are to being released. Additionally, the online application of the proposed tools and the ensuing insights can support situational awareness and help inform decision-making during emerging hazardous situations. The integrated framework is implemented in Simulink and is capable of combining hardware, software, and operators' control actions and responses within a single analysis tool, as examined through its detailed application to runway overrun scenarios during rejected takeoffs (RTOs). New insights are enabled by the use of temporal logic in conjunction with model-based system safety. For example, new metrics and diagnostic tools to support pilots' go/no-go decisions and to inform safety guidelines are derived. Limitations exist in the current recommended practice, which advises pilots to initiate RTOs only before the decision speed V1 is reached, as suggested by current statistics regarding RTO accidents and as recognized by aircraft manufacturers. The newly proposed metrics can account both for situations in which RTOs initiated below the traditional decision speed V1 still result in an accident, and for situations in which RTOs initiated above V1 do not. Moreover, within the context of a detailed case study, a new TL safety constraint is proposed to overcome an identified latent error in the takeoff logic of the Full Authority Digital Engine Control (FADEC), which in this case escalated a hazardous condition into a fatal crash.
In short, by leveraging tools that are not traditionally employed in risk assessment, the proposed framework and tools offer novel capabilities, complementary to traditional approaches to risk assessment, and rich possibilities for informing safety interventions (by design and in real time during operations) and improving accident prevention.
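    To make the temporal-logic ingredient concrete, safety properties of the kind verified in this framework can be expressed in linear temporal logic. The following RTO-flavored formula is schematic only, not a property stated in the thesis:

      \[
      \Box \left( (v \geq V_1) \rightarrow \Box\, \neg \mathrm{RTO} \right)
      \]

    read as "globally, once the ground speed reaches the decision speed V1, a rejected takeoff is never initiated afterwards": the traditional go/no-go rule whose limitations the case study examines. Model checking then searches the state-space model for executions that violate such a property.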
  • Item
    Evaluation and automation of space habitat interior layouts
    (Georgia Institute of Technology, 2016-01-13) Simon, Matthew
    Future human exploration missions beyond the Earth's vicinity will be demanding, requiring highly efficient, mass-constrained systems to reduce overall mission cost and complexity. Additionally, long-duration transits in space and the lack of Earth abort opportunities will increase the physiological and psychological needs of the crew, requiring larger, more capable systems to ensure astronaut well-being. As a result, the objective of habitat design for these missions is to minimize mass and vehicle size while providing adequate space for all necessary equipment and a functional layout for crew health and productivity. Unfortunately, a literature review of methods for evaluating the performance of habitat interior layout designs (including human-in-the-loop mockup tests, in-depth computer-aided design evaluations, and subjective design evaluation studies) found that they are not compatible with the conceptual phase of design or with optimization, because of the qualitative nature of the comparisons and the significant time required to generate and evaluate each layout. Failure to consider interior layout during conceptual design can lead to increased mass, compromised functionality, and increased risk to crew, particularly for the mass-, cost-, and volume-constrained long-duration human missions to cislunar space and Mars currently being planned by NASA. A comprehensive and timely quantitative method is therefore desired to measure the effectiveness of interior layouts and to track the complex, conflicting habitat design objectives earlier in the design process. A new, structured method and modeling framework to quickly measure the effectiveness of habitat interior designs is presented. This method allows layouts to be compared during conceptual design and advances the previously unavailable capability to automate the generation of habitat interiors. The evaluation method features a comprehensive list of quantifiable habitat layout evaluation criteria, automatic methods to measure these criteria from a geometry model and designer inputs, and the application of systems engineering tools and numerical methods to construct a multi-objective value function measuring overall habitat layout performance. In particular, the method separates subjective designer preferences from quantitative evaluation criteria measurements, speeding layout evaluations and enabling the automation of interior layout design subject to a set of designer preferences. The method was implemented in a software tool that couples geometry modeling with collision detection techniques to identify favorable layouts subject to multiple constraints and objectives (e.g., minimize mass, maximize contiguous habitable volume, maximize task performance efficiency). Notional cislunar habitat layouts were evaluated to demonstrate the effectiveness of the method. Furthermore, stochastic optimization was applied to understand and address difficulties with automated layout design, particularly constraint implementation and convergence behavior. Findings from these investigations and implications for future research are discussed.
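    A minimal sketch of the separation described above between quantitative criterion measurements and subjective designer preferences, aggregated into a single layout score. The criteria, weights, and normalization bounds are hypothetical:

      def layout_value(measurements, weights, bounds):
          """Weighted-sum value function: each raw criterion measurement is
          normalized to [0, 1] against designer-supplied bounds, then
          combined using designer preference weights."""
          score = 0.0
          for name, w in weights.items():
              lo, hi = bounds[name]
              x = (measurements[name] - lo) / (hi - lo)
              score += w * min(max(x, 0.0), 1.0)
          return score / sum(weights.values())

      # Hypothetical criteria for one candidate interior layout.
      measurements = {"habitable_volume_m3": 42.0, "task_efficiency": 0.7,
                      "mass_margin_kg": 150.0}
      weights = {"habitable_volume_m3": 0.5, "task_efficiency": 0.3,
                 "mass_margin_kg": 0.2}
      bounds = {"habitable_volume_m3": (20.0, 60.0),
                "task_efficiency": (0.0, 1.0),
                "mass_margin_kg": (0.0, 400.0)}
      print(layout_value(measurements, weights, bounds))

    Changing the designer's weights re-ranks candidate layouts without re-measuring any geometry, which is what makes automated layout search against a fixed set of preferences practical.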
  • Item
    A study of magnetoplasmadynamic effects in turbulent supersonic flows with application to detonation and explosion
    (Georgia Institute of Technology, 2015-07-28) Schulz, Joseph C.
    Explosions are a common phenomenon in the Universe. Beginning with the Big Bang, one could say the history of the Universe is narrated by a series of explosions. Yet no matter how large, small, or complex, all explosions proceed through a series of similar physical processes, from their initiation to their dynamical interaction with the environment. Of particular interest to this study is how these processes are modified in a magnetized medium. The role of the magnetic field is investigated in two scenarios. The first addresses how a magnetic field alters the propagation of a gaseous detonation, with application to the modification of a condensed-phase explosion. The second focuses on the aftermath of the explosion event and addresses how fluid mixing changes in a magnetized medium. A primary focus of this thesis is the development of a numerical tool capable of simulating explosive phenomena in a magnetized medium. While the magnetohydrodynamic (MHD) equations share many of the mathematical characteristics of the hydrodynamic equations, numerical methods for the conservation equations of a magnetized plasma are complicated by the requirement that the magnetic field be divergence-free. The advantages and disadvantages of the proposed method are discussed in relation to explosion applications.
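    For reference, the constraint that complicates the numerics, alongside the ideal-MHD induction equation against which it must be maintained (standard forms, quoted for orientation):

      \[
      \nabla \cdot \mathbf{B} = 0, \qquad
      \frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{u} \times \mathbf{B})
      \]

    Analytically, the induction equation preserves \(\nabla \cdot \mathbf{B} = 0\) if it holds initially, but discretization error can violate it, which is what motivates divergence-cleaning or constrained-transport schemes in MHD solvers.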