Organizational Unit:
School of Computational Science and Engineering

Publication Search Results

  • Item
    Geometric feature extraction in support of the single digital thread approach to detailed design
    (Georgia Institute of Technology, 2016-12-08) Gharbi, Aroua
    Aircraft design is a multi-disciplinary and complicated process that takes a long time and requires a large number of trade-offs between customer requirements, various types of constraints, and market competition. Detailed design in particular is the phase that takes the most time, due to the high number of iterations between component design and structural analysis that must be run before reaching an optimal design. In this thesis, an innovative approach for detailed design is suggested. It promotes a collaborative framework in which knowledge from the small-scale level of components is shared and transferred to the subsystem and system levels, leading to more robust and real-time decisions that speed up the design time. This approach is called the Single Digital Thread Approach to Detailed Design, or STAnDD for short. The implementation of this approach follows a bottom-up plan, starting from the component level up to the aircraft level. At the component level, and from a detailed design perspective, three major operations need to be executed in order to deploy the Single Digital Thread approach. The first is the automatic extraction of geometric features of a component from a solid with no design history, the second is building an optimizer around the design and analysis iterations, and the third is the automatic update of the solid. This thesis suggests a methodology to implement the first phase. Extracting geometric features automatically from a solid with no history (also called a dumb solid) is not an easy process, especially in the aircraft industry, where most components have very complex shapes. Innovative techniques from machine learning were used, allowing consistent and robust extraction of the data.
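
    As an illustrative sketch of the learning-based extraction described above (not the thesis's actual pipeline), the following classifies surface patches of a dumb solid into feature types from simple geometric descriptors; the descriptors, labels, and training values are hypothetical.

    ```python
    # Hypothetical sketch: classifying face patches of a history-free ("dumb")
    # solid into feature types from geometric descriptors. The descriptors and
    # training values are illustrative, not the thesis's actual data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each row describes one candidate face patch of the tessellated solid:
    # [mean curvature, Gaussian curvature, patch area, bounding edge count]
    X_train = np.array([
        [0.50, 0.25, 1.2, 1],   # cylindrical bore  -> "hole"
        [0.20, 0.00, 0.8, 2],   # blend surface     -> "fillet"
        [0.00, 0.00, 3.5, 4],   # planar depression -> "slot"
    ])
    y_train = ["hole", "fillet", "slot"]

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    # Classify a newly extracted patch from the dumb solid.
    print(clf.predict(np.array([[0.48, 0.22, 1.0, 1]])))  # e.g. ['hole']
    ```
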
  • Item
    Simulations of binary black holes in scalar field cosmologies
    (Georgia Institute of Technology, 2016-08-01) Tallaksen, Katharine Christina
    Numerical relativity allows us to solve Einstein's equations and study astrophysical phenomena we may not be able to observe directly, such as the very early universe. In this work, we examine the effect of scalar field cosmologies on binary black hole systems. These scalar field cosmologies were studied using cosmological bubbles, spherically symmetric structures that may have powered inflationary phase transitions. The Einstein Toolkit and Maya, developed at Georgia Tech, were used to simulate these systems. Systems studied include cosmological bubbles, binary black holes in vacuum, and binary black holes embedded within cosmological bubbles. Differences in mass accretion, merger trajectories, and characteristic gravitational waveforms will be presented for these systems. In the future, analyzing the parameter space of these waveforms may present a method to discover a gravitational wave signature characteristic to these systems and possibly detectable by the Laser Interferometer Gravitational-Wave Observatory.
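
    The abstract mentions comparing characteristic waveforms across systems; a standard diagnostic for such comparisons is the normalized overlap (match) between two strains. The sketch below computes it assuming a flat noise spectrum and synthetic stand-in waveforms; it is a generic illustration, not code from the thesis.

    ```python
    # Generic sketch: normalized overlap (match) between two time-domain
    # gravitational-wave strains, assuming a flat (white) noise spectrum.
    # The waveforms below are synthetic stand-ins, not simulation output.
    import numpy as np

    def overlap(h1, h2, dt):
        """Frequency-domain inner product, normalized so overlap(h, h) == 1."""
        H1, H2 = np.fft.rfft(h1) * dt, np.fft.rfft(h2) * dt
        inner = lambda a, b: np.real(np.sum(a * np.conj(b)))
        return inner(H1, H2) / np.sqrt(inner(H1, H1) * inner(H2, H2))

    t = np.linspace(0.0, 1.0, 4096)
    envelope = np.exp(-((t - 0.5) ** 2) / 0.01)
    vacuum = np.sin(2 * np.pi * 60 * t) * envelope   # e.g., vacuum merger
    bubble = np.sin(2 * np.pi * 62 * t) * envelope   # e.g., embedded in a bubble
    print(f"match = {overlap(vacuum, bubble, t[1] - t[0]):.3f}")
    ```
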
  • Item
    Agglomerative clustering for community detection in dynamic graphs
    (Georgia Institute of Technology, 2016-05-10) Godbole, Pushkar J.
    Agglomerative clustering techniques work by recursively merging graph vertices into communities so as to maximize a clustering quality metric. The modularity metric, coined by Newman and Girvan, measures cluster quality based on the premise that a cluster contains collections of vertices more strongly connected internally than would occur from random chance. Various fast and efficient algorithms for community detection based on modularity maximization have been developed for static graphs. However, since many contemporary networks are not static but rather evolve over time, the static approaches are ill-suited to clustering dynamic graphs. Modularity optimization in changing graphs is a relatively new field that calls for efficient algorithms for the detection and maintenance of a community structure while minimizing the “size of change” and computational effort. The objective of this work was to develop an efficient dynamic agglomerative clustering algorithm that attempts to maximize modularity while minimizing the “size of change” in the transitioning community structure. First, we briefly discuss the previous memoryless dynamic reagglomeration approach with localized vertex freeing and illustrate its performance and limitations. Then we describe the new backtracking algorithm, followed by its performance results and observations. In experimental analysis of both typical and pathological cases, we evaluate and justify various backtracking and agglomeration strategies in the context of the graph structure and incoming stream topologies. Evaluation of the algorithm on social network datasets, including the Facebook (SNAP) and PGP Giant Component networks, shows significantly improved performance over its conventional static counterpart in terms of execution time, modularity, and size of change.
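
    For reference, the static baseline that such dynamic methods extend is greedy agglomerative modularity maximization; networkx ships an implementation, shown below on a small stand-in graph. This is the conventional static counterpart, not the thesis's backtracking algorithm.

    ```python
    # Static baseline: greedy agglomerative modularity maximization
    # (Clauset-Newman-Moore), i.e., the conventional counterpart that
    # dynamic algorithms are compared against.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities, modularity

    G = nx.karate_club_graph()  # small stand-in for a social-network snapshot
    communities = greedy_modularity_communities(G)

    print(f"found {len(communities)} communities")
    print(f"modularity Q = {modularity(G, communities):.3f}")
    ```
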
  • Item
    Implementation and analysis of a parallel vertex-centered finite element segmental refinement multigrid solver
    (Georgia Institute of Technology, 2016-04-28) Henneking, Stefan
    In a parallel vertex-centered finite element multigrid solver, segmental refinement can be used to avoid all inter-process communication on the fine grids. While domain decomposition methods generally require coupled subdomain processing for the numerical solution to a nonlinear elliptic boundary value problem, segmental refinement exploits the fact that subdomains are almost decoupled with respect to high-frequency error components. This makes it possible to perform multigrid with fully decoupled subdomains on the fine grids, which was proposed as a sequential low-storage algorithm by Brandt in the 1970s, and as a parallel algorithm by Brandt and Diskin in 1994. Adams published the first numerical results from a multilevel segmental refinement solver in 2014, confirming the asymptotic exactness of the scheme for a cell-centered finite volume implementation. We continue Brandt’s and Adams’ research by experimentally investigating the scheme’s accuracy with a vertex-centered finite element segmental refinement solver. We confirm that full multigrid accuracy can be preserved for a few segmental refinement levels, although we observe a different dependency on the segmental refinement parameter space. We show that various strategies for the grid transfers between the finest conventional multigrid level and the segmental refinement subdomains affect the solver accuracy. Scaling results are reported for a Cray XC30 with up to 4096 cores.
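
    As background for the scheme discussed above, the sketch below implements a conventional 1D two-grid correction cycle for -u'' = f with weighted-Jacobi smoothing. It illustrates only the smoothing/coarse-correction structure; segmental refinement, parallel decomposition, and the finite element setting are omitted.

    ```python
    # Minimal conventional two-grid cycle for -u'' = f on [0, 1] with
    # homogeneous Dirichlet boundaries; not the segmental refinement variant.
    import numpy as np

    def jacobi(u, f, h, sweeps=3, w=2/3):
        """Weighted-Jacobi smoothing for the 1D Poisson operator."""
        for _ in range(sweeps):
            u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        return u

    def two_grid_cycle(u, f, h):
        u = jacobi(u, f, h)                                          # pre-smooth
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2   # residual
        rc = np.zeros((len(u) + 1) // 2)                             # full-weighting restriction
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        m = len(rc) - 2                                              # exact coarse solve
        A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (2 * h) ** 2
        ec = np.zeros_like(rc)
        ec[1:-1] = np.linalg.solve(A, rc[1:-1])
        u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolongate
        return jacobi(u, f, h)                                       # post-smooth

    n = 129
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    f = np.pi**2 * np.sin(np.pi * x)        # exact solution: sin(pi x)
    u = np.zeros(n)
    for _ in range(5):                       # converges to discretization-level error
        u = two_grid_cycle(u, f, h)
    print(f"max error = {np.abs(u - np.sin(np.pi * x)).max():.2e}")
    ```
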
  • Item
    A framework for automated management of exploit testing environments
    (Georgia Institute of Technology, 2015-12-07) Flansburg, Kevin
    To demonstrate working exploits or vulnerabilities, people often share their findings as proof-of-concept (PoC) prototypes. Such practices are particularly useful for learning about real vulnerabilities and state-of-the-art exploitation techniques. Unfortunately, shared PoC exploits are seldom reproducible, in part because they are often not thoroughly tested, but largely because authors lack a formal way to specify the tested environment or its dependencies. Although exploit writers attempt to overcome such problems by describing their dependencies or testing environments in comments, this informal way of sharing PoC exploits makes it hard for exploit authors to achieve the original goal of demonstration. More seriously, these non- or hard-to-reproduce PoC exploits have limited potential to be utilized for other useful research purposes, such as penetration testing or benchmark suites for evaluating defense mechanisms. In this paper, we present XShop, a framework and infrastructure to describe environments and dependencies for exploits in a formal way, and to automatically resolve these constraints and construct an isolated environment for development, for testing, and for sharing with the community. We show how XShop's flexible design enables new possibilities for utilizing these reproducible exploits in five practical use cases: as a security benchmark suite, in pen-testing, for large-scale vulnerability analysis, as a shared development environment, and for regression testing. We design and implement such applications by extending the XShop framework and demonstrate its effectiveness with twelve real exploits against well-known bugs, including GHOST, Shellshock, and Heartbleed. We believe that the proposed practice not only brings immediate incentives to exploit authors but also has the potential to grow into a community-wide knowledge base.
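
    XShop's actual specification format is not reproduced here; as a hypothetical illustration of the underlying idea (declaring the tested environment formally so it can be rebuilt automatically), the sketch below renders a pinned, declarative spec into a container build file. All field names and values are invented for illustration.

    ```python
    # Hypothetical sketch of the core idea behind XShop: a formal, declarative
    # environment spec that a tool can turn into a reproducible, isolated
    # build. Field names and the Dockerfile rendering are not XShop's format.
    SPEC = {
        "name": "heartbleed-poc",
        "base_image": "ubuntu:12.04",            # OS the exploit was tested on
        "packages": ["openssl=1.0.1-4ubuntu3"],  # vulnerable dependency, pinned
        "files": ["exploit.py"],
        "entrypoint": "python exploit.py --target localhost:4433",
    }

    def render_dockerfile(spec: dict) -> str:
        """Turn the declarative spec into an isolated container build."""
        lines = [f"FROM {spec['base_image']}"]
        for pkg in spec["packages"]:
            lines.append(f"RUN apt-get update && apt-get install -y {pkg}")
        for name in spec["files"]:
            lines.append(f"COPY {name} /poc/{name}")
        lines.append(f'CMD {spec["entrypoint"]}')
        return "\n".join(lines)

    print(render_dockerfile(SPEC))
    ```
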
  • Item
    Unsupervised learning of disease subtypes from continuous time Hidden Markov Models of disease progression
    (Georgia Institute of Technology, 2015-08-21) Gupta, Amrita
    The detection of subtypes of complex diseases has important implications for diagnosis and treatment. Numerous prior studies have used data-driven approaches to identify clusters of similar patients, but it is not yet clear how to best specify what constitutes a clinically meaningful phenotype. This study explored disease subtyping on the basis of temporal development patterns. In particular, we attempted to differentiate infants with autism spectrum disorder into more fine-grained classes with distinctive patterns of early skill development. We modeled the progression of autism explicitly using a continuous-time hidden Markov model. Subsequently, we compared subjects on the basis of their trajectories through the model state space. Two approaches to subtyping were utilized, one based on time-series clustering with a custom distance function and one based on tensor factorization. A web application was also developed to facilitate the visual exploration of our results. Results suggested the presence of 3 developmental subgroups in the ASD outcome group. The two subtyping approaches are contrasted and possible future directions for research are discussed.
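
    As a minimal illustration of the first subtyping approach (time-series clustering with a custom distance over decoded state sequences), the sketch below clusters hypothetical trajectories hierarchically; the distance function and data are stand-ins, not the thesis's actual choices.

    ```python
    # Illustrative sketch: subtyping by clustering subjects' trajectories
    # through a model's state space with a custom distance. The distance
    # (mean state disagreement) and the data are stand-ins.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    # Hypothetical decoded state sequences: 6 subjects x 20 visits, 4 states.
    trajectories = rng.integers(0, 4, size=(6, 20))

    def traj_distance(a, b):
        return np.mean(a != b)   # fraction of visits spent in different states

    n = len(trajectories)
    D = np.array([[traj_distance(trajectories[i], trajectories[j])
                   for j in range(n)] for i in range(n)])

    Z = linkage(squareform(D, checks=False), method="average")
    labels = fcluster(Z, t=3, criterion="maxclust")   # ask for 3 subtypes
    print(labels)
    ```
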
  • Item
    Method and software for predicting emergency department disposition in pediatric asthma
    (Georgia Institute of Technology, 2015-04-21) Kumar, Vikas
    An important application of predictive data mining in clinical medicine is predicting the disposition of patients being seen in the emergency department (ED); such prediction could lead to increased efficiency of our healthcare system. A number of tools have emerged in recent years that use machine learning methods to predict whether patients will be admitted or discharged; however, such models are often limited in that they rely on specialized knowledge, are not optimal, use predictors that are unavailable early in the patient visit, and require memorization of clinical rules and scoring systems. The goal of this study is to develop an effective and practical clinical tool for identifying asthma patients who will be admitted to the hospital. In contrast to existing tools, the model of this study relies on routine knowledge collected early during the patient visit. While most tools specific to asthma are developed using only a few hundred patients, this study uses the records of more than 9,000 children seen across two major metropolitan emergency departments for asthma exacerbations. An unprecedented 70 variables are assessed for predictive strength and early availability; a novel sequence of methods, including lasso-regularized logistic regression and a modified "best subset" approach, is then used to select the final four-variable model. A web application is then developed that calculates an admission probability score from the patient parameters at the point of care. The methods and results of this study will be useful both for those aiming to develop similar tools and for ED providers caring for asthma patients.
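
    The first stage of the described variable-selection sequence, lasso-regularized logistic regression, can be sketched as follows on synthetic data; the feature count, penalty strength, and all values are illustrative, not those of the study.

    ```python
    # Illustrative sketch: L1-regularized (lasso) logistic regression for
    # screening admission predictors. Data are synthetic stand-ins.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, features = 500, 12                      # stand-in for 70 candidate variables
    X = rng.normal(size=(n, features))
    true_coef = np.zeros(features)
    true_coef[:4] = [1.5, -1.0, 0.8, 0.6]      # only 4 variables truly matter
    y = (X @ true_coef + rng.normal(size=n) > 0).astype(int)  # 1 = admitted

    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    lasso.fit(X, y)

    selected = np.flatnonzero(lasso.coef_[0])  # variables surviving the L1 penalty
    print("selected variable indices:", selected)
    print("admission probability:", lasso.predict_proba(X[:1])[0, 1])
    ```
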
  • Item
    Enabling collaborative behaviors among cubesats
    (Georgia Institute of Technology, 2011-07-08) Browne, Daniel C.
    Future spacecraft missions are trending towards the use of distributed systems or fractionated spacecraft. Initiatives such as DARPA's System F6 are encouraging the satellite community to explore the realm of collaborative spacecraft teams in order to achieve lower cost, lower risk, and greater data value over the conventional monoliths in LEO today. Extensive research has been and is being conducted indicating the advantages of distributed spacecraft systems in terms of both capability and cost. Enabling collaborative behaviors among teams or formations of pico-satellites requires technology development in several subsystem areas including attitude determination and control subsystems, orbit determination and maintenance capabilities, as well as a means to maintain accurate knowledge of team members' position and attitude. All of these technology developments desire improvements (more specifically, decreases) in mass and power requirements in order to fit on pico-satellite platforms such as the CubeSat. In this thesis a solution for the last technology development area aforementioned is presented. Accurate knowledge of each spacecraft's state in a formation, beyond improving collision avoidance, provides a means to best schedule sensor data gathering, thereby increasing power budget efficiency. Our solution is composed of multiple software and hardware components. First, finely-tuned flight system software for the maintaining of state knowledge through equations of motion propagation is developed. Additional software, including an extended Kalman filter implementation, and commercially available hardware components provide a means for on-board determination of both orbit and attitude. Lastly, an inter-satellite communication message structure and protocol enable the updating of position and attitude, as required, among team members. This messaging structure additionally provides a means for payload sensor and telemetry data sharing. In order to satisfy the needs of many different missions, the software has the flexibility to vary the limits of accuracy on the knowledge of team member position, velocity, and attitude. Such flexibility provides power savings for simpler applications while still enabling missions with the need of finer accuracy knowledge of the distributed team's state. Simulation results are presented indicating the accuracy and efficiency of formation structure knowledge through incorporation of the described solution. More importantly, results indicate the collaborative module's ability to maintain formation knowledge within bounds prescribed by a user. Simulation has included hardware-in-the-loop setups utilizing an S-band transceiver. Two "satellites" (computers setup with S-band transceivers and running the software components of the collaborative module) are provided GPS inputs comparable to the outputs provided from commercial hardware; this partial hardware-in-the-loop setup demonstrates the overall capabilities of the collaborative module. Details on each component of the module are provided. Although the module is designed with the 3U CubeSat framework as the initial demonstration platform, it is easily extendable onto other small satellite platforms. By using this collaborative module as a base, future work can build upon it with attitude control, orbit or formation control, and additional capabilities with the end goal of achieving autonomous clusters of small spacecraft.