Organizational Unit:
School of Computational Science and Engineering

Publication Search Results

Now showing 1 - 10 of 187
  • Item
    Parallel simulation of scale-free networks
    (Georgia Institute of Technology, 2017-08-01) Nguyen, Thuy Vy Thuy ; Fujimoto, Richard M. ; Vuduc, Richard ; Swenson, Brian ; Computational Science and Engineering
    It has been observed that many networks arising in practice have skewed node degree distributions. Scale-free networks are one well-known class of such networks. Achieving efficient parallel simulation of scale-free networks is challenging because large-degree nodes can create bottlenecks that limit performance. To help address this problem, we describe an approach called link partitioning, in which each network link is mapped to a logical process, in contrast to the conventional approach of mapping each node to a logical process.
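
    A minimal sketch of the idea in Python (the functions, LP count, and hash-based assignment are illustrative assumptions, not the paper's implementation): under the conventional node mapping, every link incident to a hub lands on the hub's logical process (LP), while link partitioning spreads those links across many LPs.
    ```python
    def node_partition(edges, num_lps):
        """Conventional mapping: a link lives on the LP that owns its source node."""
        return {e: hash(e[0]) % num_lps for e in edges}

    def link_partition(edges, num_lps):
        """Link partitioning: each link gets its own LP assignment."""
        return {e: hash(e) % num_lps for e in edges}

    # A scale-free-style hub: node 0 connects to 9,999 neighbors.
    edges = [(0, v) for v in range(1, 10_000)]
    print(len(set(node_partition(edges, 64).values())), "LPs used")  # 1: a bottleneck
    print(len(set(link_partition(edges, 64).values())), "LPs used")  # ~64: spread out
    ```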
  • Item
    AI-infused security: Robust defense by bridging theory and practice
    (Georgia Institute of Technology, 2019-09-20) Chen, Shang-Tse ; Chau, Duen Horng ; Balcan, Maria-Florina ; Lee, Wenke ; Song, Le ; Roundy, Kevin A. ; Cornelius, Cory ; Computational Science and Engineering
    While Artificial Intelligence (AI) has tremendous potential as a defense against real-world cybersecurity threats, understanding the capabilities and robustness of AI remains a fundamental challenge. This dissertation tackles problems essential to successful deployment of AI in security settings and comprises the following three interrelated research thrusts. (1) Adversarial Attack and Defense of Deep Neural Networks: We discover vulnerabilities of deep neural networks in real-world settings and develop countermeasures to mitigate the threat. We develop ShapeShifter, the first targeted physical adversarial attack that fools state-of-the-art object detectors. For defenses, we develop SHIELD, an efficient defense leveraging stochastic image compression, and UnMask, a knowledge-based adversarial detection and defense framework. (2) Theoretically Principled Defense via Game Theory and ML: We develop new theories that guide the allocation of defense resources to guard against unexpected attacks and catastrophic events, using a novel online decision-making framework that compels players to employ "diversified" mixed strategies. Furthermore, by leveraging the deep connection between game theory and boosting, we develop a communication-efficient distributed boosting algorithm with strong theoretical guarantees in the agnostic learning setting. (3) Using AI to Protect Enterprise and Society: We show how AI can be used in a real enterprise environment with a novel framework called Virtual Product that predicts potential enterprise cyber threats. Beyond cybersecurity, we also develop the Firebird framework to help municipal fire departments prioritize fire inspections. Our work has made multiple important contributions to both theory and practice: our distributed boosting algorithm solved an open problem of distributed learning; ShapeShifter motivated a new DARPA program (GARD); Virtual Product led to two patents; and Firebird was highlighted by the National Fire Protection Association as a best practice for using data to inform fire inspections.
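
    One way to picture the stochastic-image-compression idea behind SHIELD is the sketch below. The actual system applies randomized JPEG quality levels at a finer granularity, so treat this whole-image version, including the quality settings, as an illustrative assumption rather than the published method.
    ```python
    import io, random
    from PIL import Image

    def stochastic_jpeg(img, qualities=(20, 40, 60, 80)):
        """Re-encode the image at a randomly chosen JPEG quality, disrupting
        pixel-level adversarial perturbations before the classifier sees it."""
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=random.choice(qualities))
        buf.seek(0)
        return Image.open(buf).convert("RGB")

    # Usage (hypothetical classifier): classify(stochastic_jpeg(Image.open("input.png")))
    ```
    The randomness is the point: because the attacker cannot predict which quality level will be applied, perturbations tuned to one compression setting tend not to survive another.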
  • Item
    Optimizing resource allocation in computational sustainability: Models, algorithms and tools
    (Georgia Institute of Technology, 2021-01-21) Gupta, Amrita ; Dilkina, Bistra ; Chau, Duen Horng ; Catalyurek, Umit ; Fuller, Angela ; Morris, Dan ; Computational Science and Engineering
    The 17 Sustainable Development Goals laid out by the United Nations include numerous targets as well as indicators of progress towards sustainable development. Decision-makers tasked with meeting these targets must frequently propose upfront plans or policies made up of many discrete actions, such as choosing a subset of locations where management actions must be taken to maximize the utility of the actions. These types of resource allocation problems involve combinatorial choices and trade-offs between multiple outcomes of interest, all in the context of complex, dynamic systems and environments. The computational requirements for solving these problems bring together elements of discrete optimization, large-scale spatiotemporal modeling and prediction, and stochastic models. This dissertation leverages network models as a flexible family of computational tools for building prediction and optimization models in three sustainability-related domain areas: 1) minimizing stochastic network cascades in the context of invasive species management; 2) maximizing deterministic demand-weighted pairwise reachability in the context of flood-resilient road infrastructure planning; and 3) maximizing vertex-weighted and edge-weighted connectivity in wildlife reserve design. We use spatially explicit network models to capture the underlying system dynamics of interest in each setting, and contribute discrete optimization problem formulations for maximizing sustainability objectives with finite resources. While there is a long history of research on optimizing flows, cascades, and connectivity in networks, these decision problems in the emerging field of computational sustainability involve novel objectives, new combinatorial structure, or new types of intervention actions. In particular, we formulate a new type of discrete intervention in stochastic network cascades modeled with multivariate Hawkes processes. In conjunction, we derive an exact optimization approach for the proposed intervention based on closed-form expressions of the objective functions, which is applicable in a broad swath of domains beyond invasive species, such as social networks and disease contagion. We also formulate a new variant of Steiner forest network design, called the budget-constrained prize-collecting Steiner forest, and prove that this optimization problem possesses a specific combinatorial structure, restricted supermodularity, that allows us to design highly effective algorithms. In each of the domains, the optimization problem is defined over aspects that need to be predicted; hence, we also demonstrate improved machine learning approaches for each.
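
    The exact formulations are domain-specific, but the flavor of budget-constrained combinatorial selection can be sketched with a generic cost-benefit greedy heuristic. This is an illustrative stand-in, not the dissertation's algorithm; `value` is assumed to be a monotone set function.
    ```python
    def greedy_budgeted(candidates, value, cost, budget):
        """Repeatedly pick the action with the best marginal value per unit
        cost that still fits in the remaining budget."""
        chosen, spent = set(), 0.0
        while True:
            best, best_ratio = None, 0.0
            for c in candidates - chosen:
                if spent + cost[c] > budget:
                    continue
                gain = value(chosen | {c}) - value(chosen)
                if gain / cost[c] > best_ratio:
                    best, best_ratio = c, gain / cost[c]
            if best is None:
                return chosen
            chosen.add(best)
            spent += cost[best]

    items = {"a", "b", "c"}
    cost = {"a": 2.0, "b": 1.0, "c": 3.0}
    value = lambda s: len(s) * 10 - (2 if "c" in s else 0)  # toy monotone objective
    print(greedy_budgeted(items, value, cost, budget=3.0))  # {'a', 'b'}
    ```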
  • Item
    A Roofline Model of Energy
    (Georgia Institute of Technology, 2012) Choi, Jee Whan ; Vuduc, Richard ; Georgia Institute of Technology. College of Computing ; Georgia Institute of Technology. School of Computational Science and Engineering ; Georgia Institute of Technology. School of Electrical and Computer Engineering
    We describe an energy-based analogue of the time-based roofline model of Williams, Waterman, and Patterson (Comm. ACM, 2009). Our goal is to explain—in simple, analytic terms accessible to algorithm designers and performance tuners—how the time, energy, and power to execute an algorithm relate. The model considers an algorithm in terms of operations, concurrency, and memory traffic; and a machine in terms of the time and energy costs per operation or per word of communication. We confirm the basic form of the model experimentally. From this model, we suggest under what conditions we ought to expect an algorithmic time-energy trade-off, and show how algorithm properties may help inform power management.
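
    As a worked illustration of the model's basic form (the constants below are made-up illustrative values, not measurements from the paper), total energy decomposes into a per-operation term, a per-word term, and a constant-power term, yielding an energy analogue of the time roofline's ridge point.
    ```python
    # W: operations performed; Q: words moved to/from memory.
    EPS_FLOP = 50e-12   # assumed energy per operation (joules) -- illustrative
    EPS_WORD = 2e-9     # assumed energy per word of traffic (joules) -- illustrative

    def energy(W, Q, T=0.0, pi0=0.0):
        """Total energy: per-op term + per-word term + constant power over time T."""
        return W * EPS_FLOP + Q * EPS_WORD + pi0 * T

    # Energy analogue of the roofline ridge point: the arithmetic intensity
    # (ops per word) at which compute energy and memory energy are equal.
    balance = EPS_WORD / EPS_FLOP
    print(f"energy balance point: {balance:.0f} ops per word")  # 40
    print(f"E for W=1e9, Q=1e8: {energy(1e9, 1e8):.3f} J")      # 0.25 J
    ```
    With these assumed constants, an algorithm needs about 40 operations per word of traffic before its energy budget is dominated by computation rather than data movement.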
  • Item
    ExactMP: An Efficient Parallel Exact Solver for Phylogenetic Tree Reconstruction Using Maximum Parsimony
    (Georgia Institute of Technology, 2006-02-26) Bader, David A. ; Chandu, Vaddadi P. ; Yan, Mi
    Constructing phylogenetic trees in the study of the evolutionary history of a group of organisms is an extremely challenging problem in computational biology. The problem becomes intractable with a growing number of organisms. In this paper, we design and implement an efficient parallel solver (ExactMP) using a parsimony-based approach for solving this problem. We create a testbed consisting of eighteen datasets of varying size (up to 27 taxa) and difficulty level (easy to hard), containing real (Eukaryotes, Metazoan, and rbcL) and randomly generated synthetic genome sequences. We evaluate our ExactMP solver against this testbed and achieve a parallel speedup of up to 7.26 with 8 processors using an 8-way symmetric multiprocessor. The main contributions of this work are: (1) an efficient parallel solver, ExactMP, for the problem of phylogenetic tree reconstruction using maximum parsimony, (2) a new upper-bounding methodology for this problem using heuristic and randomization techniques, and (3) a highly optimized branch-and-bound algorithm for this problem.
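
    At the core of any exact maximum-parsimony solver is the small-parsimony score used to evaluate and bound candidate trees. A minimal sketch of Fitch's classic algorithm for one character on a fixed binary tree (a textbook routine, not ExactMP's optimized implementation) follows.
    ```python
    def fitch(tree, states):
        """tree: nested 2-tuples of leaf names; states: leaf -> set of states.
        Returns (candidate state set at root, parsimony score)."""
        if isinstance(tree, str):                   # leaf
            return states[tree], 0
        (ls, lc), (rs, rc) = fitch(tree[0], states), fitch(tree[1], states)
        inter = ls & rs
        if inter:
            return inter, lc + rc                   # no mutation needed here
        return ls | rs, lc + rc + 1                 # disjoint sets => one mutation

    tree = (("A", "B"), ("C", "D"))
    states = {"A": {"G"}, "B": {"T"}, "C": {"G"}, "D": {"G"}}
    print(fitch(tree, states))                      # ({'G'}, 1)
    ```
    A branch-and-bound solver runs a score like this (summed over characters) on partial trees, pruning any branch whose lower bound already exceeds the best complete tree found so far.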
  • Item
    Long read mapping at scale: Algorithms and applications
    (Georgia Institute of Technology, 2019-04-01) Jain, Chirag ; Aluru, Srinivas ; Konstantinidis, Konstantinos T. ; Catalyurek, Umit ; Phillippy, Adam M. ; Jordan, King ; Computational Science and Engineering
    The capability to sequence DNA has been around for four decades now, providing ample time to explore its myriad applications and the concomitant development of bioinformatics methods to support them. Nevertheless, disruptive technological changes in sequencing often upend prevailing protocols and characteristics of what can be sequenced, necessitating a new direction of development for bioinformatics algorithms and software. We are now at the cusp of the next revolution in sequencing due to the development of long and ultra-long read sequencing technologies by Pacific Biosciences (PacBio) and Oxford Nanopore Technologies (ONT). Long reads are attractive because they narrow the scale gap between the sizes of genomes and the sizes of sequenced reads, with the promise of avoiding the assembly errors and repeat-resolution challenges that plague short-read assemblers. However, long reads themselves sport error rates in the vicinity of 10-15%, compared to the high accuracy of short reads (< 1%). There is an urgent need to develop bioinformatics methods to fully realize the potential of long-read sequencers. Mapping and alignment of reads to a reference is typically the first step in genomics applications. Though long-read technologies are still evolving, research efforts in bioinformatics have already produced many alignment-based and alignment-free read mapping algorithms. Yet, much work lies ahead in designing provably efficient algorithms, formally characterizing the quality of results, and developing methods that scale to larger input datasets and growing reference databases. While the current model of representing the reference as a collection of linear genomes is still favored due to its simplicity, mapping to graph-based representations, where the graph encodes genetic variations in a human population, also becomes imperative. This dissertation work is focused on provably good and scalable algorithms for mapping long reads to both linear and graph references. We make the following contributions: 1. We develop fast and approximate algorithms for end-to-end and split mapping of long reads to reference genomes. Our work is the first to demonstrate scaling to the entire NCBI database, the collection of all curated and non-redundant genomes. 2. We generalize the mapping algorithm to accelerate the related problem of computing pairwise whole-genome comparisons. We shed light on two fundamental biological questions concerning genomic duplications and delineating microbial species boundaries. 3. We provide new complexity results for aligning reads to graphs under Hamming and edit distance models to classify the problem variants for which the existence of a polynomial-time solution is unlikely. In contrast to prior results that assume alphabets as a function of the problem size, we prove that the problem variants that allow edits in the graph remain NP-complete even for constant-sized alphabets, thereby resolving the computational complexity of the problem for DNA and protein sequence-to-graph alignments. 4. Finally, we propose a new parallel algorithm to optimally align long reads to large variation graphs derived from human genomes. It demonstrates near-linear scaling on multi-core CPUs, resulting in a run-time reduction from multiple days to three hours when aligning a long read set to an MHC human variation graph.
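
    Approximate mapping algorithms of the kind described above estimate sequence similarity from k-mer sketches rather than full alignments. A stripped-down sketch of a bottom-s MinHash Jaccard estimate follows; k and s are illustrative choices, and Python's built-in hash stands in for a proper hash function.
    ```python
    import random

    def kmers(seq, k=16):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def minhash(kmer_set, s=100):
        """Bottom-s sketch: keep the s smallest hash values."""
        return sorted(hash(km) for km in kmer_set)[:s]

    def jaccard_estimate(sk_a, sk_b, s=100):
        """Estimate Jaccard similarity from the bottom-s of the merged sketches."""
        a, b = set(sk_a), set(sk_b)
        merged = sorted(a | b)[:s]
        return sum(1 for h in merged if h in a and h in b) / len(merged)

    random.seed(0)
    ref = "".join(random.choice("ACGT") for _ in range(2000))
    read = ref[300:1300]   # a 1 kb "read" drawn from the reference
    print(jaccard_estimate(minhash(kmers(ref)), minhash(kmers(read))))  # ~0.5
    ```
    Because the sketches are tiny and fixed-size, similarity can be screened against enormous reference collections before any expensive base-level alignment is attempted.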
  • Item
    UnMask: Adversarial Detection and Defense in Deep Learning Through Building-Block Knowledge Extraction
    (Georgia Institute of Technology, 2019) Freitas, Scott ; Chen, Shang-Tse ; Chau, Duen Horng ; Georgia Institute of Technology. College of Computing ; Georgia Institute of Technology. School of Computational Science and Engineering
    Deep learning models are being integrated into a wide range of high-impact, security-critical systems, from self-driving cars to biomedical diagnosis. However, recent research has demonstrated that many of these deep learning architectures are highly vulnerable to adversarial attacks, highlighting the vital need for defensive techniques to detect and mitigate these attacks before they occur. To combat these adversarial attacks, we developed UnMask, a knowledge-based adversarial detection and defense framework. The core idea behind UnMask is to protect these models by verifying that an image’s predicted class (“bird”) contains the expected building blocks (e.g., beak, wings, eyes). For example, if an image is classified as “bird”, but the extracted building blocks are wheel, seat, and frame, the model may be under attack. UnMask detects such attacks and defends the model by rectifying the misclassification, re-classifying the image based on its extracted building blocks. Our extensive evaluation shows that UnMask (1) detects up to 92.9% of attacks, with a false positive rate of 9.67%, and (2) defends the model by correctly classifying up to 92.24% of adversarial images produced by the current strongest attack, Projected Gradient Descent, in the gray-box setting. Our proposed method is architecture-agnostic and fast. To enable reproducibility of our research, we have anonymously open-sourced our code and large newly curated dataset (~5GB) on GitHub (https://github.com/unmaskd/UnMask).
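
    The core check can be sketched in a few lines; the expected-parts table, threshold, and function names below are illustrative assumptions, not UnMask's actual interface.
    ```python
    # Hypothetical expected building blocks per class (illustrative only).
    EXPECTED = {"bird": {"beak", "wings", "eyes", "tail"},
                "bicycle": {"wheel", "seat", "frame", "handlebar"}}

    def unmask_check(predicted_class, extracted_parts, threshold=0.5):
        """Flag a likely attack when the extracted parts disagree with the
        predicted class; otherwise accept. On disagreement, re-classify to
        the class whose expected parts best match the extraction."""
        expected = EXPECTED[predicted_class]
        overlap = len(expected & extracted_parts) / len(expected | extracted_parts)
        if overlap >= threshold:
            return predicted_class, False      # prediction looks consistent
        best = max(EXPECTED, key=lambda c: len(EXPECTED[c] & extracted_parts))
        return best, True                      # attack suspected; rectified

    print(unmask_check("bird", {"wheel", "seat", "frame"}))  # ('bicycle', True)
    ```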
  • Item
    Parallel algorithms for direct blood flow simulations
    (Georgia Institute of Technology, 2012-02-21) Rahimian, Abtin ; Biros, George ; Alben, Silas ; Fernandez-Nieves, Alberto ; Hu, David ; Vuduc, Richard ; Computational Science and Engineering
    The fluid mechanics of blood can be well approximated by a mixture model of a Newtonian fluid and deformable particles representing the red blood cells. Experimental and theoretical evidence suggests that the deformation and rheology of red blood cells are similar to those of phospholipid vesicles. Vesicles and red blood cells are both area-preserving closed membranes that resist bending. Beyond red blood cells, vesicles can be used to investigate the behavior of cell membranes, intracellular organelles, and viral particles. Given the importance of vesicle flows, in this thesis we focus on efficient numerical methods for such problems: we present computationally scalable algorithms for the simulation of dilute suspensions of deformable vesicles in two and three dimensions. Our method is based on the boundary integral formulation of Stokes flow. We present new schemes for simulating the three-dimensional hydrodynamic interactions of large numbers of vesicles with viscosity contrast. The algorithms incorporate a stable time-stepping scheme, high-order spatiotemporal discretizations, spectral preconditioners, and a reparametrization scheme capable of resolving extreme mesh distortions in dynamic simulations. The associated linear systems are solved in optimal time using spectral preconditioners. The highlights of our numerical scheme are that (i) the physics of vesicles is faithfully represented by using nonlinear solid mechanics to capture the deformations of each cell, (ii) the long-range, N-body, hydrodynamic interactions between vesicles are accurately resolved using the fast multipole method (FMM), and (iii) our time-stepping scheme is unconditionally stable for the flow of single and multiple vesicles with viscosity contrast, and its computational cost per simulation unit time is comparable to or less than that of an explicit scheme. We report scaling of our algorithms to simulations with millions of vesicles on thousands of computational cores.
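
    For a feel of the kernel at the heart of such boundary integral formulations, here is a direct evaluation of free-space Stokeslet interactions. This is a schematic stand-in with a naive point discretization; the actual solver uses high-order quadratures and replaces the O(N²) all-pairs sum with an FMM evaluation.
    ```python
    import numpy as np

    def stokeslet_velocity(x, f, mu=1.0):
        """Velocities induced at points x by point forces f via the
        free-space Stokeslet: u_i = sum_j G(x_i - x_j) f_j."""
        u = np.zeros_like(x)
        for i in range(len(x)):
            for j in range(len(x)):
                if i == j:
                    continue
                r = x[i] - x[j]
                rn = np.linalg.norm(r)
                u[i] += (f[j] / rn + r * np.dot(r, f[j]) / rn**3) / (8 * np.pi * mu)
        return u

    x = np.random.rand(50, 3)            # membrane quadrature points
    f = np.random.randn(50, 3) * 1e-3    # membrane tractions
    u = stokeslet_velocity(x, f)         # fluid velocity at each point
    ```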
  • Item
    Parallel Shortest Path Algorithms for Solving Large-Scale Instances
    (Georgia Institute of Technology, 2006-08-30) Madduri, Kamesh ; Bader, David A. ; Berry, Jonathan W. ; Crobak, Joseph R.
    We present an experimental study of parallel algorithms for solving the single-source shortest path problem with non-negative edge weights (NSSP) on large-scale graphs. We implement Meyer and Sanders' Δ-stepping algorithm and report performance results on the Cray MTA-2, a multithreaded parallel architecture. The MTA-2 is a high-end shared-memory system offering two unique features that aid the efficient implementation of irregular parallel graph algorithms: the ability to exploit fine-grained parallelism, and low-overhead synchronization primitives. Our implementation exhibits remarkable parallel speedup when compared with a competitive sequential algorithm for low-diameter sparse graphs. For instance, Δ-stepping on a directed scale-free graph of 100 million vertices and 1 billion edges takes less than ten seconds on 40 processors of the MTA-2, with a relative speedup of close to 30. To our knowledge, these are the first performance results for the parallel NSSP problem on realistic graph instances on the order of billions of vertices and edges.
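
    A compact sequential rendering of the Δ-stepping bucket structure is below. The paper's contribution is the parallel MTA-2 implementation, so this sketch only shows where the parallelism lives: all light-edge relaxations within a bucket, and then all heavy-edge relaxations, can proceed concurrently. The graph encoding is an assumption for the example.
    ```python
    import math

    def delta_stepping(graph, source, delta):
        """graph: {u: [(v, w), ...]}; every vertex appears as a key.
        Non-negative weights; light edges have w <= delta."""
        dist = {u: math.inf for u in graph}
        buckets = {}                              # bucket index -> vertex set

        def relax(v, d):
            if d < dist[v]:
                if dist[v] < math.inf:            # move v out of its old bucket
                    buckets.get(int(dist[v] // delta), set()).discard(v)
                dist[v] = d
                buckets.setdefault(int(d // delta), set()).add(v)

        relax(source, 0.0)
        while any(buckets.values()):
            i = min(b for b, s in buckets.items() if s)
            settled = set()
            while buckets.get(i):                 # light phase may refill bucket i
                frontier = buckets.pop(i)
                settled |= frontier
                for u in frontier:                # parallel-for in the MTA-2 version
                    for v, w in graph[u]:
                        if w <= delta:
                            relax(v, dist[u] + w)
            for u in settled:                     # heavy edges relaxed once per vertex
                for v, w in graph[u]:
                    if w > delta:
                        relax(v, dist[u] + w)
        return dist

    g = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 2.0)], "b": []}
    print(delta_stepping(g, "s", delta=2.0))      # {'s': 0.0, 'a': 1.0, 'b': 3.0}
    ```
    The parameter Δ trades work for parallelism: Δ equal to the minimum edge weight degenerates to Dijkstra's algorithm, while very large Δ approaches Bellman-Ford.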
  • Item
    Calculation, utilization, and inference of spatial statistics in practical spatio-temporal data
    (Georgia Institute of Technology, 2017-08-02) Cecen, Ahmet ; Kalidindi, Surya R. ; Song, Le ; Garmestani, Hamid ; Chau, Duen Horng ; Kang, Sung H. ; Computational Science and Engineering
    The direct influence of spatial and structural arrangement at various length scales on the performance characteristics of materials is a core premise of materials science. Spatial correlations in the form of n-point statistics have been shown to be very effective at robustly describing the structural features of a plethora of materials systems, with a high number of cases where the obtained features were successfully used to establish highly accurate and precise relationships to performance measures and manufacturing parameters. This work addresses issues in the calculation, representation, inference, and utilization of spatial statistics under practical considerations for the materials researcher. Modifications are presented to the theory and algorithms of the existing convolution-based computation framework in order to accommodate deformed, irregular, rotated, missing, or degenerate data with complex or non-probabilistic state definitions. Memory-efficient, personal-computer-oriented implementations are discussed for the extended framework. A universal microstructure generation framework with the ability to efficiently address a vast variety of geometric or statistical constraints, including those imposed by spatial statistics, is assembled while maintaining scalability and compatibility with structure generators in the literature.
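
    The convolution-based computation referred to above is, in its simplest periodic form, an FFT autocorrelation. A minimal sketch of 2-point statistics for a two-phase microstructure follows; the array size and volume fraction are illustrative, and periodic boundaries are assumed.
    ```python
    import numpy as np

    def two_point_stats(micro):
        """micro: binary array, 1 where the phase of interest is present.
        Returns, for every periodic shift r, the probability that two
        points separated by r both land in that phase."""
        F = np.fft.fftn(micro)
        corr = np.fft.ifftn(F * np.conj(F)).real / micro.size
        return np.fft.fftshift(corr)           # center the zero-shift bin

    micro = (np.random.rand(64, 64) < 0.3).astype(float)
    stats = two_point_stats(micro)
    # The zero-separation statistic equals the phase's volume fraction:
    print(stats[32, 32], micro.mean())
    ```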