Organizational Unit:
School of Computational Science and Engineering


Publication Search Results

  • Item
    Human-centered AI through scalable visual data analytics
    (Georgia Institute of Technology, 2019-11-01) Kahng, Minsuk Brian
    While artificial intelligence (AI) has led to major breakthroughs in many domains, understanding machine learning models remains a fundamental challenge. How can we make AI more accessible and interpretable, or more broadly, human-centered, so that people can easily understand and effectively use these complex models? My dissertation addresses these fundamental and practical challenges in AI through a human-centered approach, by creating novel data visualization tools that are scalable, interactive, and easy to learn and use. With such tools, users can better understand models by visually exploring how large input datasets affect the models and their results. Specifically, my dissertation focuses on three interrelated parts: (1) Unified scalable interpretation: developing scalable visual analytics tools that help engineers interpret industry-scale deep learning models at both the instance and subset level (e.g., ActiVis, deployed by Facebook); (2) Data-driven model auditing: designing visual data exploration tools that support the discovery of insights through exploration of data groups across different analytics stages, such as model comparison (e.g., MLCube) and fairness auditing (e.g., FairVis); and (3) Learning complex models by experimentation: building interactive tools that broaden people's access to learning complex deep learning models (e.g., GAN Lab) and to browsing raw datasets (e.g., ETable). My research has made a significant impact on society and industry. The ActiVis system for interpreting deep learning models has been deployed on Facebook's machine learning platform. The GAN Lab tool for learning GANs has been open-sourced in collaboration with Google, and its demo has been used by more than 70,000 people from over 160 countries.
  • Item
    AI-infused security: Robust defense by bridging theory and practice
    (Georgia Institute of Technology, 2019-09-20) Chen, Shang-Tse
    While Artificial Intelligence (AI) has tremendous potential as a defense against real-world cybersecurity threats, understanding the capabilities and robustness of AI remains a fundamental challenge. This dissertation tackles problems essential to the successful deployment of AI in security settings and comprises the following three interrelated research thrusts. (1) Adversarial Attack and Defense of Deep Neural Networks: We discover vulnerabilities of deep neural networks in real-world settings and develop countermeasures to mitigate the threats. We develop ShapeShifter, the first targeted physical adversarial attack that fools state-of-the-art object detectors. For defenses, we develop SHIELD, an efficient defense leveraging stochastic image compression, and UnMask, a knowledge-based adversarial detection and defense framework. (2) Theoretically Principled Defense via Game Theory and ML: We develop new theories that guide defense resource allocation to guard against unexpected attacks and catastrophic events, using a novel online decision-making framework that compels players to employ "diversified" mixed strategies. Furthermore, by leveraging the deep connection between game theory and boosting, we develop a communication-efficient distributed boosting algorithm with strong theoretical guarantees in the agnostic learning setting. (3) Using AI to Protect Enterprise and Society: We show how AI can be used in a real enterprise environment with a novel framework called Virtual Product that predicts potential enterprise cyber threats. Beyond cybersecurity, we also develop the Firebird framework to help municipal fire departments prioritize fire inspections. Our work has made multiple important contributions to both theory and practice: our distributed boosting algorithm solved an open problem in distributed learning; ShapeShifter motivated a new DARPA program (GARD); Virtual Product led to two patents; and Firebird was highlighted by the National Fire Protection Association as a best practice for using data to inform fire inspections.
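
The second thrust above rests on the classical connection between two-player zero-sum games and boosting. As a hedged illustration only (not the dissertation's algorithm), the sketch below shows the multiplicative-weights (Hedge) update that underlies that connection: a learner maintains a distribution over actions and reweights it multiplicatively by observed losses. The loss matrix and learning rate are illustrative placeholders.

```python
import numpy as np

def hedge(loss_rounds, eta=0.5):
    """Maintain a distribution over n actions, reweighting multiplicatively by observed losses."""
    n = loss_rounds.shape[1]
    weights = np.ones(n)
    total_loss = 0.0
    for losses in loss_rounds:            # per-round losses in [0, 1] for each action
        p = weights / weights.sum()       # current mixed strategy
        total_loss += p @ losses          # expected loss of the mixed strategy
        weights *= np.exp(-eta * losses)  # penalize actions that performed poorly
    return total_loss, weights / weights.sum()

# Toy run: 3 actions, 100 random loss rounds
avg_loss, final_strategy = hedge(np.random.default_rng(0).random((100, 3)))
```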
  • Item
    Energy efficient parallel and distributed simulation
    (Georgia Institute of Technology, 2019-07-26) Biswas, Aradhya
    New challenges and opportunities emerge as computing interacts with our surroundings in unprecedented ways. One of these challenges is the energy consumed by computations and communications. In large cloud-based computing systems, it is a major concern because it accounts for the largest share of the environmental and operational costs of data centers. In mobile systems, it directly impacts battery life. This work focuses on understanding and reducing the power and energy consumption of the parallel and distributed execution of discrete event simulations, an area not extensively studied in the past. We first empirically characterize the energy consumption of widely used synchronization algorithms. A model and techniques are then presented and exercised to create an energy profile of a distributed simulation system. These demonstrate that distributed execution and synchronization can incur a significant energy and power overhead. To study and optimize the energy required for distributed execution, a property termed zero-energy synchronization is proposed. A zero-energy synchronization algorithm based on an oracle is presented, and a practical implementation is discussed. A more generic synchronization algorithm termed Low Energy YAWNS (LEY) is also proposed. LEY represents the first attempt to design a synchronization algorithm for energy efficiency and, in principle, can achieve zero-energy synchronization for a large class of distributed simulation applications. To exploit the energy efficiency of specialized computing hardware platforms, recurrence relations for simulating G/G/1 queueing networks, directly implementable using library primitives, are proposed. In addition to the optimizations and scalability they offer, library primitives ease development and open up avenues for adapting the simulation to custom hardware. Composing parallel prefix scans further improves the energy efficiency of the proposed recurrences and of similar sequences of parallel prefix scans.
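
To make the queueing recurrences concrete, the sketch below shows the standard departure-time recurrence for a single G/G/1 queue, D[i] = max(D[i-1], A[i]) + S[i], which is a (max, +) linear recurrence and can therefore be evaluated with parallel prefix scans. This is a minimal illustrative example, not the dissertation's formulation for G/G/1 networks; variable names and the sequential evaluation are assumptions.

```python
def gg1_departures(arrivals, services):
    """Departure times of a FIFO G/G/1 queue: D[i] = max(D[i-1], A[i]) + S[i]."""
    departures = []
    prev = float("-inf")
    for a, s in zip(arrivals, services):
        prev = max(prev, a) + s   # a job starts once both it and the server are available
        departures.append(prev)
    return departures

# Three jobs arriving at t = 0, 2, 3 with service times 4, 1, 2
print(gg1_departures([0, 2, 3], [4, 1, 2]))  # -> [4, 5, 7]
```

Because the recurrence is linear in the (max, +) semiring, the same result can be obtained by composing parallel prefix scans, which is what makes an implementation from library primitives possible.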
  • Item
    Health data mining using tensor factorization: Methods and applications
    (Georgia Institute of Technology, 2019-05-16) Perros, Ioakeim
    The increasing volume and availability of healthcare and biomedical data are opening up new opportunities for the use of computational methods to improve health. However, the data are diverse, multidimensional, and sparse, posing challenges to the extraction of clinically meaningful relations and interactions. For example, the electronic health records (EHRs) of patients contain time-stamped occurrences of diverse features (e.g., diagnoses, medications, procedures) as well as information about relationships among different types of features (e.g., identifying the subset of medications prescribed to treat a certain diagnosis). Such EHR data can be utilized to identify patient cohorts sharing common conditions without expert supervision, a task known as unsupervised phenotyping. Tensors, which are generalizations of matrices to higher orders, can naturally express the multidimensional data relationships inherent in the EHR. Tensor factorization encompasses a set of tools which can capture the latent correlation structure among diverse feature sets. For example, in the context of phenotyping, tensor factorization can be utilized to identify clinically meaningful patient groups, along with succinct feature profiles distinguishing one group of patients from another. In this dissertation, we show how tensor factorization can be leveraged to tackle several important problems in healthcare and biomedicine. We also identify multiple significant methodological challenges in fully harnessing the capacity of tensor factorization for the problems at hand and develop algorithms to tackle them. In particular, we focus on the following problems:
    - Drug-perturbed, tissue-specific gene expression prediction, where we demonstrate how tensor factorization can be used to model the interactions between drugs, genes, and tissues in an efficient manner.
    - Unsupervised phenotyping through EHRs, in the context of which we advance existing tensor factorization methods so that: a) they are fast and scale to large cohorts of hundreds of thousands of patients; and b) they yield interpretable output that is easy to communicate to a clinical expert.
    - Automating understanding of physician desktop work, where we demonstrate how tensor factorization can be used to substantially compress EHR audit logs, offering an intuitive categorization of user actions that can be used for workflow analysis.
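
As a minimal, hedged sketch of the core machinery (not the dissertation's scalable or interpretability-aware algorithms), the code below computes a rank-R CP (PARAFAC) factorization of a dense 3-way tensor with alternating least squares. For a (patients x diagnoses x medications)-style tensor, each column of the factor matrices can be read as a candidate phenotype. Shapes, rank, and iteration count are illustrative assumptions.

```python
import numpy as np

def kr(U, V):
    """Column-wise Khatri-Rao product, row order matching the unfoldings below."""
    return np.einsum('ir,jr->ijr', U, V).reshape(U.shape[0] * V.shape[0], -1)

def cp_als(X, rank, n_iter=50, seed=0):
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
    X0 = X.reshape(I, J * K)                        # mode-0 unfolding (C order)
    X1 = np.moveaxis(X, 1, 0).reshape(J, I * K)     # mode-1 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, I * J)     # mode-2 unfolding
    for _ in range(n_iter):
        A = X0 @ kr(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X1 @ kr(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X2 @ kr(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Toy example: random 30 x 20 x 10 tensor, rank 5
A, B, C = cp_als(np.random.default_rng(1).random((30, 20, 10)), rank=5)
```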
  • Item
    Long read mapping at scale: Algorithms and applications
    (Georgia Institute of Technology, 2019-04-01) Jain, Chirag
    The capability to sequence DNA has been around for four decades now, providing ample time to explore its myriad applications and the concomitant development of bioinformatics methods to support them. Nevertheless, disruptive technological changes in sequencing often upend prevailing protocols and the characteristics of what can be sequenced, necessitating a new direction of development for bioinformatics algorithms and software. We are now at the cusp of the next revolution in sequencing due to the development of long and ultra-long read sequencing technologies by Pacific Biosciences (PacBio) and Oxford Nanopore Technologies (ONT). Long reads are attractive because they narrow the scale gap between the sizes of genomes and the sizes of sequenced reads, with the promise of avoiding the assembly errors and repeat resolution challenges that plague short read assemblers. However, long reads themselves have error rates in the vicinity of 10-15%, compared to the high accuracy of short reads (< 1%). There is an urgent need to develop bioinformatics methods to fully realize the potential of long-read sequencers. Mapping and alignment of reads to a reference is typically the first step in genomics applications. Though long read technologies are still evolving, research efforts in bioinformatics have already produced many alignment-based and alignment-free read mapping algorithms. Yet, much work lies ahead in designing provably efficient algorithms, formally characterizing the quality of results, and developing methods that scale to larger input datasets and growing reference databases. While the current model of representing the reference as a collection of linear genomes is still favored due to its simplicity, mapping to graph-based representations, where the graph encodes genetic variation in a human population, is also becoming imperative. This dissertation work is focused on provably good and scalable algorithms for mapping long reads to both linear and graph references. We make the following contributions:
    1. We develop fast and approximate algorithms for end-to-end and split mapping of long reads to reference genomes. Our work is the first to demonstrate scaling to the entire NCBI database, the collection of all curated and non-redundant genomes.
    2. We generalize the mapping algorithm to accelerate the related problem of computing pairwise whole-genome comparisons. We shed light on two fundamental biological questions concerning genomic duplications and the delineation of microbial species boundaries.
    3. We provide new complexity results for aligning reads to graphs under the Hamming and edit distance models, classifying the problem variants for which the existence of a polynomial-time solution is unlikely. In contrast to prior results that assume alphabets whose size is a function of the problem size, we prove that the variants that allow edits in the graph remain NP-complete even for constant-sized alphabets, thereby resolving the computational complexity of the problem for DNA and protein sequence-to-graph alignments.
    4. Finally, we propose a new parallel algorithm to optimally align long reads to large variation graphs derived from human genomes. It demonstrates near-linear scaling on multi-core CPUs, reducing the run time from multiple days to three hours when aligning a long read set to an MHC human variation graph.
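
A hedged sketch of the kind of alignment-free sketching that fast long-read mappers build on: minimizer sampling, where each window of w consecutive k-mers contributes its smallest k-mer (by hash). Matching minimizers between a read and a reference then suggest candidate mapping locations. The parameters and the use of Python's built-in hash are illustrative stand-ins, not the dissertation's algorithms or guarantees.

```python
def minimizers(seq, k=15, w=10):
    """Return the set of (k-mer, position) minimizers of seq."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    hashes = [hash(km) for km in kmers]          # stand-in for a proper rolling hash
    sketch = set()
    for start in range(len(kmers) - w + 1):
        window = hashes[start:start + w]
        pos = start + window.index(min(window))  # position of the window minimum
        sketch.add((kmers[pos], pos))
    return sketch
```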
  • Item
    Diagnosing performance bottlenecks in HPC applications
    (Georgia Institute of Technology, 2019-03-29) Czechowski, Kenneth
    The software performance optimization process is one of the most challenging aspects of developing highly performant code because underlying performance limitations are hard to diagnose. In many cases, identifying performance bottlenecks, such as latency stalls, requires a combination of fidelity and usability that existing tools do not provide: traditional performance models and runtime analysis lack the granularity necessary to uncover low-level bottlenecks, while architectural simulations are too cumbersome and fragile to employ as a primary source of information. To address this need, we propose a performance analysis technique, called Pressure Point Analysis (PPA), which delivers the accessibility of analytical models with the precision of a simulator. The approach is based on an autotuning-inspired technique that dynamically perturbs binary code (e.g., inserting or deleting instructions to affect utilization of functional units, altering memory access addresses to change the cache hit rate, or swapping registers to alter instruction-level dependencies) and then analyzes the effects the various perturbations have on overall performance. When systematically applied, a battery of carefully designed perturbations, each targeting specific microarchitectural features, can glean valuable insight into pressure points in the code. PPA provides actionable information about hardware-software interactions that software developers can use to manually tweak their application code. In some circumstances the performance bottlenecks are unavoidable, in which case this analysis can be used to establish a rigorous performance bound for the application. In other cases, this information can identify the primary performance limitations and project the potential performance improvements if these bottlenecks are mitigated.
  • Item
    Parallel and scalable combinatorial string algorithms on distributed memory systems
    (Georgia Institute of Technology, 2019-03-29) Flick, Patrick
    Methods for processing and analyzing DNA and genomic data are built upon combinatorial graph and string algorithms. The advent of high-throughput DNA sequencing is enabling the generation of billions of reads per experiment. Classical, sequential algorithms can no longer cope with these growing data sizes, which for the last 10 years have greatly outpaced advances in processor speeds. Processing and analyzing state-of-the-art genomic data sets require the design of scalable and efficient parallel algorithms and the use of large computing clusters. Suffix arrays and suffix trees are fundamental string data structures that lie at the foundation of many string algorithms, with important applications in text processing, information retrieval, and computational biology. Consequently, the parallel construction of these indices is an actively studied problem. However, prior approaches lack good worst-case run-time guarantees and exhibit poor scaling and overall performance. In this work, we present our distributed-memory parallel algorithms for indexing large datasets, including algorithms for the distributed construction of suffix arrays, LCP arrays, and suffix trees. We formulate a generalized version of the All-Nearest-Smaller-Values problem, provide an optimal distributed solution, and apply it to the distributed construction of suffix trees, yielding a work-optimal parallel algorithm. Our algorithms for distributed suffix array and suffix tree construction improve the state of the art by simultaneously improving worst-case run-time bounds and achieving superior practical performance. Next, we introduce a novel distributed string index, the Distributed Enhanced Suffix Array (DESA). Based on the suffix and LCP arrays, the DESA consists of these and additional distributed data structures. The DESA is designed to allow efficient pattern search queries in distributed memory while requiring at most O(n/p) memory per process. We present efficient distributed-memory parallel algorithms for querying, as well as for the efficient construction of this distributed index. Finally, we present our work on distributed-memory algorithms for clustering de Bruijn graphs and their application to a grand-challenge metagenomic dataset.
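
To make the named primitive concrete, the sketch below is the textbook sequential stack-based scan for the (left) All-Nearest-Smaller-Values problem: for each element, find the index of the nearest element to its left with a strictly smaller value. This is shown only as a hedged illustration of the problem; the dissertation's contribution is an optimal distributed-memory algorithm for a generalized version.

```python
def left_nearest_smaller(values):
    """result[i] = index of the nearest j < i with values[j] < values[i], else -1."""
    result = [-1] * len(values)
    stack = []                    # indices whose values are strictly increasing
    for i, v in enumerate(values):
        while stack and values[stack[-1]] >= v:
            stack.pop()
        if stack:
            result[i] = stack[-1]
        stack.append(i)
    return result

print(left_nearest_smaller([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [-1, -1, 1, -1, 3, 4, 3, 6]
```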
  • Item
    Towards tighter integration of machine learning and discrete optimization
    (Georgia Institute of Technology, 2019-03-28) Khalil, Elias
    Discrete Optimization algorithms underlie intelligent decision-making in a wide variety of domains. From airline fleet scheduling to data center resource management and matching in ride-sharing services, decisions are often modeled with binary on/off variables that are subject to operational and financial constraints. Branch-and-bound algorithms, as well as heuristics, have been developed to tackle hard discrete optimization problems. Typically, the algorithm designer first identifies structural properties of the problem, then exploits them to solve it. This standard paradigm in algorithm design suffers from two main limitations. On the one hand, a good algorithm may be very complex, and thus hard to design manually. On the other hand, in many real-world applications, the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. Without extensive trial and error and domain expertise, it is difficult to tailor optimization algorithms to such a distribution of instances. We show how Machine Learning (ML) can be used to overcome these limitations. In MIP branch-and-bound, we propose to use ML to devise data-driven, input-specific decisions during tree search. Our experimental results show that, for both variable selection and primal heuristic selection, ML approaches can significantly improve the performance of a solver on a variety of instance sets. For settings where optimality guarantees are not of concern, we design Deep Learning approaches for automatically deriving new heuristics that are parametrized as recurrent neural networks. These learned heuristics exploit the properties of the instance distribution, resulting in effective algorithms for various graph optimization problems and general integer programs. This dissertation establishes Machine Learning as a central component of the algorithm design process for discrete optimization, one that complements human ingenuity rather than replacing it. This effort has given rise to a variety of theoretical, modeling, and practical research questions in ML as it pertains to algorithm design. We also discuss the potential of discrete optimization methods in ML, particularly in the context of Adversarial Attacks on a class of widely used discrete neural networks. As ML models become more pervasive in software systems and automated decision-making, enforcing constraints on their behavior or discovering vulnerabilities therein will necessitate the development of new, scalable constraint reasoning approaches.
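
As a hedged illustration of ML-guided variable selection in branch-and-bound (not the dissertation's models or features), the sketch below scores the fractional candidate variables at a node with a learned function and branches on the highest-scoring one. The features, the linear scorer, and its weights are placeholders; in practice the scorer would be trained offline, e.g. to imitate expensive strong-branching decisions on instances from the same distribution.

```python
import numpy as np

def choose_branching_variable(candidate_features, weights):
    """candidate_features: (n_candidates, n_features) array; returns the index to branch on."""
    scores = candidate_features @ weights   # surrogate for a learned branching score
    return int(np.argmax(scores))

# Hypothetical per-candidate features (e.g. fractionality, pseudocost, depth statistics)
node_candidates = np.array([[0.5, 1.0, 3.0],
                            [0.9, 0.4, 1.0],
                            [0.1, 2.0, 0.5]])
weights = np.array([0.7, 0.2, 0.1])         # would come from offline training
print(choose_branching_variable(node_candidates, weights))  # -> 0
```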
  • Item
    Techniques to improve genome assembly quality
    (Georgia Institute of Technology, 2019-03-28) Nihalani, Rahul
    De novo genome assembly is an important problem in the field of genomics. Discovering and analyzing the genomes of different species has numerous applications. For humans, it can lead to early detection of disease traits and timely prevention of diseases like cancer. In addition, it is useful in discovering the genomes of unknown species. Even though it has received enormous attention in the last couple of decades, the problem remains unsolved to a satisfactory level, as shown in various scientific studies. Paired-end sequencing is a technology that sequences pairs of short strands from a genome, called reads. The pairs of reads originate from nearby genomic locations, and are commonly used to help more accurately determine the genomic location of individual reads and to resolve repeats in genome assembly. In this thesis, we describe the genome assembly problem and the key challenges involved in solving it. We discuss related work, describing the two most popular models for approaching the problem, de Bruijn graphs and overlap graphs, along with their pros and cons. We then describe our proposed techniques to improve the quality of genome assembly. Our main contribution in this work is the design of a de Bruijn graph-based assembly algorithm that effectively utilizes paired reads to improve genome assembly quality. We also discuss how our algorithm tackles some of the key challenges involved in genome assembly. We adapt this algorithm to design a parallel strategy that obtains high-quality assemblies for large datasets, such as rice, within a reasonable time frame. In addition, we describe our work on probabilistically estimating overlap graphs for large short-read datasets. We discuss the results obtained and conclude with directions for future work.
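
A hedged sketch of the underlying data structure the chosen model is built on: a node-centric de Bruijn graph, whose nodes are k-mers and whose edges connect consecutive overlapping k-mers of each read. Real assemblers, including the paired-read-aware algorithm described above, add error correction, graph compaction, and paired-end constraints; k and the reads below are illustrative.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k=4):
    """Map each k-mer to the set of k-mers that follow it in some read."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k):
            graph[read[i:i + k]].add(read[i + 1:i + k + 1])
    return graph

# Contigs correspond to unambiguous paths (chains of nodes with a single in/out edge).
g = de_bruijn_graph(["ACGTACGA", "CGTACGAT"], k=4)
```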
  • Item
    Automated surface finish inspection using convolutional neural networks
    (Georgia Institute of Technology, 2019-03-25) Louhichi, Wafa
    The surface finish of a machined part has an important effect on friction, wear, and aesthetics. Surface finish has been a critical quality measure since the 1980s, mainly due to demands from the automotive industry. Visual inspection and quality control have traditionally been done by human experts. Normally, it takes a substantial amount of the operator's time to stop the process and compare the quality of the produced piece with a surface roughness gauge. This manual process does not guarantee a consistent surface quality and is subject to human error, depending on the subjective opinion of the expert. Current advances in image processing, computer vision, and machine learning have created a path towards automated surface finish inspection, increasing the automation level of the whole process even further. In this thesis work, we propose a deep learning approach to replicate human judgment without using a surface roughness gauge. We used a Convolutional Neural Network (CNN) to train a surface finish classifier. Because of data scarcity, we generated our own image dataset of aluminum pieces produced by turning and boring operations on a Computer Numerical Control (CNC) lathe, consisting of 980 training images, 160 validation images, and 140 test images. Considering the limited dataset and the computational cost of training deep neural networks from scratch, we applied transfer learning to models pre-trained on the publicly available ImageNet benchmark dataset. We used the PyTorch deep learning framework and both a CPU and a GPU to train a ResNet18 CNN. Training on the CPU took 1h21min55s with a test accuracy of 97.14%, while training on the GPU took 1min47s with a test accuracy of 97.86%. We also used the Keras API, which runs on top of TensorFlow, to train a MobileNet model. Training on Colaboratory's GPU took 1h32min14s with an accuracy of 98.57%. The deep CNN models provided surprisingly high accuracy, misclassifying only a few of the 140 test images. The MobileNet model allows the inference to run efficiently on mobile devices. This affordable and easy-to-use solution provides a viable new approach to automated surface inspection systems (ASIS).
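
A hedged sketch of the transfer-learning setup described above: load an ImageNet-pretrained ResNet18, replace its final fully connected layer with a head sized for the surface-finish classes, and fine-tune on the image dataset. The class count, data paths, and hyperparameters are illustrative assumptions; the thesis's exact training configuration is not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 3  # assumed number of surface-finish grades

model = models.resnet18(pretrained=True)   # newer torchvision: weights="IMAGENET1K_V1"
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=preprocess)  # hypothetical folder layout
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```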