Organizational Unit:
School of Computational Science and Engineering

Publication Search Results

Now showing items 1-10 of 16
  • Item
    An ensemble-based data assimilation approach to the simulation and reconstruction of chaotic cardiac states
    (Georgia Institute of Technology, 2023-05-02) Badr, Shoale
    Complexities in time-dependent real-world systems pose several difficulties when forecasting their future dynamics. Advancements in meteorology over the last few decades, aimed at improving weather forecasting (which behaves chaotically), have led to the development of data assimilation, a technique that combines predictive numerical mathematical models with real measurements, or observations, to form more refined estimates of system states over time. As reconstruction of chaos in the tissue of the heart presents a similar forecasting problem, we apply data assimilation to the cardiac domain in this thesis. Within the assimilation algorithm, we use three widely known mathematical cardiac models tuned to produce specific types of complex cardiac electrical dynamics, including stable spiral waves and spiral wave breakup, corresponding to tachycardia and fibrillation, respectively. We generate synthetic observations from each model by subsampling their solutions in space and time, restricting them to a single variable representing voltage, and adding Gaussian noise, and we use the resulting datasets to test our implementation. By leveraging the public availability of data assimilation filtering algorithms (primarily Kalman filters) through the Parallel Data Assimilation Framework (PDAF) and adding extensions necessary for the cardiac setting, we show how two-dimensional chaotic electro-cardiac voltage behavior can be reconstructed with ensemble-based data assimilation in the presence of several experimental conditions, including noise, sparse observations, and model error. This thesis presents the first application, to our knowledge, of ensemble Kalman filtering to the reconstruction of complex cardiac electrical dynamics in the 2-D domain. We found that the Error Subspace Transform Kalman Filter (ESTKF) we used is sensitive to model error and to the frequency at which states are assimilated (the assimilation interval). We also propose several improvements that could be made to our assimilation system to increase state reconstruction accuracy. These preliminary findings suggest promising future experimental results, both using synthetic observations (with different model dynamics initialization) and with true experimental data.
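    As an illustration of the core assimilation update, the sketch below implements a generic stochastic ensemble Kalman filter analysis step on a toy state; the sizes, observation operator, and noise levels are hypothetical stand-ins, and the thesis itself uses the ESTKF provided by PDAF rather than this textbook variant.
    ```python
    # Minimal stochastic EnKF analysis step (illustrative; not PDAF's ESTKF).
    import numpy as np

    rng = np.random.default_rng(0)
    n_state, n_obs, n_ens = 100, 10, 20                  # hypothetical sizes

    # Observation operator: subsample the state in space (sparse observations)
    H = np.zeros((n_obs, n_state))
    H[np.arange(n_obs), np.arange(0, n_state, n_state // n_obs)] = 1.0
    R = 0.1 * np.eye(n_obs)                              # observation error covariance

    ensemble = rng.normal(size=(n_state, n_ens))         # forecast ensemble (columns = members)
    truth = rng.normal(size=n_state)                     # stand-in "true" state
    obs = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)

    # Covariances estimated from ensemble anomalies
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    PHt = (A @ (H @ A).T) / (n_ens - 1)                  # P H^T
    S = (H @ A) @ (H @ A).T / (n_ens - 1) + R            # innovation covariance
    K = PHt @ np.linalg.inv(S)                           # Kalman gain

    # Perturbed-observation update of each ensemble member
    for i in range(n_ens):
        perturbed = obs + rng.multivariate_normal(np.zeros(n_obs), R)
        ensemble[:, i] += K @ (perturbed - H @ ensemble[:, i])
    ```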
  • Item
    News data visualization interface development using NMF algorithm
    (Georgia Institute of Technology, 2022-05-03) Ahn, Byeongsoo
    News data forms an extremely large-scale dataset. It covers a wide range of topics, from heavy subjects such as politics and society to relatively light ones such as beauty and entertainment. At the same time, it is also the most accessible source of information for the general public. How, then, is this large amount of data actually being utilized? Currently, the services provided by news platforms amount to full-text article search and related-news recommendation. These use only a fraction of the vast news dataset, and systems that fully utilize and analyze it are still lacking. Because news datasets cover a wide range of topics at a very large scale, recording everything that happened in the past and present, analyzing and visualizing them can track how real-world trends change over time and, through topic modeling, can even reveal the topics of the dataset without reading the full text. To this end, we propose in this thesis a novel interactive visualization interface for news data based on NMF, to analyze, visualize, and utilize such datasets more effectively than simple article search. We first show the superior topic modeling performance of the NMF algorithm and the processing speed that makes it suitable for an interactive visual interface compared to other methods, and then present a visual interface with various features that help users analyze and intuitively understand the data. Finally, we present use cases showing how this work can be applied in practice and discuss its applicability in various fields.
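    As a minimal sketch of the topic modeling step, assuming scikit-learn and a handful of hypothetical placeholder headlines in place of a real news corpus, NMF factors a TF-IDF term-document matrix into document-topic and topic-term matrices:
    ```python
    # Illustrative NMF topic modeling; not the thesis's interface or data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF

    docs = [  # hypothetical placeholder headlines, not a real corpus
        "parliament passes budget after tense late-night vote",
        "senators debate new election law in parliament",
        "star striker scores twice in championship final",
        "coach praises team defense after championship win",
        "new smartphone chip promises faster on-device AI",
        "chipmaker unveils faster processor for thin laptops",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(docs)            # documents x terms

    nmf = NMF(n_components=3, init="nndsvd", max_iter=500)
    W = nmf.fit_transform(X)                      # document-topic weights
    topic_term = nmf.components_                  # topic-term weights

    terms = vectorizer.get_feature_names_out()
    for k, topic in enumerate(topic_term):
        top = topic.argsort()[::-1][:4]           # strongest terms per topic
        print(f"Topic {k}:", ", ".join(terms[i] for i in top))
    ```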
  • Item
    Robust counterfactual learning for clinical decision-making using electronic health records
    (Georgia Institute of Technology, 2020-12-07) Choudhary, Anirudh
    Building clinical decision support systems, which include diagnosing patients' disease states and formulating treatment plans, is an important step toward personalized medicine. The counterfactual nature of clinical decision-making is a major challenge for machine learning-based treatment recommendation: we can only observe the outcome of the clinician's actions, while the outcomes of alternative treatment options remain unknown. This thesis formulates robust counterfactual learning frameworks for efficient offline policy evaluation and policy learning using observational data. We focus on the offline data scenario and leverage historically collected Electronic Health Records, since online policy testing can adversely impact a patient's well-being. The problem is compounded by the inherent uncertainty in clinical decision-making due to heterogeneous patient contexts, significant variability in patient-specific predictions, smaller datasets, and limited knowledge of the clinician's intrinsic reward function and environment dynamics. This motivates the need to tackle uncertainty and enable improved clinical policy generalization via context-based policy learning. We propose counterfactual frameworks to tackle these challenges under two learning scenarios: contextual bandits and dynamic treatment regimes. In the bandit setting, we focus on tackling the model uncertainty inherent in inverse propensity weighting methods and demonstrate our approach's efficacy on an oral anticoagulant dosing task. In the dynamic treatment regime setting, we focus on sequential treatment interventions and consider the problem of imitating the clinician's policy for sepsis management. We formulate it as a multi-task problem and propose a meta-Inverse Reinforcement Learning framework that jointly adapts policy and reward functions to diverse patient groups, enabling improved policy generalization.
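    A minimal sketch of plain inverse propensity weighting for offline policy evaluation in a contextual bandit, the estimator family the thesis builds robustness into; all data and the target policy here are synthetic and hypothetical:
    ```python
    # Illustrative IPW off-policy value estimate on synthetic logged data.
    import numpy as np

    rng = np.random.default_rng(1)
    n, n_actions = 1000, 3

    contexts = rng.normal(size=(n, 5))                    # synthetic patient features
    logged_actions = rng.integers(0, n_actions, size=n)   # logged (clinician) actions
    propensities = np.full(n, 1.0 / n_actions)            # behavior policy pi_0(a|x), known here
    rewards = rng.binomial(1, 0.5, size=n).astype(float)  # observed outcomes

    def target_policy(x):
        """Hypothetical deterministic policy to evaluate."""
        return int(x[0] > 0)

    # IPW estimate: V_hat = mean( 1{pi(x_i) == a_i} * r_i / pi_0(a_i|x_i) )
    match = np.array([target_policy(x) == a for x, a in zip(contexts, logged_actions)])
    print("IPW value estimate:", np.mean(match * rewards / propensities))
    ```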
  • Item
    Learning from Multi-Source Weak Supervision for Neural Text Classification
    (Georgia Institute of Technology, 2020-07-28) Ren, Wendi
    Text classification is a fundamental text mining task with numerous real-life applications. While deep neural nets have achieved superior performance for text classification, they rely on large-scale labeled data. Obtaining large-scale labeled data, however, can be prohibitively expensive in many applications. In this project, we study the problem of learning neural text classifiers without using any labeled data, but only easy-to-provide heuristic rules as weak supervision. This problem is challenging because rule-induced weak labels are often noisy and incomplete. To address these challenges, we propose a model that can be learned from multiple weak supervision sources with two key components. The first component is a rule denoiser, which estimates conditional source reliability using a soft attention mechanism and reduces label noise by aggregating rule-induced noisy data. The second is a neural classifier that predicts soft labels for unmatchable samples to address the rule coverage issue. The two components are integrated into a co-training framework, which can be trained end-to-end to mutually enhance each other. We evaluate our model on five benchmarks for four popular text classification tasks, including sentiment analysis, topic classification, spam classification, and relation extraction. The results show that our model outperforms state-of-the-art weakly-supervised and semi-supervised methods, and achieves comparable performance with fully-supervised methods even without any labeled data.
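    As a minimal sketch of the label aggregation idea, assuming fixed hypothetical per-rule reliability weights in place of the learned soft attention mechanism:
    ```python
    # Illustrative aggregation of noisy rule votes into soft labels.
    import numpy as np

    # votes[i, j] = label that rule j assigns to sample i (-1 = rule did not fire)
    votes = np.array([
        [ 1,  1, -1],
        [ 0,  1,  0],
        [-1, -1, -1],      # unmatched sample: no rule fired
        [ 1,  0,  1],
    ])
    reliability = np.array([0.6, 0.3, 0.1])   # hypothetical fixed per-rule weights
    n_classes = 2

    soft_labels = np.zeros((len(votes), n_classes))
    for i, row in enumerate(votes):
        for j, label in enumerate(row):
            if label >= 0:                     # rule j matched sample i
                soft_labels[i, label] += reliability[j]
        if soft_labels[i].sum() > 0:
            soft_labels[i] /= soft_labels[i].sum()
    # Row 2 stays all-zero: in the thesis, the neural classifier supplies
    # soft labels for such unmatched samples instead.
    print(soft_labels)
    ```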
  • Item
    Automated surface finish inspection using convolutional neural networks
    (Georgia Institute of Technology, 2019-03-25) Louhichi, Wafa
    The surface finish of a machined part has an important effect on friction, wear, and aesthetics. Surface finish has been a critical quality measure since the 1980s, mainly due to demands from the automotive industry. Visual inspection and quality control have traditionally been done by human experts. Normally, it takes a substantial amount of an operator's time to stop the process and compare the quality of the produced piece against a surface roughness gauge. This manual process does not guarantee consistent surface quality and is subject to human error and the subjective opinion of the expert. Current advances in image processing, computer vision, and machine learning have created a path toward automated surface finish inspection, increasing the automation level of the whole process even further. In this thesis, we propose a deep learning approach to replicate human judgment without using a surface roughness gauge. We used a Convolutional Neural Network (CNN) to train a surface finish classifier. Because of data scarcity, we generated our own image dataset of aluminum pieces produced by turning and boring operations on a Computer Numerical Control (CNC) lathe, consisting of 980 training images, 160 validation images, and 140 test images. Considering the limited dataset and the computational cost of training deep neural networks from scratch, we applied transfer learning to models pre-trained on the publicly available ImageNet benchmark dataset. We used the PyTorch deep learning framework on both CPU and GPU to train a ResNet18 CNN; training on the CPU took 1 h 21 min 55 s with a test accuracy of 97.14%, while training on the GPU took 1 min 47 s with a test accuracy of 97.86%. We also used the Keras API, which runs on top of TensorFlow, to train a MobileNet model; training on Colaboratory's GPU took 1 h 32 min 14 s with an accuracy of 98.57%. The deep CNN models provided surprisingly high accuracy, misclassifying only a few of the 140 test images. The MobileNet model allows inference to run efficiently on mobile devices. This affordable and easy-to-use solution provides a viable new approach to automated surface inspection systems (ASIS).
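    A minimal sketch of the transfer-learning setup described above: a ResNet18 pre-trained on ImageNet with its final layer replaced for surface-finish classes. The class count, hyperparameters, and dummy batch are hypothetical; the thesis's dataset and full training loop are not shown.
    ```python
    # Illustrative PyTorch transfer learning with a pre-trained ResNet18.
    import torch
    import torch.nn as nn
    from torchvision import models

    n_classes = 4                 # hypothetical number of surface-finish grades
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

    # Freeze the pre-trained feature extractor; retrain only a new head.
    for param in model.parameters():
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, n_classes)

    optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, n_classes, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    ```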
  • Item
    Brownian dynamics studies of DNA internal motions
    (Georgia Institute of Technology, 2018-12-04) Ma, Benson Jer-Tsung
    Earlier studies by Chow and Skolnick suggest that the internal motions of bacterial DNA may be governed by strong forces arising from crowding into the small space of the nucleoid, and that these internal motions affect the diffusion of intranuclear proteins through the dense matrix of the nucleoid. These findings open new questions regarding the biological consequences of DNA internal motions and the ability of internal motions to influence protein diffusion in response to different environmental factors. We present the results of diffusion studies of DNA based on coarse-grained simulations. Here, our goals are to investigate the internal motions of DNA with respect to external factors, namely the salt concentration of the solvent and intranuclear protein size, and to understand the mechanisms by which proteins diffuse through the dense matrix of bacterial DNA. First, a novel coarse-grained model of the DNA chain was developed and shown to maintain the fractal property of in vivo DNA. Next, diffusion studies using this model were performed through Brownian dynamics simulations. Our results suggest that DNA internal motions may be substantially affected by ion concentrations near the physiological range, with diffusion activity increasing to a limit as ion concentration increases. Furthermore, we found that, for a fixed protein volume fraction, the motions of proteins in a DNA-protein system are substantially affected by protein size, with diffusion activity increasing to a limit as protein radii decrease, while the internal motions of the DNA within the same system do not appear to change with protein size.
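    A minimal sketch of an overdamped (Brownian) dynamics integrator for a coarse-grained bead-spring chain; the parameters and the simple harmonic-bond force are hypothetical stand-ins for the thesis's DNA model:
    ```python
    # Illustrative Brownian dynamics of a bead-spring chain (reduced units).
    import numpy as np

    rng = np.random.default_rng(2)
    n_beads, dim = 50, 3
    kT, friction, dt = 1.0, 1.0, 1e-4         # hypothetical reduced units
    k_bond, r0 = 100.0, 1.0                   # bond stiffness / rest length

    # Start from a loose random walk of beads
    pos = np.cumsum(rng.normal(scale=0.5, size=(n_beads, dim)), axis=0)

    def bond_forces(pos):
        """Harmonic forces between successive beads along the chain."""
        f = np.zeros_like(pos)
        d = pos[1:] - pos[:-1]
        r = np.linalg.norm(d, axis=1, keepdims=True)
        fb = k_bond * (r - r0) * d / r        # pulls each bond toward rest length
        f[:-1] += fb
        f[1:] -= fb
        return f

    for step in range(5000):
        # Euler-Maruyama step: deterministic drift + thermal noise
        pos += (dt / friction) * bond_forces(pos) \
               + np.sqrt(2.0 * kT * dt / friction) * rng.normal(size=pos.shape)
    ```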
  • Item
    Cost benefit analysis of adding technologies to commercial aircraft to increase the survivability against surface to air threats
    (Georgia Institute of Technology, 2018-07-27) Patterson, Anthony
    Flying internationally is an integral part of people's everyday lives, and most United States airlines fly internationally on a daily basis. The world continues to become a more dangerous place due to improvements in technology and the willingness of some nations to sell older technology to rebel groups. In the military realm, countermeasures have been developed to combat surface-to-air threats and thus increase the survivability of military aircraft, where survivability is defined as the ability to remain mission capable after a single engagement. Commercial aircraft currently do not have any countermeasure or missile warning systems integrated into their onboard systems. A better understanding of the interaction between countermeasure systems and commercial aircraft will help bring additional knowledge to support a cost benefit analysis. The scope of this research is to perform a cost benefit analysis on adding technologies that are currently available on military aircraft to commercial aircraft. The research will include a cost benefit analysis along with a size, weight, and power analysis. Additionally, a simulation will analyze the success rates of different countermeasures against different surface-to-air threats, in hopes of bridging the gap between a cost benefit analysis and a survivability simulation. The research will explore whether adding countermeasure systems to commercial aircraft is technically feasible and economically viable.
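    A minimal Monte Carlo sketch of the kind of survivability trade study described; every number here (engagement probabilities, costs) is a hypothetical placeholder, not data from the thesis:
    ```python
    # Illustrative expected-cost comparison with and without countermeasures.
    import numpy as np

    rng = np.random.default_rng(3)
    n_trials = 100_000

    p_hit_no_cm = 0.70        # hypothetical hit probability without countermeasures
    p_hit_with_cm = 0.15      # hypothetical hit probability with a countermeasure suite
    cm_install_cost = 3e6     # hypothetical per-aircraft installation cost
    aircraft_loss_cost = 3e8  # hypothetical cost of losing an aircraft

    def expected_loss(p_hit):
        """Monte Carlo estimate of expected loss per engagement."""
        hits = rng.random(n_trials) < p_hit
        return hits.mean() * aircraft_loss_cost

    print(f"unprotected: ${expected_loss(p_hit_no_cm):,.0f} per engagement")
    print(f"protected:   ${expected_loss(p_hit_with_cm) + cm_install_cost:,.0f} "
          f"(includes ${cm_install_cost:,.0f} install cost)")
    ```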
  • Item
    Optimizing computational kernels in quantum chemistry
    (Georgia Institute of Technology, 2018-05-01) Schieber, Matthew Cole
    Density fitting is a rank reduction technique popularly used in quantum chemistry to reduce the computational cost of evaluating, transforming, and processing the 4-center electron repulsion integrals (ERIs). By utilizing the resolution-of-the-identity technique, density fitting reduces the 4-center ERIs to a 3-center form. Doing so not only alleviates the high storage cost of the ERIs, but also reduces the computational cost of operations involving them. Still, these operations can remain computational bottlenecks that commonly plague quantum chemistry procedures. The goal of this thesis is to investigate various optimizations for density-fitted versions of computational kernels used ubiquitously throughout quantum chemistry. First, we detail the spatial sparsity available in the 3-center integrals and the application of such sparsity to various operations, including integral computation, metric contractions, and integral transformations. Next, we investigate sparse memory layouts and their implications for the performance of the integral transformation kernel. We then analyze two transformation algorithms and how their performance varies depending on the context in which they are used. Finally, we propose two sparse memory layouts and report the resulting performance of Coulomb and exchange evaluations. Since the memory required for these tensors grows rapidly, we frame these discussions in the context of both in-core and disk performance. We implement these methods in the Psi4 electronic structure package and show that the optimal algorithm for a kernel varies depending on whether a disk-based implementation must be used.
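    A minimal sketch of the density fitting (resolution-of-the-identity) approximation itself, with random tensors standing in for integrals that a package such as Psi4 would provide: the 4-center ERIs (pq|rs) are rebuilt as sum_PQ (pq|P) [J^-1]_PQ (Q|rs).
    ```python
    # Illustrative RI/density-fitting reconstruction of 4-center ERIs.
    import numpy as np

    rng = np.random.default_rng(4)
    n_ao, n_aux = 20, 60                   # hypothetical basis / auxiliary sizes

    Qpq = rng.normal(size=(n_aux, n_ao, n_ao))   # 3-center integrals (Q|pq), random stand-in
    M = rng.normal(size=(n_aux, n_aux))
    J = M @ M.T + n_aux * np.eye(n_aux)          # SPD stand-in for the Coulomb metric (P|Q)

    # Symmetric factor L with L @ L.T == J^{-1}, via Cholesky of J
    L = np.linalg.inv(np.linalg.cholesky(J)).T

    # Fitted intermediates B_Ppq = sum_Q L_QP (Q|pq)
    B = np.einsum("QP,Qpq->Ppq", L, Qpq)

    # Approximate 4-center ERIs: (pq|rs) ~ sum_P B_Ppq B_Prs
    eri_df = np.einsum("Ppq,Prs->pqrs", B, B)
    print(eri_df.shape)                          # (20, 20, 20, 20)
    ```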
  • Item
    Parallel simulation of scale-free networks
    (Georgia Institute of Technology, 2017-08-01) Nguyen, Thuy Vy Thuy
    It has been observed that many networks arising in practice have skewed node degree distributions. Scale-free networks are one well-known class of such networks. Achieving efficient parallel simulation of scale-free networks is challenging because large-degree nodes can create bottlenecks that limit performance. To help address this problem, we describe an approach called link partitioning, in which each network link is mapped to a logical process, in contrast to the conventional approach of mapping each node to a logical process.
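    A minimal sketch contrasting node-to-LP and link-to-LP mappings on a synthetic skewed-degree graph; the graph generator and LP count are hypothetical, but the load comparison illustrates why link partitioning can help:
    ```python
    # Illustrative load-balance comparison: node mapping vs. link mapping.
    import random
    from collections import Counter

    random.seed(5)
    n_nodes, n_lps = 1000, 8

    # Crude preferential-attachment edge list: a skewed degree distribution
    targets, edges = [0, 1], []
    for v in range(2, n_nodes):
        u = random.choice(targets)
        edges.append((v, u))
        targets += [v, u]        # high-degree nodes get chosen more often

    # Node mapping: each edge's work lands on its endpoints' LPs
    node_load = Counter()
    for v, u in edges:
        node_load[v % n_lps] += 1
        node_load[u % n_lps] += 1

    # Link mapping: edges dealt round-robin across LPs
    link_load = Counter(i % n_lps for i in range(len(edges)))

    def imbalance(load):
        """Max LP load divided by mean LP load (1.0 = perfectly balanced)."""
        return max(load.values()) * n_lps / sum(load.values())

    print("imbalance, node mapping:", round(imbalance(node_load), 2))
    print("imbalance, link mapping:", round(imbalance(link_load), 2))
    ```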
  • Item
    Simulations of binary black holes in scalar field cosmologies
    (Georgia Institute of Technology, 2016-08-01) Tallaksen, Katharine Christina
    Numerical relativity allows us to solve Einstein's equations and study astrophysical phenomena we may not be able to observe directly, such as the very early universe. In this work, we examine the effect of scalar field cosmologies on binary black hole systems. These scalar field cosmologies were studied using cosmological bubbles, spherically symmetric structures that may have powered inflationary phase transitions. The Einstein Toolkit and Maya, developed at Georgia Tech, were used to simulate these systems. Systems studied include cosmological bubbles, binary black holes in vacuum, and binary black holes embedded within cosmological bubbles. Differences in mass accretion, merger trajectories, and characteristic gravitational waveforms will be presented for these systems. In the future, analyzing the parameter space of these waveforms may present a method to discover a gravitational wave signature characteristic to these systems and possibly detectable by the Laser Interferometer Gravitational-Wave Observatory.