Organizational Unit:
School of Computational Science and Engineering

Publication Search Results

Now showing 1 - 10 of 13
  • Item
    Automated surface finish inspection using convolutional neural networks
    (Georgia Institute of Technology, 2019-03-25) Louhichi, Wafa
    The surface finish of a machined part has an important effect on friction, wear, and aesthetics. Surface finish has been a critical quality measure since the 1980s, mainly due to demands from the automotive industry. Visual inspection and quality control have traditionally been done by human experts. Normally, it takes a substantial amount of an operator's time to stop the process and compare the quality of the produced piece with a surface roughness gauge. This manual process does not guarantee consistent surface quality, is subject to human error, and depends on the subjective opinion of the expert. Recent advances in image processing, computer vision, and machine learning have created a path toward automated surface finish inspection, increasing the automation level of the whole process even further. In this thesis work, we propose a deep learning approach to replicate human judgment without using a surface roughness gauge. We used a Convolutional Neural Network (CNN) to train a surface finish classifier. Because of data scarcity, we generated our own image dataset of aluminum pieces produced from turning and boring operations on a Computer Numerical Control (CNC) lathe, consisting of 980 training images, 160 validation images, and 140 test images. Given the limited dataset and the computational cost of training deep neural networks from scratch, we applied transfer learning to models pre-trained on the publicly available ImageNet benchmark dataset. We used the PyTorch deep learning framework on both CPU and GPU to train a ResNet18 CNN. Training on the CPU took 1h21min55s with a test accuracy of 97.14%, while training on the GPU took 1min47s with a test accuracy of 97.86%. We also used the Keras API running on top of TensorFlow to train a MobileNet model; training on Colaboratory's GPU took 1h32min14s with an accuracy of 98.57%. The deep CNN models provided surprisingly high accuracy, misclassifying only a few of the 140 test images. The MobileNet model allows inference to run efficiently on mobile devices. This affordable and easy-to-use solution provides a viable new approach to automated surface inspection systems (ASIS).
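The transfer-learning setup described in this abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration only: the data directory, batch size, learning rate, and epoch count are assumptions for the example, not the thesis's actual configuration.

```python
# Minimal PyTorch transfer-learning sketch: fine-tune an ImageNet-pretrained
# ResNet18 as a surface-finish classifier. Paths and hyperparameters are
# illustrative assumptions, not the settings used in the thesis.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=preprocess)  # folder per class
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)                  # ImageNet weights
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):                                   # illustrative epoch count
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```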
  • Item
    Brownian dynamics studies of DNA internal motions
    (Georgia Institute of Technology, 2018-12-04) Ma, Benson Jer-Tsung
    Earlier studies by Chow and Skolnick suggest that the internal motions of bacterial DNA may be governed by strong forces arising from being crowded into the small space of the nucleoid, and that these internal motions affect the diffusion of intranuclear proteins through the dense matrix of the nucleoid. These findings open new questions regarding the biological consequences of DNA internal motions, and the ability of internal motions to influence protein diffusion in response to different environmental factors. The results of diffusion studies of DNA based on coarse-grained simulations are presented. Here, our goals are to investigate the internal motions of DNA with respect to external factors, namely the salt concentration of the solvent and intranuclear protein size, and to understand the mechanisms by which proteins diffuse through the dense matrix of bacterial DNA. First, a novel coarse-grained model of the DNA chain was developed and shown to maintain the fractal property of in vivo DNA. Next, diffusion studies using this model were performed through Brownian dynamics simulations. Our results suggest that DNA internal motions may be substantially affected by ion concentrations near physiological ranges, with diffusion activity increasing to a limit as ion concentration increases. Furthermore, it was found that, for a fixed protein volume fraction, the motions of proteins in a DNA-protein system are substantially affected by the size of the proteins, with diffusion activity increasing to a limit with decreasing protein radii, but the internal motions of DNA within the same system do not appear to change with changes to protein size.
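As a rough illustration of the simulation method named here, the sketch below implements a generic Brownian dynamics (overdamped Langevin, Euler-Maruyama) update for a simple bead-spring chain. The force field, parameters, and units are placeholders and do not reproduce the thesis's coarse-grained DNA model.

```python
# Generic Brownian dynamics step for a bead-spring chain: deterministic drift
# from bonded forces plus Gaussian thermal noise. Parameters are in reduced
# units and are illustrative only, not the thesis's coarse-grained DNA model.
import numpy as np

kBT = 1.0          # thermal energy
gamma = 1.0        # friction coefficient per bead
dt = 1e-4          # time step
k_spring = 100.0   # harmonic bond stiffness
r0 = 1.0           # equilibrium bond length

def bonded_forces(pos):
    """Harmonic springs between consecutive beads of the chain."""
    f = np.zeros_like(pos)
    bond = pos[1:] - pos[:-1]
    dist = np.linalg.norm(bond, axis=1, keepdims=True)
    fb = -k_spring * (dist - r0) * bond / dist
    f[1:] += fb
    f[:-1] -= fb
    return f

def bd_step(pos, rng):
    """Euler-Maruyama update of the overdamped Langevin equation."""
    noise = rng.normal(scale=np.sqrt(2.0 * kBT * dt / gamma), size=pos.shape)
    return pos + (dt / gamma) * bonded_forces(pos) + noise

rng = np.random.default_rng(0)
positions = np.zeros((200, 3))
positions[:, 0] = r0 * np.arange(200)     # start from a straight chain
for _ in range(1000):
    positions = bd_step(positions, rng)
```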
  • Item
    Cost benefit analysis of adding technologies to commercial aircraft to increase the survivability against surface to air threats
    (Georgia Institute of Technology, 2018-07-27) Patterson, Anthony
    Flying internationally is an integral part of people's everyday lives, and most United States airlines fly internationally on a daily basis. The world continues to become a more dangerous place due to improvements in technology and the willingness of some nations to sell older technology to rebel groups. In the military realm, countermeasures have been developed to combat surface-to-air threats and thus increase the survivability of military aircraft. Survivability is defined as the ability to remain mission capable after a single engagement. Existing commercial aircraft currently do not have any countermeasure systems or missile warning systems integrated into their onboard systems. A better understanding of the interaction between countermeasure systems and commercial aircraft will help bring additional knowledge to support a cost benefit analysis. The scope of this research is to perform a cost benefit analysis of these technologies that are currently available on military aircraft, and to study adding these same technologies to commercial aircraft. The research will include a cost benefit analysis along with a size, weight, and power analysis. Additionally, a simulation will be included that analyzes the success rates of different countermeasures against different surface-to-air threats, in hopes of bridging the gap between a cost benefit analysis and a survivability simulation. The research will explore whether adding countermeasure systems to commercial aircraft is technically feasible and economically viable.
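As a toy illustration of how a survivability simulation can feed a cost-benefit comparison, the sketch below runs a Monte Carlo estimate of single-engagement survivability with and without a countermeasure. All probabilities are invented placeholders, not values from this research.

```python
# Toy Monte Carlo sketch of a single-engagement survivability comparison.
# Every probability below is a placeholder assumption for illustration;
# none comes from the thesis's simulation or cost-benefit analysis.
import random

def engagement_survives(p_hit, p_decoy):
    """One missile engagement: a countermeasure may decoy the missile first."""
    if random.random() < p_decoy:
        return True                  # missile seduced away from the aircraft
    return random.random() >= p_hit  # otherwise survival depends on hit probability

def survivability(p_hit, p_decoy, trials=100_000):
    return sum(engagement_survives(p_hit, p_decoy) for _ in range(trials)) / trials

baseline = survivability(p_hit=0.7, p_decoy=0.0)   # no countermeasures fitted
equipped = survivability(p_hit=0.7, p_decoy=0.8)   # hypothetical decoy effectiveness
print(f"survivability without countermeasures: {baseline:.3f}")
print(f"survivability with countermeasures:    {equipped:.3f}")
```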
  • Item
    Optimizing computational kernels in quantum chemistry
    (Georgia Institute of Technology, 2018-05-01) Schieber, Matthew Cole
    Density fitting is a rank reduction technique popularly used in quantum chemistry to reduce the computational cost of evaluating, transforming, and processing the 4-center electron repulsion integrals (ERIs). By utilizing the resolution-of-the-identity technique, density fitting reduces the 4-center ERIs to a 3-center form. Doing so not only alleviates the high storage cost of the ERIs, but also reduces the computational cost of operations involving them. Still, these operations can remain computational bottlenecks that commonly plague quantum chemistry procedures. The goal of this thesis is to investigate various optimizations for density-fitted versions of computational kernels used ubiquitously throughout quantum chemistry. First, we detail the spatial sparsity available in the 3-center integrals and the application of such sparsity to various operations, including integral computation, metric contractions, and integral transformations. Next, we investigate sparse memory layouts and their implications for the performance of the integral transformation kernel. We then analyze two transformation algorithms and how their performance varies depending on the context in which they are used. We also propose two sparse memory layouts and evaluate the resulting performance of Coulomb and exchange evaluations. Since the memory required for these tensors grows rapidly, we frame these discussions in the context of their in-core and disk performance. We implement these methods in the PSI4 electronic structure package and show that the optimal algorithm for a kernel varies depending on whether a disk-based implementation must be used.
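The density-fitting factorization described here can be illustrated with a small NumPy sketch: the 4-center ERIs are approximated from a 3-center tensor and the inverse Coulomb metric, and a Coulomb-like contraction is performed without ever forming the 4-center tensor. Random arrays stand in for real integrals, dimensions are arbitrary, and this is not the PSI4 implementation.

```python
# Sketch of the density-fitting (resolution-of-the-identity) factorization:
# (pq|rs) ~ sum_PQ (pq|P) [J^-1]_PQ (Q|rs), with random tensors standing in
# for real integrals. Dimensions and data are illustrative only.
import numpy as np

nbf, naux = 20, 60                             # orbital / auxiliary basis sizes (arbitrary)
rng = np.random.default_rng(0)
Ppq = rng.standard_normal((naux, nbf, nbf))    # stand-in 3-center integrals (P|pq)
Ppq = 0.5 * (Ppq + Ppq.transpose(0, 2, 1))     # enforce pq symmetry
A = rng.standard_normal((naux, naux))
J = A @ A.T + naux * np.eye(naux)              # SPD stand-in for the Coulomb metric

# Fold the metric into the 3-center tensor: J = L L^T, so J^-1 = L^-T L^-1
L_inv = np.linalg.inv(np.linalg.cholesky(J))
B = np.einsum("QP,Ppq->Qpq", L_inv, Ppq)       # fitted 3-index tensor B_Q,pq

# Approximate 4-center ERIs from the 3-center form
eri_df = np.einsum("Qpq,Qrs->pqrs", B, B)

# Coulomb-like contraction with a density matrix D: J_pq = sum_rs (pq|rs) D_rs,
# done directly from B without ever building the 4-center tensor
D = rng.standard_normal((nbf, nbf))
D = 0.5 * (D + D.T)
J_pq = np.einsum("Qpq,Qrs,rs->pq", B, B, D)
```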
  • Item
    Parallel simulation of scale-free networks
    (Georgia Institute of Technology, 2017-08-01) Nguyen, Thuy Vy Thuy
    It has been observed that many networks arising in practice have skewed node degree distributions; scale-free networks are one well-known class of such networks. Achieving efficient parallel simulation of scale-free networks is challenging because large-degree nodes can create bottlenecks that limit performance. To help address this problem, we describe an approach called link partitioning, in which each network link is mapped to a logical process, in contrast to the conventional approach of mapping each node to a logical process.
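A small sketch of the idea, under assumed parameters: on a synthetic scale-free graph, hashing each link to a logical process (LP) spreads a hub's workload across many LPs, whereas mapping each node to an LP concentrates it on one. This only illustrates the partitioning contrast and is not the simulator described in the thesis.

```python
# Contrast node partitioning with link partitioning on a scale-free graph.
# The graph model and LP count are illustrative assumptions.
import networkx as nx
from collections import Counter

G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)  # skewed degree distribution
num_lps = 64

# Node partitioning: each edge's events are handled by the LPs owning its endpoints,
# so a hub node's LP receives work for every edge incident to the hub.
node_load = Counter()
for u, v in G.edges():
    node_load[u % num_lps] += 1
    node_load[v % num_lps] += 1

# Link partitioning: each edge is mapped to its own LP by hashing the edge.
link_load = Counter(hash((u, v)) % num_lps for u, v in G.edges())

print("max load per LP, node partitioning:", max(node_load.values()))
print("max load per LP, link partitioning:", max(link_load.values()))
```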
  • Item
    Geometric feature extraction in support of the single digital thread approach to detailed design
    (Georgia Institute of Technology, 2016-12-08) Gharbi, Aroua
    Aircraft design is a multi-disciplinary and complicated process that takes a long time and requires a large number of trade-offs between customer requirements, various types of constraints, and market competition. Detailed design in particular is the phase that takes the most time, due to the high number of iterations between component design and structural analysis that need to be run before reaching an optimal design. In this thesis, an innovative approach to detailed design is suggested. It promotes a collaborative framework in which knowledge from the small-scale level of components is shared and transferred to the subsystem and system levels, leading to more robust and real-time decisions that speed up the design process. This approach is called the Single Digital Thread Approach to Detailed Design, or STAnDD for short. The implementation of this approach follows a bottom-up plan, starting from the component level and moving up to the aircraft level. At the component level, and from a detailed design perspective, three major operations need to be executed in order to deploy the Single Digital Thread approach. The first is the automatic extraction of geometric component features from a solid with no design history, the second is building an optimizer around the design and analysis iterations, and the third is the automatic update of the solid. This thesis suggests a methodology to implement the first phase. Extracting geometric features automatically from a solid with no history (also called a dumb solid) is not an easy process, especially in the aircraft industry, where most components have very complex shapes. Innovative machine learning techniques were used, allowing consistent and robust extraction of the data.
  • Item
    Simulations of binary black holes in scalar field cosmologies
    (Georgia Institute of Technology, 2016-08-01) Tallaksen, Katharine Christina
    Numerical relativity allows us to solve Einstein's equations and study astrophysical phenomena we may not be able to observe directly, such as the very early universe. In this work, we examine the effect of scalar field cosmologies on binary black hole systems. These scalar field cosmologies were studied using cosmological bubbles, spherically symmetric structures that may have powered inflationary phase transitions. The Einstein Toolkit and Maya, developed at Georgia Tech, were used to simulate these systems. The systems studied include cosmological bubbles, binary black holes in vacuum, and binary black holes embedded within cosmological bubbles. Differences in mass accretion, merger trajectories, and characteristic gravitational waveforms are presented for these systems. In the future, analyzing the parameter space of these waveforms may provide a method to discover a gravitational wave signature characteristic of these systems and possibly detectable by the Laser Interferometer Gravitational-Wave Observatory.
  • Item
    Agglomerative clustering for community detection in dynamic graphs
    (Georgia Institute of Technology, 2016-05-10) Godbole, Pushkar J.
    Agglomerative clustering techniques work by recursively merging graph vertices into communities to maximize a clustering quality metric. The modularity metric, coined by Newman and Girvan, measures cluster quality on the premise that a cluster is a collection of vertices more strongly connected internally than would be expected by random chance. Various fast and efficient algorithms for community detection based on modularity maximization have been developed for static graphs. However, since many contemporary networks are not static but rather evolve over time, static approaches are inappropriate for clustering dynamic graphs. Modularity optimization in changing graphs is a relatively new field that entails the need to develop efficient algorithms for detecting and maintaining a community structure while minimizing the “size of change” and the computational effort. The objective of this work was to develop an efficient dynamic agglomerative clustering algorithm that attempts to maximize modularity while minimizing the “size of change” in the transitioning community structure. First, we briefly discuss the previous memoryless dynamic reagglomeration approach with localized vertex freeing and illustrate its performance and limitations. Then we describe the new backtracking algorithm, followed by its performance results and observations. In experimental analysis of both typical and pathological cases, we evaluate and justify various backtracking and agglomeration strategies in the context of the graph structure and incoming stream topologies. Evaluation of the algorithm on social network datasets, including the Facebook (SNAP) and PGP Giant Component networks, shows significantly improved performance over its conventional static counterpart in terms of execution time, modularity, and size of change.
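For context, the static baseline that such a dynamic algorithm is compared against can be run with networkx's built-in greedy (CNM-style) agglomerative modularity maximization, as in the sketch below. The example graph is illustrative, and the thesis's dynamic backtracking algorithm itself is not reproduced here.

```python
# Static agglomerative modularity maximization (greedy CNM-style merges) on a
# snapshot graph, using networkx. Serves only as an illustrative static
# counterpart to the dynamic approach described in the abstract.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()                        # small example graph
communities = greedy_modularity_communities(G)    # merge pairs with best modularity gain
Q = modularity(G, communities)

print(f"{len(communities)} communities, modularity Q = {Q:.3f}")
```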
  • Item
    Implementation and analysis of a parallel vertex-centered finite element segmental refinement multigrid solver
    (Georgia Institute of Technology, 2016-04-28) Henneking, Stefan
    In a parallel vertex-centered finite element multigrid solver, segmental refinement can be used to avoid all inter-process communication on the fine grids. While domain decomposition methods generally require coupled subdomain processing for the numerical solution of a nonlinear elliptic boundary value problem, segmental refinement exploits the fact that subdomains are almost decoupled with respect to high-frequency error components. This makes it possible to perform multigrid with fully decoupled subdomains on the fine grids, which was proposed as a sequential low-storage algorithm by Brandt in the 1970s and as a parallel algorithm by Brandt and Diskin in 1994. Adams published the first numerical results from a multilevel segmental refinement solver in 2014, confirming the asymptotic exactness of the scheme for a cell-centered finite volume implementation. We continue Brandt's and Adams' research by experimentally investigating the scheme's accuracy with a vertex-centered finite element segmental refinement solver. We confirm that full multigrid accuracy can be preserved for a few segmental refinement levels, although we observe a different dependency on the segmental refinement parameter space. We show that various strategies for the grid transfers between the finest conventional multigrid level and the segmental refinement subdomains affect the solver accuracy. Scaling results are reported for a Cray XC30 with up to 4096 cores.
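For readers unfamiliar with the underlying method, the sketch below shows a generic two-grid correction cycle for a 1D Poisson problem: pre-smoothing, coarse-grid correction, and post-smoothing. It illustrates only the plain multigrid idea that segmental refinement builds on; it is not the thesis's vertex-centered finite element solver and implements no segmental refinement.

```python
# Generic two-grid cycle for -u'' = f on [0,1] with homogeneous Dirichlet
# boundaries. Grid sizes and smoother settings are illustrative only.
import numpy as np

def smooth(u, f, h, sweeps=3):
    """Gauss-Seidel sweeps on the 1D Poisson stencil."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def coarse_solve(r_c, h_c):
    """Direct solve of the coarse-grid system (interior points only)."""
    m = len(r_c) - 2
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (h_c * h_c)
    e_c = np.zeros_like(r_c)
    e_c[1:-1] = np.linalg.solve(A, r_c[1:-1])
    return e_c

def two_grid(u, f, h):
    u = smooth(u, f, h)                      # pre-smoothing on the fine grid
    e_c = coarse_solve(residual(u, f, h)[::2].copy(), 2 * h)
    e = np.zeros_like(u)
    e[::2] = e_c                             # prolongation: copy coarse values
    e[1::2] = 0.5 * (e_c[:-1] + e_c[1:])     # and interpolate between them
    return smooth(u + e, f, h)               # post-smoothing

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)           # exact solution is sin(pi*x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
print("max error vs. sin(pi*x):", np.abs(u - np.sin(np.pi * x)).max())
```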
  • Item
    A framework for automated management of exploit testing environments
    (Georgia Institute of Technology, 2015-12-07) Flansburg, Kevin
    To demonstrate working exploits or vulnerabilities, people often share their findings as a form of proof-of-concept (PoC) prototype. Such practices are particularly useful to learn about real vulnerabilities and state-of-the-art exploitation techniques. Unfortunately, the shared PoC exploits are seldom reproducible; in part because they are often not thoroughly tested, but largely because authors lack a formal way to specify the tested environment or its dependencies. Although exploit writers attempt to overcome such problems by describing their dependencies or testing environments using comments, this informal way of sharing PoC exploits makes it hard for exploit authors to achieve the original goal of demonstration. More seriously, these non- or hard-to-reproduce PoC exploits have limited potential to be utilized for other useful research purposes such as penetration testing, or in benchmark suites to evaluate defense mechanisms. In this paper, we present XShop, a framework and infrastructure to describe environments and dependencies for exploits in a formal way, and to automatically resolve these constraints and construct an isolated environment for development, testing, and to share with the community. We show how XShop's flexible design enables new possibilities for utilizing these reproducible exploits in five practical use cases: as a security benchmark suite, in pen-testing, for large scale vulnerability analysis, as a shared development environment, and for regression testing. We design and implement such applications by extending the XShop framework and demonstrate its effectiveness with twelve real exploits against well-known bugs that include GHOST, Shellshock, and Heartbleed. We believe that the proposed practice not only brings immediate incentives to exploit authors but also has the potential to be grown as a community-wide knowledge base.