Organizational Unit:
School of Computational Science and Engineering


Publication Search Results

Now showing 1 - 10 of 17
  • Item
    Computational methods for nonlinear dimension reduction
    (Georgia Institute of Technology, 2010-11-30) Zha, Hongyuan ; Park, Haesun
  • Item
    Sequences of Problems, Matrices, and Solutions
    (Georgia Institute of Technology, 2010-11-12) De Sturler, Eric
    In a wide range of applications, we deal with long sequences of slowly changing matrices or large collections of related matrices and corresponding linear algebra problems. Such applications range from the optimal design of structures to acoustics and other parameterized systems, to inverse and parameter estimation problems in tomography and systems biology, to parameterization problems in computer graphics, and to the electronic structure of condensed matter. In many cases, we can reduce the total runtime significantly by taking into account how the problem changes and recycling judiciously selected results from previous computations. In this presentation, I will focus on solving linear systems, which is often the basis of other algorithms. I will introduce the basics of linear solvers and discuss relevant theory for the fast solution of sequences or collections of linear systems. I will demonstrate the results on several applications and discuss future research directions.
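The simplest form of the recycling idea this abstract describes, reusing information from one solve to accelerate the next in a sequence of slowly changing systems, can be illustrated by warm-starting an iterative solver. A minimal sketch (plain conjugate gradients with a reused initial guess, not the Krylov-subspace recycling methods of the talk; the matrices and perturbation are made up for the example):

```python
import numpy as np

def cg(A, b, x0, tol=1e-8, maxit=500):
    """Plain conjugate gradients; returns the solution and iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        if np.sqrt(rs) < tol * np.linalg.norm(b):
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

# A pair of slowly changing SPD systems: recycle the first solution as
# the starting guess for the second and compare iteration counts.
rng = np.random.default_rng(0)
n = 100
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)              # well-conditioned SPD matrix
b = rng.normal(size=n)
x, cold = cg(A, b, np.zeros(n))
A2 = A + 0.01 * np.eye(n)                # slightly perturbed next system
_, cold2 = cg(A2, b, np.zeros(n))
_, warm = cg(A2, b, x)                   # warm start from previous solution
```

Full recycling methods keep a judiciously selected subspace of Krylov vectors rather than just the previous iterate, but the warm start already shows the principle: the more relevant the retained information, the fewer iterations the next solve needs.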
  • Item
    High-performance computing for massive graph analysis
    (Georgia Institute of Technology, 2010-10-30) Bader, David A.
  • Item
    Metanumerical computing for partial differential equations: the Sundance project
    (Georgia Institute of Technology, 2010-10-29) Kirby, Robert C.
    Metanumerical computing deals with computer programs that use abstract mathematical structure to manipulate, generate, and/or optimize compute-intensive numerical codes. This idea has gained popularity over the last decade in several areas of scientific computing, including numerical linear algebra, signal processing, and partial differential equations. The Sundance project is one such example, using high-level software-based differentiation of variational forms to automatically produce high-performance finite element implementations, all within a C++ library. In addition to automating the discretization of PDEs by finite elements, recent work demonstrates how to produce block-structured matrices and streamline the implementation of advanced numerical methods. I will conclude with some examples of this for incompressible flow problems.
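Sundance's real machinery is far richer, but the core idea of software-based differentiation of variational forms can be sketched in a toy way: differentiate a discrete energy functional automatically (here with hand-rolled forward-mode dual numbers, not Sundance's C++ API) to obtain the weak-form residual of a one-dimensional Poisson problem:

```python
import numpy as np

class Dual:
    """Minimal forward-mode dual number: value plus directional derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def energy(u, h, f):
    """Discrete energy J(u) = sum h*( (u')^2/2 - f*u ) for -u'' = f,
    with homogeneous Dirichlet boundary values appended."""
    U = [Dual(0.0)] + list(u) + [Dual(0.0)]
    J = Dual(0.0)
    for i in range(len(U) - 1):
        du = (U[i + 1] - U[i]) * (1.0 / h)   # difference quotient per interval
        J = J + h * (0.5 * du * du)
    for i in range(1, len(U) - 1):
        J = J - h * (f * U[i])
    return J

def residual(uvals, h, f):
    """Differentiate J in each coordinate direction: the weak-form residual."""
    n = len(uvals)
    r = np.zeros(n)
    for j in range(n):
        u = [Dual(v, 1.0 if i == j else 0.0) for i, v in enumerate(uvals)]
        r[j] = energy(u, h, f).dot
    return r

# Solve -u'' = 1 on (0,1) by driving the auto-derived residual to zero.
# The residual is linear in u, so assembling its Jacobian column by
# column and solving once suffices.
n, f = 9, 1.0
h = 1.0 / (n + 1)
J = np.array([residual(np.eye(n)[j] * 1.0, h, f) - residual(np.zeros(n), h, f)
              for j in range(n)]).T
u = np.linalg.solve(J, -residual(np.zeros(n), h, f))
```

For this quadratic exact solution u(x) = x(1-x)/2, the recovered grid values match the analytic solution; the point is that the solver code never wrote down the stiffness matrix by hand.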
  • Item
    Gravity's Strongest Grip: A Computational Challenge
    (Georgia Institute of Technology, 2010-10-22) Shoemaker, Deirdre
    Gravitational physics is entering a new era driven by observation that will begin once gravitational-wave interferometers make their first detections. In the universe, gravitational waves are produced during violent events such as the merger of two black holes. The detection of these waves, sometimes called ripples in the fabric of spacetime, is a formidable undertaking, requiring innovative engineering, powerful data analysis tools and careful theoretical modeling. High performance computing plays a vital role in our ability to predict and interpret gravitational waves with theoretical modeling of the sources. I will provide an overview of the high performance and data analysis challenges we face in making the first and subsequent detection of gravitational waves.
  • Item
    Novel Applications of Graph Embedding Techniques
    (Georgia Institute of Technology, 2010-10-01) Bhowmick, Sanjukta
    Force-directed graph embedding algorithms, like the Fruchterman-Reingold method, are typically used to generate aesthetically pleasing graph layouts. At a fundamental level, these algorithms are based on manipulating the structural properties of the graph to match them to certain spatial requirements. This relation between structural and spatial properties is also present in other areas beyond graph visualization. In this talk, I will discuss how graph embedding can be used in diverse areas such as (i) improving the accuracy of unsupervised clustering, (ii) creating good quality elements in unstructured meshes and (iii) identifying perturbations in large-scale networks.
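A bare-bones version of the Fruchterman-Reingold scheme the abstract mentions, in which all vertex pairs repel, edges attract, and displacements are capped by a cooling temperature, might look like this (a minimal illustrative sketch, not the speaker's implementation; the constants and test graph are made up):

```python
import numpy as np

def fruchterman_reingold(edges, n, iters=100, seed=0):
    """Minimal force-directed layout: all pairs repel, edges attract."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n, 2))
    k = 1.0 / np.sqrt(n)                  # ideal edge length
    t = 0.1                               # "temperature" capping each move
    for _ in range(iters):
        disp = np.zeros_like(pos)
        # repulsive forces between every pair of vertices
        for i in range(n):
            d = pos[i] - pos              # vectors from all nodes toward i
            dist = np.linalg.norm(d, axis=1) + 1e-9
            disp[i] += (d / dist[:, None] * (k * k / dist)[:, None]).sum(axis=0)
        # attractive forces along edges
        for i, j in edges:
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d) + 1e-9
            f = d / dist * (dist * dist / k)
            disp[i] -= f
            disp[j] += f
        # limit each displacement by the cooling temperature
        length = np.linalg.norm(disp, axis=1) + 1e-9
        pos += disp / length[:, None] * np.minimum(length, t)[:, None]
        t *= 0.97                         # cool down
    return pos

# Two triangles joined by a bridge edge: the layout pulls each triangle
# together while the clusters push apart.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
pos = fruchterman_reingold(edges, 6)
```

The matching of structural to spatial properties shows up directly: after the layout converges, adjacent vertices end up closer together, on average, than non-adjacent ones.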
  • Item
    Algorithms and software with tunable parallelism
    (Georgia Institute of Technology, 2010-09-30) Vuduc, Richard
  • Item
    The Joy of PCA
    (Georgia Institute of Technology, 2010-09-17) Vempala, Santosh S.
    Principal Component Analysis is the most widely used technique for high-dimensional or large data. For typical applications (nearest neighbor, clustering, learning), it is not hard to build examples on which PCA "fails." Yet, it is popular and successful across a variety of data-rich areas. In this talk, we focus on two algorithmic problems where the performance of PCA is provably near-optimal, and no other method is known to have similar guarantees. The problems we consider are (a) the classical statistical problem of unraveling a sample from a mixture of k unknown Gaussians and (b) the classic learning theory problem of learning an intersection of k halfspaces. During the talk, we will encounter recent extensions of PCA that are noise-resistant, affine-invariant and nonviolent.
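The core PCA computation underlying the abstract, centering the data and projecting onto the top singular directions, is only a few lines. A generic SVD-based sketch (not the talk's algorithms for Gaussian mixtures or halfspaces; the synthetic data is made up for the example):

```python
import numpy as np

def pca(X, k):
    """Project data onto its top-k principal components via the SVD."""
    Xc = X - X.mean(axis=0)              # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]         # scores and component directions

# Points lying near a line in 3-D: a single component captures
# essentially all of the variance.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = t @ np.array([[1.0, 2.0, -1.0]]) + 0.01 * rng.normal(size=(200, 3))
scores, comps = pca(X, 1)
```

The provable guarantees discussed in the talk concern what happens when the interesting structure, such as the subspace spanned by the Gaussian means, aligns with these top singular directions.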
  • Item
    Composite Objective Optimization and Learning for Massive Datasets
    (Georgia Institute of Technology, 2010-09-03) Singer, Yoram
    Composite objective optimization is concerned with minimizing a two-term objective function that consists of an empirical loss function and a regularization function. Applications with massive datasets often employ a regularization term that is non-differentiable or structured, such as L1 or mixed-norm regularization. Such regularizers promote sparse solutions and special structure in the parameters of the problem, which is a desirable goal for datasets of extremely high dimension. In this talk, we discuss several recently developed methods for performing composite objective minimization in the online learning and stochastic optimization settings. We start with a description of extensions of the well-known forward-backward splitting method to stochastic objectives. We then generalize this paradigm to the family of mirror descent algorithms. Our work builds on recent results connecting proximal minimization to online and stochastic optimization. In the algorithmic part, we focus on a new approach, called AdaGrad, in which the proximal function is adapted throughout the course of the algorithm in a data-dependent manner. This temporal adaptation metaphorically allows us to find needles in haystacks, as the algorithm is able to single out very predictive yet rarely observed features. We conclude with several experiments on large-scale datasets that demonstrate the merits of composite objective optimization and underscore the superior performance of various instantiations of AdaGrad.
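One way to see the flavor of AdaGrad-style composite minimization is a sketch combining per-coordinate adaptive step sizes with an L1 proximal (soft-thresholding) step. This is an illustrative simplification, not the authors' implementation; the least-squares problem, step size eta, and regularization weight lam are made up for the example:

```python
import numpy as np

def adagrad_l1(grad_fn, x0, lam=0.1, eta=0.5, steps=500, eps=1e-8):
    """AdaGrad with an L1 proximal (soft-thresholding) step.

    Each coordinate gets its own step size eta / sqrt(accumulated squared
    gradients), so rarely active but predictive features take large steps."""
    x, G = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        g = grad_fn(x)                    # gradient of the smooth loss only
        G += g * g                        # accumulate squared gradients
        step = eta / (np.sqrt(G) + eps)   # per-coordinate step sizes
        z = x - step * g                  # gradient step on the loss
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # L1 prox
    return x

# Least-squares loss whose minimizer is sparse: only the first two
# coordinates of the ground truth are nonzero.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))
w_true = np.zeros(10)
w_true[:2] = [3.0, -2.0]
b = A @ w_true
grad = lambda w: A.T @ (A @ w - b) / len(b)
w = adagrad_l1(grad, np.zeros(10))
```

The proximal step handles the non-differentiable regularizer exactly rather than subgradient-stepping through it, which is the forward-backward splitting structure the abstract describes; the accumulated-gradient scaling is the data-dependent adaptation.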
  • Item
    Domain knowledge, uncertainty, and parameter constraints
    (Georgia Institute of Technology, 2010-08-24) Mao, Yi