Vempala, Santosh S.

Publication Search Results

Now showing 1 - 5 of 5
  • Item
    Life (and routing) on the Wireless Manifold
    (Georgia Institute of Technology, 2007) Kanade, Varun ; Vempala, Santosh S. ; Georgia Institute of Technology. College of Computing
    We present the wireless manifold, a 2-dimensional surface in 3-dimensional space with the property that geodesic distances accurately capture wireless signal strengths. A compact representation of the manifold can be reconstructed from a sparse set of signal measurements. The manifold distance suggests a simple routing algorithm that avoids obstacles, naturally handles mobile nodes without explicitly maintaining the connectivity graph, and is more efficient than routing with Euclidean distance, as measured by success rate, routing load, and failure tolerance. Placing sensors to cover the manifold is more effective than covering the underlying physical space.
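    The routing idea in the abstract — forward a packet to the neighbor closest to the destination under geodesic rather than Euclidean distance — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the toy graph, node names, and the use of graph shortest-path distance as a stand-in for manifold geodesic distance are all assumptions.

    ```python
    import heapq

    # Toy "manifold" graph: adjacency map with edge lengths (illustrative only).
    graph = {
        'a': {'b': 1, 'c': 4},
        'b': {'a': 1, 'c': 1, 'd': 5},
        'c': {'a': 4, 'b': 1, 'd': 1},
        'd': {'b': 5, 'c': 1},
    }

    def dist(graph, src):
        """Dijkstra: shortest-path distance from src to every reachable node."""
        d = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > d.get(u, float('inf')):
                continue  # stale queue entry
            for v, w in graph[u].items():
                if du + w < d.get(v, float('inf')):
                    d[v] = du + w
                    heapq.heappush(pq, (du + w, v))
        return d

    def greedy_route(graph, src, dst):
        """Greedily forward to the neighbor nearest dst; fail on a local minimum."""
        to_dst = dist(graph, dst)
        path, cur = [src], src
        while cur != dst:
            nxt = min(graph[cur], key=to_dst.get)
            if to_dst[nxt] >= to_dst[cur]:
                return None  # no neighbor is closer: greedy routing is stuck
            path.append(nxt)
            cur = nxt
        return path
    ```

    With a distance that reflects the true geometry (here, graph distance), the greedy step routes around the long a–c edge via c's short links; with plain Euclidean distance such greedy forwarding can dead-end at obstacles.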
  • Item
    A Computer Science View of the Brain
    (Georgia Institute of Technology, 2017-03-15) Vempala, Santosh S. ; Georgia Institute of Technology. Algorithms & Randomness Center ; Georgia Institute of Technology. Neural Engineering Center
    Computational perspectives on scientific phenomena have often proven to be remarkably insightful. Rapid advances in computational neuroscience, and the resulting plethora of data and models highlight the lack of an overarching theory for how the brain accomplishes perception and cognition (the mind). Taking the view that the answer must surely have a computational component, we present a few approachable questions for computer scientists, along with some recent work (with Christos Papadimitriou, Samantha Petti and Wolfgang Maass) on mechanisms for the formation of memories, the creation of associations between memories and the benefits of such associations.
  • Item
    The Joy of PCA
    (Georgia Institute of Technology, 2010-09-17) Vempala, Santosh S. ; Georgia Institute of Technology. School of Computational Science and Engineering
    Principal Component Analysis is the most widely used technique for high-dimensional or large data. For typical applications (nearest neighbor, clustering, learning), it is not hard to build examples on which PCA "fails." Yet, it is popular and successful across a variety of data-rich areas. In this talk, we focus on two algorithmic problems where the performance of PCA is provably near-optimal, and no other method is known to have similar guarantees. The problems we consider are (a) the classical statistical problem of unraveling a sample from a mixture of k unknown Gaussians and (b) the classic learning theory problem of learning an intersection of k halfspaces. During the talk, we will encounter recent extensions of PCA that are noise-resistant, affine-invariant and nonviolent.
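    The first problem in the abstract — unraveling a mixture of Gaussians — illustrates why PCA works: the top principal components capture the directions along which the component means differ. A minimal sketch, assuming two well-separated spherical Gaussians (the data and parameters below are illustrative, not from the talk):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two spherical Gaussians in 50 dimensions whose means differ along e_1.
    d, n = 50, 500
    mu = np.zeros(d)
    mu[0] = 10.0
    X = np.vstack([rng.normal(0, 1, (n, d)) + mu,
                   rng.normal(0, 1, (n, d)) - mu])

    # PCA: center the data and take the top principal component via SVD.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[:1].T          # projection onto the top principal component

    # The mixture separates cleanly along that single direction.
    labels = proj[:, 0] > 0
    ```

    The variance along the mean-separation direction dwarfs the unit variance elsewhere, so the top component aligns with it and a one-dimensional projection suffices to split the sample.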
  • Item
    Professor Debate on the Topic - Do We Live In a Simulation?
    (Georgia Institute of Technology, 2019-11-12) Cvitanović, Predrag ; Holder, Mary ; Klein, Hans ; Rocklin, D. Zeb ; Turk, Gregory ; Vempala, Santosh S. ; Georgia Institute of Technology. School of Physics ; Georgia Institute of Technology. Center for Nonlinear Science ; Georgia Institute of Technology. School of Psychology ; Georgia Institute of Technology. School of Public Policy ; Georgia Institute of Technology. School of Interactive Computing ; Georgia Institute of Technology. School of Computer Science
    Do we live in a simulation? The School of Physics and the Society of Physics Students will host a public debate between faculty from the College of Sciences and the College of Computing to answer this question. This event is free and open to all. There will be time at the conclusion of the debate for audience members to direct questions to the faculty panel.
  • Item
    Emergent Computation and Learning from Assemblies of Neurons
    (2022-09-19) Vempala, Santosh S. ; Georgia Institute of Technology. Neural Engineering Center ; Georgia Institute of Technology. College of Computing
    Despite breathtaking advances in ML, and in our understanding of the brain at the level of neurons, synapses, and neural circuits, we lack a satisfactory explanation for the brain's performance in perception, cognition, language and behavior; as Nobel laureate Richard Axel put it, "we do not have a logic for the transformation of neural activity into thought and action". The Assembly Calculus (AC) is a framework to fill this gap: a computational model whose basic data type is the neural assembly, a large subset of neurons whose simultaneous excitation is tantamount to the subject's thinking of an object, idea, episode, or word. The AC provides a repertoire of operations ("project", "reciprocal-project", "associate", "pattern-complete", etc.) whose implementation relies only on Hebbian plasticity and inhibition, and encompasses a complete computational system. It has been shown, rigorously and in simulation, that the AC can learn to classify samples from well-separated classes. For basic concept classes in high dimension, an assembly can be formed and recalled for each class, and these assemblies are distinguishable as long as the input classes are sufficiently separated. Viewed as a learning algorithm, this mechanism is entirely online, generalizes from very few samples, and requires only mild supervision: all attributes expected of a brain-like mechanism. This talk will describe these and more recent developments for learning and computing with sequences. It will highlight several fascinating questions that arise, from random models of the connectome, to the convergence of assemblies, to their unexpected generalization abilities, to capturing the brain's ease with language. This is based on joint work with Christos Papadimitriou, Max Dabagia, Mirabel Reid, and Dan Mitropolsky.
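    The abstract's "project" operation — form an assembly in a target area using only Hebbian plasticity and inhibition — can be sketched in a few lines. This is a simplified illustration under stated assumptions, not the authors' model or code: inhibition is modeled as k-winners-take-all, plasticity as a multiplicative boost on synapses from the previous round's winners, and all sizes, probabilities, and variable names below are invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed parameters: area size n, assembly size k, edge probability p,
    # Hebbian plasticity rate beta, number of firing rounds.
    n, k, p, beta, rounds = 1000, 50, 0.05, 0.1, 10

    # Random stimulus->area weights and a sparse random recurrent graph,
    # where W_rec[i, j] is the synapse from neuron j onto neuron i.
    W_stim = rng.random(n) * (rng.random(n) < p)
    W_rec = (rng.random((n, n)) < p).astype(float)

    winners = np.argsort(W_stim)[-k:]       # round 1: stimulus input only
    for _ in range(rounds):
        # Total input: stimulus plus recurrent input from current winners.
        inputs = W_stim + W_rec[:, winners].sum(axis=1)
        new_winners = np.argsort(inputs)[-k:]   # inhibition: k-winners-take-all
        # Hebbian update: strengthen synapses from old winners onto new winners.
        W_rec[np.ix_(new_winners, winners)] *= 1 + beta
        winners = new_winners

    assembly = set(winners)   # the (approximately) stabilized winner set
    ```

    In the full model, repeated firing plus plasticity makes the winner set converge to a stable assembly whose recurrent connections are strong enough for later recall; this sketch only shows the update loop's shape.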