Organizational Unit:
Center for Research into Novel Computing Hierarchies (CRNCH)


Publication Search Results

  • Item
    The Center for Health Analytics and Informatics (CHAI) - Observational Health Data Analytics at Scale
    (Georgia Institute of Technology, 2018-11-02) Poovey, Jason A.
    An all-day summit featuring plenary talks on the future of computing and panel discussions.
  • Item
    Emergent Computation in Active Matter
    (Georgia Institute of Technology, 2018-11-02) Randall, Dana
    Swarm robotics and programmable active matter both explore how individual agents can come together to perform useful tasks collectively. How effectively they can do this depends on the individuals' computational capabilities, the size of their memory, how easily they can communicate, and the complexity of the task. We will look at this question through the lens of distributed algorithms and stochastic processes to understand how and when individual agents with limited resources can collectively accomplish tasks that are greater than the sum of their parts.
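To make the stochastic-process lens in the abstract above concrete, here is a minimal toy sketch (not code from the talk; the agent count and interaction rule are illustrative). Memoryless agents holding a single bit reach global agreement using nothing but random pairwise copying, a classic "voter model": limited individuals, a collective outcome.

```python
# Toy voter model: agents with one bit of state and no memory reach
# global consensus via random pairwise interactions. Purely illustrative.
import random

random.seed(1)

def voter_consensus(n_agents=100):
    """Random pairwise copying until all agents agree.

    Returns (winning_opinion, number_of_interactions)."""
    state = [random.randrange(2) for _ in range(n_agents)]
    steps = 0
    while 0 < sum(state) < len(state):      # loop until unanimity
        i, j = random.sample(range(n_agents), 2)
        state[i] = state[j]                  # agent i adopts its partner's opinion
        steps += 1
    return state[0], steps

winner, steps = voter_consensus()
print(f"consensus on {winner} after {steps} interactions")
```

The interesting question the talk poses is precisely how step counts and success probabilities like these scale with the agents' (very limited) resources.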
  • Item
    Migratory Memory-Side Processing Breakthrough Architecture for Graph Analytics
    (Georgia Institute of Technology, 2018-11-02) Kogge, Peter
    Today's data-intensive applications, such as sparse-matrix linear algebra and graph analytics, do not exhibit the same locality traits as compute-intensive applications, so the latency of individual memory accesses overwhelms the advantages of deeply pipelined fast cores. The Emu Migratory Memory-Side Processing architecture provides a highly efficient, fine-grained memory system and migrating threads that move thread state to new memory locations as they are accessed, without explicit program directives. The "put-only" communication model dramatically reduces thread latency and total network bandwidth load, because return trips and cache-coherency traffic are eliminated. Working with the CRNCH team, Emu shares results that validate the viability of the architecture, presents a roadmap for scalability, and discusses how the architecture delivers orders-of-magnitude reductions in data movement, inter-process communication, and energy requirements. The talk touches on the familiar programming model selected for the architecture, which makes it accessible to programmers and data scientists, and reveals upcoming areas of joint research with Georgia Tech in the area of Migratory Threads.
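A rough way to see why the "put-only" model described above helps irregular workloads: compare one-way network messages for a pointer chase under a conventional pull model (request + reply per remote access) versus a migrating thread (one hop of thread state per move). This sketch is a first-order counting exercise with made-up node counts, not a model of the actual Emu hardware.

```python
# Toy message-counting model for a pointer chase across NUM_NODES nodes.
# Conventional: a fixed core pulls each remote word (2 one-way messages).
# Migratory: the thread context ships to the data (1 one-way message).
import random

random.seed(0)
NUM_NODES = 8
CHAIN_LEN = 1000

# Each pointer dereference lands on a random node (poor locality on purpose).
chain_nodes = [random.randrange(NUM_NODES) for _ in range(CHAIN_LEN)]

def conventional_messages(chain, home=0):
    """Fixed core at `home`: request out + data back for every remote access."""
    return sum(2 for node in chain if node != home)

def migratory_messages(chain, start=0):
    """Thread state migrates to the data; no reply trip is needed."""
    current, msgs = start, 0
    for node in chain:
        if node != current:
            msgs += 1        # ship the (small) thread context one hop
            current = node
    return msgs

print("conventional:", conventional_messages(chain_nodes))  # roughly 2 * 7/8 * 1000
print("migratory:   ", migratory_messages(chain_nodes))     # roughly 7/8 * 1000
```

With poor locality, the pull model pays two messages for nearly every access, while migration pays at most one, which is the halving of network load (before even counting coherence traffic) that the abstract alludes to.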
  • Item
    Architecture and Compiler Support for Near-Term Quantum Computers
    (Georgia Institute of Technology, 2018-11-02) Qureshi, Moinuddin K.
    An all-day summit featuring plenary talks on the future of computing and panel discussions.
  • Item
    Reconfigurable and Programmable Physical Computing for a Digital Computing World
    (Georgia Institute of Technology, 2018-11-02) Hasler, Jennifer
    An all-day summit featuring plenary talks on the future of computing and panel discussions.
  • Item
    Hybrid Machine Learning for Complex Systems: Algorithm and Architecture to Couple Model-Based and Data-Driven Learning
    (Georgia Institute of Technology, 2018-11-02) Mukhopadhyay, Saibal
    An all-day summit featuring plenary talks on the future of computing and panel discussions.
  • Item
    DNN-Dataflow-Hardware Co-Design for Enabling Pervasive General-Purpose AI
    (Georgia Institute of Technology, 2018-11-02) Krishna, Tushar
    The development of supervised-learning-based deep learning (DL) solutions today is mostly open loop. A typical DL model is created by hand-tuning the neural network (NN) topology by a team of experts over multiple iterations, often by trial and error, and then trained over gargantuan amounts of labeled data for weeks at a time to obtain a set of weights. The trained model is then deployed in the cloud or at the edge on inference accelerators (such as GPUs, FPGAs, or ASICs). This form of DL breaks in the absence of labeled data, if the model for the task at hand is unknown, or if the problem keeps changing. An AI system for continuous learning needs the ability to constantly interact with the environment and to add and remove connections within the NN autonomously, just as our brains do. In this talk, we will briefly present our research efforts toward enabling general-purpose AI. First, we will present GeneSys, a HW-SW prototype of an Evolutionary Algorithm (EA)-based learning system, which comprises a closed-loop learning engine called EvE and an inference engine called ADAM. EvE is a genetic algorithm accelerator that can "evolve" the topology and weights of NNs completely in hardware for the task at hand, without requiring hand-optimization or back-propagation training. ADAM continuously interacts with the environment and is optimized for efficiently running the irregular NNs generated by EvE, which today's suite of DL accelerators and GPUs are not optimized to handle. Next, we focus on the challenge of mapping a DNN model (developed via supervised or EA-based methods) efficiently onto an accelerator (ASIC/GPU/FPGA). DNNs are essentially multi-dimensional loops with millions of parameters and billions of computations, and they can be partitioned in myriad ways to map over the compute array. Each unique mapping, or "dataflow," provides different trade-offs in throughput and energy efficiency, as it determines overall utilization and data reuse. Moreover, the right dataflow for a DNN depends heavily on the layer type, the input-activation-to-weight ratio, the accelerator microarchitecture, and its memory hierarchy. We will present an analytical tool called MAESTRO, which we have been developing in collaboration with NVIDIA, for formally characterizing the performance and energy impact of dataflows in DNNs today. MAESTRO can be used at design time, to provide quick first-order metrics when hardware resources (buffers and interconnects) are being allocated on-chip, and at compile time, when different layers need to be mapped optimally for high utilization and energy efficiency. Finally, we will present the microarchitecture of an open-source DNN accelerator called MAERI, which can adaptively change the dataflow depending on the DNN layer currently being mapped by leveraging a runtime-reconfigurable interconnection fabric.
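The dataflow trade-off described in the abstract above can be illustrated with a first-order data-movement count for a GEMM, in the spirit of (but much simpler than) the MAESTRO-style analysis; the dataflows, sizes, and buffer assumptions below are illustrative, not taken from the talk.

```python
# Toy first-order traffic model for C[M,N] += A[M,K] * B[K,N].
# We count words moved to/from backing memory under two classic dataflows.
M, N, K = 64, 64, 64

def output_stationary_no_reuse(M, N, K):
    """Each C element accumulates in a register; A and B are refetched per MAC."""
    reads = 2 * M * N * K      # one A word + one B word for every MAC
    writes = M * N             # each finished output written once
    return reads + writes

def b_resident(M, N, K):
    """Assume B (K*N words) plus one row each of A and C fit on chip:
    B is loaded once, A is streamed once row by row, reused across all N outputs."""
    reads = K * N + M * K
    writes = M * N
    return reads + writes

for name, fn in [("output-stationary, no input reuse", output_stationary_no_reuse),
                 ("B held on-chip", b_resident)]:
    print(f"{name}: {fn(M, N, K)} words moved")
```

Even this crude model shows orders-of-magnitude differences in traffic between mappings of the same loop nest, which is why the choice of dataflow (and hardware flexible enough to change it per layer, as in MAERI) matters so much.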
  • Item
    CRNCH Rogues Gallery Update: A Community Core for Novel Computing Platforms
    (Georgia Institute of Technology, 2018-11-02) Riedy, Jason ; Young, Jeffrey
    In one classic sense a rogue is someone who goes their own way, who breaks away from the crowd. The CRNCH Rogues Gallery aims to support computer architecture rogues by being a physical and virtual space providing access to novel computing architectures. Researchers find applications, and architects discover what happens when their prototypes hit reality. Our goals are to help kick-start software ecosystems, train students in novel system evaluation and use, and provide rapid feedback to architects. By exposing students and researchers to this set of unique hardware, we foster cross-cutting discussions about hardware designs that will drive future performance improvements in computing long after the Moore's Law era of "cheap transistors" ends.
  • Item
    CRNCH Summit Opening Remarks
    (Georgia Institute of Technology, 2018-11-02) Galil, Zvi ; Sarkar, Vivek
    Welcome by Dean Zvi Galil and Vivek Sarkar, Professor and CRNCH Co-director. An all-day summit featuring plenary talks on the future of computing and panel discussions.
  • Item
    An Overview of AI Initiative and Research at ORNL
    (Georgia Institute of Technology, 2018-11-02) Womble, David
    An all-day summit featuring plenary talks on the future of computing and panel discussions.