Title:
Neuro-general computing: an acceleration-approximation approach

Author(s)
Yazdanbakhsh, Amir
Advisor(s)
Esmaeilzadeh, Hadi
Abstract
A growing number of commercial and enterprise systems rely on compute- and power-intensive tasks. While the demand for these tasks is growing, the performance benefits of general-purpose platforms are diminishing. Without continuous performance improvements, grand-challenge applications, such as computer vision, machine learning, and big data analytics, may stay out of reach due to their need for significantly higher compute capacity. Addressing these intertwined challenges requires moving beyond traditional techniques and exploring unconventional paradigms in computing. This thesis leverages approximate computing---one of the unconventional yet promising paradigms in computing---to mitigate these challenges and sustain traditional performance improvements in general-purpose platforms.

First, I introduce a novel computing paradigm, called neuro-general computing, that trades off application accuracy in return for significant increases in performance and energy efficiency. This paradigm conjoins two previously disjoint forms of specialization, approximation and acceleration, to further improve the efficiency of general-purpose computing. To this end, I leverage the approximability of various emerging applications, such as computer vision, machine learning, financial analysis, and scientific computing, to integrate neuro-computing models within conventional von Neumann general-purpose computing platforms. For this neuro-general computing paradigm, I devise a full-stack solution---from circuit to algorithm---and explore three distinct design points: mixed-mode analog-digital acceleration, GPU acceleration, and in-memory acceleration.

Next, I study the symbiosis between accelerator design and approximation in the domain of deep convolutional neural networks, exploring how these complementary specialization techniques can be applied jointly. I propose a hardware-software co-design that leverages a unique algorithmic property of deep convolutional neural networks to cut the computation cost of the convolution operations.

Finally, I transition toward designing accelerators for Generative Adversarial Networks (GANs), the frontier of deep learning models and one of the leading classes of unsupervised learning. GANs are among the most promising models for driving an upheaval in AI and machine imagination; hence, designing an accelerator for this class of deep networks is of paramount importance. As such, this thesis sets out to study the unique architectural challenges of accelerating GANs. The outcome of this study is a unified MIMD-SIMD accelerator architecture for GANs. The proposed accelerator addresses the computational demands of generative models and delivers significant improvements over conventional deep-model accelerators, without sacrificing efficiency on conventional deep learning models.
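
Illustrative sketch: the core idea behind neuro-general computing is to replace an approximable code region with a small neural model whose evaluation can be offloaded to a neural accelerator. The sketch below is not taken from the thesis; the toy 2-8-1 multilayer perceptron, the hypothetical approximable kernel "distance", and all parameter values are illustrative assumptions meant only to show the train-then-substitute workflow.

# A minimal sketch (assumptions: NumPy only, a toy 2-8-1 MLP, and a
# hypothetical approximable kernel `distance`) of the neural-acceleration
# idea: profile an exact code region, train a small neural proxy on its
# inputs/outputs, then invoke the proxy in place of the original region.
import numpy as np

rng = np.random.default_rng(0)

def distance(xy):
    # Hypothetical approximable region: Euclidean norm of 2-D points.
    return np.sqrt((xy ** 2).sum(axis=1, keepdims=True))

# Training data gathered by "profiling" the exact region.
X = rng.uniform(-1.0, 1.0, size=(4096, 2))
Y = distance(X)

# Tiny 2-8-1 MLP: the neural proxy for the region.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(2000):
    # Forward pass: tanh hidden layer, linear output.
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    err = P - Y
    # Backward pass: plain mean-squared-error gradient descent.
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def neural_proxy(xy):
    # Approximate replacement for `distance`, suitable for offload.
    return np.tanh(xy @ W1 + b1) @ W2 + b2

test = rng.uniform(-1.0, 1.0, size=(1000, 2))
rel_err = np.abs(neural_proxy(test) - distance(test)) / (distance(test) + 1e-9)
print(f"mean relative error of the neural proxy: {rel_err.mean():.3%}")

In the full-stack designs studied in the thesis, the proxy's fixed structure of multiply-accumulate and activation operations is what makes it amenable to mixed-mode analog-digital, GPU, or in-memory acceleration; this sketch only demonstrates the accuracy-for-efficiency substitution at the software level.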
Date Issued
2018-07-30
Resource Type
Text
Resource Subtype
Dissertation