Title:
TABLA: A Unified Template-based Framework for Accelerating Statistical Machine Learning
Authors
Mahajan, Divya
Park, Jongse
Amaro, Emmanuel
Sharma, Hardik
Yazdanbakhsh, Amir
Kim, Joon
Esmaeilzadeh, Hadi
Abstract
A growing number of commercial and enterprise systems rely on compute-intensive machine learning algorithms. While the demand for these applications is growing, the performance benefits from general-purpose platforms are diminishing. To accommodate the needs of machine learning algorithms, Field Programmable Gate Arrays (FPGAs) provide a promising path forward and represent an intermediate point between the efficiency of ASICs and the programmability of general-purpose processors. However, acceleration with FPGAs still requires long design cycles and extensive expertise in hardware design. To tackle this challenge, instead of designing an accelerator for each machine learning algorithm, we develop TABLA, a framework that generates accelerators for a class of machine learning algorithms. The key is to identify the commonalities across a wide range of machine learning algorithms and utilize this commonality to provide a high-level abstraction for programmers. TABLA leverages the insight that many learning algorithms can be expressed as stochastic optimization problems. Therefore, a learning task becomes solving an optimization problem using stochastic gradient descent that minimizes an objective function. The gradient solver is fixed while the objective function changes for different learning algorithms. TABLA provides a template-based framework for accelerating this class of learning algorithms. With TABLA, the developer uses a high-level language to specify only the learning model as the gradient of the objective function. TABLA then automatically generates the synthesizable implementation of the accelerator for FPGA realization. We use TABLA to generate accelerators for ten different learning tasks that are implemented on a Xilinx Zynq FPGA platform. We rigorously compare the benefits of FPGA acceleration to both multicore CPUs (ARM Cortex A15 and Xeon E3) and many-core GPUs (Tegra K1, GTX 650 Ti, and Tesla K40) using real hardware measurements. TABLA-generated accelerators provide 15.0x and 2.9x average speedup over the ARM and Xeon processors, respectively. These accelerators provide 22.7x, 53.7x, and 30.6x higher performance-per-Watt compared to Tegra, GTX 650, and Tesla, respectively. These benefits are achieved while the programmers write fewer than 50 lines of code.
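
The separation described in the abstract (a fixed stochastic gradient descent solver with an algorithm-specific gradient of the objective function) can be illustrated with a minimal Python sketch. This is not TABLA's actual template language or generated hardware; the function names and gradients below are illustrative assumptions, shown only to clarify why specifying the gradient alone is sufficient to define a new learning task.

import numpy as np

def sgd(gradient, X, y, lr=0.01, epochs=10):
    # Fixed solver: iterate over training examples and apply the
    # algorithm-specific gradient to update the model parameters w.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w -= lr * gradient(w, xi, yi)
    return w

# Algorithm-specific gradients of the objective function (illustrative):
def linear_regression_grad(w, x, y):
    # Gradient of the squared loss 0.5 * (w.x - y)^2 with respect to w.
    return (w @ x - y) * x

def logistic_regression_grad(w, x, y):
    # Gradient of the logistic loss with labels y in {0, 1}.
    return (1.0 / (1.0 + np.exp(-(w @ x))) - y) * x

# Usage: the same solver serves either model; only the gradient changes.
X = np.array([[1.0, 0.5], [0.3, 2.0], [1.5, 1.0]])
y = np.array([1.0, 0.0, 1.0])
w_linear = sgd(linear_regression_grad, X, y)
w_logistic = sgd(logistic_regression_grad, X, y)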
Date Issued
2015
Resource Type
Text
Resource Subtype
Technical Report