Data Orchestration in Deep Learning Accelerators

Morgan & Claypool 

Tushar Krishna, Georgia Institute of Technology
Hyoukjun Kwon, Georgia Institute of Technology
Angshuman Parashar, NVIDIA
Michael Pellauer, NVIDIA
Ananda Samajdar, Georgia Institute of Technology

Modern DNNs have millions of parameters and involve billions of computations, necessitating extensive data movement between memory and on-chip processing engines. Since the cost of data movement today surpasses the cost of the computation itself, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM.
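To make the data-reuse argument concrete, here is a minimal Python sketch (ours, not taken from the book) that counts DRAM accesses for a fully connected layer C = A x B under a naive schedule versus an output-stationary tiled schedule backed by an on-chip buffer. The matrix sizes, tile sizes, and the simple access-counting model are all illustrative assumptions.

# Illustrative sketch: DRAM accesses with and without on-chip reuse
# for C = A @ B, where A is M x K and B is K x N.

def naive_dram_accesses(M, N, K):
    # No on-chip reuse: every operand of every multiply-accumulate
    # is fetched from DRAM, and each output is written once.
    reads = M * N * K * 2          # one A element + one B element per MAC
    writes = M * N
    return reads + writes

def tiled_dram_accesses(M, N, K, Tm, Tn):
    # Output-stationary tiling: a Tm x Tn tile of C stays in the
    # on-chip buffer while the K dimension streams through, so each
    # fetched block of A is reused across Tn columns and each block
    # of B across Tm rows.
    tiles = (M // Tm) * (N // Tn)
    reads = tiles * (Tm * K + Tn * K)  # A and B fetched once per tile
    writes = M * N                     # each output still written once
    return reads + writes

M = N = K = 1024
print("naive:", naive_dram_accesses(M, N, K))            # ~2.1e9 accesses
print("tiled:", tiled_dram_accesses(M, N, K, 32, 32))    # ~6.8e7 accesses

With 32 x 32 output tiles, this toy model shows roughly a 30x reduction in DRAM traffic; choosing such loop orders and tile shapes is precisely the dataflow design space the book explores.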

The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with the data orchestration challenges posed by compressed and sparse DNNs, and with future trends. The target audience is engineers, researchers, and students interested in designing high-performance, low-energy accelerators for DNN inference.
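As a hint of why sparsity complicates orchestration, the following sketch (again illustrative, not the book's method) compresses a pruned weight matrix into the standard CSR format: the data to be moved shrinks roughly with density, but the access pattern becomes irregular, which is the kind of challenge the closing chapters discuss.

# Illustrative sketch: compressing a pruned weight matrix to CSR.
import numpy as np

def to_csr(dense):
    """Convert a dense matrix to CSR (values, column indices, row pointers)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, w in enumerate(row):
            if w != 0:
                values.append(w)
                col_idx.append(j)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
W[rng.random(W.shape) < 0.9] = 0.0          # prune ~90% of the weights

vals, cols, ptrs = to_csr(W)
dense_bytes = W.size * 4                    # fp32 weights
csr_bytes = (vals.size + cols.size + ptrs.size) * 4
print(f"dense: {dense_bytes} B, CSR: {csr_bytes} B "
      f"({dense_bytes / csr_bytes:.1f}x less data to move)")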
