Abstract
Quantization techniques can reduce the size of Deep Neural Networks and
improve inference latency and throughput by taking advantage of high-throughput
integer instructions. In this paper we review the mathematical aspects of
quantization parameters and evaluate their choices on a wide range of neural
network models for different application domains, including vision, speech, and
language. We focus on quantization techniques that are amenable to acceleration
by processors with high-throughput integer math pipelines. We also present a
workflow for 8-bit quantization that is able to maintain accuracy within 1% of
the floating-point baseline on all networks studied, including models that are
more difficult to quantize, such as MobileNets and BERT-large.