Efficient and Effective Methods for Mixed Precision Neural Network Quantization for Faster, Energy-efficient Inference
CoRR (2023)
Abstract
For efficient neural network inference, it is desirable to achieve
state-of-the-art accuracy with the simplest networks requiring the least
computation, memory, and power. Quantizing networks to lower precision is a
powerful technique for simplifying networks. As each layer of a network may
have different sensitivity to quantization, mixed precision quantization
methods selectively tune the precision of individual layers to achieve a
minimum drop in task performance (e.g., accuracy). To estimate the impact of
layer precision choice on task performance, two methods are introduced: i)
Entropy Approximation Guided Layer selection (EAGL) is fast and uses the
entropy of the weight distribution, and ii) Accuracy-aware Layer Precision
Selection (ALPS) is straightforward and relies on single epoch fine-tuning
after layer precision reduction. Using EAGL and ALPS for layer precision
selection, full-precision accuracy is recovered with a mix of 4-bit and 2-bit
layers for ResNet-50, ResNet-101 and BERT-base transformer networks,
demonstrating enhanced performance across the entire accuracy-throughput
frontier. Both techniques outperform existing approaches in several
commensurate comparisons and, notably, require significantly less
computation time to reach a solution.
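As a rough illustration of the EAGL idea, the entropy of a layer's quantized weight distribution can serve as a fast, data-free sensitivity score. The sketch below is a hedged approximation based only on the abstract's description (binning scheme, ranking interpretation, and function names are assumptions, not the paper's exact method):

```python
import numpy as np

def weight_entropy(weights, num_bits):
    """Entropy (in bits) of a layer's weights after uniform binning into
    2**num_bits quantization levels -- a cheap proxy for how much
    information the layer's weights carry at that precision.
    (Assumed formulation; the paper's exact EAGL details may differ.)"""
    levels = 2 ** num_bits
    w = np.asarray(weights, dtype=np.float64).ravel()
    # Histogram the weights over the quantizer's representable levels.
    hist, _ = np.histogram(w, bins=levels)
    p = hist / hist.sum()
    p = p[p > 0]          # drop empty bins; define 0 * log(0) = 0
    return float(-(p * np.log2(p)).sum())

# Hypothetical usage: score layers and rank them for precision reduction.
rng = np.random.default_rng(0)
layers = {"conv1": rng.normal(size=4096), "fc": rng.laplace(size=4096)}
scores = {name: weight_entropy(w, num_bits=2) for name, w in layers.items()}
```

Under this reading, layers whose weights concentrate in few quantization bins (low entropy) are plausible candidates for 2-bit precision, while high-entropy layers are kept at 4 bits.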