Improving scalability of parallel CNN training by adaptively adjusting parameter update frequency

Journal of Parallel and Distributed Computing (2022)

Abstract
Synchronous SGD with data parallelism, the most popular parallelization strategy for CNN training, suffers from the expensive communication cost of averaging gradients among all workers. The iterative parameter updates of SGD cause frequent communications, which become the performance bottleneck. In this paper, we propose a lazy parameter update algorithm that adaptively adjusts the parameter update frequency to address the expensive communication cost. Our algorithm keeps accumulating gradients locally as long as the difference between the accumulated gradients and the latest gradients is sufficiently small. The less frequent parameter updates reduce the per-iteration communication cost while maintaining the model accuracy. Our experimental results demonstrate that the lazy update method remarkably improves the scalability while maintaining the model accuracy. For ResNet50 training on ImageNet, the proposed algorithm achieves a significantly higher speedup (739.6 on 2048 Cori KNL nodes) than vanilla synchronous SGD (276.6) while the model accuracy is barely affected (<0.2% difference). (C) 2021 Elsevier Inc. All rights reserved.
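To illustrate the idea behind the lazy update, the following is a minimal PyTorch-style sketch, not the paper's implementation: each worker accumulates gradients locally and only triggers a collective all-reduce plus parameter update when the latest gradient deviates noticeably from the running accumulation. The relative-difference test, the `threshold` value, and the `state` bookkeeping are assumptions made for illustration; the paper's exact criterion may differ.

```python
import torch
import torch.distributed as dist

def lazy_update_step(model, optimizer, state, threshold=0.1):
    """One iteration of a lazy (deferred) parameter update.

    Illustrative sketch only. `state` = {"acc": {}, "steps": 0} holds the
    locally accumulated gradient per parameter and the number of
    accumulated iterations since the last synchronization.
    """
    sync = False
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        acc = state["acc"].get(name)
        if acc is not None:
            avg = acc / state["steps"]
            # If the latest gradient differs too much from the running
            # average, stop being lazy and synchronize this iteration.
            rel_diff = (p.grad - avg).norm() / (avg.norm() + 1e-12)
            if rel_diff > threshold:
                sync = True
        # Keep accumulating the local gradient either way.
        state["acc"][name] = p.grad.clone() if acc is None else acc + p.grad
    state["steps"] += 1

    if sync:
        # Average the accumulated gradients across all workers, then apply
        # a single deferred parameter update.
        world = dist.get_world_size()
        for name, p in model.named_parameters():
            if name not in state["acc"]:
                continue
            g = state["acc"][name] / state["steps"]
            dist.all_reduce(g, op=dist.ReduceOp.SUM)
            p.grad = g / world
        optimizer.step()
        optimizer.zero_grad()
        state["acc"].clear()
        state["steps"] = 0
    return sync
```

In a training loop this function would be called after each backward pass; iterations where `sync` is False incur no inter-worker communication, which is where the per-iteration communication saving comes from.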
Keywords
Deep learning, Data parallelism, Communication cost, Parameter update frequency