Challenges of GPU-aware Communication in MPI

2020 Workshop on Exascale MPI (ExaMPI), 2020

Citations: 11 | Views: 38
Abstract
GPUs are increasingly popular in HPC systems and applications. However, the communication bottleneck between GPUs distributed across HPC nodes within a cluster has limited the achievable scalability of GPU-centric applications. Advances in inter-node GPU communication, such as NVIDIA's GPUDirect, have made great strides in addressing this issue, and the added software development complexity has been addressed by simplified GPU programming paradigms such as Unified or Managed Memory. New benchmarks were developed to understand the performance of these features. Unfortunately, these benchmark efforts do not include correctness checking or certain messaging patterns used in applications. In this paper we highlight important gaps in communication benchmarks and motivate a methodology to help application developers understand the performance tradeoffs of different data movement options. Furthermore, we share systems tuning and deployment experiences across different GPU-aware MPI implementations. In particular, we demonstrate that correctness testing is needed alongside performance testing through modifications to an existing benchmark. In addition, we present a case study in which existing benchmarks fail to characterize how data is moved within SW4, a seismic wave application, and create a benchmark to model this behavior. Finally, we motivate the need for an application-inspired benchmark methodology to assess system performance and guide application programmers on how to use the system more efficiently.
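
As an illustration of the data movement the abstract discusses, the following is a minimal sketch (not taken from the paper) of a GPU-aware MPI exchange that pairs a transfer with a correctness check on the received payload. It assumes a CUDA-aware MPI build that accepts device pointers directly in MPI calls, one GPU per rank, and exactly two ranks; the buffer size and verification logic are illustrative choices, not the paper's benchmark.

// Minimal sketch: GPU-aware MPI ping with a correctness check.
// Assumes a CUDA-aware MPI implementation (device pointers may be passed
// directly to MPI calls) and is run with two ranks, one GPU each.
#include <mpi.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                     // 1M doubles per message (illustrative size)
    double* d_buf = nullptr;
    cudaMalloc(&d_buf, n * sizeof(double));
    std::vector<double> h_buf(n);

    if (rank == 0) {
        for (int i = 0; i < n; ++i) h_buf[i] = static_cast<double>(i);
        cudaMemcpy(d_buf, h_buf.data(), n * sizeof(double), cudaMemcpyHostToDevice);
        // Device pointer handed straight to MPI: the GPU-aware layer
        // (e.g. via GPUDirect) moves the data without a manual staging copy.
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        cudaMemcpy(h_buf.data(), d_buf, n * sizeof(double), cudaMemcpyDeviceToHost);
        // Correctness check: verify the payload, not just bandwidth or latency.
        int errors = 0;
        for (int i = 0; i < n; ++i)
            if (h_buf[i] != static_cast<double>(i)) ++errors;
        std::printf("rank 1: %d corrupted elements\n", errors);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}

The same pattern could be repeated with cudaMallocManaged in place of cudaMalloc to exercise the Unified/Managed Memory path mentioned in the abstract, provided the MPI implementation supports managed buffers.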
Keywords
Benchmarking tools, Computer performance, General Purpose Graphics Processing Unit (GPGPU), Heterogeneous systems, High performance computing, Interconnection networks, Message Passing Interface (MPI), Protocol integrity