Towards whatever-scale abstractions for data-driven parallelism

2014

Abstract
Increasing diversity in computing systems often requires problems to be solved in quite different ways depending on the workload, the data size, and the resources available. This diversity is increasingly broad in terms of the organization, communication mechanisms, and performance and cost characteristics of individual machines and clusters. Researchers have thus been motivated to design abstractions that allow programmers to express solutions independently of target execution platforms, enabling programs to scale from small shared memory systems to distributed systems comprising thousands of processors. We call these abstractions “Whatever-Scale Computing”. In prior work, we have found data-driven parallelism to be a promising approach for solving many problems on shared memory machines. In this paper, we describe ongoing work towards extending our previous abstractions to support data-driven parallelism for Whatever-Scale Computing. We plan to target rack-scale distributed systems. As an intermediate step, we have implemented a runtime system that treats a NUMA shared memory system as if each NUMA domain were a node in a distributed system, using shared memory to implement communication between nodes.
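The abstract does not give implementation details, but the intermediate step it describes (treating each NUMA domain as a "node" that communicates with other nodes through shared memory) can be illustrated with a small sketch. The following C++ code is a hypothetical, minimal illustration rather than the authors' runtime: each simulated node is a worker thread with its own inbox, implemented as a lock-protected queue in ordinary process memory, and a "remote send" is simply an enqueue into another node's inbox. On a real machine each worker would additionally be pinned to its NUMA domain (for example via libnuma), which is omitted here for portability.

```cpp
// Minimal sketch: model each NUMA domain as a "node" with a shared-memory inbox.
// Hypothetical illustration only; the paper's actual runtime is not shown here.
#include <condition_variable>
#include <cstdio>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

struct Message {
    int src;    // sending node (NUMA domain index)
    int value;  // payload; a real runtime would carry task or data descriptors
};

// Per-node inbox: a queue in shared (process) memory, guarded by a lock.
// Communication "between nodes" is just an enqueue into the target's inbox.
class Inbox {
public:
    void push(Message m) {
        { std::lock_guard<std::mutex> g(mu_); q_.push_back(m); }
        cv_.notify_one();
    }
    Message pop() {
        std::unique_lock<std::mutex> lk(mu_);
        cv_.wait(lk, [&] { return !q_.empty(); });
        Message m = q_.front();
        q_.pop_front();
        return m;
    }
private:
    std::mutex mu_;
    std::condition_variable cv_;
    std::deque<Message> q_;
};

int main() {
    const int kNodes = 4;  // pretend the machine has four NUMA domains
    std::vector<Inbox> inbox(kNodes);
    std::vector<std::thread> workers;

    for (int node = 0; node < kNodes; ++node) {
        workers.emplace_back([&, node] {
            // On a real NUMA machine the worker would bind itself to
            // domain `node` (e.g. with libnuma) before processing messages.
            Message m = inbox[node].pop();
            std::printf("node %d received %d from node %d\n",
                        node, m.value, m.src);
        });
    }

    // "Remote" sends are plain shared-memory enqueues to another node's inbox.
    for (int node = 0; node < kNodes; ++node) {
        inbox[(node + 1) % kNodes].push(Message{node, node * 10});
    }
    for (auto& w : workers) w.join();
    return 0;
}
```

Keeping the node-to-node interface message-based, even though the transport is shared memory, is what would later allow the same abstraction to target a genuinely distributed rack-scale system.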