Communication-avoiding algorithms minimize the movement of data within a memory hierarchy in order to improve running time and energy consumption. They minimize the total of two costs (in terms of time and energy): arithmetic and communication. Communication, in this context, refers to moving data, either between levels of memory or between multiple processors over a network. It is much more expensive than arithmetic.
Consider the following running-time model:

    Measure of computation = time per FLOP = γ
    Measure of communication = time per word of data moved = β

    ⇒ Total running time = γ · (no. of FLOPs) + β · (no. of words moved)

Since β >> γ as measured in time and energy, communication cost dominates computation cost. Technological trends indicate that the relative cost of communication is increasing on a variety of platforms, from cloud computing to supercomputers to mobile devices. These trend analyses also predict that the gap between DRAM access time and FLOP time will increase 100× over the coming decade, in order to balance power usage between processors and DRAM.
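To make this model concrete, the following C sketch plugs the operation counts of the naive matrix multiplication analyzed below into the formula above. The constants gamma and beta are hypothetical values chosen only to illustrate that β >> γ makes the communication term dominate; they are not measurements of any particular machine.

    #include <stdio.h>

    int main(void) {
        /* Hypothetical machine constants (illustrative assumptions only). */
        double gamma = 1e-11;   /* seconds per FLOP       */
        double beta  = 1e-9;    /* seconds per word moved */

        double n = 4096.0;
        double flops = 2.0 * n * n * n;     /* arithmetic for C = C + A*B     */
        double words = n*n*n + 3.0*n*n;     /* word traffic of the naive loop */

        /* Total running time = gamma * (no. of FLOPs) + beta * (no. of words). */
        printf("arithmetic time:    %.2f s\n", gamma * flops);
        printf("communication time: %.2f s\n", beta * words);
        return 0;
    }

With these (assumed) constants, the communication term is roughly fifty times larger than the arithmetic term, which is exactly the imbalance that communication-avoiding algorithms target.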
Let A, B and C be square matrices of order n × n. The following naive algorithm implements C = C + A * B:

    for i = 1 to n
        for j = 1 to n
            for k = 1 to n
                C(i,j) = C(i,j) + A(i,k) * B(k,j)

Arithmetic cost (time complexity): n²(2n − 1) for sufficiently large n, or O(n³).

Rewriting this algorithm with the communication cost labelled at each step:

    for i = 1 to n
        {read row i of A into fast memory}               - n² reads
        for j = 1 to n
            {read C(i,j) into fast memory}               - n² reads
            {read column j of B into fast memory}        - n³ reads
            for k = 1 to n
                C(i,j) = C(i,j) + A(i,k) * B(k,j)
            {write C(i,j) back to slow memory}           - n² writes

Fast memory may be defined as the local processor memory (CPU cache) of size M, and slow memory may be defined as the DRAM.

Communication cost (reads/writes): n³ + 3n² or O(n³).

Since the total running time is γ · O(n³) + β · O(n³) and β >> γ, the communication cost is dominant. The blocked (tiled) matrix-multiplication algorithm reduces this dominant term:
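For concreteness, here is a minimal runnable C version of the naive triple loop above. The row-major layout and flat arrays are my own choices for illustration; the loop structure matches the pseudocode.

    /* Naive C = C + A * B for n-by-n row-major matrices.
       Mirrors the triple loop above: column j of B is re-read for
       every i, which is the source of the n³ term in the read count. */
    void matmul_naive(int n, const double *A, const double *B, double *C) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    C[i*n + j] += A[i*n + k] * B[k*n + j];
    }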
Consider A, B and C to be n/b-by-n/b matrices of b-by-b sub-blocks, where b is called the block size; assume that three b-by-b blocks fit in fast memory.

    for i = 1 to n/b
        for j = 1 to n/b
            {read block C(i,j) into fast memory}           - b² × (n/b)² = n² reads
            for k = 1 to n/b
                {read block A(i,k) into fast memory}       - b² × (n/b)³ = n³/b reads
                {read block B(k,j) into fast memory}       - b² × (n/b)³ = n³/b reads
                C(i,j) = C(i,j) + A(i,k) * B(k,j)          {do a matrix multiply on blocks}
            {write block C(i,j) back to slow memory}       - b² × (n/b)² = n² writes

Communication cost: 2n³/b + 2n² reads/writes, which is much smaller than the 2n³ arithmetic cost. Making b as large as possible, i.e. 3b² ≤ M, we achieve the following communication lower bound:

    √3 · n³/√M + 2n² reads/writes, or Ω(n³/√M)
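The same computation in blocked form is sketched below in C. The loop structure follows the blocked pseudocode above; the bounds checks (which allow n not divisible by b) are my addition, and in practice b would be chosen as large as the constraint 3b² ≤ M allows.

    /* Blocked (tiled) C = C + A * B for n-by-n row-major matrices.
       One b-by-b block of each of A, B and C is worked on at a time,
       so choosing b with 3b² ≤ M keeps all three blocks in fast memory. */
    void matmul_blocked(int n, int b, const double *A, const double *B, double *C) {
        for (int ii = 0; ii < n; ii += b)
            for (int jj = 0; jj < n; jj += b)
                for (int kk = 0; kk < n; kk += b)
                    /* multiply block A(ii,kk) by block B(kk,jj) into C(ii,jj) */
                    for (int i = ii; i < ii + b && i < n; i++)
                        for (int j = jj; j < jj + b && j < n; j++)
                            for (int k = kk; k < kk + b && k < n; k++)
                                C[i*n + j] += A[i*n + k] * B[k*n + j];
    }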
Previous approaches for reducing communication
Most of the approaches investigated in the past to address this problem rely on scheduling or tuning techniques that aim at overlapping communication with computation. However, this approach can lead to an improvement of at most a factor of two. Ghosting is a different technique for reducing communication, in which a processor redundantly stores and computes data from neighboring processors for use in future computations. Cache-oblivious algorithms represent a different approach, introduced in 1999 for Fast Fourier Transforms and then extended to graph algorithms, dynamic programming, and other problems. They were also applied to several operations in linear algebra, such as dense LU and QR factorizations; a recursive sketch in this spirit is given below. The design of architecture-specific algorithms is another approach that can be used for reducing the communication in parallel algorithms, and there are many examples in the literature of algorithms that are adapted to a given communication topology.
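As an illustration of the cache-oblivious idea mentioned above, the following C sketch multiplies matrices by recursive quadrant splitting. The recursion keeps halving the problem until it fits in cache at every level of the hierarchy, with no explicit block-size parameter; the base-case cutoff, the leading-dimension representation, and the power-of-two size assumption are illustrative choices of mine, not part of the original cache-oblivious papers.

    /* Cache-oblivious C = C + A * B (sketch; assumes n is a power of two).
       Matrices are row-major with leading dimension ld, so each quadrant
       is addressed in place without copying. */
    void matmul_rec(int n, int ld, const double *A, const double *B, double *C) {
        if (n <= 32) {                  /* base case: small enough for any cache */
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    for (int k = 0; k < n; k++)
                        C[i*ld + j] += A[i*ld + k] * B[k*ld + j];
            return;
        }
        int h = n / 2;                  /* split each matrix into four quadrants */
        const double *A11 = A, *A12 = A + h, *A21 = A + h*ld, *A22 = A + h*ld + h;
        const double *B11 = B, *B12 = B + h, *B21 = B + h*ld, *B22 = B + h*ld + h;
        double       *C11 = C, *C12 = C + h, *C21 = C + h*ld, *C22 = C + h*ld + h;
        matmul_rec(h, ld, A11, B11, C11);  matmul_rec(h, ld, A12, B21, C11);
        matmul_rec(h, ld, A11, B12, C12);  matmul_rec(h, ld, A12, B22, C12);
        matmul_rec(h, ld, A21, B11, C21);  matmul_rec(h, ld, A22, B21, C21);
        matmul_rec(h, ld, A21, B12, C22);  matmul_rec(h, ld, A22, B22, C22);
    }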