Work stealing


In parallel computing, work stealing is a scheduling strategy for multithreaded computer programs. It solves the problem of executing a dynamically multithreaded computation, one that can "spawn" new threads of execution, on a statically multithreaded computer, with a fixed number of processors. It does so efficiently in terms of execution time, memory usage, and inter-processor communication.
In a work stealing scheduler, each processor in a computer system has a queue of work items to perform. Each work item consists of a series of instructions, to be executed sequentially, but in the course of its execution, a work item may also spawn new work items that can feasibly be executed in parallel with its other work. These new items are initially put on the queue of the processor executing the work item. When a processor runs out of work, it looks at the queues of other processors and "steals" their work items. In effect, work stealing distributes the scheduling work over idle processors, and as long as all processors have work to do, no scheduling overhead occurs.
Work stealing contrasts with work sharing, another popular scheduling approach for dynamic multithreading, where each work item is scheduled onto a processor when it is spawned. Compared to this approach, work stealing reduces the amount of process migration between processors, because no such migration occurs when all processors have work to do.
The idea of work stealing goes back to the implementation of the Multilisp programming language and work on parallel functional programming languages in the 1980s. It is employed in the scheduler for the Cilk programming language, the Java fork/join framework, the .NET Task Parallel Library, and the Rust Tokio runtime.

Execution model

Work stealing is designed for a "strict" fork–join model of parallel computation, which means that a computation can be viewed as a directed acyclic graph with a single source and a single sink. Each node in this graph represents either a fork or a join. Forks produce multiple logically parallel computations, variously called "threads" or "strands". Edges represent serial computation.
As an example, consider the following trivial fork–join program in Cilk-like syntax:
function f(a, b):
    c ← fork g(a)
    d ← h(b)
    join
    return c + d

function g(a):
    return a × 2

function h(a):
    b ← fork g(a)
    c ← a + 1
    join
    return b + c
A call to f gives rise to the following computation graph:
[Figure: graph representation of the fork–join computation]
In the graph, when two edges leave a node, the computations represented by the edge labels are logically parallel: they may be performed either in parallel or sequentially. The computation may only proceed past a join node when the computations represented by its incoming edges are complete. The job of a scheduler is to assign the computations to processors in a way that makes the entire computation run to completion in the correct order, preferably as fast as possible.
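For comparison with a real work-stealing runtime, the example program above might be written as follows in Java's fork/join framework, one of the schedulers named in the introduction. This is a hedged sketch: the class names and the argument values (1, 2) are invented, and fork() here follows the child-stealing discipline discussed under Algorithm, pushing the child task onto the current worker's deque while the parent continues.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Task versions of g and h from the pseudocode above.
class G extends RecursiveTask<Integer> {
    private final int a;
    G(int a) { this.a = a; }
    @Override protected Integer compute() { return a * 2; }
}

class H extends RecursiveTask<Integer> {
    private final int a;
    H(int a) { this.a = a; }
    @Override protected Integer compute() {
        G g = new G(a);
        g.fork();          // push g on this worker's deque; an idle worker may steal it
        int c = a + 1;     // continue with the parent's own work
        int b = g.join();  // wait for g, running other queued tasks meanwhile
        return b + c;
    }
}

class F extends RecursiveTask<Integer> {
    private final int a, b;
    F(int a, int b) { this.a = a; this.b = b; }
    @Override protected Integer compute() {
        G g = new G(a);
        g.fork();                        // logically parallel with the call to h
        int d = new H(b).compute();      // plain call, as in the pseudocode
        int c = g.join();
        return c + d;
    }
}

public class ForkJoinExample {
    public static void main(String[] args) {
        // The pool's worker threads steal from one another's deques.
        System.out.println(ForkJoinPool.commonPool().invoke(new F(1, 2)));
    }
}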

Algorithm

The randomized version of the work stealing algorithm presented by Blumofe and Leiserson maintains several threads of execution and schedules these onto processors. Each of the processors has a double-ended queue of threads. Call the ends of the deque "top" and "bottom".
Each processor that has a current thread to execute executes the instructions in the thread one by one, until it encounters an instruction that causes one of four "special" behaviors:
- A spawn instruction causes a new thread to be created. The current thread is placed at the bottom of the deque, and the processor starts executing the new thread.
- A stalling instruction temporarily halts the execution of its thread. The processor pops a thread off the bottom of its own deque and starts executing it; if its deque is empty, it starts work stealing.
- An instruction may cause a thread to die. The processor behaves as in the case of a stalled thread.
- An instruction may enable another thread. The other thread is pushed onto the bottom of the deque, but the processor continues executing its current thread.
Initially, a computation consists of a single thread and is assigned to some processor, while the other processors start off idle. Any processor that becomes idle starts the actual process of work stealing, which means the following (see the sketch after this list):
- it picks another processor uniformly at random;
- it attempts to steal the thread at the top of that processor's deque, the opposite end from the one the victim itself works at;
- if the steal succeeds, the thief starts executing the stolen thread; otherwise, it tries again with another randomly chosen victim.
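The following is a hedged, lock-based Java sketch of one worker under these rules, using the child-stealing discipline (the note below discusses the alternative continuation-stealing rule, which needs compiler support). The class and method names are invented, and a production scheduler would use a lock-free deque instead of locks.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// A heavily simplified sketch of one worker; a real scheduler would use a
// lock-free deque so that thieves never block the owner.
class Worker implements Runnable {
    final Deque<Runnable> deque = new ArrayDeque<>();
    final List<Worker> allWorkers;  // the fixed set of workers, shared by all

    Worker(List<Worker> allWorkers) { this.allWorkers = allWorkers; }

    void spawn(Runnable task) {
        synchronized (deque) { deque.addLast(task); }       // push on the bottom
    }

    Runnable popBottom() {
        synchronized (deque) { return deque.pollLast(); }   // owner works at the bottom
    }

    Runnable stealTop() {
        synchronized (deque) { return deque.pollFirst(); }  // thieves take the top
    }

    public void run() {
        while (true) {
            Runnable task = popBottom();   // prefer local work
            if (task == null) {
                // Idle: pick a victim uniformly at random and try to steal.
                // (A real implementation would skip itself and would terminate
                // eventually; this loop just retries forever.)
                Worker victim = allWorkers.get(
                        ThreadLocalRandom.current().nextInt(allWorkers.size()));
                task = victim.stealTop();
            }
            if (task != null) task.run();  // run the task to completion
        }
    }
}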
Note that, in the rule for spawn, Blumofe and Leiserson suggest that the "parent" thread execute its new thread, as if performing a function call. This is called "continuation stealing", because the continuation of the function can be stolen while the spawned thread is executed, and is the scheduling algorithm used in Cilk Plus. It is not the only way to implement work stealing; the alternative strategy is called "child stealing" and is easier to implement as a library, without compiler support. Child stealing is used by Threading Building Blocks, Microsoft's Task Parallel Library and OpenMP, although the latter gives the programmer control over which strategy is used.

Efficiency

Several variants of work stealing have been proposed. The randomized variant due to Blumofe and Leiserson executes a parallel computation in expected time $T_1/P + O(T_\infty)$ on $P$ processors; here, $T_1$ is the work, or the amount of time required to run the computation on a serial computer, and $T_\infty$ is the span, the amount of time required on an infinitely parallel machine. This means that, in expectation, the time required is at most a constant factor times the theoretical minimum. However, the running time can be exponential in $T_\infty$ in the worst case. A localized variant, in which a processor attempts to steal back its own work whenever it is free, has also been analyzed theoretically and practically.
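As a rough illustration with invented numbers: a computation with work $T_1 = 100$ seconds and span $T_\infty = 1$ second cannot finish faster than $\max(T_1/P, T_\infty)$; on $P = 10$ processors, the bound above gives an expected running time of $100/10 + O(1)$, about 10 seconds, so the speedup remains near-linear as long as $T_1/P$ dominates $T_\infty$.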

Space usage

A computation scheduled by the Blumofe–Leiserson version of work stealing uses $O(S_1 P)$ stack space on $P$ processors, where $S_1$ is the stack usage of the same computation on a single processor, fitting the authors' own earlier definition of space efficiency. This bound requires continuation stealing; in a child stealing scheduler, it does not hold, as can be seen from the following example:
for i = 0 to n:
    fork f
join
In a child-stealing implementation, all the "forked" calls to f are put in a work queue that thus grows to size n, which can be made arbitrarily large.
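This blow-up can be demonstrated in Java's fork/join framework, which uses child stealing. The following hedged sketch forks n trivial tasks before joining any of them, so the submitting worker's deque grows to n pending tasks; the class name, the no-op task body, and the value of n are invented for illustration.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinTask;
import java.util.concurrent.RecursiveAction;

public class ChildStealingBlowup {
    public static void main(String[] args) {
        int n = 100_000;  // the queue grows linearly with n
        ForkJoinPool.commonPool().invoke(new RecursiveAction() {
            @Override protected void compute() {
                List<ForkJoinTask<?>> tasks = new ArrayList<>();
                for (int i = 0; i < n; i++) {
                    // Each fork() pushes a pending task onto this worker's
                    // deque; nothing bounds the deque's size until the joins.
                    tasks.add(ForkJoinTask.adapt(() -> {}).fork());
                }
                for (ForkJoinTask<?> t : tasks) {
                    t.join();
                }
            }
        });
    }
}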

Multiprogramming variant

The work stealing algorithm as outlined earlier, and its analysis, assume a computing environment where a computation is scheduled onto a set of dedicated processors. In a multiprogramming environment, the algorithm must be modified to instead schedule computation tasks onto a pool of worker threads, which in turn are scheduled onto the actual processors by an operating system scheduler. At any given time, the OS scheduler will assign to the work stealing process some number of the processors in the computer, because other processes may be using the remaining processors. In this setting, work stealing with a pool of worker threads has the problem that workers acting as thieves may cause livelock: they may block the execution of workers that would actually spawn useful tasks.
A variant of work stealing has been devised for this situation, which executes a computation in expected time
$O(T_1/P_{\mathrm{avg}} + T_\infty \cdot P/P_{\mathrm{avg}})$,
where $P_{\mathrm{avg}}$ is the average number of processors allocated to the computation by the OS scheduler over the computation's running time.
The multiprogramming work-stealer differs from the traditional version in two respects:
- Its queues are non-blocking. While with dedicated processors the queues could be synchronized using locks, this is not advisable in a multiprogramming environment, since the operating system might preempt a worker thread holding a lock, blocking the progress of all other workers that access the same queue.
- Before each attempt to steal work, a worker thread calls a "yield" system call that cedes its processor to the OS, in order to prevent starvation (a sketch of this point follows the list).
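A minimal sketch of the second point, extending the hypothetical Worker class from the Algorithm section; Thread.yield() stands in here for the yield system call, and this is an illustration rather than the exact mechanism of any particular runtime.

import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Variant of the earlier Worker sketch for a multiprogrammed environment:
// yield the processor to the OS before every steal attempt.
class MultiprogrammingWorker extends Worker {
    MultiprogrammingWorker(List<Worker> allWorkers) { super(allWorkers); }

    @Override
    public void run() {
        while (true) {
            Runnable task = popBottom();        // prefer local work
            while (task == null) {
                Thread.yield();                 // cede the processor before thieving
                Worker victim = allWorkers.get(
                        ThreadLocalRandom.current().nextInt(allWorkers.size()));
                task = victim.stealTop();
            }
            task.run();
        }
    }
}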
Attempts to improve on the multiprogramming work stealer have focused on cache locality issues and improved queue data structures.

Alternatives

Several scheduling algorithms for dynamically multithreaded computations compete with work stealing. Besides the traditional work sharing approach, there is a scheduler called parallel depth-first that improves on the space bounds of work stealing, as well as giving better performance in some situations where the cores of a chip multiprocessor share a cache.