Double-ended queue


In computer science, a double-ended queue (abbreviated to deque) is an abstract data type that generalizes a queue, for which elements can be added to or removed from either the front (head) or back (tail). It is also often called a head-tail linked list, though properly this refers to a specific data structure implementation of a deque.

Naming conventions

Deque is sometimes written dequeue, but this use is generally deprecated in technical literature or technical writing because dequeue is also a verb meaning "to remove from a queue". Nevertheless, several libraries and some writers, such as Aho, Hopcroft, and Ullman in their textbook Data Structures and Algorithms, spell it dequeue. John Mitchell, author of Concepts in Programming Languages, also uses this terminology.

Distinctions and sub-types

This differs from the queue abstract data type or first-in-first-out (FIFO) list, where elements can only be added to one end and removed from the other. This general data class has some possible sub-types: an input-restricted deque is one where deletion can be made from both ends, but insertion can be made at one end only; an output-restricted deque is one where insertion can be made at both ends, but deletion can be made from one end only.
Both the basic and most common list types in computing, queues and stacks, can be considered specializations of deques, and can be implemented using deques.
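As an illustration of this relationship, the following minimal Python sketch builds a stack and a FIFO queue on top of the standard collections.deque by restricting which ends are used; the Stack and Queue wrapper classes are names chosen for this example, not part of any library.

from collections import deque

class Stack:
    """LIFO stack: push and pop at the same end of a deque."""
    def __init__(self):
        self._items = deque()
    def push(self, x):
        self._items.append(x)         # insert at back
    def pop(self):
        return self._items.pop()      # remove from back

class Queue:
    """FIFO queue: insert at one end, remove from the other."""
    def __init__(self):
        self._items = deque()
    def enqueue(self, x):
        self._items.append(x)         # insert at back
    def dequeue(self):
        return self._items.popleft()  # remove from front

s = Stack(); s.push(1); s.push(2)
assert s.pop() == 2                   # last in, first out
q = Queue(); q.enqueue(1); q.enqueue(2)
assert q.dequeue() == 1               # first in, first out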

Operations

The basic operations on a deque are enqueue and dequeue on either end. Also generally implemented are peek operations, which return the value at that end without dequeuing it.
Names vary between languages; major implementations include:
operation | common name(s) | Ada | C++ | Java | Perl | PHP | Python | Ruby | Rust | JavaScript
insert element at back | inject, snoc, push | Append | push_back | offerLast | push | array_push | append | push | push_back | push
insert element at front | push, cons | Prepend | push_front | offerFirst | unshift | array_unshift | appendleft | unshift | push_front | unshift
remove last element | eject | Delete_Last | pop_back | pollLast | pop | array_pop | pop | pop | pop_back | pop
remove first element | pop | Delete_First | pop_front | pollFirst | shift | array_shift | popleft | shift | pop_front | shift
examine last element | peek | Last_Element | back | peekLast | $array[-1] | end | [-1] | last | back | at(-1)
examine first element |  | First_Element | front | peekFirst | $array[0] | reset | [0] | first | front | [0]
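For instance, the Python column of the table corresponds to the following calls on collections.deque (a small illustrative session, not tied to any particular application):

from collections import deque

d = deque([2, 3])
d.append(4)          # insert element at back   -> deque([2, 3, 4])
d.appendleft(1)      # insert element at front  -> deque([1, 2, 3, 4])
last = d.pop()       # remove last element      -> returns 4
first = d.popleft()  # remove first element     -> returns 1
assert d[-1] == 3    # examine last element
assert d[0] == 2     # examine first element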

Implementations

There are at least two common ways to efficiently implement a deque: with a modified dynamic array or with a doubly linked list.
The dynamic array approach uses a variant of a dynamic array that can grow from both ends, sometimes called an array deque. These array deques have all the properties of a dynamic array, such as constant-time random access, good locality of reference, and inefficient insertion/removal in the middle, with the addition of amortized constant-time insertion/removal at both ends, instead of just one end. Three common implementations include:
Storing deque contents in a circular buffer, and only resizing when the buffer becomes full. This decreases the frequency of resizings.
Allocating deque contents from the center of the underlying array, and resizing the underlying array when either end is reached. This approach may require more frequent resizings and waste more space, particularly when elements are only inserted at one end.
Storing contents in multiple smaller arrays, allocating additional arrays at the beginning or end as needed. Indexing is implemented by keeping a dynamic array containing pointers to each of the smaller arrays.
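The circular-buffer variant can be sketched as follows in Python. This is a minimal illustration only; the class name RingDeque, the initial capacity, and the doubling growth policy are choices made for this example, not part of any particular library.

class RingDeque:
    """Deque on a circular buffer: both ends wrap around a fixed array."""
    def __init__(self, capacity=8):
        self._buf = [None] * capacity
        self._head = 0          # index of the first element
        self._size = 0

    def _grow(self):
        # Copy elements into a buffer twice as large, starting at index 0.
        old, n = self._buf, self._size
        self._buf = [None] * (2 * len(old))
        for i in range(n):
            self._buf[i] = old[(self._head + i) % len(old)]
        self._head = 0

    def push_back(self, x):
        if self._size == len(self._buf):
            self._grow()
        self._buf[(self._head + self._size) % len(self._buf)] = x
        self._size += 1

    def push_front(self, x):
        if self._size == len(self._buf):
            self._grow()
        self._head = (self._head - 1) % len(self._buf)
        self._buf[self._head] = x
        self._size += 1

    def pop_back(self):
        assert self._size > 0
        self._size -= 1
        i = (self._head + self._size) % len(self._buf)
        x, self._buf[i] = self._buf[i], None
        return x

    def pop_front(self):
        assert self._size > 0
        x, self._buf[self._head] = self._buf[self._head], None
        self._head = (self._head + 1) % len(self._buf)
        self._size -= 1
        return x

    def __getitem__(self, i):
        # Constant-time random access, as for any array deque.
        return self._buf[(self._head + i) % len(self._buf)]

d = RingDeque()
for k in range(10):          # forces one _grow() past the initial capacity
    d.push_back(k)
d.push_front(-1)
assert d[0] == -1 and d.pop_back() == 9
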
Double-ended queues can also be implemented as a purely functional data structure. Two versions of the implementation exist. The first one, called the 'real-time deque', is presented below. It allows the queue to be persistent with operations in O(1) worst-case time, but requires lazy lists with memoization. The second one, with no lazy lists nor memoization, is presented at the end of the section. Its amortized time is O(1) if the persistency is not used; but the worst-case time complexity of an operation is O(n), where n is the number of elements in the double-ended queue.
Let us recall that, for a list l, |l| denotes its length, that NIL represents an empty list and CONS(h, t) represents the list whose head is h and whose tail is t. The functions drop(i, l) and take(i, l) return the list l without its first i elements, and the first i elements of l, respectively. Or, if |l| < i, they return the empty list and l respectively.
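To make the notation concrete, here is one possible strict (non-lazy) Python modelling of these primitives, used only for illustration; the real-time deque additionally requires the lists to be lazy and memoized, which plain Python tuples are not.

NIL = None                               # empty list

def CONS(h, t):                          # list with head h and tail t
    return (h, t)

def take(i, l):                          # first i elements of l (or all of l if shorter)
    return NIL if i == 0 or l is NIL else CONS(l[0], take(i - 1, l[1]))

def drop(i, l):                          # l without its first i elements
    return l if i == 0 or l is NIL else drop(i - 1, l[1])

xs = CONS(1, CONS(2, CONS(3, NIL)))
assert take(2, xs) == (1, (2, NIL)) and drop(2, xs) == (3, NIL)
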
A double-ended queue is represented as a sextuple (lenf, f, sf, lenr, r, sr) where f is a linked list which contains the front of the queue, of length lenf. Similarly, r is a linked list which represents the reverse of the rear of the queue, of length lenr. Furthermore, it is assured that |f| <= 2|r|+1 and |r| <= 2|f|+1 - intuitively, this means that neither the front nor the rear contains more than two thirds of the elements plus one element. Finally, sf and sr are tails of f and of r; they allow scheduling of the moment when some lazy operations are forced. Note that, when a double-ended queue contains n elements in the front list and n elements in the rear list, then the inequality invariant remains satisfied after i insertions and d deletions as long as 2(i+d) <= n. That is, at least n/2 operations occur between two consecutive rebalancings.
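For example, starting from lenf = lenr = 10 just after a rebalancing, five consecutive deletions at the rear leave |f| = 10 and |r| = 5, and the invariant 10 <= 2*5+1 = 11 still holds; only a sixth operation of this kind could violate it and trigger the next rebalancing, consistent with the bound of n/2 = 5 guaranteed-safe operations.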
Intuitively, inserting an element x in front of the double-ended queue (lenf, f, sf, lenr, r, sr) leads almost to the double-ended queue (lenf+1, CONS(x, f), drop(2, sf), lenr, r, drop(2, sr)); the head and the tail of the double-ended queue (lenf, CONS(x, f), sf, lenr, r, sr) are x and almost (lenf-1, f, drop(2, sf), lenr, r, drop(2, sr)) respectively; and the head and the tail of (0, NIL, NIL, 1, CONS(x, NIL), NIL) are x and (0, NIL, NIL, 0, NIL, NIL) respectively. The functions to insert an element at the rear, or to drop the last element of the double-ended queue, are similar to the above functions which deal with the front of the double-ended queue. It is said almost because, after insertion and after an application of tail, the invariant |r| <= 2|f|+1 may not be satisfied anymore. In this case it is required to rebalance the double-ended queue.
In order to avoid an operation with O(n) cost, the algorithm uses laziness with memoization, and forces the rebalancing to be partly done during the following n/2 operations, that is, before the following rebalancing. In order to create the scheduling, some auxiliary lazy functions are required. The function rotateRev(f, r, a) returns the list f, followed by the list r reversed, followed by the list a. It is required in this function that |r| - 2|f| is 2 or 3. This function is defined by induction as rotateRev(NIL, r, a) = reverse(r) ++ a, where ++ is the concatenation operation, and by rotateRev(CONS(x, f), r, a) = CONS(x, rotateRev(f, drop(2, r), reverse(take(2, r)) ++ a)). Thus rotateRev(f, r, NIL) returns the list f followed by the list r reversed. The function rotateDrop(f, j, r), which returns f followed by (r without its first j elements) reversed, is also required, for j < |f|. It is defined by rotateDrop(f, 0, r) = rotateRev(f, r, NIL), rotateDrop(f, 1, r) = rotateRev(f, drop(1, r), NIL) and rotateDrop(CONS(x, f), j, r) = CONS(x, rotateDrop(f, j-2, drop(2, r))).
The balancing function can now be defined with

(* Rebalance a deque q = (lenf, f, sf, lenr, r, sr) so that the invariants
   |f| <= 2|r|+1 and |r| <= 2|f|+1 hold again.  The new schedules are the
   freshly built lazy lists f' and r' themselves. *)
fun balance (q as (lenf, f, sf, lenr, r, sr)) =
  if lenf > 2*lenr+1 then              (* front too long: move elements to the rear *)
    let val i  = (lenf + lenr) div 2
        val j  = lenf + lenr - i
        val f' = take (i, f)
        val r' = rotateDrop (r, i, f)  (* r followed by the dropped part of f, reversed *)
    in (i, f', f', j, r', r') end
  else if lenr > 2*lenf+1 then         (* rear too long: move elements to the front *)
    let val j  = (lenf + lenr) div 2
        val i  = lenf + lenr - j
        val r' = take (j, r)
        val f' = rotateDrop (f, j, r)
    in (i, f', f', j, r', r') end
  else q                               (* already balanced *)

Note that, without the lazy part of the implementation, this would be a non-persistent implementation of the deque in O(1) amortized time. In this case, the lists sf and sr can be removed from the representation of the double-ended queue.
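For illustration, here is a minimal strict (non-lazy, non-persistent) Python sketch of this two-list scheme; the class name TwoListDeque and its method names are chosen for this example. Each operation is O(1) amortized: the expensive rebalance redistributes the elements evenly between the two lists, and it can only be triggered again after a linear number of cheap operations.

class TwoListDeque:
    """Deque as two Python lists: _front holds the front of the deque in
    reverse order (first element last), _rear holds the rear in order
    (last element last), so every push and pop touches only list ends."""

    def __init__(self):
        self._front = []
        self._rear = []

    def _balance(self):
        # Restore |front| <= 2|rear|+1 and |rear| <= 2|front|+1 by
        # redistributing the elements evenly (the strict analogue of the
        # lazy rebalancing above).
        f, r = self._front, self._rear
        if len(f) > 2 * len(r) + 1 or len(r) > 2 * len(f) + 1:
            items = f[::-1] + r          # the deque from first to last element
            mid = len(items) // 2
            self._front = items[:mid][::-1]
            self._rear = items[mid:]

    def push_front(self, x):
        self._front.append(x)
        self._balance()

    def push_back(self, x):
        self._rear.append(x)
        self._balance()

    def pop_front(self):
        if not self._front:              # invariant: at most one element, held in _rear
            return self._rear.pop()
        x = self._front.pop()
        self._balance()
        return x

    def pop_back(self):
        if not self._rear:               # invariant: at most one element, held in _front
            return self._front.pop()
        x = self._rear.pop()
        self._balance()
        return x

d = TwoListDeque()
for k in range(5):
    d.push_back(k)                       # deque: 0 1 2 3 4
d.push_front(-1)                         # deque: -1 0 1 2 3 4
assert d.pop_front() == -1 and d.pop_back() == 4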

Language support

Ada's standard container library provides the generic packages Ada.Containers.Vectors and Ada.Containers.Doubly_Linked_Lists, for the dynamic array and linked list implementations, respectively.
C++'s Standard Template Library provides the class templates std::deque and std::list, for the multiple array and linked list implementations, respectively.
As of Java 6, Java's Collections Framework provides a new Deque interface that provides the functionality of insertion and removal at both ends. It is implemented by classes such as ArrayDeque and LinkedList, providing the dynamic array and linked list implementations, respectively. However, the ArrayDeque, contrary to its name, does not support random access.
JavaScript's and Perl's arrays have native support for both removing (shift and pop) and adding (unshift and push) elements at both ends.
Python 2.4 introduced the collections module with support for deque objects (collections.deque). It is implemented using a doubly linked list of fixed-length subarrays.
As of PHP 5.3, PHP's SPL extension contains the 'SplDoublyLinkedList' class that can be used to implement deque data structures. Previously, to make a deque structure the array functions array_shift/unshift/pop/push had to be used instead.
GHC's Data.Sequence module implements an efficient, functional deque structure in Haskell. The implementation uses 2–3 finger trees annotated with sizes. There are other possibilities to implement purely functional double-ended queues. Kaplan and Tarjan were the first to implement optimal confluently persistent catenable deques. Their implementation was strictly purely functional in the sense that it did not use lazy evaluation. Okasaki simplified the data structure by using lazy evaluation with a bootstrapped data structure and degrading the performance bounds from worst-case to amortized. Kaplan, Okasaki, and Tarjan produced a simpler, non-bootstrapped, amortized version that can be implemented either using lazy evaluation or more efficiently using mutation in a broader but still restricted fashion. Mihaesau and Tarjan created a simpler strictly purely functional implementation of catenable deques, and also a much simpler implementation of strictly purely functional non-catenable deques, both of which have optimal worst-case bounds.
Rust's std::collections includes VecDeque, which implements a double-ended queue using a growable ring buffer.

Complexity

In a doubly linked list implementation, and assuming no allocation/deallocation overhead, the time complexity of all deque operations is O(1). Additionally, the time complexity of insertion or deletion in the middle, given an iterator, is O(1); however, the time complexity of random access by index is O(n).
In a growing array, the amortized time complexity of all deque operations is O(1). Additionally, the time complexity of random access by index is O(1); but the time complexity of insertion or deletion in the middle is O(n).

Applications

One example where a deque can be used is the A-Steal job scheduling algorithm. This algorithm implements task scheduling for several processors. A separate deque with threads to be executed is maintained for each processor. To execute the next thread, the processor gets the first element from the deque. If the current thread forks, it is put back to the front of the deque and a new thread is executed. When one of the processors finishes execution of its own threads, it can "steal" a thread from another processor: it gets the last element from the deque of another processor and executes it. The steal-job scheduling algorithm is used by Intel's Threading Building Blocks library for parallel programming.
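A minimal sketch of this idea in Python follows. It is illustrative only: the WorkStealingScheduler name, the use of collections.deque guarded by per-queue locks, and the random choice of victim are assumptions of this example, not details of the A-Steal algorithm or of any particular library.

import random
import threading
from collections import deque

class WorkStealingScheduler:
    def __init__(self, num_workers):
        self.queues = [deque() for _ in range(num_workers)]
        self.locks = [threading.Lock() for _ in range(num_workers)]

    def spawn(self, worker_id, task):
        # The owning worker pushes (and later pops) at the front of its own deque.
        with self.locks[worker_id]:
            self.queues[worker_id].appendleft(task)

    def next_task(self, worker_id):
        # Owner takes from the front of its own deque...
        with self.locks[worker_id]:
            if self.queues[worker_id]:
                return self.queues[worker_id].popleft()
        # ...and steals from the back of some other worker's deque when idle.
        victims = [i for i in range(len(self.queues)) if i != worker_id]
        random.shuffle(victims)
        for v in victims:
            with self.locks[v]:
                if self.queues[v]:
                    return self.queues[v].pop()
        return None

# Tiny single-threaded demonstration of the two access patterns:
sched = WorkStealingScheduler(num_workers=2)
sched.spawn(0, "task-a")
sched.spawn(0, "task-b")
assert sched.next_task(0) == "task-b"   # owner works from the front (most recent)
assert sched.next_task(1) == "task-a"   # idle worker steals from the back (oldest)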