Array programming
In computer science, array programming refers to solutions which allow the application of operations to an entire set of values at once. Such solutions are commonly used in scientific and engineering settings.
Modern programming languages that support array programming have been engineered specifically to generalize operations on scalars to apply transparently to vectors, matrices, and higher-dimensional arrays. These include APL, J, Fortran 90, Mata, MATLAB, Analytica, TK Solver, Octave, R, Cilk Plus, Julia, Perl Data Language, Wolfram Language, and the NumPy extension to Python. In these languages, an operation that operates on entire arrays can be called a vectorized operation, regardless of whether it is executed on a vector processor or not.
Array programming primitives concisely express broad ideas about data manipulation. The level of concision can be dramatic in certain cases: it is not uncommon to find array programming language one-liners that require more than a couple of pages of object-oriented code.
Concepts of array
The fundamental idea behind array programming is that operations apply at once to an entire set of values. This makes it a high-level programming model, as it allows the programmer to think and operate on whole aggregates of data without having to resort to explicit loops of individual scalar operations.
Kenneth E. Iverson described the rationale behind array programming as follows:
The basis behind array programming and thinking is to find and exploit the properties of data where individual elements are similar or adjacent. Unlike object orientation which implicitly breaks down data to its constituent parts, array orientation looks to group data and apply a uniform handling.
Function rank is an important concept to array programming languages in general, by analogy to tensor rank in mathematics: functions that operate on data may be classified by the number of dimensions they act on. Ordinary multiplication, for example, is a scalar ranked function because it operates on zero-dimensional data. The cross product operation is an example of a vector rank function because it operates on vectors, not scalars. Matrix multiplication is an example of a 2-rank function, because it operates on 2-dimensional objects. Collapse operators reduce the dimensionality of an input data array by one or more dimensions. For example, summing over elements collapses the input array by 1 dimension.
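As a concrete illustration, the following minimal sketch uses Python's NumPy (one of the array systems named above); the array values are chosen purely for illustration. It shows an element-wise operation, a 2-rank operation, and a collapse operator that reduces the dimensionality of its input:

import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])     # a 2-dimensional (rank-2) array

a * 2                         # scalar-ranked operation, applied element-wise
a @ a.T                       # matrix multiplication, a 2-rank function
a.sum(axis=0)                 # collapse operator: summing over rows gives array([5, 7, 9])
a.sum()                       # summing over all elements collapses to the scalar 21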
Uses
Array programming is very well suited to implicit parallelization, a topic of much research nowadays. Further, Intel and compatible CPUs developed and produced after 1997 contained various instruction set extensions, starting from MMX and continuing through SSSE3 and 3DNow!, which include rudimentary SIMD array capabilities. Array processing is distinct from parallel processing in that one physical processor performs operations on a group of items simultaneously while parallel processing aims to split a larger problem into smaller ones to be solved piecemeal by numerous processors. Processors with two or more cores are increasingly common today.
Languages
The canonical examples of array programming languages are Fortran, APL, and J. Others include: A+, Analytica, Chapel, IDL, Julia, K, Klong, Q, Mata, Wolfram Language, MATLAB, MOLSF, NumPy, GNU Octave, PDL, R, S-Lang, SAC, Nial and ZPL.
Scalar languages
In scalar languages such as C and Pascal, operations apply only to single values, so a+b expresses the addition of two numbers. In such languages, adding one array to another requires indexing and looping, the coding of which is tedious.

for (i = 0; i < n; i++)
    for (j = 0; j < m; j++)
        a[i][j] += b[i][j];
In array-based languages, for example in Fortran, the nested for-loop above can be written in array-format in one line,
a = a + b
or alternatively, to emphasize the array nature of the objects,
a(:,:) = a(:,:) + b(:,:)
Array languages
In array languages, operations are generalized to apply to both scalars and arrays. Thus, a+b expresses the sum of two scalars if a and b are scalars, or the sum of two arrays if they are arrays.
An array language simplifies programming, but possibly at a cost known as the abstraction penalty. Because the additions are performed in isolation from the rest of the coding, they may not produce the most efficient code.
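For illustration, the same generalization can be seen in NumPy (used here merely as a convenient array-language example; the values are illustrative): the expression a + b works unchanged whether a and b hold scalars or arrays.

import numpy as np

a, b = 2, 3
a + b                          # sum of two scalars: 5

a = np.array([1, 2, 3])
b = np.array([10, 20, 30])
a + b                          # sum of two arrays: array([11, 22, 33])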
Ada
The previous C code would become the following in the Ada language, which supports array-programming syntax.
A := A + B;
APL
APL uses single-character Unicode symbols with no syntactic sugar.
A ← A + B
This operation works on arrays of any rank, and on a scalar and an array. Dyalog APL extends the original language with augmented assignments:
A +← B
Analytica
Analytica provides the same economy of expression as Ada.
A := A + B;
BASIC
Dartmouth BASIC had MAT statements for matrix and array manipulation in its third edition.
DIM A(4),B(4),C(4)
MAT A = 1
MAT B = 2 * A
MAT C = A + B
MAT PRINT A,B,C
Mata
Stata's matrix programming language Mata supports array programming. Below, we illustrate addition, multiplication, addition of a matrix and a scalar, element-by-element multiplication, subscripting, and one of Mata's many inverse matrix functions.

. mata:

: A = (1, 2, 3) \ (4, 5, 6)

: A
       1   2   3
    +-------------+
  1 |  1   2   3  |
  2 |  4   5   6  |
    +-------------+

: B = (2..4) \ (1..3)

: B
       1   2   3
    +-------------+
  1 |  2   3   4  |
  2 |  1   2   3  |
    +-------------+

: C = J(3, 2, 1)               // a 3 x 2 matrix of ones

: C
       1   2
    +---------+
  1 |  1   1  |
  2 |  1   1  |
  3 |  1   1  |
    +---------+

: D = A + B                    // element-by-element addition

: D
       1   2   3
    +-------------+
  1 |  3   5   7  |
  2 |  5   7   9  |
    +-------------+

: E = A * C                    // matrix multiplication

: E
        1    2
    +-----------+
  1 |   6    6  |
  2 |  15   15  |
    +-----------+

: F = A :* B                   // element-by-element multiplication

: F
        1    2    3
    +----------------+
  1 |   2    6   12  |
  2 |   4   10   18  |
    +----------------+

: G = E :+ 3                   // addition of a matrix and a scalar

: G
        1    2
    +-----------+
  1 |   9    9  |
  2 |  18   18  |
    +-----------+

: H = F[(2\1), (1, 2)]         // subscripting: rows 2 and 1, columns 1 and 2 of F

: H
        1    2
    +-----------+
  1 |   4   10  |
  2 |   2    6  |
    +-----------+

: I = invsym(F' * F)           // generalized inverse of the symmetric matrix F' * F

: I
[symmetric]
                 1             2             3
    +-------------------------------------------+
  1 |            0                              |
  2 |            0          3.25                |
  3 |            0         -1.75   .9444444444  |
    +-------------------------------------------+

: end
MATLAB
The implementation in MATLAB allows the same economy as the Fortran language.
A = A + B;
A variant of the MATLAB language is the GNU Octave language, which extends the original language with augmented assignments:
A += B;
Both MATLAB and GNU Octave natively support linear algebra operations such as matrix multiplication, matrix inversion, and the numerical solution of systems of linear equations, even using the Moore–Penrose pseudoinverse.
The Nial example of the inner product of two arrays can be implemented using the native matrix multiplication operator. If a is a row vector of size [1 n] and b is a corresponding column vector of size [n 1]:
a * b;
The inner product between two matrices having the same number of elements can be implemented with the auxiliary operator (:), which reshapes a given matrix into a column vector, and the transpose operator ':
A(:)' * B(:);
rasql
The rasdaman query language is a database-oriented array-programming language. For example, two arrays could be added with the following query:
SELECT A + B
FROM A, B
R
The R language supports array paradigm by default. The following example illustrates a process of multiplication of two matrices followed by an addition of a scalar and a vector:

> A <- matrix(1:6, nrow=2)        # A has 2 rows and 3 columns
> A
     [,1] [,2] [,3]
[1,]    1    3    5
[2,]    2    4    6
> B <- t(matrix(6:1, nrow=2))     # t() is the transpose operator
> B
     [,1] [,2]
[1,]    6    5
[2,]    4    3
[3,]    2    1
> C <- A %*% B
> C
     [,1] [,2]
[1,]   28   19
[2,]   40   28
> D <- C + 1
> D
     [,1] [,2]
[1,]   29   20
[2,]   41   29
> D + c(1, 1)                     # c() creates a vector
     [,1] [,2]
[1,]   30   21
[2,]   42   30
Mathematical reasoning and language notation
The matrix left-division operator concisely expresses some semantic properties of matrices. As in the scalar equivalent, if the determinant of the coefficient matrix A is not null, then it is possible to solve the (vectorial) equation A * x = b by left-multiplying both sides by the inverse of A: A−1. The following mathematical statements hold when A is a full rank square matrix (in MATLAB notation, where A^-1 denotes the inverse):

A^-1 * (A * x) == A^-1 * b
(A^-1 * A) * x == A^-1 * b        (matrix-multiplication associativity)
x = A^-1 * b

where == is the equivalence relational operator.

The previous statements are also valid MATLAB expressions if the third one is executed before the others.
If the system is overdetermined, so that A has more rows than columns, the pseudoinverse A+ (in MATLAB and GNU Octave: pinv(A)) can replace the inverse A−1, as follows:

pinv(A) * (A * x) == pinv(A) * b
(pinv(A) * A) * x == pinv(A) * b        (matrix-multiplication associativity)
x = pinv(A) * b

However, these solutions are neither the most concise ones nor the most computationally efficient. The latter point is easy to understand when considering again the scalar equivalent a * x = b, for which the solution x = a^-1 * b would require two operations instead of the more efficient x = b / a. The problem is that generally matrix multiplications are not commutative, as the extension of the scalar solution to the matrix case would require:

(a * x) / a == b / a
(x * a) / a == b / a        (commutativity does not hold for matrices)
x * (a / a) == b / a        (associativity also holds for matrices)
x = b / a
The MATLAB language introduces the left-division operator \ to maintain the essential part of the analogy with the scalar case, therefore simplifying the mathematical reasoning and preserving the conciseness:

A \ (A * x) == A \ b
(A \ A) * x == A \ b        (associativity also holds for matrices, commutativity is no longer required)
x = A \ b

This is not only an example of terse array programming from the coding point of view but also from the computational efficiency perspective, which in several array programming languages benefits from quite efficient linear algebra libraries such as ATLAS or LAPACK.
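The same trade-off can be sketched outside MATLAB. The following NumPy fragment is an illustrative sketch with made-up values (not taken from the text above); it contrasts the explicit-inverse solution with a direct solver, which plays the role of the left-division operator, and with a least-squares solver, which plays the role of the pseudoinverse for overdetermined systems:

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])                 # full-rank square coefficient matrix
b = np.array([9.0, 8.0])

x_inv   = np.linalg.inv(A) @ b             # explicit inverse: two operations, less accurate
x_solve = np.linalg.solve(A, b)            # direct solver, analogous to MATLAB's A \ b

# Overdetermined system (more rows than columns), solved in the least-squares
# sense, analogous to pinv(A) * b.
A_over = np.array([[1.0, 1.0],
                   [1.0, 2.0],
                   [1.0, 3.0]])
b_over = np.array([1.0, 2.0, 2.0])
x_ls, *_ = np.linalg.lstsq(A_over, b_over, rcond=None)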
Returning to the previous quotation of Iverson, the rationale behind it should now be evident.