Array programming

In computer science, array programming refers to solutions that allow the application of operations to an entire set of values at once.
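For instance, in an array language such as GNU Octave, a single expression applies an arithmetic operation to every element of an array at once (a minimal sketch; the values are illustrative):

    x = [1 2 3 4];
    y = 2 * x + 1    % operates on the whole array: y is [3 5 7 9]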

The level of concision can be dramatic in certain cases: it is not uncommon[example needed] to find array programming language one-liners that require several pages of object-oriented code.
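As an illustrative sketch of this concision (the task and values are hypothetical, not taken from a cited comparison), averaging the positive entries of a vector is a one-liner in GNU Octave, while a scalar-style version needs an explicit loop:

    v = [3 -1 4 -1 5];
    m = mean(v(v > 0))    % one-liner: select the positive entries, then average

    % scalar-style equivalent:
    total = 0; n = 0;
    for k = 1:numel(v)
      if v(k) > 0
        total = total + v(k);
        n = n + 1;
      end
    end
    m = total / n;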

For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, its distributivity over addition, and its ability to represent linear functions and geometric operations) is a different and much more difficult matter.
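These algebraic properties can at least be checked numerically; a small GNU Octave sketch with arbitrary illustrative matrices:

    A = [1 2; 3 4]; B = [0 1; 1 0]; C = [2 0; 0 2];
    (A * B) * C - A * (B * C)     % associativity: the difference is the zero matrix
    A * (B + C) - (A*B + A*C)     % distributivity over addition: also zero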

Ordinary multiplication, for example, is a scalar ranked function because it operates on zero-dimensional data (individual numbers).
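In an array language, such a scalar function extends element by element to arguments of higher rank; a minimal GNU Octave sketch (.* is Octave's element-wise multiplication; the values are illustrative):

    a = [1 2 3];
    b = [4 5 6];
    a .* b    % the scalar operation applied to each pair of elements: [4 10 18]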

Furthermore, Intel and compatible CPUs developed and produced after 1997 have contained various instruction set extensions, starting with MMX and continuing through 3DNow! and the SSE series (up to SSSE3), which provide rudimentary SIMD array capabilities.

Others include: A+, Analytica, Chapel, IDL, Julia, K, Klong, Q, MATLAB, GNU Octave, Scilab, FreeMat, Perl Data Language (PDL), R, Raku, S-Lang, SAC, Nial, ZPL, Futhark, and TI-BASIC.

Another approach is given by the OpenMP API, which allows one to parallelize applicable sections of code by taking advantage of multiple CPU cores.

(For example, additions involving other elements of the same array may be encountered later in the same execution, causing unnecessary repeated lookups.)

Even the most sophisticated optimizing compiler would have an extremely hard time amalgamating two or more apparently disparate functions which might appear in different program sections or subroutines, even though a programmer could do this easily (aggregating sums on the same pass over the array to minimize overhead).
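For instance, a programmer can fuse two aggregations into a single traversal by hand; a minimal GNU Octave sketch with hypothetical data:

    x = rand(1, 1000);
    total = 0; total_sq = 0;
    for k = 1:numel(x)                  % a single pass over the array
      total    = total    + x(k);
      total_sq = total_sq + x(k)^2;
    end
    % the same aggregates written as two separate whole-array passes:
    % total = sum(x); total_sq = sum(x .^ 2);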

Dyalog APL extends the original language with augmented assignments, so that, for example, A +← 1 adds 1 to every element of the array A.

Analytica provides a similar economy of expression.
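GNU Octave offers a comparable shorthand as a language extension (a minimal sketch, assuming a reasonably recent Octave; MATLAB itself does not accept this syntax):

    A = [10 20 30];
    A += 100;    % augmented assignment: adds 100 to every element of A
    disp(A)      % prints 110 120 130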

As in the scalar equivalent, if the (determinant of the) coefficient (matrix) A is not null then it is possible to solve the (vectorial) equation A * x = b by left-multiplying both sides by the inverse of A, written A⁻¹ (in both the MATLAB and GNU Octave languages: A^-1). The following statements hold when A is a full-rank square matrix:

    A^-1 * (A * x) == A^-1 * b
    (A^-1 * A) * x == A^-1 * b    (matrix-multiplication associativity)
    x = A^-1 * b

where == is the equivalence relational operator. The previous statements are also valid MATLAB expressions if the third one is executed before the others (numerical comparisons may be false because of round-off errors).
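A runnable GNU Octave sketch of these statements, with hypothetical values for A and b:

    A = [2 1; 1 3];               % full-rank square coefficient matrix
    b = [3; 5];
    x = A^-1 * b;                 % the third statement, executed first to define x
    A^-1 * (A * x) == A^-1 * b    % element-wise comparison; round-off may make it false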

The problem is that generally matrix multiplications are not commutative, as the extension of the scalar solution to the matrix case would require: the scalar form x = b/a does not carry over, since in matrix terms b/A denotes right-multiplication by the inverse (b * A^-1) rather than the required left-multiplication (A^-1 * b).

The MATLAB language introduces the left-division operator \ to maintain the essential part of the analogy with the scalar case, therefore simplifying the mathematical reasoning and preserving the conciseness:

    x = A \ b

This is not only an example of terse array programming from the coding point of view, but also from the computational efficiency perspective, since in several array programming languages it benefits from quite efficient linear algebra libraries such as ATLAS or LAPACK.
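A short GNU Octave sketch contrasting the two formulations (the values are hypothetical); note that left division solves the system by factorization rather than by forming the inverse explicitly:

    A = [2 1; 1 3];
    b = [3; 5];
    x1 = A^-1 * b;    % explicit inversion: more work and less numerically stable
    x2 = A \ b;       % left division: typically an LU-style factorization under the hood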
