Array programming

From Wikipedia, the free encyclopedia

In computer science, array programming refers to solutions that allow the application of operations to an entire set of values at once. Such solutions are commonly used in scientific and engineering settings.

Modern programming languages that support array programming (also known as vector or multidimensional languages) have been engineered specifically to generalize operations on scalars to apply transparently to vectors, matrices, and higher-dimensional arrays. These include APL, J, Fortran, MATLAB, Analytica, Octave, R, Cilk Plus, Julia, and Perl Data Language (PDL). In these languages, an operation that applies to entire arrays can be called a vectorized operation,[1] regardless of whether it is executed on a vector processor, which implements vector instructions. Array programming primitives concisely express broad ideas about data manipulation. The level of concision can be dramatic in certain cases: it is not uncommon to find array programming language one-liners that require several pages of object-oriented code.

Concepts of array

The fundamental idea behind array programming is that operations apply at once to an entire set of values. This makes it a high-level programming model as it allows the programmer to think and operate on whole aggregates of data, without having to resort to explicit loops of individual scalar operations.

Kenneth E. Iverson described the rationale behind array programming (actually referring to APL) as follows:[2]

most programming languages are decidedly inferior to mathematical notation and are little used as tools of thought in ways that would be considered significant by, say, an applied mathematician.

The thesis is that the advantages of executability and universality found in programming languages can be effectively combined, in a single coherent language, with the advantages offered by mathematical notation. it is important to distinguish the difficulty of describing and of learning a piece of notation from the difficulty of mastering its implications. For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, its distributivity over addition, and its ability to represent linear functions and geometric operations) is a different and much more difficult matter.

Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for explorations.

[...]

Users of computers and programming languages are often concerned primarily with the efficiency of execution of algorithms, and might, therefore, summarily dismiss many of the algorithms presented here. Such dismissal would be short-sighted since a clear statement of an algorithm can usually be used as a basis from which one may easily derive a more efficient algorithm.

The basis behind array programming and thinking is to find and exploit the properties of data where individual elements are similar or adjacent. Unlike object orientation, which implicitly breaks data down into its constituent parts (or scalar quantities), array orientation looks to group data and apply uniform handling.

Function rank is an important concept in array programming languages in general, by analogy to tensor rank in mathematics: functions that operate on data may be classified by the number of dimensions they act on. Ordinary multiplication, for example, is a scalar-ranked function because it operates on zero-dimensional data (individual numbers). The cross product operation is an example of a vector-rank function because it operates on vectors, not scalars. Matrix multiplication is an example of a 2-rank function, because it operates on two-dimensional objects (matrices). Collapse operators reduce the dimensionality of an input data array by one or more dimensions. For example, summing over elements collapses the input array by one dimension.
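
As an illustrative sketch of rank and collapse (using Python with the NumPy library, an assumption rather than part of this section's examples), summing along one axis collapses one dimension, while summing over all elements collapses every dimension:

import numpy as np

A = np.arange(24).reshape(2, 3, 4)  # a rank-3 (three-dimensional) array
S = A.sum(axis=0)                   # collapse the first dimension: rank 2, shape (3, 4)
t = A.sum()                         # collapse all dimensions: a rank-0 scalar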

Uses

Array programming is very well suited to implicit parallelization, a topic of much current research. Further, Intel and compatible CPUs developed and produced after 1997 contained various instruction set extensions, starting from MMX and continuing through SSSE3 and 3DNow!, which include rudimentary SIMD array capabilities. This has continued into the 2020s with instruction sets such as AVX-512, making modern CPUs sophisticated vector processors. Array processing is distinct from parallel processing in that one physical processor performs operations on a group of items simultaneously while parallel processing aims to split a larger problem into smaller ones (MIMD) to be solved piecemeal by numerous processors. Processors with multiple cores and GPUs with thousands of general computing cores are common as of 2023.

Languages

The canonical examples of array programming languages are Fortran, APL, and J. Others include: A+, Analytica, Chapel, IDL, Julia, K, Klong, Q, MATLAB, GNU Octave, Scilab, FreeMat, Perl Data Language (PDL), R, Raku, S-Lang, SAC, Nial, ZPL, Futhark, and TI-BASIC.

Scalar languages

In scalar languages such as C and Pascal, operations apply only to single values, so a+b expresses the addition of two numbers. In such languages, adding one array to another requires indexing and looping, the coding of which is tedious.

for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        a[i][j] += b[i][j];

In array-based languages, for example in Fortran, the nested for-loop above can be written in array format in one line,

a = a + b

or alternatively, to emphasize the array nature of the objects,

a(:,:) = a(:,:) + b(:,:)

While scalar languages like C do not have native array programming elements as part of the language proper, this does not mean programs written in these languages never take advantage of the underlying techniques of vectorization (i.e., utilizing a CPU's vector-based instructions if it has them, or using multiple CPU cores). Some C compilers like GCC at some optimization levels detect and vectorize sections of code that their heuristics determine would benefit from it. Another approach is given by the OpenMP API, which allows one to parallelize applicable sections of code by taking advantage of multiple CPU cores.

Array languages

In array languages, operations are generalized to apply to both scalars and arrays. Thus, a+b expresses the sum of two scalars if a and b are scalars, or the sum of two arrays if they are arrays.
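
As a minimal sketch of this generalization (using Python with the NumPy library, an assumption rather than one of the languages listed below), the same + applies to scalars, arrays, and mixtures of the two:

import numpy as np

x, y = 2, 3
print(x + y)    # two scalars: 5

a = np.array([1, 2, 3])
b = np.array([10, 20, 30])
print(a + b)    # two arrays, added element by element: [11 22 33]
print(a + 5)    # a scalar and an array: [6 7 8]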

An array language simplifies programming but possibly at a cost known as the abstraction penalty.[3][4][5] Because the additions are performed in isolation from the rest of the coding, they may not produce the most efficient code. (For example, additions of other elements of the same array may be subsequently encountered during the same execution, causing unnecessary repeated lookups.) Even the most sophisticated optimizing compiler would have an extremely hard time amalgamating two or more apparently disparate functions that might appear in different program sections or subroutines, even though a programmer could do this easily, aggregating sums on the same pass over the array to minimize overhead.
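
To make the penalty concrete, consider a hedged sketch in Python with NumPy (an assumption; the same reasoning applies to other array languages): each whole-array operation makes a separate pass over the data and allocates a temporary, whereas a scalar-style loop could fuse all the work into a single pass:

import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
c = np.random.rand(1_000_000)

# Each addition is performed in isolation: a + b allocates a temporary
# array, which is then traversed a second time to add c.
d = a + b + c

# A scalar-style loop fuses the computation into one pass over the data
# (conceptually; in interpreted Python this is slower in practice).
e = np.empty_like(a)
for i in range(a.size):
    e[i] = a[i] + b[i] + c[i]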

Ada

The previous C code would become the following in the Ada language,[6] which supports array-programming syntax.

A := A + B;

APL

APL uses single-character Unicode symbols with no syntactic sugar.

A ← A + B

This operation works on arrays of any rank (including rank 0), and on a scalar and an array. Dyalog APL extends the original language with augmented assignments:

A +← B

Analytica

Analytica provides the same economy of expression as Ada.

A := A + B;

BASIC

Dartmouth BASIC had MAT statements for matrix and array manipulation in its third edition (1966).

DIM A(4),B(4),C(4)
MAT A = 1
MAT B = 2 * A
MAT C = A + B
MAT PRINT A,B,C

Mata

Stata's matrix programming language Mata supports array programming. Below, we illustrate addition, multiplication, addition of a matrix and a scalar, element-by-element multiplication, subscripting, and one of Mata's many inverse matrix functions.

. mata:

: A = (1,2,3) \(4,5,6)

: A
       1   2   3
    +-------------+
  1 |  1   2   3  |
  2 |  4   5   6  |
    +-------------+

: B = (2..4) \(1..3)

: B
       1   2   3
    +-------------+
  1 |  2   3   4  |
  2 |  1   2   3  |
    +-------------+

: C = J(3,2,1)           // A 3 by 2 matrix of ones

: C
       1   2
    +---------+
  1 |  1   1  |
  2 |  1   1  |
  3 |  1   1  |
    +---------+

: D = A + B

: D
       1   2   3
    +-------------+
  1 |  3   5   7  |
  2 |  5   7   9  |
    +-------------+

: E = A*C

: E
        1    2
    +-----------+
  1 |   6    6  |
  2 |  15   15  |
    +-----------+

: F = A:*B

: F
        1    2    3
    +----------------+
  1 |   2    6   12  |
  2 |   4   10   18  |
    +----------------+

: G = E :+ 3

: G
        1    2
    +-----------+
  1 |   9    9  |
  2 |  18   18  |
    +-----------+

: H = F[(2\1), (1, 2)]    // Subscripting to get a submatrix of F and

:                         // switch row 1 and 2
: H
        1    2
    +-----------+
  1 |   4   10  |
  2 |   2    6  |
    +-----------+

: I = invsym(F'*F)        // Generalized inverse (F*F^(-1)F=F) of a

:                         // symmetric positive semi-definite matrix
: I
[symmetric]
                 1             2             3
    +-------------------------------------------+
  1 |            0                              |
  2 |            0          3.25                |
  3 |            0         -1.75   .9444444444  |
    +-------------------------------------------+

: end

MATLAB

The implementation in MATLAB allows the same economy of expression as the Fortran language.

A = A + B;

A variant of the MATLAB language is the GNU Octave language, which extends the original language with augmented assignments:

A += B;

Both MATLAB and GNU Octave natively support linear algebra operations such as matrix multiplication, matrix inversion, and the numerical solution of system of linear equations, even using the Moore–Penrose pseudoinverse.[7][8]

The Nial example of the inner product of two arrays can be implemented using the native matrix multiplication operator: if a is a row vector of size [1 n] and b is a corresponding column vector of size [n 1], the inner product is

a * b;

By contrast, the entrywise product is implemented as:

a .* b;

The inner product between two matrices having the same number of elements can be implemented with the auxiliary operator (:), which reshapes a given matrix into a column vector, and the transpose operator ':

A(:)' * B(:);
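
For comparison, a sketch of the same idiom using Python with NumPy (an assumption, not part of the MATLAB example): flattening both matrices in the same order and taking the ordinary dot product yields the same scalar:

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Mirror MATLAB's A(:)' * B(:): flatten both matrices consistently,
# then take the dot product of the resulting vectors.
inner = A.ravel() @ B.ravel()  # 1*5 + 2*6 + 3*7 + 4*8 = 70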

rasql

The rasdaman query language is a database-oriented array-programming language. For example, two arrays could be added with the following query:

SELECT A + B
FROM   A, B

R

The R language supports the array paradigm by default. The following example illustrates the multiplication of two matrices followed by the addition of a scalar (which is, in fact, a one-element vector) and a vector:

> A <- matrix(1:6, nrow=2)                             # !!this has nrow=2 ... and A has 2 rows
> A
     [,1] [,2] [,3]
[1,]    1    3    5
[2,]    2    4    6
> B <- t( matrix(6:1, nrow=2) )  # t() is a transpose operator                           !!this has nrow=2 ... and B has 3 rows --- a clear contradiction to the definition of A
> B
     [,1] [,2]
[1,]    6    5
[2,]    4    3
[3,]    2    1
> C <- A %*% B
> C
     [,1] [,2]
[1,]   28   19
[2,]   40   28
> D <- C + 1
> D
     [,1] [,2]
[1,]   29   20
[2,]   41   29
> D + c(1, 1)  # c() creates a vector
     [,1] [,2]
[1,]   30   21
[2,]   42   30

Raku

Raku supports the array paradigm via its metaoperators.[9] The following example demonstrates the addition of arrays @a and @b using the hyper-operator in conjunction with the plus operator.

[0] > my @a = [[1,1],[2,2],[3,3]];
[[1 1] [2 2] [3 3]]

[1] > my @b = [[4,4],[5,5],[6,6]];
[[4 4] [5 5] [6 6]]

[2] > @a »+« @b;
[[5 5] [7 7] [9 9]]

Mathematical reasoning and language notation

The matrix left-division operator concisely expresses some semantic properties of matrices. As in the scalar equivalent, if the (determinant of the) coefficient (matrix) A is nonzero, then it is possible to solve the (vectorial) equation A * x = b by left-multiplying both sides by the inverse of A: A−1 (in both MATLAB and GNU Octave languages: A^-1). The following mathematical statements hold when A is a full-rank square matrix:

A^-1 *(A * x)==A^-1 * (b)
(A^-1 * A)* x ==A^-1 * b       (matrix-multiplication associativity)
x = A^-1 * b

where == is the equivalence relational operator. The previous statements are also valid MATLAB expressions if the third one is executed before the others (numerical comparisons may be false because of round-off errors).

If the system is overdetermined – so that A has more rows than columns – the pseudoinverse A+ (in MATLAB and GNU Octave languages: pinv(A)) can replace the inverse A−1, as follows:

pinv(A) *(A * x)==pinv(A) * (b)
(pinv(A) * A)* x ==pinv(A) * b       (matrix-multiplication associativity)
x = pinv(A) * b

However, these solutions are neither the most concise (e.g., the need remains to notationally differentiate overdetermined systems) nor the most computationally efficient. The latter point is easy to understand when considering again the scalar equivalent a * x = b, for which the solution x = a^-1 * b would require two operations instead of the more efficient x = b / a. The problem is that matrix multiplication is generally not commutative, as the extension of the scalar solution to the matrix case would require:

(a * x)/ a ==b / a
(x * a)/ a ==b / a       (commutativity does not hold for matrices!)
x * (a / a)==b / a       (associativity also holds for matrices)
x = b / a

The MATLAB language introduces the left-division operator \ to maintain the essential part of the analogy with the scalar case, therefore simplifying the mathematical reasoning and preserving the conciseness:

A \ (A * x)==A \ b
(A \ A)* x ==A \ b       (associativity also holds for matrices, commutativity is no longer required)
x = A \ b

This is not only an example of terse array programming from the coding point of view but also from the computational efficiency perspective, which in several array programming languages benefits from quite efficient linear algebra libraries such as ATLAS or LAPACK.[10]
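
A hedged comparison in Python with NumPy (an assumption; the examples above use MATLAB and GNU Octave) shows the same distinction: numpy.linalg.solve plays the role of the left-division operator, delegating to LAPACK rather than forming an explicit inverse:

import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

x_inv = np.linalg.inv(A) @ b     # analogue of A^-1 * b: forms the inverse first
x_solve = np.linalg.solve(A, b)  # analogue of A \ b: a direct LAPACK-backed solve

assert np.allclose(x_inv, x_solve)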

Returning to the previous quotation of Iverson, the rationale behind it should now be evident:

it is important to distinguish the difficulty of describing and of learning a piece of notation from the difficulty of mastering its implications. For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, its distributivity over addition, and its ability to represent linear functions and geometric operations) is a different and much more difficult matter. Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for explorations.

Third-party libraries

The use of specialized and efficient libraries to provide more terse abstractions is also common in other programming languages. In C++ several linear algebra libraries exploit the language's ability to overload operators. In some cases a very terse abstraction in those languages is explicitly influenced by the array programming paradigm, as do the NumPy extension library to Python and the C++ libraries Armadillo and Blitz++.[11][12]

See also

References

  1. ^ Stéfan van der Walt; S. Chris Colbert & Gaël Varoquaux (2011). "The NumPy array: a structure for efficient numerical computation". Computing in Science and Engineering. 13 (2). IEEE: 22–30. arXiv:1102.1523. Bibcode:2011CSE....13b..22V. doi:10.1109/mcse.2011.37. S2CID 16907816.
  2. ^ Iverson, K. E. (1980). "Notation as a Tool of Thought". Communications of the ACM. 23 (8): 444–465. doi:10.1145/358896.358899.
  3. ^ Surana P (2006). Meta-Compilation of Language Abstractions (Thesis).
  4. ^ Kuketayev. "The Data Abstraction Penalty (DAP) Benchmark for Small Objects in Java". Archived from the original on 2009-01-11. Retrieved 2008-03-17.
  5. ^ Chatzigeorgiou; Stephanides (2002). "Evaluating Performance and Power Of Object-Oriented Vs. Procedural Programming Languages". In Blieberger; Strohmeier (eds.). Proceedings - 7th International Conference on Reliable Software Technologies - Ada-Europe'2002. Springer. p. 367. ISBN 978-3-540-43784-0.
  6. ^ Ada Reference Manual: G.3.1 Real Vectors and Matrices
  7. ^ "GNU Octave Manual. Arithmetic Operators". Retrieved 2011-03-19.
  8. ^ "MATLAB documentation. Arithmetic Operators". Archived from the original on 2010-09-07. Retrieved 2011-03-19.
  9. ^ "Metaoperators section of Raku Operator documentation".
  10. ^ "GNU Octave Manual. Appendix G Installing Octave". Retrieved 2011-03-19.
  11. ^ "Reference for Armadillo 1.1.8. Examples of Matlab/Octave syntax and conceptually corresponding Armadillo syntax". Retrieved 2011-03-19.
  12. ^ "Blitz++ User's Guide. 3. Array Expressions". Archived from the original on 2011-03-23. Retrieved 2011-03-19.