When one is first taught what an array is, it is often described as a list of values, but in truth, arrays are much more. For one thing, arrays can have higher dimensions than a 1D list; more importantly, arrays make possible a vast range of operations, and those operations have an even wider range of applications.
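As a small sketch of what "more than a list" means, the snippet below uses NumPy (an assumption on my part, since the text names no particular language) to build a 1D and a 2D array:

```python
import numpy as np

vector = np.array([1, 2, 3])       # 1D: a plain list of values
matrix = np.array([[1, 2, 3],
                   [4, 5, 6]])     # 2D: rows and columns

# ndim gives the number of dimensions; shape gives the size along each.
print(vector.ndim, matrix.ndim)  # 1 2
print(matrix.shape)              # (2, 3)
```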
Array programming aims to take advantage of those opportunities. The idea behind it is to let operations defined on scalars apply transparently to vectors, matrices, and higher-dimensional arrays. Applying an operation to an entire set of values at once, instead of explicitly iterating through individual scalar operations in a loop, makes it possible to work at a higher level: one can find and manipulate macro-level properties of data whose items are adjacent or otherwise similar, grouping the data and handling huge amounts of it uniformly and efficiently. This philosophy is very unlike object-oriented approaches, which focus on breaking complex programs into their individual parts; here, we aim instead to gather those parts together and work with the large clumps of data that result.
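The transparent lifting of scalar operations to whole arrays can be sketched as follows, again using NumPy as an assumed concrete language (the array names here are purely illustrative):

```python
import numpy as np

prices = np.array([10.0, 20.0, 30.0, 40.0])

# The scalar expression `x * 1.08` applies element-wise to the
# entire array -- no explicit loop is written.
with_tax = prices * 1.08

# Comparisons also lift to arrays, yielding a boolean mask that
# groups and selects data uniformly.
expensive = prices[prices > 15.0]

print(with_tax)   # [10.8 21.6 32.4 43.2]
print(expensive)  # [20. 30. 40.]
```

Note that both lines express *what* to compute over the whole collection, leaving *how* to iterate to the language runtime.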
Many array programming languages are remarkably concise for array operations compared with equivalent code in a non-array-oriented language; what is a one-liner in the former may require pages of Java code. This very high-level representation is well suited to implicit parallelization, which should not be confused with explicit parallel processing: array processing is distinct from splitting a complex problem into smaller ones to be solved by multiple processors. Here, a single physical processor can instead handle the operation across the whole group of elements. Implicit parallelization of array code remains an active area of research and improvement.
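To illustrate the conciseness claim with a hedged example (in NumPy, a stand-in for any array language; the task itself is invented for illustration), consider computing the mean of the squares of the positive entries of a matrix, first as a one-liner and then with explicit loops:

```python
import numpy as np

m = np.array([[1.0, -2.0],
              [3.0, -4.0]])

# Array-style: one expression over the whole matrix.
result = np.mean(m[m > 0] ** 2)

# Equivalent explicit-loop version, element by element.
total, count = 0.0, 0
for row in m:
    for x in row:
        if x > 0:
            total += x * x
            count += 1
loop_result = total / count

print(result, loop_result)  # both 5.0
```

The loop version already doubles the line count for a toy problem; for realistic pipelines of such operations, the gap widens to the "pages of code" scale the text describes.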