Sparse matrix

Example of a sparse matrix
\left(\begin{smallmatrix}
11 & 22 & 0 & 0 & 0 & 0 & 0 \\
0 & 33 & 44 & 0 & 0 & 0 & 0 \\
0 & 0 & 55 & 66 & 77 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 88 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 99 \\
\end{smallmatrix}\right)
The above sparse matrix contains only 9 nonzero elements, with 26 zero elements.
A sparse matrix obtained when solving a finite element problem in two dimensions. The non-zero elements are shown in black.

In numerical analysis, a sparse matrix is a matrix in which most of the elements are zero. By contrast, if most of the elements are nonzero, the matrix is considered dense. The fraction of zero elements over the total number of elements in a matrix is called its sparsity; the fraction of nonzero elements is its density, so the two always sum to one. For example, the 5 × 7 matrix above has sparsity 26/35 ≈ 0.74 and density 9/35 ≈ 0.26.

Conceptually, sparsity corresponds to systems which are loosely coupled. Consider a line of balls connected by springs from one to the next: this is a sparse system as only adjacent balls are coupled. By contrast, if the same line of balls had springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in combinatorics and application areas such as network theory, which have a low density of significant data or connections.

Large sparse matrices often appear in scientific or engineering applications when solving partial differential equations.

When storing and manipulating sparse matrices on a computer, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix. Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices, as processing and memory are wasted on the zeros. Sparse data is by nature more easily compressed and thus requires significantly less storage. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms.

Storing a sparse matrix

A matrix is typically stored as a two-dimensional array. Each entry in the array represents an element ai,j of the matrix and is accessed by the two indices i and j, so an m × n matrix requires memory proportional to m × n. For a sparse matrix, substantial memory savings can be realized by storing only the non-zero entries. Sparse matrix formats can be divided into two groups: those that support efficient modification, such as DOK (Dictionary of keys), LIL (List of lists), or COO (Coordinate list), which are typically used to construct the matrices; and those that support efficient access and matrix operations, such as CSR (Compressed Sparse Row) or CSC (Compressed Sparse Column), described below.

Dictionary of keys (DOK)

DOK consists of a dictionary that maps (row, column)-pairs to the value of the elements. Elements that are missing from the dictionary are taken to be zero. The format is good for incrementally constructing a sparse matrix in random order, but poor for iterating over non-zero values in lexicographical order. One typically constructs a matrix in this format and then converts to another more efficient format for processing.[1]
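As an illustration only, a DOK store can be sketched in Python with a plain dictionary; scipy.sparse.dok_matrix wraps the same idea in a full matrix interface. The helper names below are made up for this example.

   # Minimal DOK sketch: a dict mapping (row, column) pairs to nonzero values.
   # (Illustrative only; the helper names are not from any library.)
   dok = {}

   def set_entry(d, i, j, value):
       # Storing a zero simply removes the key, keeping the dictionary sparse.
       if value == 0:
           d.pop((i, j), None)
       else:
           d[(i, j)] = value

   def get_entry(d, i, j):
       # Missing keys are implicitly zero.
       return d.get((i, j), 0)

   set_entry(dok, 1, 0, 5)
   set_entry(dok, 1, 1, 8)
   print(get_entry(dok, 1, 1))   # 8
   print(get_entry(dok, 0, 0))   # 0 (never stored)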

List of lists (LIL)

LIL stores one list per row, with each entry containing the column index and the value. Typically, these entries are kept sorted by column index for faster lookup. This is another format good for incremental matrix construction.[2]
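For illustration, a LIL store might be sketched in Python as one list of (column, value) pairs per row, kept sorted with bisect; scipy.sparse.lil_matrix provides a production version of the same idea.

   import bisect

   # Minimal LIL sketch for a matrix with 4 rows: one list per row holding
   # (column index, value) pairs kept sorted by column index.
   rows = [[] for _ in range(4)]

   def lil_set(rows, i, j, value):
       # Insert or overwrite entry (i, j), keeping the row sorted by column.
       row = rows[i]
       pos = bisect.bisect_left(row, (j,))          # first pair with column >= j
       if pos < len(row) and row[pos][0] == j:
           row[pos] = (j, value)
       else:
           row.insert(pos, (j, value))

   lil_set(rows, 1, 3, 40)
   lil_set(rows, 1, 1, 30)
   print(rows[1])   # [(1, 30), (3, 40)]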

Coordinate list (COO)

COO stores a list of (row, column, value) tuples. Ideally, the entries are sorted (by row index, then column index) to improve random access times. This is another format which is good for incremental matrix construction.[3]
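As a hedged sketch, the triples can be kept as three parallel lists and handed to scipy.sparse.coo_matrix; the small 3 × 4 matrix below is chosen only for illustration.

   from scipy.sparse import coo_matrix

   # COO sketch: parallel lists of row indices, column indices and values.
   rows = [0, 0, 1, 2, 2]
   cols = [0, 2, 1, 0, 3]
   vals = [1, 2, 3, 4, 5]
   M = coo_matrix((vals, (rows, cols)), shape=(3, 4))
   print(M.toarray())
   # [[1 0 2 0]
   #  [0 3 0 0]
   #  [4 0 0 5]]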

Yale

The Yale sparse matrix format stores an initial sparse m × n matrix, M, in row form using three (one-dimensional) arrays (A, IA, JA). Let NNZ denote the number of nonzero entries in M. (Note that zero-based indices shall be used here.) The array A, of length NNZ, holds the nonzero entries of M in left-to-right, top-to-bottom ("row-major") order. The array IA, of length m + 1, is defined by IA[0] = 0 and IA[i] = IA[i − 1] + (number of nonzero elements in row i − 1 of M); thus IA[i] gives the index in A of the first nonzero element of row i, and IA[m] = NNZ. The array JA, also of length NNZ, contains the column index in M of each element of A.

For example, the matrix

\begin{pmatrix}
0 & 0 & 0 & 0 \\
5 & 8 & 0 & 0 \\
0 & 0 & 3 & 0 \\
0 & 6 & 0 & 0 \\
\end{pmatrix}

is a 4 × 4 matrix with 4 nonzero elements, hence

   A  = [ 5 8 3 6 ]
   IA = [ 0 0 2 3 4 ]
   JA = [ 0 1 2 1 ]

So, in array JA, the element "5" from A has column index 0, "8" and "6" have index 1, and element "3" has index 2.

In this case the Yale representation contains 13 entries (4 in A, 5 in IA, and 4 in JA), compared with 16 in the original matrix. In general the representation uses 2·NNZ + m + 1 entries, so the Yale format saves memory only when NNZ < (m (n − 1) − 1) / 2. As another example, the matrix

\begin{pmatrix}
10 & 20 &  0 &  0 &  0 &  0 \\
 0 & 30 &  0 & 40 &  0 &  0 \\
 0 &  0 & 50 & 60 & 70 &  0 \\
 0 &  0 &  0 &  0 &  0 & 80 \\
\end{pmatrix}

is a 4 × 6 matrix (24 entries) with 8 nonzero elements, so

   A  = [ 10 20 30 40 50 60 70 80 ]
   IA = [ 0 2 4 7 8 ]
   JA = [ 0 1 1 3 2 3 4 5 ]

The whole is stored as 21 entries: 8 in A, 5 in IA, and 8 in JA.

Note that in this format, the first value of IA is always zero and the last is always NNZ, so they are in some sense redundant. However, they can make accessing and traversing the array easier for the programmer.
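A short Python sketch (not a library routine) that builds the three Yale arrays from a dense matrix reproduces the arrays of the second example:

   # Build the Yale arrays (A, IA, JA) from a dense matrix, zero-based indices.
   def to_yale(dense):
       A, IA, JA = [], [0], []
       for row in dense:
           for j, value in enumerate(row):
               if value != 0:
                   A.append(value)    # nonzero values in row-major order
                   JA.append(j)       # column index of each value
           IA.append(len(A))          # index in A where the next row starts
       return A, IA, JA

   M = [[10, 20, 0, 0, 0, 0],
        [0, 30, 0, 40, 0, 0],
        [0, 0, 50, 60, 70, 0],
        [0, 0, 0, 0, 0, 80]]
   print(to_yale(M))
   # ([10, 20, 30, 40, 50, 60, 70, 80], [0, 2, 4, 7, 8], [0, 1, 1, 3, 2, 3, 4, 5])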

Compressed row storage (CRS or CSR)

CSR is effectively identical to the Yale sparse matrix format, except that the column array is normally stored ahead of the row index array. That is, CSR is (val, col_ind, row_ptr), where val is an array of the (left-to-right, then top-to-bottom) non-zero values of the matrix; col_ind holds the column indices corresponding to the values; and row_ptr is the list of indices into val at which each row starts. The name is based on the fact that row index information is compressed relative to the COO format. One typically uses another format (LIL, DOK, COO) for construction. This format is efficient for arithmetic operations, row slicing, and matrix-vector products. See scipy.sparse.csr_matrix.
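For illustration, the 4 × 6 example above in SciPy's CSR format; the attributes data, indices and indptr correspond to val, col_ind and row_ptr:

   import numpy as np
   from scipy.sparse import csr_matrix

   dense = np.array([[10, 20, 0, 0, 0, 0],
                     [0, 30, 0, 40, 0, 0],
                     [0, 0, 50, 60, 70, 0],
                     [0, 0, 0, 0, 0, 80]])
   M = csr_matrix(dense)
   print(M.data)           # [10 20 30 40 50 60 70 80]   (val)
   print(M.indices)        # [0 1 1 3 2 3 4 5]           (col_ind)
   print(M.indptr)         # [0 2 4 7 8]                 (row_ptr)
   print(M @ np.ones(6))   # fast matrix-vector product: row sums 30, 70, 180, 80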

Compressed sparse column (CSC or CCS)

CSC is similar to CSR except that values are read first by column, a row index is stored for each value, and column pointers are stored. That is, CSC is (val, row_ind, col_ptr), where val is an array of the (top-to-bottom, then left-to-right) non-zero values of the matrix; row_ind holds the row indices corresponding to the values; and col_ptr is the list of indices into val at which each column starts. The name is based on the fact that column index information is compressed relative to the COO format. One typically uses another format (LIL, DOK, COO) for construction. This format is efficient for arithmetic operations, column slicing, and matrix-vector products. See scipy.sparse.csc_matrix. This is the traditional format for specifying a sparse matrix in MATLAB (via the sparse function).
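Again for illustration only, the same 4 × 6 matrix in SciPy's CSC format; the data are now stored column by column and indptr marks where each column starts:

   import numpy as np
   from scipy.sparse import csc_matrix

   dense = np.array([[10, 20, 0, 0, 0, 0],
                     [0, 30, 0, 40, 0, 0],
                     [0, 0, 50, 60, 70, 0],
                     [0, 0, 0, 0, 0, 80]])
   M = csc_matrix(dense)
   print(M.data)      # [10 20 30 50 40 60 70 80]   (val, column by column)
   print(M.indices)   # [0 0 1 2 1 2 2 3]           (row_ind)
   print(M.indptr)    # [0 1 3 4 6 7 8]             (col_ptr)
   print(M[:, 3].toarray().ravel())   # cheap column slice: [ 0 40 60  0]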

Special structure

Banded

Main article: Band matrix

An important special type of sparse matrix is the band matrix, defined as follows. The lower bandwidth of a matrix A is the smallest number p such that the entry ai,j vanishes whenever i > j + p. Similarly, the upper bandwidth is the smallest number p such that ai,j = 0 whenever i < j − p (Golub & Van Loan 1996, §1.2.1). For example, a tridiagonal matrix has lower bandwidth 1 and upper bandwidth 1. As another example, the following sparse matrix has lower and upper bandwidth both equal to 3. Note that zeros are represented with dots for clarity.


\left(
\begin{smallmatrix}
  X   &   X   &   X   & \cdot & \cdot & \cdot & \cdot \\
  X   &   X   & \cdot &   X   &   X   & \cdot & \cdot \\
  X   & \cdot &   X   & \cdot &   X   & \cdot & \cdot \\
\cdot &   X   & \cdot &   X   & \cdot &   X   & \cdot \\
\cdot &   X   &   X   & \cdot &   X   &   X   &   X   \\
\cdot & \cdot & \cdot &   X   &   X   &   X   & \cdot \\
\cdot & \cdot & \cdot & \cdot &   X   & \cdot &   X   \\
\end{smallmatrix}
\right)

Matrices with reasonably small upper and lower bandwidth are known as band matrices and often lend themselves to simpler algorithms than general sparse matrices; or one can sometimes apply dense matrix algorithms and gain efficiency simply by looping over a reduced number of indices.
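As one hedged illustration of such a simpler algorithm, SciPy's scipy.linalg.solve_banded solves a linear system given only the diagonals inside the band (here a tridiagonal system, lower and upper bandwidth 1):

   import numpy as np
   from scipy.linalg import solve_banded

   # Tridiagonal system: only the 3 diagonals are stored; row u + i - j of ab
   # holds a[i, j]. The full 3 x 3 matrix is [[2,-1,0],[-1,2,-1],[0,-1,2]].
   ab = np.array([[0., -1., -1.],    # superdiagonal (first entry unused)
                  [2.,  2.,  2.],    # main diagonal
                  [-1., -1., 0.]])   # subdiagonal (last entry unused)
   b = np.ones(3)
   x = solve_banded((1, 1), ab, b)
   print(x)   # [1.5 2.  1.5]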

By rearranging the rows and columns of a matrix A it may be possible to obtain a matrix A′ with a smaller bandwidth. A number of algorithms are designed for bandwidth minimization.
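One such heuristic, the reverse Cuthill–McKee ordering, is available in SciPy; the sketch below (using an arbitrary small symmetric matrix) shows how the returned permutation is applied to both rows and columns:

   import numpy as np
   from scipy.sparse import csr_matrix
   from scipy.sparse.csgraph import reverse_cuthill_mckee

   A = csr_matrix(np.array([[1, 0, 0, 1],
                            [0, 1, 1, 0],
                            [0, 1, 1, 0],
                            [1, 0, 0, 1]]))
   perm = reverse_cuthill_mckee(A, symmetric_mode=True)
   A_reordered = A[perm, :][:, perm]   # same matrix, rows and columns permuted
   print(perm)
   print(A_reordered.toarray())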

Diagonal

A very efficient structure for an extreme case of band matrices, the diagonal matrix, is to store just the entries in the main diagonal as a one-dimensional array, so a diagonal n × n matrix requires only n entries.
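For instance (a minimal sketch), the diagonal entries alone suffice for a matrix-vector product, which reduces to an elementwise multiplication:

   import numpy as np

   d = np.array([2.0, 5.0, 7.0])   # stores diag(2, 5, 7) using only n entries
   x = np.array([1.0, 2.0, 3.0])
   print(d * x)                    # same result as np.diag(d) @ x: [ 2. 10. 21.]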

Symmetric

A symmetric sparse matrix arises as the adjacency matrix of an undirected graph; it can be stored efficiently as an adjacency list.
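As a small illustrative sketch (the graph is made up), the adjacency-list view stores only each vertex's neighbours, which implicitly encodes the symmetric zero/nonzero pattern:

   # Undirected graph with edges 0-1, 1-2, 2-3; its adjacency matrix is a
   # symmetric sparse matrix, but only the neighbour sets are stored.
   edges = [(0, 1), (1, 2), (2, 3)]
   adjacency = {}
   for i, j in edges:
       adjacency.setdefault(i, set()).add(j)
       adjacency.setdefault(j, set()).add(i)
   print(adjacency)   # {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}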

Reducing fill-in

The fill-in of a matrix consists of those entries that change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. The symbolic Cholesky decomposition can be used to calculate the worst possible fill-in before doing the actual Cholesky decomposition.

Methods other than the Cholesky decomposition are also in use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can differ between methods, and symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky to compute worst-case fill-in.

Solving sparse matrix equations

Both iterative and direct methods exist for solving sparse systems of equations.

Iterative methods, such as the conjugate gradient method and GMRES, rely on fast computations of matrix-vector products Ax_i, where the matrix A is sparse. The use of preconditioners can significantly accelerate convergence of such iterative methods.
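A hedged sketch of an iterative solve with SciPy's conjugate gradient routine, using a sparse tridiagonal system chosen only for illustration; every iteration touches A only through matrix-vector products:

   import numpy as np
   from scipy.sparse import diags
   from scipy.sparse.linalg import cg

   n = 100
   A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
   b = np.ones(n)
   x, info = cg(A, b)                       # info == 0 signals convergence
   print(info, np.linalg.norm(A @ x - b))   # residual should be small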
