*Figure: a square matrix of order 4.*
In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied.
Square matrices are often used to represent simple linear transformations, such as shearing or rotation. For example, if R is a square matrix representing a rotation (a rotation matrix) and v is a column vector describing the position of a point in space, the product Rv yields another column vector describing the position of that point after the rotation. If v is a row vector, the same transformation can be obtained using vRT, where RT is the transpose of R.
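As a concrete illustration, here is a minimal NumPy sketch (the angle and the point are arbitrary example values) that applies a 2×2 rotation matrix to a column vector and checks the row-vector form:

```python
import numpy as np

theta = np.pi / 2  # example angle: 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # 2x2 rotation matrix

v = np.array([[1.0], [0.0]])   # column vector: the point (1, 0)
print(R @ v)                   # rotated point, approximately (0, 1)

w = np.array([[1.0, 0.0]])     # the same point as a row vector
print(w @ R.T)                 # same rotation, obtained via the transpose
```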
Main diagonal
The entries aii (i = 1, ..., n) form the main diagonal of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix. For instance, the main diagonal of the 4×4 matrix above contains the elements a11 = 9, a22 = 11, a33 = 4, a44 = 10.
The diagonal of a square matrix from the top right to the bottom left corner is called the antidiagonal or counterdiagonal.
Special kinds
Name | Example with n = 3
---|---
Diagonal matrix | $\begin{bmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{bmatrix}$
Lower triangular matrix | $\begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$
Upper triangular matrix | $\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}$
Diagonal or triangular matrix
If all entries outside the main diagonal are zero, A is called a diagonal matrix. If all entries below (resp. above) the main diagonal are zero, A is called an upper (resp. lower) triangular matrix.
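For example, the following NumPy sketch (with an arbitrary 3×3 example matrix) extracts the diagonal, lower triangular, and upper triangular parts of a matrix:

```python
import numpy as np

A = np.arange(1, 10).reshape(3, 3)  # arbitrary 3x3 example matrix

D = np.diag(np.diag(A))  # diagonal matrix: keep only the main diagonal
L = np.tril(A)           # lower triangular: zero out entries above the diagonal
U = np.triu(A)           # upper triangular: zero out entries below the diagonal
print(D, L, U, sep="\n")
```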
Identity matrix
The identity matrix In of size n is the n×n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, e.g.

$$I_1 = \begin{bmatrix} 1 \end{bmatrix},\quad I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\quad \ldots,\quad I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$

It is a square matrix of order n, and also a special kind of diagonal matrix. The term identity matrix refers to the property of matrix multiplication that

$$I_m A = A I_n = A$$

for any m×n matrix A.
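A minimal NumPy check of this property, using an arbitrary 2×3 example matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])      # arbitrary 2x3 matrix

I2, I3 = np.eye(2), np.eye(3)        # identity matrices of orders 2 and 3
assert np.allclose(I2 @ A, A)        # I_m A = A
assert np.allclose(A @ I3, A)        # A I_n = A
```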
Invertible matrix and its inverse
A square matrix A is called invertible or non-singular if there exists a matrix B such that

$$AB = BA = I_n.$$

If B exists, it is unique and is called the inverse matrix of A, denoted A−1.
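A short NumPy sketch (assuming an invertible example matrix) that computes the inverse and verifies the defining identity:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])            # invertible example: det(A) = 1

B = np.linalg.inv(A)                  # the inverse matrix A^{-1}
assert np.allclose(A @ B, np.eye(2))  # AB = I
assert np.allclose(B @ A, np.eye(2))  # BA = I
```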
Symmetric or skew-symmetric matrix
A square matrix A that is equal to its transpose, i.e., AT = A, is a symmetric matrix. If instead AT = −A, then A is called a skew-symmetric matrix.
For a complex square matrix A, often the appropriate analogue of the transpose is the conjugate transpose A*, defined as the transpose of the complex conjugate of A. A complex square matrix A satisfying A* = A is called a Hermitian matrix. If instead A* = −A, then A is called a skew-Hermitian matrix.
By the spectral theorem, real symmetric (or complex Hermitian) matrices have an orthogonal (or unitary) eigenbasis; i.e., every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.
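The following NumPy sketch checks these definitions on small example matrices and illustrates the spectral theorem; np.linalg.eigh is NumPy's eigensolver for symmetric or Hermitian input:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # symmetric example: S equals its transpose
assert np.allclose(S, S.T)

H = np.array([[1.0, 2.0 + 1.0j],
              [2.0 - 1.0j, 3.0]])     # Hermitian example: H equals its conjugate transpose
assert np.allclose(H, H.conj().T)

# Spectral theorem: eigh returns real eigenvalues and an orthonormal eigenbasis.
vals, vecs = np.linalg.eigh(S)
assert np.all(np.isreal(vals))                # eigenvalues are real
assert np.allclose(vecs.T @ vecs, np.eye(2))  # eigenvectors are orthonormal
```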
Definite matrix
Positive definite | Indefinite
---|---
$\begin{bmatrix} 1/4 & 0 \\ 0 & 1 \end{bmatrix}$ | $\begin{bmatrix} 1/4 & 0 \\ 0 & -1/4 \end{bmatrix}$
Q(x, y) = 1/4 x² + y² | Q(x, y) = 1/4 x² − 1/4 y²
Points such that Q(x, y) = 1 (ellipse) | Points such that Q(x, y) = 1 (hyperbola)
A symmetric n×n matrix is called positive-definite (respectively negative-definite; indefinite), if for all nonzero vectors x ∈ Rn the associated quadratic form given by
$$Q(\mathbf{x}) = \mathbf{x}^\mathsf{T} A \mathbf{x}$$
takes only positive values (respectively only negative values; both some negative and some positive values). If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.
A symmetric matrix is positive-definite if and only if all its eigenvalues are positive. The table above shows two possibilities for 2×2 matrices.
Allowing as input two different vectors instead yields the bilinear form associated to A:
$$B_A(\mathbf{x}, \mathbf{y}) = \mathbf{x}^\mathsf{T} A \mathbf{y}.$$
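A short NumPy sketch of the eigenvalue test, using the two example matrices from the table above:

```python
import numpy as np

pos_def = np.array([[0.25, 0.0],
                    [0.0,  1.0]])    # Q(x, y) = 1/4 x^2 + y^2
indef   = np.array([[0.25, 0.0],
                    [0.0, -0.25]])   # Q(x, y) = 1/4 x^2 - 1/4 y^2

for A in (pos_def, indef):
    vals = np.linalg.eigvalsh(A)     # eigenvalues of a symmetric matrix
    print(vals, "positive definite" if np.all(vals > 0) else "not positive definite")
```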
Orthogonal matrix
An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (i.e., orthonormal vectors). Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse, AT = A−1, which entails
$$A^\mathsf{T} A = A A^\mathsf{T} = I,$$
where I is the identity matrix.
An orthogonal matrix A is necessarily invertible (with inverse A−1 = AT), unitary (A−1 = A*), and normal (A*A = AA*). The determinant of any orthogonal matrix is either +1 or −1. The special orthogonal group SO(n) consists of the n × n orthogonal matrices with determinant +1.
The complex analogue of an orthogonal matrix is a unitary matrix.
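A rotation matrix is the standard example of an orthogonal matrix; the sketch below (with an arbitrary angle) verifies the defining identities in NumPy:

```python
import numpy as np

theta = 0.7                                      # arbitrary example angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation matrix, hence orthogonal

assert np.allclose(Q.T @ Q, np.eye(2))     # Q^T Q = I
assert np.allclose(Q.T, np.linalg.inv(Q))  # transpose equals inverse
print(np.linalg.det(Q))                    # determinant is +1 (special orthogonal)
```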
Normal matrix
A real or complex square matrix A is called normal if A*A = AA*. If a real square matrix is symmetric, skew-symmetric, or orthogonal, then it is normal. If a complex square matrix is Hermitian, skew-Hermitian, or unitary, then it is normal. Normal matrices are of interest mainly because they include the types of matrices just listed and form the broadest class of matrices for which the spectral theorem holds.
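A minimal NumPy check of normality, contrasting a symmetric example with a shear, which is not normal:

```python
import numpy as np

def is_normal(A):
    """Check A* A == A A* up to floating-point tolerance."""
    Ah = A.conj().T
    return np.allclose(Ah @ A, A @ Ah)

S = np.array([[1.0, 2.0], [2.0, 3.0]])  # symmetric, hence normal
N = np.array([[1.0, 1.0], [0.0, 1.0]])  # a shear: not normal
print(is_normal(S), is_normal(N))       # True False
```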
Operations
Trace
The trace, tr(A), of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative, the trace of the product of two matrices is independent of the order of the factors:
$$\operatorname{tr}(AB) = \operatorname{tr}(BA).$$
This is immediate from the definition of matrix multiplication:
$$\operatorname{tr}(AB) = \sum_{i=1}^{m} \sum_{j=1}^{n} A_{ij} B_{ji} = \operatorname{tr}(BA).$$
Also, the trace of a matrix is equal to that of its transpose, i.e.,
$$\operatorname{tr}(A) = \operatorname{tr}(A^\mathsf{T}).$$
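A quick numerical check of both identities, with random example matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))  # arbitrary 2x3 matrix
B = rng.standard_normal((3, 2))  # arbitrary 3x2 matrix

assert np.isclose(np.trace(A @ B), np.trace(B @ A))  # tr(AB) = tr(BA)

C = rng.standard_normal((3, 3))
assert np.isclose(np.trace(C), np.trace(C.T))        # tr(C) = tr(C^T)
```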
Determinant
*Figure: a linear transformation on R² given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the orientation, turning the counterclockwise orientation of the vectors into a clockwise one.*
The determinant, det(A) or |A|, of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R²) or volume (in R³) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.
The determinant of 2×2 matrices is given by
$$\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc.$$
The determinant of 3×3 matrices involves 6 terms (rule of Sarrus). The lengthier Leibniz formula generalizes these two formulae to all dimensions.
The determinant of a product of square matrices equals the product of their determinants:
$$\det(AB) = \det(A) \cdot \det(B).$$
Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1. Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, i.e., determinants of smaller matrices. This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1×1 matrix, which is its unique entry, or even the determinant of a 0×0 matrix, which is 1), which can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of the system's variables.
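The sketch below checks the 2×2 formula and the product rule on small example matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
a, b, c, d = A.ravel()
assert np.isclose(np.linalg.det(A), a * d - b * c)      # det = ad - bc

B = np.array([[0.0, 1.0],
              [1.0, 1.0]])
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))  # det(AB) = det(A) det(B)
```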
Eigenvalues and eigenvectors
A number λ and a non-zero vector v satisfying
$$A\mathbf{v} = \lambda \mathbf{v}$$
are called an eigenvalue and an eigenvector of A, respectively. The number λ is an eigenvalue of an n×n matrix A if and only if A − λIn is not invertible, which is equivalent to
$$\det(A - \lambda I) = 0.$$
The polynomial pA in an indeterminate X given by evaluation of the determinant det(XIn − A) is called the characteristic polynomial of A. It is a monic polynomial of degree n. Therefore the polynomial equation pA(λ) = 0 has at most n different solutions, i.e., eigenvalues of the matrix. They may be complex even if the entries of A are real. According to the Cayley–Hamilton theorem, pA(A) = 0, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the zero matrix.
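As a closing sketch, the following NumPy code computes eigenpairs with np.linalg.eig and verifies the Cayley–Hamilton theorem by evaluating the characteristic polynomial at the matrix itself; np.poly returns the coefficients of the monic characteristic polynomial:

```python
import numpy as np
from numpy.linalg import eig, matrix_power

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # small example matrix

vals, vecs = eig(A)
for lam, v in zip(vals, vecs.T):        # eigenvectors are the columns of vecs
    assert np.allclose(A @ v, lam * v)  # A v = lambda v

# Cayley-Hamilton: p_A(A) = 0, where p_A is the characteristic polynomial.
coeffs = np.poly(A)                     # coefficients, highest degree first
n = A.shape[0]
p_of_A = sum(c * matrix_power(A, n - k) for k, c in enumerate(coeffs))
assert np.allclose(p_of_A, np.zeros((n, n)))
```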
See also
- Cartan matrix
Notes
- Brown 1991, Definition I.2.28
- Brown 1991, Definition I.5.13
- Horn & Johnson 1985, Theorem 2.5.6
- Horn & Johnson 1985, Chapter 7
- Horn & Johnson 1985, Theorem 7.2.1
- Horn & Johnson 1985, Example 4.0.6, p. 169
- Artin, Algebra, 2nd edition, Pearson, 2018, section 8.6.
- Brown 1991, Definition III.2.1
- Brown 1991, Theorem III.2.12
- Brown 1991, Corollary III.2.16
- Mirsky 1990, Theorem 1.4.1
- Brown 1991, Theorem III.3.18
- Eigen means "own" in German and in Dutch.
- Brown 1991, Definition III.4.1
- Brown 1991, Definition III.4.9
- Brown 1991, Corollary III.4.10
References
- Brown, William C. (1991), Matrices and vector spaces, New York, NY: Marcel Dekker, ISBN 978-0-8247-8419-5
- Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6
- Mirsky, Leonid (1990), An Introduction to Linear Algebra, Courier Dover Publications, ISBN 978-0-486-66434-7
External links
Media related to Square matrices at Wikimedia Commons