
The identity matrix


Further definitions in connection with matrices are briefly introduced below. In matrix calculus, some special cases of matrices are defined that are important for practical use; for example, vectors, the zero matrix and the diagonal matrix are special cases of a matrix. A diagonal matrix is a square matrix in which all elements that are not on the main diagonal are zero. If, in addition, all elements on the main diagonal are equal to 1, the matrix is called the identity matrix.
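For illustration, a $(3,3)$ diagonal matrix could, for example, look like this; note that the square zero matrix ${\rm \bf O}$ also satisfies this definition, since all of its elements outside the main diagonal are zero:
\begin{equation*}
\mathbf{D} = \begin{pmatrix} 4 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
\end{equation*}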
Definition [identity matrix]: The identity matrix ${\rm \bf I}$ is a square matrix whose elements on the main diagonal are 1 and whose remaining elements are all 0.
Example: The $(3,3)$ identity matrix is given as an example:
\begin{equation*}
\mathbf{I} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\end{equation*}
The identity matrix is the neutral element of matrix multiplication. For arbitrary square matrices ${\rm \bf A}_{(m,m)}$ the following applies:
\begin{equation*}
{\rm \bf A \cdot I = I \cdot A = A}.
\end{equation*}
Thus the matrix ${\rm \bf I}$ plays the same role in matrix multiplication as the number 1 in multiplication in $\mathbb{R}$. In Chapter 1, an inverse element was defined for the multiplication of real numbers (cf. Definition \ref{???}): in $\mathbb{R}$ there is an inverse element $\alpha^{-1}$ for every $\alpha \neq 0$. This concept can be carried over to matrix calculus, but an inverse matrix ${\rm \bf A}^{-1}$ can only be formed for a regular square matrix ${\rm \bf A}$ (cf. Definition \ref{regular matrix}).
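As a supplementary illustration, the neutral-element property can be checked for a concrete $(2,2)$ matrix:
\begin{equation*}
\begin{pmatrix} 2 & 3 \\ 4 & 5 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
= \begin{pmatrix} 2 \cdot 1 + 3 \cdot 0 & 2 \cdot 0 + 3 \cdot 1 \\ 4 \cdot 1 + 5 \cdot 0 & 4 \cdot 0 + 5 \cdot 1 \end{pmatrix}
= \begin{pmatrix} 2 & 3 \\ 4 & 5 \end{pmatrix}.
\end{equation*}
Multiplying in the reverse order, ${\rm \bf I \cdot A}$, yields the same result.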
Definition [inverse matrix]: Let ${\rm \bf A}$ be a square matrix. If there is a (likewise square) matrix ${\rm \bf B}$ for which
\begin{equation*}
{\rm \bf A \cdot B = B \cdot A = I}
\end{equation*}
holds, then ${\rm \bf B}$ is called the inverse matrix of ${\rm \bf A}$ and is written ${\rm \bf A}^{-1}$. The following applies to the inverse matrix:
\begin{equation*}
{\rm \bf A}^{-1} \cdot {\rm \bf A} = {\rm \bf I}.
\end{equation*}
Task
Give examples of square matrices that have no inverse.
Note:
Take $(2,2)$ matrices $\neq {\rm \bf O}$ with as many zeros as possible and try to construct an inverse.
Solution
For example, we look for an inverse $\mathbf{X}^{-1}$ of the matrix $\mathbf{X} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$. To do this, we would have to solve the following relationship:
$$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \cdot \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
$$\begin{pmatrix} 1 \cdot x_{11} + 0 \cdot x_{21} & 1 \cdot x_{12} + 0 \cdot x_{22} \\ 0 \cdot x_{11} + 0 \cdot x_{21} & 0 \cdot x_{12} + 0 \cdot x_{22} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
$$\begin{pmatrix} x_{11} & x_{12} \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$
However, this relationship has no solution, since the lower right element of the left-hand matrix differs from that of the right-hand matrix. The matrix $\mathbf{X}$ therefore has no inverse.
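For contrast, a supplementary example of a $(2,2)$ matrix that does possess an inverse:
\begin{equation*}
\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}
= \begin{pmatrix} 2 \cdot 1 + 1 \cdot (-1) & 2 \cdot (-1) + 1 \cdot 2 \\ 1 \cdot 1 + 1 \cdot (-1) & 1 \cdot (-1) + 1 \cdot 2 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
\end{equation*}
so $\begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$ is the inverse of $\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$; multiplying in the reverse order also yields ${\rm \bf I}$.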
Remark: An inverse matrix exists if and only if the rank of the $n \times n$ matrix, $\mathrm{Rg}\,{\rm \bf A}$, equals $n$, i.e. the number of linearly independent column vectors and, at the same time, the number of linearly independent row vectors is $n$. For instance, the matrix $\mathbf{X}$ from the task above has rank $1 < 2$, which is why no inverse could be constructed.

If the rows and columns of a given $m \times n$ matrix ${\rm \bf A}$ are interchanged in such a way that an $n \times m$ matrix results, the newly created $n \times m$ matrix ${\rm \bf A'}$ is called the matrix transposed to the original matrix ${\rm \bf A}$.
Definition [transposed matrix]: A transposed matrix (sometimes also called a flipped matrix) is created by interchanging the rows and columns of a matrix. For a matrix ${\rm \bf A}$, the transposed matrix is written ${\rm \bf A}^T$. The following applies to the elements of the matrix $\bf A$:
\begin{equation*}
(a_{ij})^T = (a_{ji}).
\end{equation*}
Note: By swapping the indices, every row of the original matrix $\bf A$ becomes a column of the matrix $\bf A^T$, and every column of $\bf A$ becomes a row of the matrix $\bf A^T$.

Example


Transposing a square matrix:
\begin{equation*}
\mathbf{A} = \begin{pmatrix} 1 & 2 & 7 \\ 8 & 3 & -5 \\ 9 & 4 & 6 \end{pmatrix}
\Longrightarrow
\mathbf{A}^T = \begin{pmatrix} 1 & 8 & 9 \\ 2 & 3 & 4 \\ 7 & -5 & 6 \end{pmatrix}.
\end{equation*}
In the case of square matrices, transposing corresponds to reflecting the elements across the main diagonal.
Example: Transposing a non-square matrix:
\begin{equation*}
\mathbf{A} = \begin{pmatrix} 1 & -2 \\ 0 & 3 \\ 5 & 7 \end{pmatrix}
\Longrightarrow
\mathbf{A}^T = \begin{pmatrix} 1 & 0 & 5 \\ -2 & 3 & 7 \end{pmatrix}.
\end{equation*}
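As a supplementary observation, transposing twice returns the original matrix, which can be checked with the non-square example above:
\begin{equation*}
\left(\mathbf{A}^T\right)^T = \begin{pmatrix} 1 & 0 & 5 \\ -2 & 3 & 7 \end{pmatrix}^T = \begin{pmatrix} 1 & -2 \\ 0 & 3 \\ 5 & 7 \end{pmatrix} = \mathbf{A}.
\end{equation*}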
Square matrices that are symmetric with respect to their main diagonal are called symmetric matrices. Symmetric matrices are characterized by the fact that they are equal to their transposed matrix.
Definition [symmetric matrix]
A square matrix ${\rm \bf A}$ is called symmetric if the following applies to its transposed matrix:
\begin{equation*}
{\rm \bf A}^T = {\rm \bf A}.
\end{equation*}

Example


The transposed matrix $\bf A^T$ corresponds to the original matrix $\bf A$:
\begin{equation*}
\mathbf{A} = \mathbf{A}^T = \begin{pmatrix} 1 & 5 & 7 \\ 5 & 2 & 3 \\ 7 & 3 & 4 \end{pmatrix}.
\end{equation*}
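As a supplementary check, the symmetry can also be read off elementwise, since $a_{ij} = a_{ji}$ holds for every pair of indices:
\begin{equation*}
a_{12} = a_{21} = 5, \qquad a_{13} = a_{31} = 7, \qquad a_{23} = a_{32} = 3.
\end{equation*}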