How to find the inverse of a matrix

A matrix A⁻¹ is called the inverse of a matrix A if A·A⁻¹ = E, where E is the identity matrix of order n. An inverse matrix can exist only for square matrices.

Service purpose. With this online service you can find the algebraic complements, the transposed matrix Aᵀ, the adjoint matrix, and the inverse matrix. The solution is carried out directly on the website (online) and is free of charge. The calculation results are presented in a Word report and in an Excel file (i.e. the solution can be checked). See the design example.

Instructions. To obtain a solution, set the dimension of the matrix. Then fill in the matrix A in the new dialog box.

See also Inverse matrix using the Jordan-Gauss method

Algorithm for finding the inverse matrix

  1. Find the transposed matrix Aᵀ.
  2. Find the algebraic complements: replace each element of the matrix with its algebraic complement.
  3. Compose the inverse matrix from the algebraic complements: divide each element of the resulting matrix by the determinant of the original matrix. The resulting matrix is the inverse of the original matrix.
The next algorithm for finding the inverse matrix is similar to the previous one, except for the order of some steps: first the algebraic complements are calculated, and then the adjoint matrix C is composed.
  1. Determine whether the matrix is square. If it is not, then no inverse matrix exists for it.
  2. Calculate the determinant of the matrix A. If it is nonzero, continue the solution; otherwise the inverse matrix does not exist.
  3. Find the algebraic complements.
  4. Fill in the union (reciprocal, adjoint) matrix C.
  5. Compose the inverse matrix from the algebraic complements: divide each element of the adjoint matrix C by the determinant of the original matrix. The resulting matrix is the inverse of the original matrix.
  6. Check: multiply the original matrix by the resulting matrix. The result should be the identity matrix.

Example # 1. Let's write the matrix as follows:

Algebraic complements: Δ₁,₂ = −(2·4 − (−2)·(−2)) = −4; Δ₂,₁ = −(2·4 − 5·3) = 7; Δ₂,₃ = −(−1·5 − (−2)·2) = 1; Δ₃,₂ = −(−1·(−2) − 2·3) = 4.
A⁻¹ =
0.6 −0.4 0.8
0.7 0.2 0.1
−0.1 0.4 −0.3

Another algorithm for finding the inverse matrix

Let us give another scheme for finding the inverse matrix.
  1. Find the determinant of the given square matrix A.
  2. Find the algebraic complements to all elements of the matrix A.
  3. We write the algebraic complements of row elements into columns (transposition).
  4. We divide each element of the resulting matrix by the determinant of the matrix A.
As you can see, the transposition operation can be applied either at the beginning, to the original matrix, or at the end, to the obtained algebraic complements.
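Both schemes boil down to the same computation, so they are easy to script. Here is a minimal NumPy sketch of the cofactor scheme; the function name is mine, and using numpy.linalg.det for the minors is just one convenient choice:

```python
import numpy as np

def inverse_via_cofactors(A):
    """Invert a square matrix by the cofactor (adjugate) scheme above."""
    n, m = A.shape
    assert n == m, "an inverse can exist only for a square matrix"
    det = np.linalg.det(A)
    assert not np.isclose(det, 0.0), "zero determinant: no inverse exists"
    # Matrix of algebraic complements: C[i, j] = (-1)^(i+j) * M*_ij, where
    # M*_ij is the determinant of A with row i and column j deleted.
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    # Transpose the complements and divide by the determinant.
    return C.T / det

A = np.array([[3.0, 1.0], [5.0, 2.0]])  # the 2x2 matrix used further below
print(inverse_via_cofactors(A))         # [[ 2. -1.] [-5.  3.]]
```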

A special case: The inverse of the identity matrix E is the identity matrix E.

Let there be a square matrix of the nth order

The matrix A⁻¹ is called the inverse matrix with respect to the matrix A if A·A⁻¹ = E, where E is the identity matrix of order n.

Identity matrix: a square matrix in which all the elements along the main diagonal, running from the upper left corner to the lower right corner, are ones, and the rest are zeros. For example:

An inverse matrix may exist only for square matrices, i.e. for matrices in which the number of rows equals the number of columns.

The theorem on the condition for the existence of an inverse matrix

For a matrix to have an inverse matrix, it is necessary and sufficient that it be non-degenerate.

The matrix A = (A₁, A₂, ..., Aₙ) is called non-degenerate if its column vectors are linearly independent. The number of linearly independent column vectors of a matrix is called the rank of the matrix. Therefore, we can say that for an inverse matrix to exist, it is necessary and sufficient that the rank of the matrix equal its dimension, i.e. r = n.
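This criterion is easy to test numerically. A minimal sketch, with NumPy's matrix_rank standing in for the exact rank r (my choice of tool, not the text's):

```python
import numpy as np

def has_inverse(A):
    """Existence test: the matrix must be square and its rank r must equal n."""
    n, m = A.shape
    return n == m and np.linalg.matrix_rank(A) == n

print(has_inverse(np.array([[1, 2], [3, 4]])))  # True:  rank 2 = n
print(has_inverse(np.array([[1, 2], [2, 4]])))  # False: columns are dependent
```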

Algorithm for finding the inverse matrix

  1. Write the matrix A in the table used for solving systems of equations by the Gauss method, and on the right (in place of the right-hand sides of the equations) append the matrix E.
  2. Using Jordan transformations, reduce the matrix A to a matrix consisting of unit columns; the matrix E must be transformed simultaneously.
  3. If necessary, rearrange the rows (equations) of the last table so that the identity matrix E appears under the matrix A of the original table.
  4. Write down the inverse matrix A⁻¹, which stands in the last table under the matrix E of the original table.
Example 1

For the matrix A, find the inverse matrix A⁻¹.

Solution: We write down the matrix A and append the identity matrix E to it on the right. Using Jordan transformations, we reduce the matrix A to the identity matrix E. The calculations are shown in Table 31.1.

Let's check the correctness of the calculations by multiplying the original matrix A by the inverse matrix A⁻¹.

The matrix product is the identity matrix. Therefore, the calculations are correct.
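Since Table 31.1 itself is not reproduced here, below is a hedged stand-in showing the same workflow in NumPy; the matrix is borrowed from a later example in this text, not from the table:

```python
import numpy as np

A = np.array([[1.0, 5.0, 1.0],
              [3.0, 2.0, 1.0],
              [6.0, -2.0, 1.0]])

A_inv = np.linalg.inv(A)               # library routine does the row reduction
check = A @ A_inv                      # should reproduce the identity matrix E
print(np.allclose(check, np.eye(3)))   # True -> the calculations are correct
```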

Answer:

Solving matrix equations

Matrix equations can be of the form:

AX = B, XA = B, AXB = C,

where A, B, C are the specified matrices, X is the required matrix.

Matrix equations are solved by multiplying the equation by the appropriate inverse matrices.

For example, to find the matrix X from the equation AX = B, multiply the equation by A⁻¹ on the left: A⁻¹·AX = A⁻¹·B, so X = A⁻¹·B.

Therefore, to find the solution of the equation AX = B, you need to find the inverse matrix A⁻¹ and multiply it by the matrix B on the right-hand side of the equation.

Other equations are solved similarly.
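A minimal NumPy sketch of all three cases; the right-hand sides B and C are my own illustrative choices:

```python
import numpy as np

A = np.array([[3.0, 1.0], [5.0, 2.0]])
B = np.array([[1.0, 0.0], [2.0, 1.0]])   # illustrative, nonsingular
C = np.array([[1.0, 2.0], [3.0, 4.0]])   # illustrative

A_inv = np.linalg.inv(A)
B_inv = np.linalg.inv(B)

X1 = A_inv @ B           # AX = B   ->  X = A^{-1}B   (multiply on the left)
X2 = B @ A_inv           # XA = B   ->  X = B A^{-1}  (multiply on the right)
X3 = A_inv @ C @ B_inv   # AXB = C  ->  X = A^{-1}C B^{-1}

print(np.allclose(A @ X1, B), np.allclose(X2 @ A, B), np.allclose(A @ X3 @ B, C))
```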

Example 2

Solve the equation AX = B if

Solution: The inverse matrix A⁻¹ is already known (see Example 1), so the solution is X = A⁻¹·B.

Matrix method in economic analysis

Matrix methods also find application here, along with other methods of economic analysis. These methods are based on linear and vector-matrix algebra. They are used to analyze complex and multidimensional economic phenomena. Most often, these methods are used when it is necessary to make a comparative assessment of the functioning of organizations and their structural units.

In the process of applying matrix methods of analysis, several stages can be distinguished.

At the first stage, a system of economic indicators is formed, and on its basis a matrix of initial data is compiled: a table in which the numbers of the systems (i = 1, 2, ..., n) are shown in its rows, and the numbers of the indicators (j = 1, 2, ..., m) in its columns.

At the second stage, for each column the largest of the available indicator values is identified and taken as the unit.

After that, all the values in this column are divided by this largest value, and a matrix of standardized coefficients is formed.

At the third stage, all the components of the matrix are squared. If the indicators have different significance, each indicator of the matrix is assigned a certain weighting factor k, whose value is determined by expert judgment.

At the last, fourth stage, the found rating values Rⱼ are arranged in increasing or decreasing order.
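The four stages are easy to script. The text does not spell out the rating formula, so the sketch below assumes a common convention (the rating of each row is the square root of its weighted sum of squared standardized coefficients); the data and weights are invented for illustration:

```python
import numpy as np

# Stage 1: rows = organizations (i = 1..n), columns = indicators (j = 1..m).
X = np.array([[3.0, 5.0, 2.0],
              [4.0, 4.0, 3.0],
              [2.0, 6.0, 1.0]])       # illustrative initial data
k = np.array([1.0, 0.5, 2.0])         # expert weighting factors (assumed)

# Stage 2: divide each column by its largest value -> standardized coefficients.
X_std = X / X.max(axis=0)

# Stage 3: square all entries and apply the weights k.
weighted = k * X_std ** 2

# Stage 4: ratings (assumed convention), then rank in decreasing order.
R = np.sqrt(weighted.sum(axis=1))
print(np.argsort(-R))                  # row indices, best rating first
```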

The matrix methods described above should be used, for example, in the comparative analysis of various investment projects, as well as in the evaluation of other economic indicators of organizations.

Matrix Algebra - Inverse Matrix

Inverse matrix

An inverse matrix is a matrix that, when multiplied by a given matrix both on the right and on the left, gives the identity matrix.
Let us denote the matrix inverse to the matrix A by A⁻¹; then, according to the definition, we get:

A·A⁻¹ = A⁻¹·A = E,

where E is the identity matrix.
A square matrix is called non-special (non-degenerate) if its determinant is nonzero. Otherwise it is called special (degenerate), or singular.

The following theorem holds: every nonsingular matrix has an inverse.

The operation of finding the inverse matrix is called inversion of the matrix. Let us consider the matrix inversion algorithm. Let there be given a nonsingular matrix A of order n, with Δ = det A ≠ 0.

The algebraic complement A ij of an element a ij of the n-th order matrix A is the determinant of the matrix of order (n−1) obtained by deleting the i-th row and the j-th column of the matrix A, taken with the sign (−1)^(i+j).

Let us compose the so-called adjoint matrix Ã, whose entries are the algebraic complements of the corresponding elements of the matrix A.
Note that the algebraic complements of the elements of the rows of A are placed in the corresponding columns of Ã; that is, the matrix is transposed at the same time.
Dividing all the elements of à by Δ, the value of the determinant of A, we obtain the inverse matrix as the result:

A⁻¹ = (1/Δ)·Ã.

We note a number of special properties of the inverse matrix:
1) for a given matrix A, its inverse matrix is unique;
2) if an inverse matrix exists, then the right inverse and the left inverse coincide with it;
3) a special (degenerate) square matrix has no inverse matrix.

The main properties of the inverse matrix:
1) the determinant of the inverse matrix and the determinant of the original matrix are reciprocals of each other;
2) the inverse matrix of a product of square matrices is equal to the product of the inverse matrices of the factors, taken in reverse order:

(A·B)⁻¹ = B⁻¹·A⁻¹;

3) the transposed inverse matrix is equal to the inverse of the given transposed matrix:

(A⁻¹)ᵀ = (Aᵀ)⁻¹.
EXAMPLE Calculate the inverse of the given matrix.

This topic is one of the most hated among students. Only determinants are probably worse.

The trick is that the very concept of an inverse element (and I am not talking only about matrices now) refers us to the operation of multiplication. Even in the school curriculum, multiplication is considered a complicated operation, and matrix multiplication is a separate topic altogether, to which I have devoted a whole paragraph and a video tutorial.

We won't go into the details of matrix calculations today. Just remember: how matrices are denoted, how they are multiplied, and what follows from this.

Repetition: matrix multiplication

First of all, let's agree on the notation. A matrix $A$ of size $\left[ m\times n \right]$ is simply a table of numbers in which there are exactly $m$ rows and $n$ columns:

\[A=\left[ \begin{matrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \\ \end{matrix} \right]\]

In order not to accidentally mix up rows and columns (believe me, in an exam you can confuse a 1 with a 2, never mind some row), just take a look at the picture:

Determination of indices for matrix cells

What's happening here? If we place the standard coordinate system $OXY$ in the upper left corner and direct the axes so that they cover the entire matrix, then each cell of this matrix can be uniquely associated with coordinates $\left( x;y \right)$: this will be the row number and the column number.

Why is the coordinate system located in the upper left corner? Because it is from there that we begin to read any texts. It's very easy to remember.

Why is the $x$ axis directed downwards and not to the right? Again, everything is simple: take the standard coordinate system (the $x$ axis goes to the right, the $y$ axis goes up) and rotate it so that it encloses the matrix. This is a 90-degree clockwise rotation, and we can see its result in the picture.

In general, we figured out how to determine the indices of the matrix elements. Now let's deal with multiplication.

Definition. Matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$, for which the number of columns of the first equals the number of rows of the second, are called matched.

In that order. One can put it more carefully: the matrices $A$ and $B$ form an ordered pair $\left( A;B \right)$; if they are matched in this order, it is not at all necessary that $B$ and $A$, i.e. the pair $\left( B;A \right)$, are matched as well.

Only matched matrices can be multiplied.

Definition. The product of matched matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$ is the new matrix $C=\left[ m\times k \right]$ whose elements $c_{ij}$ are calculated by the formula:

\[c_{ij}=\sum\limits_{s=1}^{n}{a_{is}\cdot b_{sj}}\]

In other words: to get the element $c_{ij}$ of the matrix $C=A\cdot B$, take the $i$-th row of the first matrix and the $j$-th column of the second matrix, multiply their elements pairwise, and then add up the results.

Yes, that's such a harsh definition. Several facts immediately follow from it:

  1. Matrix multiplication is, generally speaking, non-commutative: $A\cdot B\ne B\cdot A$;
  2. However, multiplication is associative: $\left( A\cdot B \right)\cdot C=A\cdot \left( B\cdot C \right)$;
  3. And even distributive: $\left( A+B \right)\cdot C=A\cdot C+B\cdot C$;
  4. And distributive once more: $A\cdot \left( B+C \right)=A\cdot B+A\cdot C$.

The distributivity of multiplication had to be stated separately for the left and the right sum factor precisely because of the non-commutativity of the multiplication operation.
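To make the row-by-column rule concrete, here is a straightforward sketch of the formula for $c_{ij}$ (a naive triple loop, deliberately unoptimized):

```python
import numpy as np

def matmul(A, B):
    """c_ij = sum_s a_is * b_sj; defined only for matched matrices."""
    m, n = A.shape
    n2, k = B.shape
    assert n == n2, "matrices are not matched: columns of A != rows of B"
    C = np.zeros((m, k))
    for i in range(m):           # i-th row of the first matrix
        for j in range(k):       # j-th column of the second matrix
            for s in range(n):   # pairwise products, added up
                C[i, j] += A[i, s] * B[s, j]
    return C

A = np.random.rand(2, 3)
B = np.random.rand(3, 4)
print(np.allclose(matmul(A, B), A @ B))  # True: agrees with NumPy's product
```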

If it nevertheless turns out that $A\cdot B=B\cdot A$, such matrices are called commuting (permutable).

Among all the matrices that can be multiplied, there are special ones: those that, when multiplied by any matrix $A$, again give $A$:

Definition. A matrix $E$ is called the identity matrix if $A\cdot E=A$ and $E\cdot A=A$. In the case of a square matrix $A$ we can write:

The unit matrix is ​​a frequent guest when solving matrix equations. And in general, a frequent visitor to the world of matrices. :)

And it is because of this $E$ that someone came up with all the stuff that will be described next.

What is inverse matrix

Since matrix multiplication is a rather laborious operation (you have to multiply a bunch of rows and columns), the concept of the inverse matrix is also not the most trivial one, and it requires some explanation.

Key definition

Well, it's time to learn the truth.

Definition. The matrix $B$ is called the inverse of the matrix $A$ if

\[A\cdot B=B\cdot A=E\]

The inverse matrix is denoted by $A^{-1}$ (not to be confused with a power!), so the definition can be rewritten as follows:

\[A\cdot A^{-1}=A^{-1}\cdot A=E\]

It would seem that everything is extremely simple and clear. But when analyzing such a definition, several questions immediately arise:

  1. Does an inverse matrix always exist? And if not always, how do we determine when it exists and when it does not?
  2. And who said that there is exactly one such matrix? What if for some initial matrix $A$ there is a whole crowd of inverses?
  3. What do all these inverses look like? And how, exactly, does one calculate them?

As for the calculation algorithms, we will talk about them a little later. But we will answer the remaining questions right now. Let us formulate them in the form of separate statements (lemmas).

Basic properties

Let's start with what the matrix $A$ must look like for $A^{-1}$ to exist. Now we will make sure that both of these matrices must be square, and of the same size: $\left[ n\times n \right]$.

Lemma 1. Given a matrix $A$ and its inverse $A^{-1}$. Then both of these matrices are square, of the same order $n$.

Proof. It's simple. Let the matrix $A=\left[ m\times n \right]$ and $A^{-1}=\left[ a\times b \right]$. Since the product $A\cdot A^{-1}=E$ exists by definition, the matrices $A$ and $A^{-1}$ are matched in the indicated order:

\[\left[ m\times n \right]\cdot \left[ a\times b \right]=\left[ m\times b \right]\quad \Rightarrow \quad n=a\]

This is a direct consequence of the matrix multiplication algorithm: the coefficients $ n $ and $ a $ are "transitory" and must be equal.

At the same time, the reverse product $A^{-1}\cdot A=E$ is also defined, therefore the matrices $A^{-1}$ and $A$ are matched in that order as well:

\[\left[ a\times b \right]\cdot \left[ m\times n \right]=\left[ a\times n \right]\quad \Rightarrow \quad b=m\]

Thus, without loss of generality we may assume that $A=\left[ m\times n \right]$ and $A^{-1}=\left[ n\times m \right]$. However, by definition $A\cdot A^{-1}=A^{-1}\cdot A$, so the sizes of the matrices strictly coincide:

\[\left[ m\times n \right]=\left[ n\times m \right]\quad \Rightarrow \quad m=n\]

So it turns out that all three matrices, $A$, $A^{-1}$ and $E$, are square of size $\left[ n\times n \right]$. The lemma is proved.

Well, that's already not bad. We see that only square matrices can be invertible. Now let's make sure that the inverse matrix is always unique.

Lemma 2. Given a matrix $A$ and its inverse $A^{-1}$. Then this inverse matrix is unique.

Proof. By contradiction: let the matrix $A$ have at least two inverses, $B$ and $C$. Then, by definition, the following equalities hold:

\[A\cdot B=B\cdot A=E;\qquad A\cdot C=C\cdot A=E.\]

From Lemma 1 we conclude that all four matrices, $A$, $B$, $C$ and $E$, are square of the same order $\left[ n\times n \right]$. Therefore the product $B\cdot A\cdot C$ is defined:

Since matrix multiplication is associative (but not commutative!), we can write:

\[\begin{align} & B\cdot A\cdot C=\left( B\cdot A \right)\cdot C=E\cdot C=C; \\ & B\cdot A\cdot C=B\cdot \left( A\cdot C \right)=B\cdot E=B; \\ & B\cdot A\cdot C=C=B\quad \Rightarrow \quad B=C. \\ \end{align}\]

We get the only possible outcome: the two copies of the inverse matrix are equal. The lemma is proved.

The above reasoning repeats almost word for word the proof of the uniqueness of the inverse element for all real numbers $b\ne 0$. The only essential addition is taking the dimension of the matrices into account.

However, we still do not know anything about whether any square matrix is ​​invertible. Here the determinant comes to our aid - this is a key characteristic for all square matrices.

Lemma 3. Given a matrix $A$. If its inverse $A^{-1}$ exists, then the determinant of the original matrix is nonzero:

\[\left| A \right|\ne 0\]

Proof. We already know that $A$ and $A^{-1}$ are square matrices of size $\left[ n\times n \right]$. Therefore, for each of them we can calculate the determinant: $\left| A \right|$ and $\left| A^{-1} \right|$. And the determinant of a product equals the product of the determinants:

\[\left| A\cdot B \right|=\left| A \right|\cdot \left| B \right|\quad \Rightarrow \quad \left| A\cdot A^{-1} \right|=\left| A \right|\cdot \left| A^{-1} \right|\]

But by definition $A\cdot A^{-1}=E$, and the determinant of $E$ is always equal to 1, therefore

\[\begin{align} & A\cdot A^{-1}=E; \\ & \left| A\cdot A^{-1} \right|=\left| E \right|; \\ & \left| A \right|\cdot \left| A^{-1} \right|=1. \\ \end{align}\]

The product of two numbers equals one only if each of these numbers is nonzero:

\[\left| A \right|\ne 0;\quad \left| A^{-1} \right|\ne 0.\]

So it turns out that $\left| A \right|\ne 0$. The lemma is proved.

In fact, this requirement is quite logical. Now we will analyze the algorithm for finding the inverse matrix, and it will become quite clear why, with a zero determinant, no inverse matrix can exist in principle.

But first, let's formulate an "auxiliary" definition:

Definition. A degenerate matrix is a square matrix of size $\left[ n\times n \right]$ whose determinant is zero.

Thus, we can assert that every invertible matrix is ​​non-degenerate.
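In code, the degeneracy test is a one-liner; the tolerance is my own choice, since floating-point determinants are rarely exactly zero:

```python
import numpy as np

def is_degenerate(A, tol=1e-12):
    """Degenerate (singular) = square matrix whose determinant is zero."""
    return bool(np.isclose(np.linalg.det(A), 0.0, atol=tol))

print(is_degenerate(np.array([[1.0, 2.0], [2.0, 4.0]])))  # True:  det = 0
print(is_degenerate(np.array([[3.0, 1.0], [5.0, 2.0]])))  # False: det = 1
```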

How to find the inverse of a matrix

Now we will consider a universal algorithm for finding inverse matrices. In general, there are two generally accepted algorithms, and we will also consider the second one today.

The one that will be discussed now is very efficient for matrices of size $\left[ 2\times 2 \right]$ and, partially, of size $\left[ 3\times 3 \right]$. But starting from size $\left[ 4\times 4 \right]$ it is better not to use it. Why? You will soon see for yourself.

Algebraic complements

Get ready. Now there will be pain. No, don't worry: a beautiful nurse in a skirt and lace stockings will not come to give you an injection in the buttock. Everything is much more prosaic: algebraic complements and Her Majesty the "union matrix" are coming to you.

Let's start with the main thing. Let there be a square matrix of size $A=\left[ n\times n \right]$ whose elements are named $a_{ij}$. Then for each such element an algebraic complement can be defined:

Definition. The algebraic complement $A_{ij}$ to the element $a_{ij}$ located in the $i$-th row and $j$-th column of the matrix $A=\left[ n\times n \right]$ is a construction of the form

\[A_{ij}=\left( -1 \right)^{i+j}\cdot M_{ij}^{*}\]

where $M_{ij}^{*}$ is the determinant of the matrix obtained from the original $A$ by deleting that same $i$-th row and $j$-th column.

Once again: the algebraic complement to the matrix element with coordinates $\left( i;j \right)$ is denoted $A_{ij}$ and is calculated according to the scheme:

  1. First, delete the $i$-th row and the $j$-th column from the original matrix. We get a new square matrix, and we denote its determinant by $M_{ij}^{*}$.
  2. Then multiply this determinant by $\left( -1 \right)^{i+j}$. At first this expression may look mind-boggling, but in fact we are simply working out the sign in front of $M_{ij}^{*}$.
  3. Count it up and get a specific number. That is, the algebraic complement is precisely a number, not some new matrix or the like.

The determinant $M_{ij}^{*}$ itself is called the complementary minor to the element $a_{ij}$. And in this sense the above definition of the algebraic complement is a special case of the more complex definition considered in the lesson about the determinant.

Important note. Actually, in "adult" mathematics, algebraic complements are defined as follows:

  1. Take $k$ rows and $k$ columns of a square matrix. At their intersection we get a matrix of size $\left[ k\times k \right]$; its determinant is called a minor of order $k$ and is denoted $M_{k}$.
  2. Then delete these "chosen" $k$ rows and $k$ columns. Once again we get a square matrix; its determinant is called the complementary minor and is denoted $M_{k}^{*}$.
  3. Multiply $M_{k}^{*}$ by $\left( -1 \right)^{t}$, where $t$ is (now attention!) the sum of the numbers of all the selected rows and columns. This is the algebraic complement.

Take a look at the third step: there is actually a sum of $2k$ terms there! Another thing is that for $k=1$ we get only 2 terms; they are the familiar $i+j$, the "coordinates" of the element $a_{ij}$ for which we are looking for the algebraic complement.

Thus, today we use a slightly simplified definition. But as we will see later, it will be more than enough. The next thing is much more important:

Definition. The adjoint matrix $S$ to the square matrix $A=\left[ n\times n \right]$ is a new matrix of size $\left[ n\times n \right]$, which is obtained from $A$ by replacing $a_{ij}$ with the algebraic complements $A_{ij}$:

\[A=\left[ \begin{matrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \\ \end{matrix} \right]\quad \Rightarrow \quad S=\left[ \begin{matrix} A_{11} & A_{12} & \ldots & A_{1n} \\ A_{21} & A_{22} & \ldots & A_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ A_{n1} & A_{n2} & \ldots & A_{nn} \\ \end{matrix} \right]\]

The first thought that arises at the moment of realizing this definition is "this is how much you have to count!" Relax: you will have to count, but not so much. :)

Well, this is all very nice, but why is it necessary? Here's why.

The main theorem

Let's go back a little. Remember, Lemma 3 stated that an invertible matrix $A$ is always non-degenerate (that is, its determinant is nonzero: $\left| A \right|\ne 0$).

So, the converse is also true: if the matrix $A$ is not degenerate, then it is always invertible. And there is even a scheme for finding $A^{-1}$. Check it out:

The inverse matrix theorem. Let a square matrix $A=\left[ n\times n \right]$ be given, and let its determinant be nonzero: $\left| A \right|\ne 0$. Then the inverse matrix $A^{-1}$ exists and is calculated by the formula:

\[A^{-1}=\frac{1}{\left| A \right|}\cdot S^{T}\]

And now - everything is the same, but in legible handwriting. To find the inverse of a matrix, you need:

  1. Calculate the determinant $\left| A \right|$ and make sure it is nonzero.
  2. Construct the union matrix $S$, i.e. count 100500 algebraic complements $A_{ij}$ and put them in place of the $a_{ij}$.
  3. Transpose this matrix $S$, and then multiply it by the number $q={1}/{\left| A \right|}$.

And that's it! The inverse matrix $A^{-1}$ is found. Let's take a look at examples.

A task. Find the inverse of the matrix:

\[\left[ \begin{matrix} 3 & 1 \\ 5 & 2 \\ \end{matrix} \right]\]

Solution. Let's check invertibility by calculating the determinant:

\[\left| A \right|=\left| \begin{matrix} 3 & 1 \\ 5 & 2 \\ \end{matrix} \right|=3\cdot 2-1\cdot 5=6-5=1\]

The determinant is nonzero. Hence, the matrix is ​​invertible. Let's compose the union matrix:

Let's count the algebraic complements:

\[\begin{align} & A_{11}=\left( -1 \right)^{1+1}\cdot \left| 2 \right|=2; \\ & A_{12}=\left( -1 \right)^{1+2}\cdot \left| 5 \right|=-5; \\ & A_{21}=\left( -1 \right)^{2+1}\cdot \left| 1 \right|=-1; \\ & A_{22}=\left( -1 \right)^{2+2}\cdot \left| 3 \right|=3. \\ \end{align}\]

Please note: the determinants $\left| 2 \right|$, $\left| 5 \right|$, $\left| 1 \right|$ and $\left| 3 \right|$ are determinants of matrices of size $\left[ 1\times 1 \right]$, not absolute values. That is, if negative numbers had appeared in these determinants, we would not drop the "minus".

In total, our union matrix looks like this:

\[A^{-1}=\frac{1}{\left| A \right|}\cdot S^{T}=\frac{1}{1}\cdot \left[ \begin{matrix} 2 & -5 \\ -1 & 3 \\ \end{matrix} \right]^{T}=\left[ \begin{matrix} 2 & -1 \\ -5 & 3 \\ \end{matrix} \right]\]

That's it. The problem has been solved.

Answer. $\left[ \begin{matrix} 2 & -1 \\ -5 & 3 \\ \end{matrix} \right]$
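If you want to replay this example symbolically, SymPy's cofactor machinery matches the scheme above (a sketch, assuming SymPy is available):

```python
from sympy import Matrix

A = Matrix([[3, 1], [5, 2]])
S = Matrix(2, 2, lambda i, j: A.cofactor(i, j))  # the union matrix of complements
A_inv = S.T / A.det()                            # transpose, divide by the determinant
print(A_inv)      # Matrix([[2, -1], [-5, 3]])
print(A * A_inv)  # Matrix([[1, 0], [0, 1]]) -- the check passes
```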

A task. Find the inverse of the matrix:

\[\left[ \begin{matrix} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\ \end{matrix} \right]\]

Solution. Again we compute the determinant:

\[\left| \begin{matrix} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\ \end{matrix} \right|=\left( 1\cdot 2\cdot 1+\left( -1 \right)\cdot \left( -1 \right)\cdot 1+2\cdot 0\cdot 0 \right)-\left( 2\cdot 2\cdot 1+\left( -1 \right)\cdot 0\cdot 1+1\cdot \left( -1 \right)\cdot 0 \right)=\left( 2+1+0 \right)-\left( 4+0+0 \right)=-1\ne 0.\]

The determinant is nonzero, so the matrix is invertible. But now comes the toughest part: we have to count as many as 9 (nine!) algebraic complements, and each of them contains a $\left[ 2\times 2 \right]$ determinant. Off we go:

\[\begin{align} & A_{11}=\left( -1 \right)^{1+1}\cdot \left| \begin{matrix} 2 & -1 \\ 0 & 1 \\ \end{matrix} \right|=2; \\ & A_{12}=\left( -1 \right)^{1+2}\cdot \left| \begin{matrix} 0 & -1 \\ 1 & 1 \\ \end{matrix} \right|=-1; \\ & A_{13}=\left( -1 \right)^{1+3}\cdot \left| \begin{matrix} 0 & 2 \\ 1 & 0 \\ \end{matrix} \right|=-2; \\ & \ldots \\ & A_{33}=\left( -1 \right)^{3+3}\cdot \left| \begin{matrix} 1 & -1 \\ 0 & 2 \\ \end{matrix} \right|=2. \\ \end{align}\]

In short, the union matrix will look like this:

\[S=\left[ \begin{matrix} 2 & -1 & -2 \\ 1 & -1 & -1 \\ -3 & 1 & 2 \\ \end{matrix} \right]\]

Therefore, the inverse matrix will be:

\[A^{-1}=\frac{1}{-1}\cdot S^{T}=\frac{1}{-1}\cdot \left[ \begin{matrix} 2 & 1 & -3 \\ -1 & -1 & 1 \\ -2 & -1 & 2 \\ \end{matrix} \right]=\left[ \begin{matrix} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\ \end{matrix} \right]\]

Well, that's all. Here is the answer.

Answer. $\left[ \begin{matrix} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\ \end{matrix} \right]$

As you can see, at the end of each example, we ran a check. In this regard, an important note:

Don't be lazy to check. Multiply the original matrix by the found inverse - you should get $ E $.

It is much easier and faster to perform this check than to look for an error in further calculations, when, for example, you are solving a matrix equation.

Alternative way

As I said, the inverse matrix theorem works great for sizes $\left[ 2\times 2 \right]$ and $\left[ 3\times 3 \right]$ (in the latter case it is already not so "great"), but for large matrices real sadness begins.

But do not worry: there is an alternative algorithm with which you can calmly find the inverse even of a $\left[ 10\times 10 \right]$ matrix. But, as is often the case, considering this algorithm requires a little theoretical background.

Elementary transformations

Among the various transformations of the matrix, there are several special ones - they are called elementary. There are exactly three such transformations:

  1. Multiplication. You can take the $i$-th row (column) and multiply it by any number $k\ne 0$;
  2. Addition. Add to the $i$-th row (column) any other $j$-th row (column) multiplied by any number $k\ne 0$ (you can, of course, take $k=0$, but what's the point? Nothing would change);
  3. Permutation. Take the $i$-th and $j$-th rows (columns) and swap them.

Why these transformations are called elementary (for large matrices they do not look so elementary) and why there are only three of them - these questions are beyond the scope of today's lesson. Therefore, we will not go into details.

Something else is important: we will have to perform all these manipulations on the adjoined matrix. Yes, yes: you heard right. Now there will be one more definition, the last one in today's lesson.

Adjoined matrix

Surely at school you solved systems of equations by the addition method. Well, you know: subtract one row from another, multiply some row by a number, and so on.

So: now everything will be the same, but already "in an adult way." Ready?

Definition. Let the matrix $A=\left[ n\times n \right]$ and the identity matrix $E$ of the same size $n$ be given. Then the adjoined matrix $\left[ A\left| E \right. \right]$ is a new matrix of size $\left[ n\times 2n \right]$ that looks like this:

\[\left[ A\left| E \right. \right]=\left[ \begin{array}{cccc|cccc} a_{11} & a_{12} & \ldots & a_{1n} & 1 & 0 & \ldots & 0 \\ a_{21} & a_{22} & \ldots & a_{2n} & 0 & 1 & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} & 0 & 0 & \ldots & 1 \\ \end{array} \right]\]

In short, we take the matrix $A$, append the identity matrix $E$ of the required size to it on the right, and separate them with a vertical bar for beauty; there's the adjoined matrix. :)

What's the catch? Here's what:

Theorem. Let the matrix $A$ be invertible. Consider the adjoined matrix $\left[ A\left| E \right. \right]$. If, using elementary row transformations, we bring it to the form $\left[ E\left| B \right. \right]$, i.e. by multiplying, subtracting and rearranging rows obtain the matrix $E$ from $A$, then the matrix $B$ that appears on the right is the inverse of $A$:

\[\left[ A\left| E \right. \right]\to \left[ E\left| B \right. \right]\quad \Rightarrow \quad B=A^{-1}\]

It's that simple! In short, the algorithm for finding the inverse matrix looks like this:

  1. Write the adjoined matrix $\left[ A\left| E \right. \right]$;
  2. Perform elementary row transformations until $E$ appears in place of $A$;
  3. Of course, something will also appear in the right half: some matrix $B$. This will be the inverse (see the sketch right after this list);
  4. PROFIT! :)
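Before the worked examples, here is a minimal sketch of steps 1-3. The partial pivoting (always swapping in the largest available pivot) is my addition for numerical safety; the hand computations below pick their pivots ad hoc instead:

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Reduce [A | E] to [E | B] by elementary row operations; then B = A^{-1}."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])    # the adjoined matrix [A | E]
    for col in range(n):
        # Permutation: bring the largest pivot into place (my addition).
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("degenerate matrix: the inverse does not exist")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                      # multiplication by 1/pivot
        for row in range(n):                       # addition: zero out the column
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                                # the right half is A^{-1}

A = np.array([[1.0, 5.0, 1.0], [3.0, 2.0, 1.0], [6.0, -2.0, 1.0]])
print(inverse_gauss_jordan(A))  # [[4, -7, 3], [3, -5, 2], [-18, 32, -13]]
```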

Of course, this is much easier said than done. So let's look at a couple of examples: for sizes $ \ left [3 \ times 3 \ right] $ and $ \ left [4 \ times 4 \ right] $.

A task. Find the inverse of the matrix:

\[\left[ \begin{matrix} 1 & 5 & 1 \\ 3 & 2 & 1 \\ 6 & -2 & 1 \\ \end{matrix} \right]\]

Solution. We compose the adjoined matrix:

\[\left[ \begin{array}{ccc|ccc} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \\ \end{array} \right]\]

Since the last column of the original matrix is filled with ones, subtract the first row from the rest:

\[\left[ \begin{array}{ccc|ccc} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \\ \end{array} \right]\begin{matrix} \downarrow \\ -1 \\ -1 \\ \end{matrix}\to \left[ \begin{array}{ccc|ccc} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\ \end{array} \right]\]

There are no ones left, except in the first row. But we do not touch it, otherwise the freshly removed ones would start to "multiply" in the third column.

But we can subtract the second line twice from the last - we get one in the lower left corner:

\[\left[ \begin{array}{ccc|ccc} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\ \end{array} \right]\begin{matrix} \ \\ \downarrow \\ -2 \\ \end{matrix}\to \left[ \begin{array}{ccc|ccc} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right]\]

Now we can subtract the last line from the first and twice from the second - this way we will "zero" the first column:

\[\left[ \begin{array}{ccc|ccc} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right]\begin{matrix} -1 \\ -2 \\ \uparrow \\ \end{matrix}\to \left[ \begin{array}{ccc|ccc} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right]\]

Multiply the second row by −1, then subtract it 6 times from the first and add it 1 time to the last:

\[\left[ \begin{array}{ccc|ccc} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right]\begin{matrix} \ \\ \left| \cdot \left( -1 \right) \right. \\ \ \\ \end{matrix}\to \left[ \begin{array}{ccc|ccc} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right]\begin{matrix} -6 \\ \updownarrow \\ +1 \\ \end{matrix}\to \left[ \begin{array}{ccc|ccc} 0 & 0 & 1 & -18 & 32 & -13 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & 0 & 0 & 4 & -7 & 3 \\ \end{array} \right]\]

All that remains is to swap lines 1 and 3:

\[\left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & 4 & -7 & 3 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 0 & 0 & 1 & -18 & 32 & -13 \\ \end{array} \right]\]

Ready! On the right is the desired inverse matrix.

Answer. $\left[ \begin{matrix} 4 & -7 & 3 \\ 3 & -5 & 2 \\ -18 & 32 & -13 \\ \end{matrix} \right]$

A task. Find the inverse of the matrix:

\[\left[ \begin{matrix} 1 & 4 & 2 & 3 \\ 1 & -2 & 1 & -2 \\ 1 & -1 & 1 & 1 \\ 0 & -10 & -2 & -5 \\ \end{matrix} \right]\]

Solution. Again, we compose the adjoined matrix:

\[\left[ \begin{array}{cccc|cccc} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\ \end{array} \right]\]

Let's shed a quiet tear over how much we now have to count... and start counting. First, we "zero out" the first column by subtracting row 1 from rows 2 and 3:

\[\left[ \begin{array}{cccc|cccc} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\ \end{array} \right]\begin{matrix} \downarrow \\ -1 \\ -1 \\ \ \\ \end{matrix}\to \left[ \begin{array}{cccc|cccc} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\ \end{array} \right]\]

We see too many "minuses" in rows 2-4. Multiply all three rows by −1, and then "burn out" the third column by subtracting row 3 from the rest:

\[\left[ \begin{array}{cccc|cccc} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\ \end{array} \right]\begin{matrix} \ \\ \left| \cdot \left( -1 \right) \right. \\ \left| \cdot \left( -1 \right) \right. \\ \left| \cdot \left( -1 \right) \right. \\ \end{matrix}\to \left[ \begin{array}{cccc|cccc} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & 6 & 1 & 5 & 1 & -1 & 0 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 10 & 2 & 5 & 0 & 0 & 0 & -1 \\ \end{array} \right]\begin{matrix} -2 \\ -1 \\ \updownarrow \\ -2 \\ \end{matrix}\to \left[ \begin{array}{cccc|cccc} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right]\]

Now is the time to "fry" the last column of the original matrix: subtract row 4 from the rest:

\[\left[ \begin{array}{cccc|cccc} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right]\begin{matrix} +1 \\ -3 \\ -2 \\ \uparrow \\ \end{matrix}\to \left[ \begin{array}{cccc|cccc} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right]\]

The final chord: "burn out" the second column by adding row 2 six times to row 1 and subtracting it five times from row 3:

\[\left[ \begin{array}{cccc|cccc} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right]\begin{matrix} +6 \\ \updownarrow \\ -5 \\ \ \\ \end{matrix}\to \left[ \begin{array}{cccc|cccc} 1 & 0 & 0 & 0 & 33 & -6 & -26 & 17 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 0 & 1 & 0 & -25 & 5 & 20 & -13 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right]\]

And again on the left is the identity matrix, which means the inverse is on the right. :)

Answer. $\left[ \begin{matrix} 33 & -6 & -26 & 17 \\ 6 & -1 & -5 & 3 \\ -25 & 5 & 20 & -13 \\ -2 & 0 & 2 & -1 \\ \end{matrix} \right]$

That's it. Do the check yourself; I've earned a break. :)

For any non-degenerate matrix A there exists a unique matrix A⁻¹ such that

A·A⁻¹ = A⁻¹·A = E,

where E is the identity matrix of the same order as A. The matrix A⁻¹ is called the inverse of the matrix A.

In case someone forgot: in the identity matrix, apart from the main diagonal filled with ones, all other positions are filled with zeros. An example of an identity matrix:

Finding the inverse matrix by the adjoint matrix method

The inverse matrix is defined by the formula

A⁻¹ = (1/det A)·(A*)ᵀ,

where A* is the matrix composed of the algebraic complements A ij of the elements a ij.

That is, to calculate the inverse matrix you need to compute the determinant of the given matrix, then find the algebraic complements of all its elements and compose a new matrix A* from them. Next, you need to transpose this matrix and divide each element of the new matrix by the determinant of the original matrix.

Let's look at a few examples.

Find A⁻¹ for the matrix

Solution. Let us find A⁻¹ by the adjoint matrix method. We have det A = 2. Let us find the algebraic complements of the elements of the matrix A. In this case the algebraic complements of the matrix elements are the corresponding elements of the matrix itself, taken with a sign in accordance with the formula

We have A 11 = 3, A 12 = −4, A 21 = −1, A 22 = 2. We form the adjoint matrix

We transpose the matrix A*:

We find the inverse matrix by the formula:

We get:

Find A⁻¹ using the adjoint matrix method if

Solution. First of all, we calculate the determinant of the given matrix to make sure that the inverse matrix exists. We have

Here we added to the elements of the second row the elements of the third row multiplied beforehand by (−1), and then expanded the determinant along the second row. Since the determinant of the given matrix is nonzero, the inverse matrix exists. To construct the adjoint matrix we find the algebraic complements of the elements of this matrix. We have

According to the formula

we transpose the matrix A*:

Then by the formula

Finding the inverse matrix by the method of elementary transformations

In addition to the method of finding the inverse matrix that follows from the formula (the adjoint matrix method), there is a method of finding the inverse matrix called the method of elementary transformations.

Elementary matrix transformations

The following transformations are called elementary matrix transformations:

1) permutation of rows (columns);

2) multiplying a row (column) by a number other than zero;

3) adding to the elements of a row (column) the corresponding elements of another row (column), previously multiplied by some number.

To find the matrix A⁻¹, we construct the rectangular matrix B = (A|E) of order (n; 2n), appending the identity matrix E to the matrix A on the right, across a separating bar:

Let's look at an example.

Using the method of elementary transformations, find A⁻¹ if

Solution. Let us form the matrix B:

Let us denote the rows of the matrix B by α₁, α₂, α₃. Let us perform the following transformations on the rows of the matrix B.
