The structure of the general solution of a SLAE. Homogeneous systems of linear equations


Solving systems of linear algebraic equations (SLAEs) is undoubtedly one of the most important topics of a linear algebra course. A huge number of problems from all branches of mathematics reduce to solving systems of linear equations, which is why this article was written. Its material is selected and structured so that with its help you can

  • choose the optimal method for solving your system of linear algebraic equations,
  • study the theory of the chosen method,
  • solve your system of linear equations, having considered in detail the solutions of typical examples and problems.

A brief description of the article's material.

First, we give all the necessary definitions, concepts, and introduce some notation.

Next, we consider methods for solving systems of linear algebraic equations in which the number of equations equals the number of unknown variables and which have a unique solution. First we focus on Cramer's method, second we show the matrix method for solving such systems of equations, and third we analyze the Gauss method (the method of successive elimination of unknown variables). To consolidate the theory, we solve several SLAEs in various ways.

After that, we turn to solving systems of linear algebraic equations of general form, in which the number of equations does not coincide with the number of unknown variables or the main matrix of the system is singular. We formulate the Kronecker-Capelli theorem, which allows us to establish the consistency of a SLAE, and analyze the solution of systems (when they are consistent) using the concept of the basis minor of a matrix. We also consider the Gauss method and describe the solutions of the examples in detail.

We then dwell on the structure of the general solution of homogeneous and inhomogeneous systems of linear algebraic equations, introduce the concept of a fundamental system of solutions, and show how the general solution of a SLAE is written using the vectors of the fundamental system of solutions. For better understanding, we look at a few examples.

In conclusion, we consider systems of equations that are reduced to linear ones, as well as various problems, in the solution of which SLAEs arise.


Definitions, concepts, and notation.

We will consider systems of p linear algebraic equations with n unknown variables (p may equal n) of the form

a_{11} x_1 + a_{12} x_2 + … + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + … + a_{2n} x_n = b_2
…
a_{p1} x_1 + a_{p2} x_2 + … + a_{pn} x_n = b_p

Here x_1, x_2, …, x_n are the unknown variables, a_{ij} are the coefficients (some real or complex numbers), and b_1, b_2, …, b_p are the constant terms (also real or complex numbers).

This form of writing a SLAE is called the coordinate form.

In matrix form this system of equations is written as A·X = B,
where A = (a_{ij}) is the main matrix of the system, X is the column matrix of unknown variables, and B is the column matrix of constant terms.

If we append the column of constant terms to the matrix A as its (n + 1)-th column, we obtain the so-called augmented matrix of the system of linear equations. The augmented matrix is usually denoted by the letter T, and the column of constant terms is separated from the remaining columns by a vertical line, that is, T = (A | B).

A solution of a system of linear algebraic equations is a set of values of the unknown variables that turns every equation of the system into an identity. The matrix equation A·X = B also turns into an identity for these values of the unknown variables.

If a system of equations has at least one solution, it is called consistent.

If a system of equations has no solutions, it is called inconsistent.

If a SLAE has a unique solution, it is called definite; if it has more than one solution, it is called indefinite.

If the constant terms of all equations of the system are equal to zero, the system is called homogeneous; otherwise it is called inhomogeneous.

Solving elementary systems of linear algebraic equations.

If the number of equations in the system equals the number of unknown variables and the determinant of its main matrix is not zero, we will call such a SLAE elementary. Such systems of equations have a unique solution, and in the case of a homogeneous system this unique solution is the zero one.

We began studying such SLAEs in high school. When solving them, we would take one equation, express one unknown variable in terms of the others and substitute it into the remaining equations, then take the next equation, express the next unknown variable and substitute it into the other equations, and so on. Or we would use the addition method, that is, add two or more equations to eliminate some unknown variables. We will not dwell on these methods in detail, since they are essentially modifications of the Gauss method.

The main methods for solving elementary systems of linear equations are Cramer's method, the matrix method, and the Gauss method. Let us examine each of them.

Solving systems of linear equations by Cramer's method.

Suppose we need to solve a system of linear algebraic equations

in which the number of equations equals the number of unknown variables and the determinant of the main matrix of the system is different from zero, that is, det A ≠ 0.

Let Δ be the determinant of the main matrix A of the system, and let Δ_1, Δ_2, …, Δ_n be the determinants of the matrices obtained from A by replacing its 1st, 2nd, …, n-th column, respectively, with the column of constant terms:

With this notation, the unknown variables are calculated by the formulas of Cramer's method as x_i = Δ_i / Δ, i = 1, 2, …, n. This is how the solution of a system of linear algebraic equations is found by Cramer's method.
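To illustrate these formulas, here is a minimal sketch of Cramer's rule in Python with numpy; the 3-by-3 system used below is an arbitrary example, not one from this article:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A*x = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by the constant terms b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("zero determinant: Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                     # replace the i-th column with b
        x[i] = np.linalg.det(Ai) / d
    return x

# An arbitrary 3x3 system whose exact solution is x = (1, 2, 3)
A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
b = [4, 10, 8]
print(cramer_solve(A, b))                # → [1. 2. 3.]
```

Note that each unknown requires one extra determinant, which is why the method becomes laborious for larger systems.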

Example.

Solve the system of linear equations by Cramer's method.

Solution.

Write out the main matrix of the system and calculate its determinant (if necessary, see the article on calculating the determinant of a matrix):

Since the determinant of the main matrix of the system is nonzero, the system has a unique solution that can be found by Cramer's method.

Compose and calculate the necessary determinants (Δ_1 is obtained by replacing the first column of matrix A with the column of constant terms, Δ_2 by replacing the second column, and Δ_3 by replacing the third):

Find the unknown variables using the formulas x_i = Δ_i / Δ:

Answer:

The main disadvantage of Cramer's method (if it can be called a disadvantage) is the complexity of calculating the determinants when the number of system equations is more than three.

Solving systems of linear algebraic equations by the matrix method (using the inverse matrix).

Let the system of linear algebraic equations be given in matrix form A·X = B, where the matrix A has dimension n by n and its determinant is nonzero.

Since det A ≠ 0, the matrix A is invertible, that is, the inverse matrix A⁻¹ exists. If we multiply both sides of the equality A·X = B by A⁻¹ on the left, we obtain the formula X = A⁻¹·B for finding the column matrix of unknown variables. This is how the solution of a system of linear algebraic equations is obtained by the matrix method.
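As an illustration of the formula X = A⁻¹·B, here is a minimal numpy sketch; the 2-by-2 system is an arbitrary example, not one from the article:

```python
import numpy as np

# The system in matrix form A*X = B (an arbitrary 2x2 example):
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
B = np.array([1.0, 2.0])

# det(A) = 2*3 - 1*5 = 1 != 0, so the inverse exists and X = A^{-1} * B
X = np.linalg.inv(A) @ B
print(X)                                  # → [ 1. -1.]
```

In practice `np.linalg.solve(A, B)` is preferred over forming the inverse explicitly, but the code above follows the matrix method as described.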

Example.

Solve the system of linear equations by the matrix method.

Solution.

Let's rewrite the system of equations in matrix form:

Since the determinant of the main matrix is nonzero, the SLAE can be solved by the matrix method. Using the inverse matrix, the solution of this system can be found as X = A⁻¹·B.

Let us build the inverse matrix A⁻¹ using the matrix of cofactors (algebraic complements) of the elements of matrix A (if necessary, see the article on finding the inverse matrix):

It remains to calculate X, the column matrix of unknown variables, by multiplying the inverse matrix A⁻¹ by the column matrix of constant terms B (if necessary, see the article on matrix multiplication):

Answer:

or in another notation x_1 = 4, x_2 = 0, x_3 = -1.

The main problem in finding solutions to systems of linear algebraic equations by the matrix method is the complexity of finding the inverse matrix, especially for square matrices of order higher than the third.

Solving systems of linear equations by the Gauss method.

Suppose we need to find a solution to a system of n linear equations with n unknown variables
the determinant of the main matrix of which is different from zero.

The essence of the Gauss method is the successive elimination of unknown variables: first x_1 is eliminated from all equations of the system starting from the second, then x_2 is eliminated from all equations starting from the third, and so on, until only the unknown variable x_n remains in the last equation. This process of transforming the equations of the system to successively eliminate the unknown variables is called the forward course of the Gauss method. After the forward course is completed, x_n is found from the last equation, then x_{n-1} is calculated from the penultimate equation using this value, and so on, until x_1 is found from the first equation. The process of calculating the unknown variables while moving from the last equation of the system to the first is called the backward course of the Gauss method.

Let us briefly describe the algorithm for eliminating unknown variables.

We will assume that a_{11} ≠ 0, since we can always achieve this by rearranging the equations of the system. We eliminate the unknown variable x_1 from all equations of the system starting from the second. To do this, we add to the second equation the first multiplied by -a_{21}/a_{11}, to the third equation the first multiplied by -a_{31}/a_{11}, and so on, to the n-th equation the first multiplied by -a_{n1}/a_{11}. After these transformations the system of equations takes the form

where a_{ij}^{(1)} = a_{ij} - (a_{i1}/a_{11}) a_{1j} and b_i^{(1)} = b_i - (a_{i1}/a_{11}) b_1.

We would come to the same result if we expressed x 1 in terms of other unknown variables in the first equation of the system and substituted the resulting expression into all other equations. Thus, the variable x 1 is excluded from all equations, starting from the second.

Next, we act similarly, but only with a part of the resulting system, which is marked in the figure

To do this, we add to the third equation the second multiplied by -a_{32}^{(1)}/a_{22}^{(1)}, to the fourth equation the second multiplied by -a_{42}^{(1)}/a_{22}^{(1)}, and so on, to the n-th equation the second multiplied by -a_{n2}^{(1)}/a_{22}^{(1)}. After these transformations the system of equations takes the form

where a_{ij}^{(2)} = a_{ij}^{(1)} - (a_{i2}^{(1)}/a_{22}^{(1)}) a_{2j}^{(1)} and b_i^{(2)} = b_i^{(1)} - (a_{i2}^{(1)}/a_{22}^{(1)}) b_2^{(1)}. Thus, the variable x_2 is eliminated from all equations starting from the third.

Next, we proceed to the elimination of the unknown x 3, while acting similarly with the part of the system marked in the figure

We continue the forward course of the Gauss method in this way until the system takes a triangular form.

From this moment we begin the backward course of the Gauss method: we calculate x_n from the last equation as x_n = b_n^{(n-1)} / a_{nn}^{(n-1)}, then using the obtained value x_n we find x_{n-1} from the penultimate equation, and so on, until we find x_1 from the first equation.
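The forward and backward courses just described can be sketched in Python with numpy. This is an illustrative implementation (partial pivoting is added for numerical stability; the test system is an arbitrary example, not one from the article):

```python
import numpy as np

def gauss_solve(A, b):
    """Gauss method for A*x = b with a nonsingular matrix A:
    forward elimination to triangular form, then back substitution."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    # Forward course: eliminate x_k from equations k+1 .. n-1
    for k in range(n - 1):
        # partial pivoting: bring the row with the largest |a_ik| to row k
        p = k + int(np.argmax(np.abs(A[k:, k])))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Backward course: back substitution from the last equation upward
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = [[1, 2, 1], [2, 1, 1], [1, 1, 2]]
b = [8, 7, 9]
print(gauss_solve(A, b))   # prints the solution [1. 2. 3.]
```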

Example.

Solve the system of linear equations by the Gauss method.

Solution.

Let us eliminate the unknown variable x_1 from the second and third equations of the system. To do this, we add to both sides of the second and third equations the corresponding sides of the first equation, multiplied by suitable factors:

Now we eliminate x_2 from the third equation by adding to its left and right sides the left and right sides of the second equation, multiplied by a suitable factor:

This completes the forward course of the Gauss method; we begin the backward course.

From the last equation of the resulting system of equations, we find x 3:

From the second equation we find x_2.

From the first equation we find the remaining unknown variable x_1, and this completes the backward course of the Gauss method.

Answer:

x_1 = 4, x_2 = 0, x_3 = -1.

Solving systems of linear algebraic equations of general form.

In the general case, the number of equations of the system p does not coincide with the number of unknown variables n:

Such SLAEs may have no solutions, a unique solution, or infinitely many solutions. This statement also applies to systems of equations whose main matrix is square and singular.

Kronecker-Capelli theorem.

Before finding a solution to a system of linear equations, it is necessary to establish its consistency. The answer to the question of when a SLAE is consistent and when it is inconsistent is given by the Kronecker-Capelli theorem:
for a system of p equations with n unknowns (p may equal n) to be consistent, it is necessary and sufficient that the rank of the main matrix of the system equal the rank of the augmented matrix, that is, Rank(A) = Rank(T).

Let us consider, as an example, the application of the Kronecker-Capelli theorem to determining the consistency of a system of linear equations.

Example.

Find out if the system of linear equations has solutions.

Solution.

Let us use the method of bordering minors. There is a second-order minor of the main matrix different from zero. Let us go over the third-order minors bordering it:

Since all bordering third-order minors are equal to zero, the rank of the main matrix is two.

In turn, the rank of the augmented matrix equals three, since it has a third-order minor different from zero.

Thus, Rank(A) ≠ Rank(T), and therefore, by the Kronecker-Capelli theorem, the original system of linear equations is inconsistent.

Answer:

The system has no solutions.

So, we have learned to establish the inconsistency of a system using the Kronecker-Capelli theorem.

But how do we find a solution of a SLAE once its consistency has been established?

To do this, we need the concept of the basis minor of a matrix and the theorem on the rank of a matrix.

A nonzero minor of the highest order of a matrix A is called a basis minor.

It follows from this definition that the order of a basis minor equals the rank of the matrix. A nonzero matrix A may have several basis minors, but it always has at least one.

For example, consider the matrix .

All third-order minors of this matrix are equal to zero, since the elements of the third row of this matrix are the sum of the corresponding elements of the first and second rows.

The following second-order minors are basis minors, since they are nonzero:

Minors equal to zero are not basis minors.

Matrix rank theorem.

If the rank of a matrix of size p by n is r, then all elements of the rows (and columns) of the matrix that do not enter the chosen basis minor are linearly expressed in terms of the corresponding elements of the rows (and columns) that form the basis minor.

What does the matrix rank theorem give us?

If, by the Kronecker-Capelli theorem, we have established the consistency of the system, then we choose any basis minor of the main matrix of the system (its order equals r) and exclude from the system all equations that do not contribute rows to the chosen basis minor. The SLAE obtained in this way is equivalent to the original one, since the discarded equations are redundant (by the matrix rank theorem, they are linear combinations of the remaining equations).

As a result, after discarding the redundant equations of the system, two cases are possible.

    If the number of equations r in the resulting system equals the number of unknown variables, then the system is definite and its unique solution can be found by Cramer's method, the matrix method, or the Gauss method.

    Example.

    Solve the system of linear equations.

    Solution.

    The rank of the main matrix of the system equals two, since it has a second-order minor different from zero. The rank of the augmented matrix T is also two, since the only third-order minor is equal to zero

    and the second-order minor considered above is different from zero. Based on the Kronecker-Capelli theorem, we can assert the consistency of the original system of linear equations, since Rank(A) = Rank(T) = 2.

    As the basis minor we take the nonzero second-order minor found above. It is formed by the coefficients of the first and second equations:

    The third equation of the system does not contribute to the basis minor, so we exclude it from the system on the basis of the matrix rank theorem:

    Thus we have obtained an elementary system of linear algebraic equations. Let us solve it by Cramer's method:

    Answer:

    x_1 = 1, x_2 = 2.

    If the number of equations r in the resulting SLAE is less than the number of unknown variables n, then we leave on the left-hand sides of the equations the terms that form the basis minor and transfer the remaining terms to the right-hand sides of the equations with the opposite sign.

    The unknown variables (there are r of them) remaining on the left-hand sides of the equations are called the main ones.

    The unknown variables (there are n - r of them) that ended up on the right-hand sides are called free.

    Now we assume that the free unknown variables can take arbitrary values, while the r main unknown variables are expressed in terms of the free ones in a unique way. Their expressions can be found by solving the resulting SLAE by Cramer's method, the matrix method, or the Gauss method.

    Let's take an example.

    Example.

    Solve the system of linear algebraic equations.

    Solution.

    Find the rank of the main matrix of the system by the method of bordering minors. Take a_{11} = 1 as a nonzero first-order minor. Let us search for a nonzero second-order minor bordering it:

    So we found a non-zero minor of the second order. Let's start searching for a non-zero bordering minor of the third order:

    Thus, the rank of the main matrix is three. The rank of the augmented matrix is also three, so the system is consistent.

    The found nonzero third-order minor will be taken as the basis minor.

    For clarity, we show the elements that form the basis minor:

    We leave the terms participating in the basis minor on the left-hand sides of the equations of the system and transfer the rest, with opposite signs, to the right-hand sides:

    We give the free unknown variables x_2 and x_5 arbitrary values, that is, we set x_2 = λ and x_5 = μ, where λ and μ are arbitrary numbers. Then the SLAE takes the form

    We solve the obtained elementary system of linear algebraic equations by Cramer's method:

    Hence we find the main unknown variables.

    In the answer, do not forget to indicate free unknown variables.

    Answer:

    where the free unknown variables take arbitrary values.
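The procedure of the second case, expressing the main unknowns through arbitrarily chosen free ones, can be sketched in Python with numpy. The two-equation system below is an arbitrary example, not the one from the article:

```python
import numpy as np

# Two equations, three unknowns, rank 2:
#   x1 + 2*x2 + x3 = 4
#   2*x1 + x2 - x3 = 1
# Take the basis minor from the x1, x2 columns, treat x3 = t as free,
# and move its terms to the right-hand sides with opposite sign.
def solve_for(t):
    B = np.array([[1.0, 2.0],
                  [2.0, 1.0]])            # coefficients of the main unknowns
    rhs = np.array([4.0 - t, 1.0 + t])    # free unknown moved to the right
    x1, x2 = np.linalg.solve(B, rhs)
    return x1, x2, t

for t in (0.0, 1.0, -3.0):
    x1, x2, x3 = solve_for(t)
    # each choice of the free unknown gives a solution of both equations
    assert abs(x1 + 2 * x2 + x3 - 4) < 1e-9
    assert abs(2 * x1 + x2 - x3 - 1) < 1e-9
```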

Let us summarize.

To solve a system of linear algebraic equations of general form, we first establish its consistency using the Kronecker-Capelli theorem. If the rank of the main matrix is not equal to the rank of the augmented matrix, we conclude that the system is inconsistent.

If the rank of the main matrix equals the rank of the augmented matrix, then we choose a basis minor and discard the equations of the system that do not contribute to the chosen basis minor.

If the order of the basis minor equals the number of unknown variables, then the SLAE has a unique solution, which can be found by any method known to us.

If the order of the basis minor is less than the number of unknown variables, then on the left-hand sides of the equations we leave the terms with the main unknown variables, transfer the remaining terms to the right-hand sides, and assign arbitrary values to the free unknown variables. From the resulting system of linear equations we find the main unknown variables by Cramer's method, the matrix method, or the Gauss method.

Gauss method for solving systems of linear algebraic equations of general form.

Using the Gauss method, one can solve systems of linear algebraic equations of any kind without a preliminary investigation of their consistency. The process of successive elimination of unknown variables makes it possible to conclude whether the SLAE is consistent or inconsistent, and if a solution exists, to find it.

From the point of view of computational work, the Gaussian method is preferable.

See its detailed description and analyzed examples in the article Gauss method for solving systems of linear algebraic equations of general form.

Writing the general solution of homogeneous and inhomogeneous systems of linear algebraic equations using the vectors of the fundamental system of solutions.

In this section, we will focus on consistent homogeneous and inhomogeneous systems of linear algebraic equations that have infinitely many solutions.

Let's deal with homogeneous systems first.

A fundamental system of solutions of a homogeneous system of p linear algebraic equations with n unknown variables is a set of (n - r) linearly independent solutions of this system, where r is the order of the basis minor of the main matrix of the system.

If we denote the linearly independent solutions of a homogeneous SLAE by X^{(1)}, X^{(2)}, …, X^{(n-r)} (each a column matrix of dimension n by 1), then the general solution of this homogeneous system is represented as a linear combination of the vectors of the fundamental system of solutions with arbitrary constant coefficients C_1, C_2, …, C_{n-r}, that is, X = C_1 X^{(1)} + C_2 X^{(2)} + … + C_{n-r} X^{(n-r)}.

What does the term general solution of a homogeneous system of linear algebraic equations mean?

The meaning is simple: the formula X = C_1 X^{(1)} + C_2 X^{(2)} + … + C_{n-r} X^{(n-r)} specifies all possible solutions of the original SLAE. In other words, taking any set of values of the arbitrary constants C_1, C_2, …, C_{n-r}, we obtain by this formula one of the solutions of the original homogeneous SLAE.

Thus, if we find a fundamental system of solutions, then we can write all solutions of this homogeneous SLAE as X = C_1 X^{(1)} + C_2 X^{(2)} + … + C_{n-r} X^{(n-r)}.

Let us show the process of constructing a fundamental system of solutions for a homogeneous SLAE.

We choose the basis minor of the original system of linear equations, exclude all other equations from the system, and transfer to the right-hand sides of the equations, with opposite signs, all terms containing the free unknown variables. Then we give the free unknown variables the values 1, 0, 0, …, 0 and calculate the main unknowns by solving the resulting elementary system of linear equations in any way, for example by Cramer's method. This yields X^{(1)}, the first solution of the fundamental system. If we give the free unknowns the values 0, 1, 0, …, 0 and calculate the main unknowns, we obtain X^{(2)}, and so on. Finally, if we give the free unknown variables the values 0, 0, …, 0, 1 and calculate the main unknowns, we obtain X^{(n-r)}. In this way the fundamental system of solutions of the homogeneous SLAE is constructed, and its general solution can be written in the form X = C_1 X^{(1)} + C_2 X^{(2)} + … + C_{n-r} X^{(n-r)}.
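This construction can be sketched in Python with numpy; the homogeneous system below is an arbitrary example, not one from the article:

```python
import numpy as np

# Homogeneous system A*x = 0 with n = 4 unknowns and rank r = 2:
#   x1 + x2 + x3 + x4 = 0
#   x1 - x2 + x3 - x4 = 0
A = np.array([[1.0,  1.0, 1.0,  1.0],
              [1.0, -1.0, 1.0, -1.0]])
r = np.linalg.matrix_rank(A)          # r = 2, so n - r = 2 fundamental solutions

# Main unknowns x1, x2 (basis minor in the first two columns),
# free unknowns x3, x4. Give the free unknowns the values (1, 0)
# and (0, 1) in turn and solve for the main unknowns each time.
B = A[:, :2]                          # basis-minor columns
F = A[:, 2:]                          # free-unknown columns
fss = []
for free in np.eye(A.shape[1] - r):
    main = np.linalg.solve(B, -F @ free)
    fss.append(np.concatenate([main, free]))

for X in fss:
    assert np.allclose(A @ X, 0)      # each X^{(i)} solves the homogeneous system
    print(X)
```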

For inhomogeneous systems of linear algebraic equations, the general solution is represented as X = X^{(0)} + C_1 X^{(1)} + C_2 X^{(2)} + … + C_{n-r} X^{(n-r)}, where X^{(0)} is a particular solution of the inhomogeneous system and X^{(1)}, …, X^{(n-r)} is a fundamental system of solutions of the corresponding homogeneous system.

Let's look at examples.

Example.

Find the fundamental system of solutions and the general solution of the homogeneous system of linear algebraic equations.

Solution.

The rank of the main matrix of a homogeneous system of linear equations is always equal to the rank of the augmented matrix. Let us find the rank of the main matrix by the method of bordering minors. As a nonzero first-order minor, we take the element a_{11} = 9 of the main matrix of the system. Find a bordering nonzero minor of the second order:

A second-order minor different from zero has been found. Let us go through the third-order minors bordering it in search of a nonzero one:

All bordering third-order minors are equal to zero; therefore, the rank of the main (and augmented) matrix is two. We take the found second-order minor as the basis minor. For clarity, we mark the elements of the system that form it:

The third equation of the original SLAE does not contribute to the basis minor, so it can be excluded:

We leave the terms containing the main unknowns on the left-hand sides of the equations and transfer the terms with the free unknowns to the right-hand sides:

Let us construct a fundamental system of solutions of the original homogeneous system of linear equations. The fundamental system of solutions of this SLAE consists of two solutions, since the original SLAE contains four unknown variables and the order of its basis minor is two. To find X^{(1)}, we give the free unknown variables the values x_2 = 1, x_4 = 0 and find the main unknowns from the resulting system of equations; to find X^{(2)}, we similarly take x_2 = 0, x_4 = 1.

Consider a homogeneous system of m linear equations with n variables:

a_{11} x_1 + a_{12} x_2 + … + a_{1n} x_n = 0, …, a_{m1} x_1 + a_{m2} x_2 + … + a_{mn} x_n = 0. (15)

A system of homogeneous linear equations is always consistent, because it always has the zero (trivial) solution (0, 0, …, 0).

If in system (15) m = n and |A| ≠ 0, then the system has only the zero solution, which follows from Cramer's theorem and formulas.

Theorem 1. The homogeneous system (15) has a nontrivial solution if and only if the rank of its matrix is less than the number of variables, i.e. r(A) < n.

Proof. The existence of a nontrivial solution of system (15) is equivalent to the linear dependence of the columns of the system matrix (i.e., to the existence of numbers x_1, x_2, …, x_n, not all zero, for which equalities (15) hold).

According to the basis minor theorem, the columns of a matrix are linearly dependent if and only if not all columns of the matrix are basis columns, i.e. if and only if the order r of the basis minor of the matrix is less than the number n of its columns. Q.E.D.

Corollary. A square homogeneous system has nontrivial solutions if and only if |A| = 0.

Theorem 2. If the columns x^{(1)}, x^{(2)}, …, x^{(s)} are solutions of the homogeneous system AX = 0, then any linear combination of them is also a solution of this system.

Proof. Consider an arbitrary linear combination of these solutions, X = α_1 x^{(1)} + α_2 x^{(2)} + … + α_s x^{(s)}.

Then AX = A(α_1 x^{(1)} + … + α_s x^{(s)}) = α_1 Ax^{(1)} + … + α_s Ax^{(s)} = 0. Q.E.D.

Corollary 1. If a homogeneous system has a nontrivial solution, then it has infinitely many solutions.

Thus, it is necessary to find solutions x^{(1)}, x^{(2)}, …, x^{(s)} of the system Ax = 0 such that any other solution of this system can be represented as a linear combination of them, and moreover in a unique way.

Definition. A set of k = n - r (n is the number of unknowns in the system, r = rank A) linearly independent solutions x^{(1)}, x^{(2)}, …, x^{(k)} of the system Ax = 0 is called a fundamental system of solutions of this system.

Theorem 3. Let a homogeneous system Ax = 0 with n unknowns be given, and let r = rank A. Then there exists a set of k = n - r solutions x^{(1)}, x^{(2)}, …, x^{(k)} of this system that form a fundamental system of solutions.

Proof. Without loss of generality we may assume that the basis minor of the matrix A is located in its upper left corner. Then, by the basis minor theorem, the remaining rows of A are linear combinations of the basis rows. This means that if values x_1, x_2, …, x_n satisfy the first r equations (i.e., the equations corresponding to the rows of the basis minor), then they also satisfy the other equations. Therefore the solution set of the system does not change if all equations starting from the (r + 1)-th are discarded. We obtain the system:

Let us move the free unknowns x_{r+1}, x_{r+2}, …, x_n to the right-hand side, and leave the basic unknowns x_1, x_2, …, x_r on the left-hand side:

(16)

Since in this case all b_i = 0, then instead of the formulas

c_j = M_j(b_i) - c_{r+1} M_j(a_{i,r+1}) - … - c_n M_j(a_{in}), j = 1, 2, …, r, (13)

we get:

c_j = -(c_{r+1} M_j(a_{i,r+1}) + … + c_n M_j(a_{in})), j = 1, 2, …, r.

If the free unknowns x_{r+1}, x_{r+2}, …, x_n are given arbitrary values, then with respect to the basic unknowns we obtain a square SLAE with a nonsingular matrix, which has a unique solution. Thus, any solution of a homogeneous SLAE is uniquely determined by the values of the free unknowns x_{r+1}, x_{r+2}, …, x_n. Consider the following k = n - r series of values of the free unknowns:

x_{r+1} = 1, x_{r+2} = 0, …, x_n = 0,
x_{r+1} = 0, x_{r+2} = 1, …, x_n = 0, (17)
………………………………………………
x_{r+1} = 0, x_{r+2} = 0, …, x_n = 1.

(The series number is indicated by a superscript in brackets, and the series of values are written in columns. In each series x_{r+j}^{(i)} = 1 if i = j and x_{r+j}^{(i)} = 0 if i ≠ j.)

To the i-th series of values of the free unknowns there uniquely correspond values of the basic unknowns. Together, the values of the free and basic unknowns give solutions of system (16).

Let us show that the columns e_i, i = 1, 2, …, k, obtained from these series of values (18)

form a fundamental system of solutions.

Since these columns are by construction solutions of the homogeneous system Ax = 0 and their number equals k, it remains to prove the linear independence of the solutions (18). Suppose there is a linear combination of the solutions e_1, e_2, …, e_k equal to the zero column:

λ_1 e_1 + λ_2 e_2 + … + λ_k e_k = 0.

Then the left-hand side of this equality is a column whose components with numbers r+1, r+2, …, n must equal zero. But its (r+1)-th component equals λ_1·1 + λ_2·0 + … + λ_k·0 = λ_1. Similarly, the (r+2)-th component equals λ_2, …, and the n-th component equals λ_k. Therefore λ_1 = λ_2 = … = λ_k = 0, which means the linear independence of the solutions e_1, e_2, …, e_k.

The constructed fundamental system of solutions (18) is called normal. By virtue of formula (13), it has the following form:

(20)

Corollary 2. Let e_1, e_2, …, e_k be a normal fundamental system of solutions of a homogeneous system; then the set of all its solutions can be described by the formula

x = c_1 e_1 + c_2 e_2 + … + c_k e_k, (21)

where c_1, c_2, …, c_k take arbitrary values.

Proof. By Theorem 2, any column of the form (21) is a solution of the homogeneous system Ax = 0. It remains to prove that any solution y of this system can be represented in the form (21). Consider the column X = y_{r+1} e_1 + … + y_n e_k. This column coincides with the column y in the elements with numbers r+1, …, n and is a solution of (16). Therefore the columns X and y coincide, because solutions of system (16) are uniquely determined by the set of values of their free unknowns x_{r+1}, …, x_n, and these sets coincide for y and X. Hence y = X = y_{r+1} e_1 + … + y_n e_k, i.e. the solution y is a linear combination of the columns e_1, …, e_k of the normal FSS. Q.E.D.

The assertion just proved is true not only for the normal FSS, but also for an arbitrary fundamental system of solutions of a homogeneous SLAE.

X = C_1 X_1 + C_2 X_2 + … + C_{n-r} X_{n-r} is the general solution of a system of linear homogeneous equations,

where X_1, X_2, …, X_{n-r} is any fundamental system of solutions, and

C_1, C_2, …, C_{n-r} are arbitrary numbers.


Let us establish a connection between the solutions of the inhomogeneous SLAE (1) and the corresponding homogeneous SLAE (15).

Theorem 4. The sum of any solution of the inhomogeneous system (1) and any solution of the corresponding homogeneous system (15) is a solution of system (1).

Proof. If c_1, …, c_n is a solution of system (1) and d_1, …, d_n is a solution of system (15), then substituting the numbers c_1 + d_1, …, c_n + d_n for the unknowns into any (say, the i-th) equation of system (1), we get:

b_i + 0 = b_i. Q.E.D.

Theorem 5. The difference of two arbitrary solutions of the inhomogeneous system (1) is a solution of the homogeneous system (15).

Proof. If c_1, …, c_n and c′_1, …, c′_n are solutions of system (1), then substituting the numbers c_1 - c′_1, …, c_n - c′_n for the unknowns into any (say, the i-th) equation of system (1), we get:

b_i - b_i = 0. Q.E.D.

It follows from the proved theorems that the general solution of a system of m linear inhomogeneous equations with n variables equals the sum of the general solution of the corresponding homogeneous system (15) and an arbitrary particular solution of the inhomogeneous system (1).

X_inhom = X_gen.hom + X_part.inhom (22)

As a particular solution of the inhomogeneous system, it is natural to take the solution obtained from formulas (13), c_j = M_j(b_i) - c_{r+1} M_j(a_{i,r+1}) - … - c_n M_j(a_{in}), j = 1, 2, …, r, by setting all the numbers c_{r+1}, …, c_n equal to zero, i.e.

X_0 = (c_1, …, c_r, 0, 0, …, 0). (23)

Adding this particular solution to the general solution X = C_1 X_1 + C_2 X_2 + … + C_{n-r} X_{n-r} of the corresponding homogeneous system, we obtain:

X_inhom = X_0 + C_1 X_1 + C_2 X_2 + … + C_{n-r} X_{n-r} (24)
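Formula (24) can be checked numerically. In this numpy sketch the system, its particular solution, and its fundamental solutions are an arbitrary example, not taken from the text:

```python
import numpy as np

# One equation in three unknowns: x1 + x2 + x3 = 6.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([6.0])

X0 = np.array([6.0, 0.0, 0.0])        # particular solution (free unknowns = 0)
X1 = np.array([-1.0, 1.0, 0.0])       # fundamental solutions of A*x = 0
X2 = np.array([-1.0, 0.0, 1.0])

rng = np.random.default_rng(0)
for _ in range(3):
    c1, c2 = rng.standard_normal(2)
    x = X0 + c1 * X1 + c2 * X2
    assert np.allclose(A @ x, b)      # still a solution for any C1, C2
```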

Consider a system of two equations with two variables:

a11 x1 + a12 x2 = b1,
a21 x1 + a22 x2 = b2,

in which at least one of the coefficients aij ≠ 0.

To solve it, we eliminate x2 by multiplying the first equation by a22 and the second by (−a12) and adding them: (a11 a22 − a12 a21) x1 = b1 a22 − a12 b2. We eliminate x1 by multiplying the first equation by (−a21) and the second by a11 and adding them: (a11 a22 − a12 a21) x2 = a11 b2 − b1 a21. The expression in brackets is the determinant Δ = a11 a22 − a12 a21.

Denoting Δ1 = b1 a22 − a12 b2 and Δ2 = a11 b2 − b1 a21, the system takes the form Δ·x1 = Δ1, Δ·x2 = Δ2; that is, if Δ ≠ 0, the system has the unique solution x1 = Δ1/Δ, x2 = Δ2/Δ.

If Δ = 0 and Δ1 ≠ 0 (or Δ2 ≠ 0), the system is inconsistent, since it reduces to the form 0·x1 = Δ1, 0·x2 = Δ2. If Δ = Δ1 = Δ2 = 0, the system is indeterminate, since it reduces to the identity 0 = 0.
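The three-way case analysis above can be written out directly. The following Python sketch (the function name solve2x2 is ours) computes Δ, Δ1, Δ2 with exact rational arithmetic and distinguishes the unique, inconsistent, and indeterminate cases.

```python
from fractions import Fraction as F

def solve2x2(a11, a12, a21, a22, b1, b2):
    """Cramer-style analysis of a 2x2 system; returns ('unique', (x1, x2)),
    ('inconsistent', None) or ('indeterminate', None)."""
    d  = a11 * a22 - a12 * a21     # Δ
    d1 = b1  * a22 - a12 * b2      # Δ1: first column replaced by b
    d2 = a11 * b2  - b1  * a21     # Δ2: second column replaced by b
    if d != 0:
        return ('unique', (F(d1, d), F(d2, d)))
    if d1 != 0 or d2 != 0:
        return ('inconsistent', None)
    return ('indeterminate', None)

print(solve2x2(1, 2, 3, 4, 5, 6))   # Δ = -2: unique solution
print(solve2x2(1, 2, 2, 4, 1, 3))   # Δ = 0, Δ1 ≠ 0: inconsistent
print(solve2x2(1, 2, 2, 4, 1, 2))   # Δ = Δ1 = Δ2 = 0: indeterminate
```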

A homogeneous system is always consistent and has the trivial (zero) solution. For a nontrivial solution to exist, it is necessary that the rank of the matrix be less than the number of unknowns: r = rank A < n.

A fundamental system of solutions of a homogeneous system is a system of solutions in the form of column vectors X1, …, X_{n−r} that correspond to the canonical basis, i.e. the basis in which the arbitrary constants are alternately set equal to one while the rest are set to zero.

Then the general solution of the homogeneous system has the form X = c1X1 + c2X2 + … + c_{n−r}X_{n−r}, where c1, …, c_{n−r} are arbitrary constants. In other words, the general solution is a linear combination of the fundamental system of solutions.

Thus, the basic solutions can be obtained from the general solution by alternately giving each free unknown the value one while setting all the others equal to zero.

Example. Let's find a solution to the system

We accept , then we get the solution in the form:

Let us now construct a fundamental system of solutions:

.

The general solution can be written as:

Solutions of a system of homogeneous linear equations have the following property: any linear combination of solutions of a homogeneous system is again a solution of it.

Solution of systems of linear equations by the Gauss method

Solving systems of linear equations has been of interest to mathematicians for several centuries. The first results were obtained in the 18th century. In 1750, G. Cramer (1704–1752) published his work on determinants of square matrices and proposed an algorithm for finding the inverse matrix. In 1809, Gauss described a new solution method known as the elimination method.

The Gauss method, or the method of successive elimination of unknowns, consists in reducing the system of equations, by means of elementary transformations, to an equivalent system of stepped (or triangular) form. Such systems make it possible to find all the unknowns successively, in a definite order.

Suppose that in system (1) the coefficient a11 ≠ 0 (which can always be achieved by reordering the equations).

(1)

Multiplying the first equation in turn by suitably chosen numbers and adding the results to the corresponding equations of the system, we obtain an equivalent system in which all equations except the first contain no unknown x1:

(2)

We now multiply the second equation of system (2) by appropriate numbers, assuming that its leading coefficient is non-zero, and, adding it to the equations below, eliminate the corresponding variable from all equations starting with the third.

Continuing this process, after a certain number of steps we get:

(3)

If at least one of the numbers on the right-hand side of the degenerate equations is not equal to zero, the corresponding equality is contradictory and system (1) is inconsistent. Conversely, system (1) is consistent exactly when all these numbers are equal to zero. The number of remaining non-trivial equations is nothing but the rank of the matrix of system (1).

The transition from system (1) to (3) is called the forward pass of the Gauss method, and finding the unknowns from (3) is called the backward pass (back substitution).

Comment: it is more convenient to perform the transformations not on the equations themselves but on the augmented matrix of system (1).

Example. Let's find a solution to the system

.

Let's write the augmented matrix of the system:

.

Add the first row, multiplied by (−2), (−3), (−2) respectively, to rows 2, 3, 4:

.

Let's swap rows 2 and 3, then in the resulting matrix add row 2 to row 4, multiplied by :

.

Add to line 4 line 3 multiplied by
:

.

It is obvious that the rank of the system matrix equals the rank of the augmented matrix; hence the system is consistent. From the resulting system of equations

we find the solution by reverse substitution:

,
,
,
.

Example 2. Find the solution of the system:

.

It is obvious that the system is inconsistent, because the rank of the system matrix is smaller than the rank of the augmented matrix.

Advantages of the Gauss method:

    It is less laborious than Cramer's method.

    It unambiguously establishes whether the system is consistent and, if so, finds its solution.

    It makes it possible to determine the rank of any matrix.

A homogeneous system of linear equations AX = 0 is always consistent. It has non-trivial (non-zero) solutions if r = rank A < n.

For homogeneous systems, the basic variables (those whose coefficients enter the basis minor) are expressed in terms of the free variables by relations of the form:

Then the n − r linearly independent vector solutions will be:

and any other solution is a linear combination of them. These solution vectors form a normalized fundamental system.

In a linear space, the set of solutions of a homogeneous system of linear equations forms a subspace of dimension n − r; the fundamental system of solutions is a basis of this subspace.

A system of m linear equations with n unknowns (a linear system) is a system of equations of the form (1).

Here x1, x2, …, xn are the unknowns to be determined; the system coefficients a11, a12, …, amn and the free terms b1, b2, …, bm are assumed to be known. The indices of a coefficient aij denote the number of the equation (i) and of the unknown (j) at which it stands.

System (1) is called homogeneous if all its free terms are equal to zero (b1 = b2 = … = bm = 0); otherwise it is called inhomogeneous.

System (1) is called square if the number m of equations equals the number n of unknowns.

A solution of system (1) is a set of n numbers c1, c2, …, cn such that substituting each ci for xi in system (1) turns all its equations into identities.

System (1) is called consistent if it has at least one solution, and inconsistent if it has no solutions.

Solutions c1(1), c2(1), …, cn(1) and c1(2), c2(2), …, cn(2) of a consistent system are called distinct if at least one of the equalities

c1(1) = c1(2), c2(1) = c2(2), …, cn(1) = cn(2)

is violated.

A consistent system is called determinate if it has a unique solution, and indeterminate if it has at least two distinct solutions. If there are more equations than unknowns, the system is called overdetermined.

Solving systems of linear equations

Solving matrix equations. The Gauss method

Methods for solving systems of linear equations are divided into two groups:

1. exact methods, which are finite algorithms for computing the roots of a system (solution via the inverse matrix, Cramer's rule, the Gauss method, etc.),

2. iterative methods, which make it possible to obtain a solution of the system with a given accuracy by means of convergent iterative processes (the iteration method, the Seidel method, etc.).

Due to inevitable rounding, the results of even exact methods are approximate. When using iterative methods, moreover, the error of the method is added.

The effective application of iterative methods essentially depends on a good choice of the initial approximation and the speed of convergence of the process.
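As an illustration of the iterative approach, here is a minimal Gauss-Seidel sketch in Python (the example matrix is our own, chosen to be strictly diagonally dominant so that the iteration is guaranteed to converge):

```python
def gauss_seidel(A, b, x0=None, tol=1e-12, max_iter=1000):
    """Gauss-Seidel iteration: each unknown is updated in place using the
    most recent values of the others. Converges e.g. for strictly
    diagonally dominant matrices. A sketch, not production code."""
    n = len(b)
    x = list(x0) if x0 else [0.0] * n
    for _ in range(max_iter):
        max_delta = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            max_delta = max(max_delta, abs(new - x[i]))
            x[i] = new
        if max_delta < tol:       # stop at the requested accuracy
            break
    return x

# Diagonally dominant example with exact solution (1, 2, -1)
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 1.0],
     [1.0, 1.0, 6.0]]
b = [5.0, 10.0, -3.0]
print([round(v, 8) for v in gauss_seidel(A, b)])   # ≈ [1.0, 2.0, -1.0]
```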

Solution of matrix equations

Consider a system of n linear algebraic equations in n unknowns x1, x2, …, xn:

. (15)

The matrix A, whose columns contain the coefficients of the corresponding unknowns and whose rows contain the coefficients of the unknowns in the corresponding equation, is called the system matrix; the column matrix b, whose elements are the right-hand sides of the equations of the system, is called the right-hand side matrix, or simply the right-hand side of the system. The column matrix x, whose elements are the unknowns, is called the solution of the system.

If the matrix A is non-singular, that is, det A ≠ 0, then system (13), or the equivalent matrix equation (14), has a unique solution.

Indeed, if det A ≠ 0, the inverse matrix A⁻¹ exists. Multiplying both sides of equation (14) by A⁻¹, we get:

x = A⁻¹ b. (16)

Formula (16) gives the solution of equation (14), and it is unique.

It is convenient to solve systems of linear equations using the function lsolve:

lsolve(A, b)

Returns the solution vector x such that Ax = b.

Arguments:

A is a square, non-singular matrix.

b is a vector that has as many rows as the matrix A.

Figure 8 shows the solution of a system of three linear equations in three unknowns.

Gauss method

The Gauss method, also called the Gaussian elimination method, consists in reducing system (13), by successive elimination of unknowns, to an equivalent system with a triangular matrix:

In matrix notation this means that first (the forward pass of the Gauss method) elementary row operations bring the augmented matrix of the system to step form:

and then (the backward pass of the Gauss method) this step matrix is transformed so that an identity matrix appears in the first n columns:

.

The last, (n + 1)-th, column of this matrix contains the solution of system (13).

In Mathcad, the forward and backward passes of the Gauss method are performed by the function rref(A).

Figure 9 shows the solution of a system of linear equations using the Gaussian method, which uses the following functions:

rref(A)

Returns the reduced step form of the matrix A.

augment(A, B)

Returns the array formed by placing A and B side by side. The arrays A and B must have the same number of rows.

submatrix(A, ir, jr, ic, jc)

Returns the submatrix consisting of all elements in rows ir through jr and columns ic through jc. Make sure that ir ≤ jr and ic ≤ jc, otherwise the order of rows and/or columns will be reversed.

Figure 9

Description of the method

For a system of n linear equations with n unknowns (over an arbitrary field)

with system matrix determinant Δ different from zero, the solution is written as

(the i-th column of the system matrix is replaced by the column of free terms).
In another form, Cramer's rule is formulated as follows: for any coefficients c1, c2, …, cn the following equality holds:

In this form, Cramer's formula is valid without assuming that Δ is non-zero; it is not even necessary for the coefficients of the system to be elements of an integral domain (the determinant of the system may even be a zero divisor in the coefficient ring). One can also assume that either the sets b1, b2, …, bn and x1, x2, …, xn, or the set c1, c2, …, cn, consist not of elements of the coefficient ring of the system but of some module over this ring. In this form, Cramer's formula is used, for example, in proving the formula for the Gram determinant and Nakayama's lemma.
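For small numeric systems Cramer's rule is straightforward to implement. Below is a sketch with exact rational arithmetic (the helper names det and cramer are ours); the determinant is computed by Laplace expansion, which is fine for illustration but exponential in n.

```python
from fractions import Fraction as F

def det(M):
    """Determinant by Laplace expansion along the first row (small n only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve A x = b by Cramer's rule; requires det(A) != 0."""
    d = det(A)
    assert d != 0, "Cramer's rule needs a non-singular matrix"
    xs = []
    for i in range(len(b)):
        # replace the i-th column of A by the column of free terms b
        Ai = [row[:i] + [b[k]] + row[i+1:] for k, row in enumerate(A)]
        xs.append(F(det(Ai), d))
    return xs

A = [[2, 1], [1, 3]]
b = [3, 5]
print(cramer(A, b))   # Δ = 5, Δ1 = 4, Δ2 = 7, so x1 = 4/5, x2 = 7/5
```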

35) Kronecker-Capelli theorem
For a system of m inhomogeneous linear equations in n unknowns to be consistent, it is necessary and sufficient that the rank of the extended matrix equal the rank of the main matrix.

Proof of necessity. Let system (1.13) be consistent, that is, there exist numbers x1 = α1, x2 = α2, …, xn = αn for which (1.15) holds. Subtract from the last column of the extended matrix its first column multiplied by α1, the second multiplied by α2, …, the n-th multiplied by αn; that is, subtract from the last column of matrix (1.14) the left-hand sides of equalities (1.15). We then obtain a matrix whose rank does not change under elementary transformations, and hence the two ranks coincide. This proves necessity.

Proof of sufficiency. Let the ranks be equal, and let, for definiteness, a non-zero minor of order r be located in the upper left corner of the matrix. This means that the remaining rows of the matrix can be obtained as linear combinations of the first r rows, that is, the m − r remaining rows can be represented as sums of the first r rows multiplied by some numbers. But then the first r equations of system (1.13) are independent, and the rest are their consequences; a solution of the system of the first r equations is automatically a solution of the remaining equations.

Two cases are possible.

1. r = n. Then the system consisting of the first r equations has as many equations as unknowns and is consistent, and its solution is unique.

2. r < n. (1.16) The "free" unknowns x_{r+1}, x_{r+2}, …, x_n can be given arbitrary values; the basic unknowns x1, x2, …, xr then take the corresponding values. System (1.13) is consistent in this case as well, but indeterminate.

Comment. A non-zero minor of order r, where r < n, is called a basis minor; the unknowns x1, x2, …, xr are called basic, the rest free. System (1.16) is called truncated.
If the free unknowns are denoted x_{r+1} = c1, x_{r+2} = c2, …, x_n = c_{n−r}, then the basic unknowns depend on them; that is, the solution of the system of m equations with n unknowns has the form X = (x1(c1, …, c_{n−r}), x2(c1, …, c_{n−r}), …, xr(c1, …, c_{n−r}), c1, c2, …, c_{n−r})^T, where T denotes transposition. Such a solution of the system is called general.

36) Conditions of determinacy and indeterminacy
System m linear equations with n unknown(or, linear system) in linear algebra is a system of equations of the form

Here x1, x2, …, xn are the unknowns to be determined; the system coefficients a11, a12, …, amn and the free terms b1, b2, …, bm are assumed to be known. The indices of a coefficient (aij) denote the number of the equation (i) and of the unknown (j) at which it stands.

System (1) is called homogeneous if all its free terms are equal to zero (b1 = b2 = … = bm = 0); otherwise it is called inhomogeneous.

System (1) is called consistent if it has at least one solution, and inconsistent if it has no solutions.

A consistent system of the form (1) may have one or several solutions.

Solutions c1(1), c2(1), …, cn(1) and c1(2), c2(2), …, cn(2) of a consistent system of the form (1) are called distinct if at least one of the equalities is violated:

c 1 (1) = c 1 (2) , c 2 (1) = c 2 (2) , …, c n (1) = c n (2) .

A consistent system of the form (1) is called determinate if it has a unique solution, and indeterminate if it has at least two distinct solutions.

37) Solving systems of linear equations by the Gauss method

Let the original system look like this

The matrix A is called the main matrix of the system, and b is the column of free terms.

Then, by the properties of elementary row transformations, the main matrix of this system can be reduced to stepped form (the same transformations must be applied to the column of free terms):

The variables corresponding to the leading (pivot) entries are then called basic variables; all the others are called free.

Consistency condition

The above condition can be formulated as a necessary and sufficient condition for consistency:

Recall that the rank of a consistent system is the rank of its main matrix (or of the extended one, since they are equal).

Algorithm

Description

The algorithm for solving SLAE by the Gaussian method is divided into two stages.

§ At the first stage, the so-called forward pass is carried out: by means of elementary row transformations the system is brought to stepped or triangular form, or it is established that the system is inconsistent. Namely, among the elements of the first column of the matrix a non-zero one is chosen and moved to the topmost position by permuting rows; then the first row is subtracted from each of the remaining rows, multiplied by the ratio of that row's first element to the first element of the first row, thus zeroing out the column below it. After these transformations, the first row and the first column are mentally crossed out, and the process continues until a matrix of zero size remains. If at some iteration no non-zero element is found in the first column, one moves to the next column and performs the same operation.

§ At the second stage, the so-called backward pass is carried out. Its essence is to express all the resulting basic variables in terms of the non-basic ones and construct a fundamental system of solutions or, if all variables are basic, to express numerically the unique solution of the system. This procedure begins with the last equation, from which the corresponding basic variable (there is exactly one) is expressed and substituted into the preceding equations, and so on, going up the "steps". Each row corresponds to exactly one basic variable, so at each step except the last (topmost) the situation exactly repeats the case of the last row.

The Gauss method requires on the order of O(n³) operations.
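The two stages can be sketched in a few dozen lines of Python; exact rational arithmetic (Fraction) avoids the rounding issues mentioned earlier. The example system is a standard textbook one, not one from the text, and the sketch assumes a square non-singular matrix.

```python
from fractions import Fraction as F

def gauss_solve(A, b):
    """Forward elimination with pivot search, then back substitution.
    Assumes the system has a unique solution (square, non-singular A)."""
    n = len(A)
    # augmented matrix [A | b] over the rationals
    M = [[F(v) for v in row] + [F(b[i])] for i, row in enumerate(A)]
    # forward pass: reduce to triangular form
    for k in range(n):
        pivot = next(i for i in range(k, n) if M[i][k] != 0)  # non-zero pivot
        M[k], M[pivot] = M[pivot], M[k]
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # backward pass: solve from the last equation upwards
    x = [F(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
print(gauss_solve(A, b))   # x = (2, 3, -1)
```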


38)The Kronecker-Capelli theorem.
A system is consistent if and only if the rank of its main matrix is equal to the rank of its extended matrix.
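The theorem translates directly into a rank comparison. A sketch with a hand-rolled rank function (exact arithmetic; the helper names rank and is_consistent are ours):

```python
from fractions import Fraction as F

def rank(M):
    """Rank of a matrix via row reduction with exact rational arithmetic."""
    M = [[F(v) for v in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [M[i][j] - f * M[r][j] for j in range(len(M[0]))]
        r += 1
    return r

def is_consistent(A, b):
    """Kronecker-Capelli: consistent iff rank A == rank [A|b]."""
    Ab = [row + [b[i]] for i, row in enumerate(A)]
    return rank(A) == rank(Ab)

print(is_consistent([[1, 1], [2, 2]], [2, 4]))   # True: both ranks are 1
print(is_consistent([[1, 1], [2, 2]], [2, 5]))   # False: rank A = 1 < rank [A|b] = 2
```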



How to find the fundamental system of solutions of a linear system?

Take for example the following system of linear equations:

Let's find a solution to this linear system of equations. To begin with, we write down the coefficient matrix of the system.

Let us transform this matrix to a triangular one. We rewrite the first row unchanged; all the elements below $a_{11}$ must be made zero. To make a zero in place of the element $a_{21}$, subtract the first row from the second and write the difference in the second row. To make a zero in place of the element $a_{31}$, subtract the first row from the third and write the difference in the third row. To make a zero in place of the element $a_{41}$, subtract the first row multiplied by 2 from the fourth and write the difference in the fourth row. To make a zero in place of the element $a_{51}$, subtract the first row multiplied by 2 from the fifth and write the difference in the fifth row.

We rewrite the first and second rows unchanged; all the elements below $a_{22}$ must be made zero. To make a zero in place of the element $a_{32}$, subtract the second row multiplied by 2 from the third and write the difference in the third row. To make a zero in place of the element $a_{42}$, subtract the second row multiplied by 2 from the fourth and write the difference in the fourth row. To make a zero in place of the element $a_{52}$, subtract the second row multiplied by 3 from the fifth and write the difference in the fifth row.

We see that the last three rows are identical, so if we subtract the third row from the fourth and the fifth, they become zero.

From this matrix we write down a new system of equations.

We see that we have only three linearly independent equations and five unknowns, so the fundamental system of solutions will consist of two vectors. We therefore move the last two unknowns to the right-hand side.

Now we express the unknowns on the left-hand side through those on the right. We start with the last equation: first we express $x_3$, then substitute the result into the second equation and express $x_2$, and then into the first equation to express $x_1$. Thus we have expressed all the unknowns on the left through the unknowns on the right.

After that, any numbers can be substituted for $x_4$ and $x_5$, and $x_1$, $x_2$ and $x_3$ found from them. Every such quintuple of numbers is a solution of our original system of equations. To find the vectors of the fundamental system of solutions, we substitute $x_4=1$, $x_5=0$ and find $x_1$, $x_2$ and $x_3$, and then, vice versa, $x_4=0$, $x_5=1$.
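The recipe above (free unknowns alternately set to 1 and 0) can be automated. The sketch below uses a different, smaller homogeneous system as its own example (not the five-unknown system from the text): it reduces the matrix, identifies the free unknowns, and builds one fundamental solution per free unknown.

```python
from fractions import Fraction as F

def fundamental_system(A):
    """Fundamental system of solutions of A x = 0, built exactly as in the
    text: reduce A, then set each free unknown to 1 and the others to 0."""
    m, n = len(A), len(A[0])
    M = [[F(v) for v in row] for row in A]
    pivots, r = [], 0
    for col in range(n):                      # reduce to reduced row echelon form
        piv = next((i for i in range(r, m) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        pv = M[r][col]
        M[r] = [v / pv for v in M[r]]
        for i in range(m):
            if i != r and M[i][col] != 0:
                f = M[i][col]
                M[i] = [M[i][j] - f * M[r][j] for j in range(n)]
        pivots.append(col)
        r += 1
    free = [j for j in range(n) if j not in pivots]
    basis = []
    for fcol in free:                         # this free unknown gets value 1
        x = [F(0)] * n
        x[fcol] = F(1)
        for row_idx, p in enumerate(pivots):  # basic unknowns read off the RREF
            x[p] = -M[row_idx][fcol]
        basis.append(x)
    return basis

# x1 + x2 + x3 = 0 (written twice): rank 1, so n - r = 2 fundamental solutions
fs = fundamental_system([[1, 1, 1], [2, 2, 2]])
print([[int(v) for v in x] for x in fs])   # [[-1, 1, 0], [-1, 0, 1]]
```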
