On the solution of degenerate and ill-conditioned systems of linear algebraic equations

8.2.3. Degenerate and ill-conditioned systems

Let us return again to the SLAE Ax = b with a square matrix A of size M×M, which, in contrast to the well-behaved case considered above, requires a special approach. Let us pay attention to two related types of SLAE:

  • a degenerate system (with zero determinant, |A| = 0);
  • an ill-conditioned system (the determinant |A| is nonzero, but the condition number is very large).

Although these two types of systems differ significantly (the first has either no solution or infinitely many, while the second has exactly one), from the practical, computational point of view they have much in common.

Degenerate SLAEs

A degenerate system is one described by a matrix with zero determinant |A| = 0 (a singular matrix). Since some of the equations in such a system are linear combinations of the others, the system is in fact underdetermined. It is easy to see that, depending on the specific right-hand-side vector b, there are either infinitely many solutions or none at all. The first case comes down to constructing a normal pseudo-solution (i.e., choosing from the infinite set of solutions the one closest to some vector, for example the zero vector). This case was discussed in detail in Section 8.2.2 (see Listings 8.11-8.13).
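The first case is easy to reproduce outside Mathcad as well. The following Python/NumPy sketch uses a hypothetical singular matrix (the second equation is simply twice the first); the Moore-Penrose pseudoinverse returns the normal pseudo-solution, i.e. the minimum-norm member of the infinite solution set:

```python
import numpy as np

# Hypothetical singular system: the second equation is twice the first,
# so |A| = 0 and the consistent system Ax = b has infinitely many solutions.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])

# The pseudoinverse picks, among all solutions, the one with the
# smallest Euclidean norm (the normal pseudo-solution).
x = np.linalg.pinv(A) @ b

# Any solution satisfies Ax = b; this one additionally minimizes |x|.
residual = np.linalg.norm(A @ x - b)
```

Here `pinv` plays the role of the normal pseudo-solution construction discussed in Section 8.2.2.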

Fig. 8.7. Graphical representation of an inconsistent system of two equations with a singular matrix

Let us consider the second case, when the SLAE Ax = b with a singular square matrix A has no solution at all. An example of such a problem (for a system of two equations) is shown in Fig. 8.7, in the upper part of which the matrix A and the vector b are defined and an attempt is made (unsuccessfully, since the matrix A is singular) to solve the system with the lsolve function. The graph occupying the main part of the figure shows that the two equations of the system define two parallel lines in the (x0, x1) plane. The lines do not intersect at any point of the coordinate plane, and accordingly the system has no solution.

NOTE

First, note that an SLAE defined by a nonsingular square 2x2 matrix defines a pair of intersecting lines in the plane (see Fig. 8.9 below). Second, if the system were consistent, the geometric representation of the equations would be two coinciding lines, describing an infinite number of solutions.

Fig. 8.8. Graph of sections of the residual function f(x) = |Ax - b|

It is easy to guess that in the singular case under consideration there are infinitely many pseudo-solutions of the system minimizing the residual |Ax - b|, and that they lie on a third straight line, parallel to the two shown in Fig. 8.7 and located midway between them. This is illustrated in Fig. 8.8, which shows several sections of the function f(x) = |Ax - b|; they indicate a family of minima of equal depth. If you try to find them with the built-in Minimize function, its numerical method will return some single point of this line (depending on the initial guess). Therefore, to determine a unique solution, one should choose from the whole set of pseudo-solutions the one with the smallest norm. You can try to formulate this multidimensional minimization problem in Mathcad with combinations of the built-in Minimize function, but a more efficient way is to use regularization (see below) or orthogonal matrix decompositions (see Section 8.3).
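The same reasoning can be checked numerically. In the hypothetical Python/NumPy sketch below, the two equations describe the parallel lines x0 + x1 = 0 and x0 + x1 = 2; every point of the middle line x0 + x1 = 1 minimizes the residual, and the pseudoinverse returns the minimum-norm member of that family:

```python
import numpy as np

# Hypothetical parallel-lines example: x0 + x1 = 0 and x0 + x1 = 2
# never intersect, so Ax = b has no solution.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([0.0, 2.0])

# Every x with x0 + x1 = 1 minimizes |Ax - b| (the line "in the middle");
# the pseudoinverse selects the member of that family with minimal norm.
x = np.linalg.pinv(A) @ b
residual = np.linalg.norm(A @ x - b)   # the unavoidable discrepancy
```

The residual stays strictly positive, confirming that no exact solution exists.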

Ill-conditioned systems

An ill-conditioned system is one in which the determinant |A| is nonzero, but the condition number |A^-1|·|A| is very large. Although ill-conditioned systems have a unique solution, in practice it often makes no sense to look for it. Let us examine the properties of ill-conditioned SLAEs with two specific examples (Listing 8.14).

Listing 8.14. Solution of two close ill-conditioned SLAEs

Each line of Listing 8.14 contains the solution of one of two very close ill-conditioned SLAEs (with the same right-hand side b and slightly different matrices A). Despite their closeness, the exact solutions of these systems turn out to be very far from each other. Note that for a system of two equations an exact solution is easy to obtain, but when solving a high-dimensional SLAE, even minor rounding errors that inevitably accumulate during the calculation (including in the "exact" Gaussian algorithm) lead to huge errors in the result. The question arises: does it make sense to look for a numerical solution if it is known in advance that, because of the instability of the problem itself, it may turn out to be completely wrong?
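The phenomenon of Listing 8.14 can be imitated with a short Python/NumPy sketch; the two matrices below are hypothetical stand-ins that differ by 10^-4 in a single coefficient, yet their exact solutions are far apart:

```python
import numpy as np

# Two hypothetical systems with the same b and matrices differing by 1e-4.
b  = np.array([2.0, 2.0001])
A1 = np.array([[1.0, 1.0], [1.0, 1.0001]])
A2 = np.array([[1.0, 1.0], [1.0, 1.0002]])

x1 = np.linalg.solve(A1, b)   # -> [1.0, 1.0]
x2 = np.linalg.solve(A2, b)   # -> [1.5, 0.5]

# The matrices are close, the solutions are not: the hallmark of ill
# conditioning; cond(A1) is large (on the order of 1e4) for a 2x2 matrix.
print(np.linalg.cond(A1))
```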

Another consideration that forces us to look for special methods for solving ill-conditioned SLAEs (even the system of two equations given as an example in Listing 8.14) is associated with their physical interpretation as experimental results. If it is initially known that the input data was obtained with some error, then solving ill-conditioned systems makes no sense at all, since small errors in the model (matrix A and vector b) lead to large errors in the solution. Problems with such properties are called ill-posed.

To better understand the source of the ill-posedness, it is useful to compare the graphical interpretation of a well-conditioned (Fig. 8.9) and an ill-conditioned (Fig. 8.10) system of two equations. The solution of the system is visualized as the intersection point of the two straight lines representing the equations.

Fig. 8.9. Graph of a well-conditioned system of two equations

Fig. 8.10. Graph of an ill-conditioned system of two equations

Fig. 8.10 shows that the straight lines corresponding to the ill-conditioned SLAE lie in close proximity to each other (they are almost parallel). As a result, small errors in the position of each line can lead to significant errors in locating their intersection point (the solution of the SLAE), in contrast to the well-conditioned case, where small errors in the slope of the lines have little effect on the location of the intersection point (Fig. 8.9).

NOTE

Ill-conditioning of the matrix is also typical when reconstructing experimental data described by overdetermined (inconsistent) SLAEs (for example, in tomography problems). This case is illustrated in the next section (see Listing 8.16 below).

Regularization method

For solving ill-posed problems, in particular degenerate and ill-conditioned SLAEs, a very effective technique called regularization has been developed. It is based on taking into account additional a priori information about the structure of the solution (an a priori estimate vector x0), which is very often available in practical cases. Since regularization was discussed in detail in Section 6.3.3, we only recall that the problem of solving the SLAE Ax = b can be replaced by the problem of minimizing the Tikhonov functional:

Ω(x, λ) = |Ax - b|^2 + λ|x - x0|^2.   (8.3)

Here λ is a small positive regularization parameter, which must be chosen in some optimal way. It can be shown that minimizing the Tikhonov functional reduces, in turn, to solving another SLAE:

(A^T·A + λ·I)·x = A^T·b + λ·x0,   (8.4)

which as λ → 0 passes into the original ill-conditioned system, while for large λ it is well-conditioned and has the solution x0. Obviously, some intermediate value of λ will be optimal, establishing a compromise between acceptable conditioning and closeness to the original problem. Note that the regularization approach reduces the ill-posed problem to the conditionally well-posed (in Tikhonov's sense) problem of finding the solution of system (8.4), which, owing to the linearity of the problem, is unique and stable.
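A minimal Python/NumPy sketch of system (8.4), applied to the hypothetical parallel-lines example from the figures: as λ decreases, the regularized solution approaches the normal pseudo-solution.

```python
import numpy as np

def regularized_solve(A, b, lam, x0=None):
    """Solve (A^T A + lam*I) x = A^T b + lam*x0, i.e. system (8.4),
    which minimizes the Tikhonov functional |Ax-b|^2 + lam*|x-x0|^2."""
    n = A.shape[1]
    if x0 is None:
        x0 = np.zeros(n)
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b + lam * x0)

# Hypothetical singular system (two parallel lines).
A = np.array([[1.0, 1.0], [1.0, 1.0]])
b = np.array([0.0, 2.0])

for lam in (1e-1, 1e-3, 1e-6):
    # As lam -> 0, x tends to the normal pseudo-solution [0.5, 0.5].
    x = regularized_solve(A, b, lam)
```

Although the original system is unsolvable, system (8.4) is solvable for every λ > 0.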

Let us present, without further comment, a regularized solution of the degenerate system shown in Fig. 8.8. Listing 8.15 demonstrates the solution of problem (8.4), and the resulting dependence of the residual and of the solution itself on the regularization parameter λ is shown in Fig. 8.11 and 8.12, respectively. It is important to emphasize that at λ = 0 the solution of the original system, and hence of system (8.4), does not exist.

Listing 8.15 Regularization of a degenerate SLAE

The final step of regularization is choosing the optimal λ. There are at least two considerations by which the regularization parameter can be chosen from the dependence of the residual on it. In the example under consideration we apply the criterion that determines λ by matching the residual norm to an a priori estimate of the errors in the input data (the matrix A and the vector b), i.e. to the value |δA| + |δb|. For example, one can select the residual norm and, accordingly, the parameter λ and the solution x(λ) that are marked in Fig. 8.11 and 8.12 by dotted lines.
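The residual-matching criterion can be sketched as follows (Python/NumPy; the matrix, right-hand side, and error estimate delta = 1.5 are all hypothetical): sweep λ over a logarithmic grid and keep the value whose residual norm is closest to the assumed error level.

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])   # hypothetical singular matrix
b = np.array([0.0, 2.0])
delta = 1.5                              # assumed estimate |dA| + |db| of the data errors

def residual(lam):
    """Residual norm and solution of the regularized system (8.4) with x0 = 0."""
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    return np.linalg.norm(A @ x - b), x

# Pick the lambda on a logarithmic grid whose residual best matches delta.
lams = np.logspace(-8, 2, 200)
best = min(lams, key=lambda l: abs(residual(l)[0] - delta))
r, x = residual(best)
```

The selected residual is (up to grid resolution) equal to the prescribed error level, which is exactly the criterion described above.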

NOTE 1

Another way to choose λ, which requires no a priori assumptions about the model errors, is the so-called quasi-optimal method, discussed in Section 6.3.3.

NOTE 2

It is useful to verify that formula (8.4) gives, in the linear case, the same result as direct minimization of the functional (8.3). To do so, it is enough to change the last line of Listing 8.15, which expresses the solution of SLAE (8.4), to code that implements the minimization by a numerical method, as shown in Listing 8.16. The calculations (which take considerably more computer time) give the same result as shown in Fig. 8.11 and 8.12.

NOTE 3

Try taking, in the calculations of Listings 8.15 and 8.16, a different, for example more realistic, a priori estimate of the solution (instead of the zero vector x0 used there) and see how this affects the result.

Fig. 8.11. Dependence of the residual of the regularized solution of a degenerate SLAE on the parameter λ (continuation of Listing 8.15)

NOTE 4

It is also interesting to use, instead of formula (8.3), another dependence as the Tikhonov functional: Ω(x, λ) = |Ax - b| + λ|x - x0|. You will find a corresponding example of calculations on the CD.

Fig. 8.12. Regularized solution as a function of λ (continuation of Listing 8.15)

Listing 8.16. Regularization of SLAEs using a minimization algorithm (continuation of Listing 8.15)


Consider a system of linear algebraic equations

A·x = b,   (1)

where A is a square matrix, b is the right-hand-side vector, and x is the required vector. If the condition number ||A^-1||·||A|| is very large, then system (1) is called ill-conditioned. In this case, errors in the matrix coefficients and right-hand sides, or rounding errors in the calculations, can greatly distort the solution.

When solving many problems, the right-hand side of system (1) and the coefficients of the matrix A are known only approximately. In this case, instead of the exact system (1) we have some other system

Ã·x = b̃,   (2)

such that ||Ã - A|| ≤ h and ||b̃ - b|| ≤ d. We assume that the values of h and d are known.

Since instead of system (1) we have system (2), we can only find an approximate solution to system (1). The method for constructing an approximate solution of system (1) must be stable to small changes in the initial data.

A pseudo-solution of system (1) is a vector x that minimizes the residual ||A·x - b|| over the entire space.

Let x1 be some fixed vector, usually determined by the statement of the problem.

A solution of system (1) that is normal with respect to the vector x1 is a pseudo-solution x0 that deviates least from x1, that is,

||x0 - x1|| = min ||x - x1||, x ∈ F,   (3)

where F is the set of all pseudo-solutions of system (1). Here

||x|| = (Σ_i |x_i|^2)^(1/2),

where x_i are the components of the vector x.

For any system of type (1), a normal solution exists and is unique. The problem of finding a normal solution to an ill-conditioned system (1) is ill-posed.

To find an approximate normal solution to system (1), we use the regularization method.

According to this method, we construct a smoothing functional of the form

F[x] = ||Ã·x - b̃||^2 + a·||x - x1||^2,   (4)

and find the vector x_a that minimizes it. The regularization parameter a is determined uniquely from the condition

||Ã·x_a - b̃|| = d,   (5)

where d is the error level of the input data introduced above.

Degenerate and ill-conditioned systems may be indistinguishable within a given accuracy. If, however, it is known that system (1) is solvable, then instead of condition (5) the generalized discrepancy condition should be used:

||Ã·x_a - b̃|| = d + h·||x_a||.   (6)

The components of the vector x_a are the solution of a system of linear algebraic equations that follows from the condition for the minimum of the functional (4) and has the form

(a·E + Ã*·Ã)·x = Ã*·b̃ + a·x1,   (7)

where E is the identity matrix and Ã* is the Hermitian-conjugate (conjugate-transpose) matrix of Ã.

In practice, the choice of the vector x1 requires additional considerations. If there are none, one takes x1 = 0.

For x1 = 0, system (7) takes the form

(a·E + Ã*·Ã)·x = Ã*·b̃.   (8)

The vector x_a found from it is an approximate normal solution of system (1).
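As a sketch of system (8) in Python/NumPy (with a purely illustrative Hermitian 2x2 matrix): for a complex matrix the Hermitian conjugate is the conjugate transpose, which for real matrices reduces to the ordinary transpose.

```python
import numpy as np

def regularized_normal_solution(A, b, a):
    """Solve system (8): (a*E + A*.A) x = A*.b, where A* is the
    Hermitian conjugate (conjugate transpose) of A."""
    n = A.shape[1]
    A_star = A.conj().T                      # Hermitian conjugate of A
    return np.linalg.solve(a * np.eye(n) + A_star @ A, A_star @ b)

# Hypothetical well-behaved Hermitian matrix, used only to illustrate
# the conjugate transpose; the exact solution is [1, 1] by construction.
A = np.array([[2.0, 1j],
              [-1j, 2.0]])
b = A @ np.array([1.0, 1.0])
x = regularized_normal_solution(A, b, 1e-6)  # close to [1, 1] for small a
```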

Let us now consider the choice of the parameter a. If a = 0, system (7) turns into an ill-conditioned system. If a is large, system (7) is well-conditioned, but the regularized solution is not close to the desired solution of system (1). Therefore, too large or too small an a is unsuitable.

In practice, the calculation is usually carried out for a sequence of values of the parameter a, for example a decreasing geometric progression a_k = a_0·q^k (0 < q < 1). For each value of a, the element x_a minimizing the functional (4) is found. As the desired value of the regularization parameter one takes the a for which equality (5) or (6) is satisfied with the required accuracy.

III. EXERCISE

1. Construct a system of linear algebraic equations, consisting of three equations in three unknowns, with a determinant whose value is of the order of 10^-6.

2. Construct a second system, similar to the first but with free terms that differ from the free terms of the first system by 0.00006.

3. Solve the constructed systems by the regularization method (taking x1 = 0 and d = 10^-4) and by some other method (for example, the Gauss method).

4. Compare the results obtained and draw conclusions about the applicability of the methods used.
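A possible sketch of this exercise in Python with NumPy (all numerical values are hypothetical choices, not prescribed by the assignment): a 3x3 matrix with determinant of order 10^-6, a perturbation of one free term by 0.00006, and a comparison of the direct solution with the regularized one.

```python
import numpy as np

# Hypothetical 3x3 system: det(A) is approximately -3e-6, and the exact
# solution of the first system is [1, 1, 1] by construction.
A  = np.array([[1.0, 2.0, 3.0],
               [4.0, 5.0, 6.0],
               [7.0, 8.0, 9.000001]])
b1 = A @ np.array([1.0, 1.0, 1.0])
b2 = b1 + np.array([0.00006, 0.0, 0.0])   # free terms shifted by 6e-5

x_gauss1 = np.linalg.solve(A, b1)         # close to [1, 1, 1]
x_gauss2 = np.linalg.solve(A, b2)         # far from x_gauss1: instability

def regularized(b, a=1e-6):
    """System (8) with x1 = 0; a is a hypothetical regularization parameter."""
    return np.linalg.solve(a * np.eye(3) + A.T @ A, A.T @ b)

x_reg1, x_reg2 = regularized(b1), regularized(b2)
# The regularized solutions stay close to each other, unlike the direct ones.
```

The tiny perturbation of the free terms throws the direct solution far off, while the two regularized solutions nearly coincide, which is the expected conclusion of step 4.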

IV. FORMULATION OF THE REPORT

The report must present:

1. Title of the work.

2. Statement of the problem.

3. Description of the solution algorithm (method).

4. Text of the program with a description.

5. Results of the program.

BIBLIOGRAPHICAL LIST

1. Tikhonov A.N., Arsenin V.Ya. Methods for Solving Ill-Posed Problems. Moscow: Nauka, 1979. 286 p.

2. Bakhvalov N.S., Zhidkov N.P., Kobelkov G.M. Numerical Methods. Moscow: BINOM. Laboratory of Knowledge, 2007. 636 p.


Laboratory work No. 23


SOLUTION OF ILL-CONDITIONED SPARSE SYSTEMS OF LINEAR ALGEBRAIC EQUATIONS USING KRYLOV SUBSPACES

Guseva Yulia Sergeevna

student at Samara State Aerospace University named after S.P. Korolev,

Samara

E-mail:

Gogoleva Sofya Yurievna

Associate Professor at Samara State Aerospace University named after S.P. Korolev,

Samara

E-mail:

Introduction

Mathematical models of many practical problems lead to SLAEs with large, sparse coefficient matrices, in which most elements are zero. Calling a matrix sparse amounts to asserting the existence of an algorithm that exploits its sparsity. When a large proportion of the matrix coefficients are zeros, we clearly do not want to store all those zeros in the computer's memory. Matrix algorithms must therefore be designed so that only nonzero elements are processed and so that, using prior knowledge of the locations of the nonzero elements, operations such as addition of zero or multiplication by zero are avoided. Then the number of operations performed by the machine is proportional to the number of nonzero elements rather than to the total number of elements of the matrix. A serious problem when working with sparse matrices is numerical stability.
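The idea that the operation count is proportional to the number of nonzeros can be illustrated with a minimal compressed-sparse-row (CSR) sketch in Python with NumPy (the matrix values are hypothetical):

```python
import numpy as np

# Minimal CSR (compressed sparse row) storage: only the nonzero values,
# their column indices, and row pointers are kept. This encodes the 4x4
# matrix [[4,0,0,1],[0,3,0,0],[0,0,5,0],[1,0,0,2]], which has 6 nonzeros.
data    = np.array([4.0, 1.0, 3.0, 5.0, 1.0, 2.0])   # nonzero values
indices = np.array([0, 3, 1, 2, 0, 3])               # their column indices
indptr  = np.array([0, 2, 3, 4, 6])                  # row i is data[indptr[i]:indptr[i+1]]

def csr_matvec(data, indices, indptr, x):
    """Matrix-vector product touching only the nonzeros: O(nnz) work."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

y = csr_matvec(data, indices, indptr, np.ones(4))    # row sums of the matrix
```

Only 6 values are stored instead of 16, and the matvec performs 6 multiplications instead of 16.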

When methods such as Gaussian elimination require too much time or memory, iterative methods are used to solve the system. When solving ill-conditioned sparse SLAEs it is necessary to choose a method that yields an accurate result with the least fill-in (the appearance of new nonzero elements). The most effective and stable iterative methods for such systems are the so-called projection methods, especially the class based on projection onto Krylov subspaces. These methods have a number of advantages: they are stable, they parallelize efficiently, and they work with various row (column) storage formats and with preconditioners of different types.
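As an illustration of the simplest method of this class, here is a bare-bones conjugate gradient (CG) sketch in Python with NumPy, suitable for symmetric positive definite matrices (the 2x2 system is hypothetical); production codes would use a library implementation instead:

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=1000):
    """Conjugate gradient iteration: builds the solution in the Krylov
    subspace span{b, Ab, A^2 b, ...} for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # step length along the search direction
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # new A-conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # hypothetical SPD matrix
b = np.array([1.0, 2.0])
x = cg(A, b)   # in exact arithmetic CG converges in at most n iterations
```

Note that the iteration touches A only through matrix-vector products, which is precisely why Krylov methods combine so well with sparse storage formats.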

Formulation of the problem

Consider a system of linear algebraic equations

Ax = b,

where A is an ill-conditioned sparse matrix and b is a known right-hand-side vector.

This paper gives a comparative analysis of iterative methods for solving ill-conditioned sparse SLAEs. The following methods were selected: the conjugate gradient method (CG), the minimal residual method (MinRes), the conjugate gradient squared method (CGS), and the quasi-minimal residual method (QMR).

When choosing one or another method for solving SLAEs, it is important to take into account the structure of the matrix A. This is due to the fact that not every method makes it possible to obtain a guaranteed result for a certain system of linear equations.

Thus, the criterion for comparing iterative methods for solving SLAEs will be: the error of the results, the speed of convergence, and the structure of the matrix.

The results of the numerical experiments showed that for SLAEs whose matrix A is symmetric, or nonsymmetric with well-conditioned normal equations, it is better to use the CG method. If the matrix A is symmetric and ill-conditioned, the MinRes method showed the best convergence; for a nonsymmetric, ill-conditioned A, the quasi-minimal residual method did.

To improve the convergence rate of iterative methods, preconditioning of the system matrix is used: a preconditioning matrix is chosen such that solving the preconditioned SLAE is not too labor-intensive and is numerically stable. The right choice of preconditioner for a specific problem can greatly speed up convergence; in fact, a good preconditioner is often necessary for an iterative method to converge at all.
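The effect of preconditioning can be shown with a much simpler preconditioner than the LU- and QR-based ones studied in the paper: a diagonal (Jacobi) preconditioner, sketched below in Python/NumPy on a hypothetical badly scaled matrix, already reduces the condition number by orders of magnitude.

```python
import numpy as np

# Hypothetical badly scaled matrix; cond(A) is on the order of 1e4.
A = np.array([[1e4, 1.0],
              [1.0, 1.0]])
b = A @ np.array([1.0, 1.0])        # exact solution is [1, 1]

# Jacobi (diagonal) left preconditioner: M = diag(A),
# replacing Ax = b by (M^-1 A) x = M^-1 b with the same solution.
M_inv = np.diag(1.0 / np.diag(A))

print(np.linalg.cond(A))            # large
print(np.linalg.cond(M_inv @ A))    # small after preconditioning
x = np.linalg.solve(M_inv @ A, M_inv @ b)
```

An iterative solver applied to the preconditioned system would need far fewer iterations, which is the motivation behind the LU- and QR-based preconditioners compared in Table 1.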

In this paper, several types of preconditioning were considered for the method of quasi-minimal residuals with sparse ill-conditioned matrices: left and right preconditioning using QR decomposition, left and right preconditioning using LU decomposition, as well as using a modification of LU decomposition.

Table 1. Comparison of the relative error of the preconditioners for each test matrix: LU decomposition (left and right), modified LU decomposition (left and right), and QR decomposition (left and right).

Conclusion

The article considered the method of quasi-minimal residuals in relation to solving sparse ill-conditioned SLAEs and various options for choosing a preconditioner. The method of quasi-minimal residuals, based on the use of a preconditioner obtained by modifying the LU decomposition, gave the best result in terms of numerical stability.

Bibliography:

1. Golub G., Van Loan Ch. Matrix Computations / Ed. V.V. Voevodin. Moscow: Mir, 1999. 548 p.

2. Demmel J. Applied Numerical Linear Algebra / Transl. from English by Kh.D. Ikramov. Moscow: Mir, 2001. 430 p.

3. Pissanetsky S. Sparse Matrix Technology / Ed. Kh.D. Ikramov. Moscow: Mir, 1988. 410 p.

4. Stankevich I.V. Storage and use of sparse matrices in finite-element technology // Science and Education. 2005. October 10.

5. Tewarson R. Sparse Matrices / Ed. Kh.D. Ikramov. Moscow: Mir, 1977. 172 p.

6. Bucker M., Basermann A. A comparison of QMR, CGS and TFQMR on a distributed memory machine // Mathematics of Computation. 1994. May 31.

7. Harwell-Boeing Collection. URL: http://math.nist.gov/MatrixMarket/data/Harwell-Boeing/ (accessed 15.12.2012).

8. Freund R.W., Nachtigal N.M. QMR: a Quasi-Minimal Residual Method for Non-Hermitian Linear Systems. 1991. No. 60. P. 315-339.

9. Saad Y. Iterative Methods for Sparse Linear Systems. SIAM, 2003. 447 p.


ST. PETERSBURG STATE UNIVERSITY Faculty of Applied Mathematics of Control Processes A. P. IVANOV, Y. V. OLEMSKOY PRACTICUM ON NUMERICAL METHODS MINIMIZATION OF QUADRATIC FUNCTION Methodical

0 g 6 Proceedings FORA CONDITION NUMBER OF A MATRIX AS AN INDICATOR OF STABILITY IN SOLVING APPLIED PROBLEMS R Tsey, MM Shumafov Adygea State University, Maikop Condition number of a matrix

MATRICES, DETERMINANTS, SYSTEMS OF LINEAR EQUATIONS Method of bordering minors for finding the rank of a matrix A = m m m minor Minor k of order k of a matrix A is any determinant of the kth order of this matrix,

LECTURE 4 ITERATIVE METHODS FOR SOLVING SLAE In order to reduce the error associated with rounding, resort to the following algorithm Let u be an exact solution of the system, u a numerical solution Then we introduce

1. Linear systems and matrices 1. Define matrix multiplication. Is this operation commutative? Explain the answer. The product C of matrices A and B is defined as m p m p A B ij = A ik B kj. The operation is not commutative.

MINISTRY OF EDUCATION AND SCIENCE OF THE RUSSIAN FEDERATION TOMSK STATE UNIVERSITY OF CONTROL SYSTEMS AND RADIO ELECTRONICS (TUSUR) Yu.E. Voskoboynikov A.A. Mitzel INCORRECT MATHEMATICAL PROBLEMS

NUMERICAL METHODS OF LINEAR ALGEBRA The section “Numerical methods of linear algebra” discusses numerical methods for solving systems of linear algebraic equations (SLAEs) and numerical methods for solving problems

ANALYTICAL GEOMETRY 3 STREAM Lecturer P. V. Golubtsov 1.1. Vectors. List of questions for the first part of the exam 1. Formulate the definition of linear operations on vectors. List properties of linear operations

Systems of linear algebraic equations Consider a system of m linear algebraic equations with unknowns b b () m m m bm System () is called homogeneous if all its free terms b b b m are equal

4. Systems of linear equations. Basic concepts An equation is called linear if it contains unknowns only to the first degree and does not contain products of unknowns, i.e. if it has the form + + +

Linear algebra Lecture 7 Vectors Introduction In mathematics there are two kinds of quantities: scalars and vectors. A scalar is a number, and a vector is intuitively understood as an object that has magnitude and direction Vector calculus

List of questions for the exam on numerical methods (May 28, 2018) 0.1 Numerical integration 1. List methods for calculating improper integrals. Construct a quadrature formula to calculate the integral

Parallel calculations in tomography Simple iteration method. The method of steepest descent. ART method. SIRT method. In the simple iteration method, the relaxation factors τ k and matrices H k do not depend on the number

Introduction to Linear Matrix Algebra. Definition. A table of m n numbers of the form m m n n mn consisting of m rows and n columns is called a matrix. The elements of the matrix are numbered similarly to the elements of the determinant

LECTURE 7 INTERPOLATION At the last lecture, the problem of solving an overdetermined system was considered. Such a system has the form: a 11 x 1 + a 1 x + + a 1 x = f 1, ( a 1 x 1 + a x + + a x = f, ( a 1 x 1 + a x

THEORY QUESTIONS I. MATRICES, DETERMINANTS 1) Give a definition of a matrix. What are zero and identity matrices? Under what conditions are matrices considered equal? How is the transposition operation performed? When

Lecture 7 REDUCING A SECOND ORDER CURVE TO A CANONICAL FORM. Transformation of bases and coordinates on the plane Let two rectangular Cartesian coordinate systems with a common origin be given on the plane:

Linear algebra Module 1. Linear and Euclidean spaces. Linear operators in linear space Lecture 1.4 Abstract Eigenvectors and eigenvalues ​​of a linear operator, their properties.

UDC. SYNTHESIS OF RECURSIVE DIGITAL FILTERS BY IMPULSE CHARACTERISTICS DETERMINED BY AN ELEMENTARY MATHEMATICAL FUNCTION Nikitin D.A., Khanov V.Kh. Introduction In the modern arsenal of methods for synthesizing recursive

Chapter 8 Functions and graphs Variables and dependencies between them. Two quantities are called directly proportional if their ratio is constant, that is, if =, where is a constant number that does not change with changes

Gauss method (method of eliminating unknowns) Two systems are called equivalent (equivalent) if their solutions coincide. You can go to an equivalent system using elementary transformations

The difficulties associated with solving so-called ill-conditioned systems of linear algebraic equations are well known: small changes in the right-hand sides of such systems can produce large (beyond acceptable limits) changes in the solution.

Consider the system of equations

Az = u, (3; 2,1)

where A is a matrix with elements a_ij, A = (a_ij); z is the unknown vector with coordinates z_j, z = (z_j); u is a known vector with coordinates u_i, u = (u_i); i, j = 1, 2, ..., n. System (3; 2,1) is called degenerate if its determinant is zero, det A = 0. In this case the matrix A has zero eigenvalues. In ill-conditioned systems of this type, the matrix A has eigenvalues close to zero.

If calculations are performed with finite accuracy, it is sometimes impossible to establish whether a given system of equations is degenerate or merely ill-conditioned: within a given precision, ill-conditioned and degenerate systems may be indistinguishable. Clearly, this situation occurs when the matrix A has eigenvalues sufficiently close to zero.
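This indistinguishability is easy to observe numerically. The sketch below (illustrative 2x2 matrices, assuming numpy) compares an exactly singular matrix with a tiny perturbation of it: the determinant formally separates the two cases, but the huge condition number and the near-zero eigenvalue show that at finite precision they behave alike.

```python
import numpy as np

# An exactly singular matrix (second row is twice the first) and a
# perturbation of it that is formally non-degenerate.
A_singular = np.array([[1.0, 2.0],
                       [2.0, 4.0]])           # det A = 0
A_illcond = np.array([[1.0, 2.0],
                      [2.0, 4.0 + 1e-12]])    # det A != 0, but tiny

for A in (A_singular, A_illcond):
    print("det =", np.linalg.det(A),
          " cond =", np.linalg.cond(A),
          " eigenvalues =", np.linalg.eigvalsh(A))
```

Both matrices report a smallest eigenvalue indistinguishable from zero at double precision, which is exactly the situation described above.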

In practical problems, the right-hand side u and the elements of the matrix A, i.e., the coefficients of system (3; 2,1), are often known only approximately. In these cases, instead of system (3; 2,1) we deal with some other system Ãz = ũ such that ||Ã - A|| <= h, ||ũ - u|| <= d, where the meaning of the norms is usually determined by the character of the problem. Having the matrix Ã instead of A, we are still less able to make a definite judgment about the degeneracy or non-degeneracy of system (3; 2,1).

In these cases, all that we know about the exact system Az = u, whose solution must be determined, is that its matrix A and right-hand side u satisfy the inequalities ||Ã - A|| <= h, ||ũ - u|| <= d. But there are infinitely many systems with such initial data (Ã, ũ), and within the known error level they are indistinguishable from one another. Since instead of the exact system (3; 2,1) we have the approximate system Ãz = ũ, we can speak only of finding an approximate solution. Moreover, the approximate system Ãz = ũ may be unsolvable. The question arises:

what should be understood as an approximate solution of system (3; 2,1) in the described situation?

Among the "possible exact systems" there may also be degenerate ones. If they are solvable, then they have infinitely many solutions. Which of them should we speak of finding approximately?

Thus, in a large number of cases we must consider a whole class of systems of equations that are indistinguishable from one another (within a given error level), among which there may be both degenerate and unsolvable ones. The methods for constructing approximate solutions must be the same for every system of this class and sufficiently general. These solutions must be stable under small changes of the initial data of (3; 2,1).

The construction of such methods is based on the idea of "selection". Selection can be carried out by means of special, pre-specified functionals W[z] included in the statement of the problem.

A non-negative functional W[z], defined on a subset F_1 of F that is everywhere dense in F, is called a stabilizing functional if:

  • a) the element z_T belongs to its domain of definition;
  • b) for any number d > 0, the set F_{1,d} of elements z from F_1 for which W[z] <= d is compact in F.

So, let us consider an arbitrary system of linear algebraic equations (SLAE for short)

Az =u, (3; 2,2)

in which z and u are vectors, z = (z_1, z_2, ..., z_n) ∈ R^n, u = (u_1, u_2, ..., u_m) ∈ R^m, and A is a matrix with elements a_ij, A = (a_ij), where j = 1, 2, ..., n; i = 1, 2, ..., m; the number n need not be equal to the number m.

This system can be uniquely solvable, degenerate (and have infinitely many solutions), or unsolvable.

A pseudo-solution of system (3; 2,2) is a vector z that minimizes the residual ||Az - u|| over the whole space R^n. System (3; 2,2) may have more than one pseudo-solution. Let F_A denote the set of all its pseudo-solutions, and let z_1 be some fixed vector of R^n, usually determined by the statement of the problem.
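Numerically, a pseudo-solution is just a least-squares solution. A minimal sketch (illustrative data, assuming numpy): the system below is degenerate and inconsistent, so no exact solution exists, but the residual ||Az - u|| still has a minimizer.

```python
import numpy as np

# Degenerate, inconsistent system: the second equation contradicts
# twice the first, so Az = u has no exact solution.
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
u = np.array([1.0, 3.0])

# lstsq minimizes ||Az - u|| over all of R^n; for a rank-deficient A
# it returns the minimum-norm minimizer among all pseudo-solutions.
z, _, rank, _ = np.linalg.lstsq(A, u, rcond=None)
print(z, rank)   # z is close to [0.7, 0.7], rank is 1
```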

A solution of system (3; 2,2) normal with respect to the vector z_1 is a pseudo-solution z_0 with minimal norm ||z - z_1||, i.e., such that

||z_0 - z_1|| = min ||z - z_1|| over all z in F_A.

Here and in what follows, for simplicity of notation, we assume that z_1 = 0; the solution normal with respect to the vector z_1 = 0 will simply be called the normal solution.

For any system of the form (3; 2,2) a normal solution exists and is unique.
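A quick numerical check of this statement (illustrative consistent degenerate system, assuming numpy): the Moore-Penrose pseudo-inverse picks out the normal solution (for z_1 = 0), and every other solution has a strictly larger norm.

```python
import numpy as np

# Consistent degenerate system: every z with z_1 + z_2 = 2 solves it.
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
u = np.array([2.0, 4.0])

z0 = np.linalg.pinv(A) @ u          # normal solution, close to [1., 1.]
other = np.array([2.0, 0.0])        # another exact solution of Az = u

print(np.allclose(A @ other, u))    # True: 'other' also solves the system
print(np.linalg.norm(z0) < np.linalg.norm(other))  # True: z0 has minimal norm
```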

Remark 1. The normal solution z_0 of system (3; 2,2) can also be defined as a pseudo-solution that minimizes a given positive-definite quadratic form in the coordinates of the vector z - z_1. All the results presented below remain valid in this case.

Remark 2. Let the rank of the matrix A of the degenerate system (3; 2,1) be r < n, and let z_{r+1}, z_{r+2}, ..., z_n be a basis of the linear space N_A consisting of the elements z for which Az = 0, N_A = {z; Az = 0}. The solution z_0 of system (3; 2,1) satisfying the n - r orthogonality conditions

(z_0 - z_1, z_s) = 0, s = r + 1, r + 2, ..., n, (3; 2,3)

is determined uniquely and coincides with the normal solution.
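The orthogonality conditions (3; 2,3) are easy to verify numerically (illustrative example with z_1 = 0, assuming numpy): a basis of N_A can be read off from the SVD, and the normal solution is orthogonal to it.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])           # rank r = 1, so dim N_A = n - r = 1
u = np.array([2.0, 4.0])

# Rows of Vt belonging to (numerically) zero singular values span N_A.
_, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12 * s[0]))    # numerical rank
null_basis = Vt[r:]                  # basis of N_A = {z : Az = 0}

z0 = np.linalg.pinv(A) @ u           # normal solution (z_1 = 0)
print(null_basis @ z0)               # close to [0.]: conditions (3; 2,3) hold
```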

It is easy to see that the problem of finding the normal solution of system (3; 2,2) is ill-posed. Indeed, let A be a symmetric matrix. If it is non-degenerate, then by the orthogonal transformation

z = Vz*, u = Vu*

it can be reduced to diagonal form, and the transformed system takes the form

λ_i z_i* = u_i*, i = 1, 2, ..., n,

where λ_i are the eigenvalues of the matrix A.
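For a concrete symmetric, non-degenerate A this transformation is just the eigendecomposition; a small sketch (illustrative matrix, assuming numpy):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # symmetric, non-degenerate
u = np.array([1.0, 0.0])

lam, V = np.linalg.eigh(A)            # A = V diag(lam) V^T, V orthogonal
u_star = V.T @ u                      # u = V u*
z_star = u_star / lam                 # decoupled equations: lam_i z_i* = u_i*
z = V @ z_star                        # z = V z*

print(np.allclose(A @ z, u))          # True: z solves the original system
```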

If the symmetric matrix A is degenerate and has rank r, then n - r of its eigenvalues are equal to zero. Let

λ_i ≠ 0 for i = 1, 2, ..., r;

λ_i = 0 for i = r + 1, r + 2, ..., n.

We assume that system (3; 2,2) is solvable. In this case, u_i* = 0 for i = r + 1, ..., n.

Let the "initial data" of the system (A and u) be given with an error, i.e., instead of A and u we are given their approximations Ã and ũ:

||Ã - A|| <= h, ||ũ - u|| <= d. (3; 2,4)

Let λ̃_i be the eigenvalues of the matrix Ã. It is known that they depend continuously on Ã in the norm (3; 2,4). Consequently, the eigenvalues λ̃_{r+1}, λ̃_{r+2}, ..., λ̃_n can be arbitrarily small for sufficiently small h.

If they are not equal to zero, then

z̃_i* = ũ_i* / λ̃_i, i = r + 1, ..., n.

Thus, within any arbitrarily small error bounds on A and u there are perturbations of the system for which some of the components z̃_i* take arbitrary predetermined values. This means that the problem of finding the normal solution of system (3; 2,2) is unstable.
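The instability can be demonstrated directly (illustrative numbers, assuming numpy): perturbing a solvable degenerate system by h = 1e-12 in the matrix and about 1e-6 in the right-hand side sends the solution components to order 10^6.

```python
import numpy as np

# Degenerate symmetric system with a solvable right-hand side:
# eigenvalues of A are 2 and 0, and u lies in the range of A.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
u = np.array([2.0, 2.0])             # normal solution is [1., 1.]

h, d = 1e-12, 1e-6
A_t = A + h * np.eye(2)              # ||A_t - A|| = h, now invertible
u_t = u + d * np.array([1.0, -1.0])  # ||u_t - u|| ~ 1.4e-6

z_t = np.linalg.solve(A_t, u_t)
print(z_t)                           # roughly [1e6, -1e6]: a tiny data
                                     # error caused an enormous change
```

The perturbation of u was chosen along the eigenvector with the near-zero eigenvalue, so its contribution to the solution is divided by h, exactly the mechanism described above.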

Below we describe a method, based on the regularization method, for finding the normal solution of system (3; 2,2) that is stable under small (in the norm (3; 2,4)) perturbations of the right-hand side u.
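As a preview of that idea, here is a minimal Tikhonov-style sketch (illustrative, assuming numpy; the actual construction with a stabilizing functional is developed in the text): instead of minimizing ||Az - u|| alone, one minimizes ||Az - u||^2 + alpha ||z||^2, whose unique minimizer solves (A^T A + alpha I) z = A^T u and approaches the normal solution as alpha tends to zero.

```python
import numpy as np

def regularized_solution(A, u, alpha):
    """Minimizer of ||Az - u||^2 + alpha * ||z||^2 (alpha > 0)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ u)

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])            # degenerate matrix
u = np.array([2.0, 2.0])

z_alpha = regularized_solution(A, u, alpha=1e-8)
print(z_alpha)                        # close to the normal solution [1., 1.]
```

Unlike the unregularized problem, the matrix A^T A + alpha I is always invertible for alpha > 0, which is what restores stability.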
