The determinant is a number that characterizes a square matrix A and is closely related to solving systems of linear equations. The determinant of a matrix A is denoted det A or |A|. Every square matrix A of order n is assigned, by a definite rule, a number called the determinant of order n of this matrix. We first consider determinants of the second and third orders.

Let the matrix

A =
| a11 a12 |
| a21 a22 |,

then its second-order determinant is calculated by the formula

det A = a11·a22 − a12·a21.

Example. Calculate the determinant of matrix A:

Answer: -10.
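The second-order formula is easy to check in a few lines of code. Since the entries of the example matrix were lost in this copy, the matrix below is a hypothetical one, chosen only so that its determinant matches the stated answer −10.

```python
def det2(a11, a12, a21, a22):
    # det A = a11*a22 - a12*a21 for a 2x2 matrix
    return a11 * a22 - a12 * a21

# Hypothetical matrix [[2, 4], [3, 1]]: 2*1 - 4*3 = -10
print(det2(2, 4, 3, 1))  # -10
```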

The third-order determinant is calculated by the formula

det A = a11·a22·a33 + a12·a23·a31 + a13·a21·a32 − a13·a22·a31 − a11·a23·a32 − a12·a21·a33.

Example. Calculate the determinant of matrix B

.

Answer: 83.

The calculation of an nth-order determinant is based on the properties of the determinant and the following Laplace theorem: the determinant is equal to the sum of the products of the elements of any row (column) of the matrix and their algebraic complements:

det A = ai1·Ai1 + ai2·Ai2 + … + ain·Ain (expansion along the i-th row).

The algebraic complement Aij of the element aij equals Aij = (−1)^(i+j)·Mij, where Mij is the minor of that element, obtained by deleting the i-th row and the j-th column in the determinant.

The minor Mij of the element aij of a matrix A of order n is the determinant of order (n−1) obtained from the matrix A by deleting the i-th row and the j-th column.
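The Laplace expansion translates directly into a recursive procedure. The sketch below (not from the original text) expands along the first row using the minors and cofactor signs defined above.

```python
def minor(m, i, j):
    # Matrix obtained by deleting the i-th row and j-th column (0-based).
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    # Laplace expansion along the first row:
    # det A = sum over j of a_1j * (-1)^(1+j) * M_1j
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```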

Example. Find algebraic complements of all elements of matrix A:

.

Answer: .

Example. Calculate the determinant of a triangular matrix:

Answer: -15.
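For a triangular matrix the expansion collapses to the product of the main-diagonal entries. The matrix below is hypothetical (the original example's entries were lost), picked so its determinant equals the stated answer −15.

```python
def det_triangular(m):
    # For an (upper or lower) triangular matrix, the determinant
    # is the product of the main-diagonal elements.
    p = 1
    for i in range(len(m)):
        p *= m[i][i]
    return p

# Hypothetical upper-triangular matrix: 1 * 5 * (-3) = -15
print(det_triangular([[1, 2, 4], [0, 5, 7], [0, 0, -3]]))  # -15
```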

Properties of determinants:

1. If any row (column) of the matrix consists of only zeros, then its determinant is 0.

2. If all the elements of any row (column) of the matrix are multiplied by a number, then its determinant will be multiplied by this number.

3. When transposing a matrix, its determinant will not change.

4. When two rows (columns) of a matrix are interchanged, its determinant changes sign to the opposite.

5. If a square matrix contains two identical rows (columns), then its determinant is 0.

6. If the elements of two rows (columns) of a matrix are proportional, then its determinant is 0.

7. The sum of the products of the elements of any row (column) of a matrix and the algebraic complements of the elements of another row (column) of the same matrix is 0.

8. The determinant of a matrix does not change if to the elements of any row (column) we add the corresponding elements of another row (column) multiplied by the same number.

9. The sum of the products of arbitrary numbers and the algebraic complements of the elements of any row (column) is equal to the determinant of the matrix obtained from the given one by replacing the elements of this row (column) with those numbers.

10. The determinant of the product of two square matrices is equal to the product of their determinants.
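Properties 3 and 10 are easy to verify numerically on small matrices; the sketch below checks them for a pair of hypothetical 2×2 matrices.

```python
def det2x2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 5]]
B = [[2, 0], [1, 4]]
print(det2x2(transpose(A)) == det2x2(A))              # property 3: True
print(det2x2(matmul(A, B)) == det2x2(A) * det2x2(B))  # property 10: True
```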

Inverse matrix.

Definition. A matrix A⁻¹ is called the inverse of a square matrix A if multiplying it by the given matrix both on the right and on the left yields the identity matrix:

A·A⁻¹ = A⁻¹·A = E.

It follows from the definition that only a square matrix can have an inverse; in this case the inverse matrix is also square, of the same order. If the determinant of a square matrix is nonzero, the matrix is called nondegenerate (nonsingular).

Necessary and sufficient condition for the existence of an inverse matrix: an inverse matrix exists (and is unique) if and only if the original matrix is nonsingular.

The first algorithm for calculating the inverse matrix:

1. Find the determinant of the original matrix. If it is non-zero, the matrix is nonsingular and the inverse matrix exists.

2. Find the matrix transposed to A.

3. Find the algebraic complements of the elements of the transposed matrix and compose the adjoint matrix C from them.

4. Calculate the inverse matrix by the formula A⁻¹ = (1/det A)·C.

5. Check the correctness of the calculation using the definition A·A⁻¹ = A⁻¹·A = E.
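Steps 1-4 of this algorithm translate directly into code. The sketch below uses exact fractions and a recursive determinant; it takes the cofactors first and transposes afterwards, which yields the same adjoint matrix. The 2×2 test matrix is hypothetical, not one from the text.

```python
from fractions import Fraction

def det(m):
    # Recursive Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([r[:j] + r[j+1:] for r in m[1:]]) for j in range(len(m)))

def inverse_adjoint(a):
    n, d = len(a), det(a)
    if d == 0:                       # step 1: a singular matrix has no inverse
        raise ValueError("det A = 0, no inverse")
    cof = [[(-1) ** (i + j) * det([r[:j] + r[j+1:]
            for k, r in enumerate(a) if k != i]) for j in range(n)]
           for i in range(n)]        # steps 2-3: algebraic complements
    # step 4: transpose the cofactors (adjoint matrix) and divide by det A
    return [[Fraction(cof[j][i], d) for j in range(n)] for i in range(n)]

A = [[2, 1], [5, 3]]
print(inverse_adjoint(A))  # the inverse equals [[3, -1], [-5, 2]]
```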

Example.

.

Answer: .

The second algorithm for calculating the inverse matrix:

The inverse matrix can be calculated using the following elementary transformations of the rows of the matrix:

swapping two rows;

multiplying a row of the matrix by any non-zero number;

adding to one row of the matrix another row multiplied by any number.

To calculate the inverse of a matrix A, compose the block matrix (A | E); then, by elementary row transformations, reduce A to the identity matrix E. In place of the identity matrix we obtain the matrix A⁻¹.
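The (A | E) procedure can be sketched as follows, using exact fractions to avoid rounding; the test matrix is hypothetical.

```python
from fractions import Fraction

def inverse_gauss_jordan(a):
    # Form the block matrix (A | E), then reduce the left block to E
    # by elementary row operations; the right block becomes the inverse.
    n = len(a)
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        piv = next(r for r in range(col, n) if m[r][col] != 0)  # non-zero pivot
        m[col], m[piv] = m[piv], m[col]                          # swap rows
        p = m[col][col]
        m[col] = [x / p for x in m[col]]                         # scale pivot row to 1
        for r in range(n):
            if r != col and m[r][col] != 0:                      # clear the column
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]

print(inverse_gauss_jordan([[2, 1], [5, 3]]))  # the inverse equals [[3, -1], [-5, 2]]
```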

Example. Calculate the inverse matrix for matrix A:

.

We compose a matrix B of the form:

.

The element a11 = 1, and the first row containing it will be called the pivot row. Let us carry out elementary transformations that turn the first column into a unit column with 1 in the first row. To do this, add the first row, multiplied respectively by 1 and −2, to the second and third rows. As a result of these transformations, we get:

.

Finally we get

.

Where .

Matrix rank. The rank of a matrix A is the highest order of the non-zero minors of this matrix. The rank of the matrix A is denoted rank(A) or r(A).

It follows from the definition that: a) the rank of a matrix does not exceed the smaller of its dimensions, i.e. r(A) ≤ min(m, n); b) r(A) = 0 if and only if all elements of the matrix A are equal to zero; c) for a square matrix of order n, r(A) = n if and only if the matrix A is nonsingular.

Example: calculate the ranks of matrices:

.

Answer: r(A)=1. Answer: r(A)=2.

We call the following matrix transformations elementary:

1) Deleting a zero row (column).

2) Multiplication of all elements of a row (column) of a matrix by a non-zero number.

3) Changing the order of rows (columns) of the matrix.

4) Adding to each element of one row (column) the corresponding elements of another row (column), multiplied by any number.

5) Matrix transposition.

The rank of a matrix does not change under elementary matrix transformations.
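This invariance is exactly what makes Gaussian elimination a practical rank algorithm: reduce the matrix to stepped form and count the non-zero rows. A sketch, with hypothetical test matrices:

```python
from fractions import Fraction

def rank(a):
    # Reduce to stepped form by elementary row operations (which do not
    # change the rank) and count the non-zero rows.
    m = [[Fraction(x) for x in row] for row in a]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

print(rank([[1, 2], [2, 4]]))                   # 1 (proportional rows)
print(rank([[1, 2, 3], [0, 1, 4], [1, 3, 7]]))  # 2 (third row = first + second)
```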

Examples: Calculate matrix , where

; ;

Answer: .

Example: Calculate matrix , where

; ; ; E is the identity matrix.

Answer: .

Example: Calculate matrix determinant

.

Answer: 160.

Example: Determine if matrix A has an inverse, and if so, calculate it:

.

Answer: .

Example: Find the rank of a matrix

.

Answer: 2.

2.4.2. Systems of linear equations.

A system of m linear equations with n variables has the form:

a11·x1 + a12·x2 + … + a1n·xn = b1
a21·x1 + a22·x2 + … + a2n·xn = b2
…
am1·x1 + am2·x2 + … + amn·xn = bm,

where aij and bi are arbitrary numbers called, respectively, the coefficients of the variables and the free terms of the equations. A solution of a system of equations is a set of n numbers (x1, …, xn) whose substitution turns each equation of the system into a true equality.

A system of equations is called consistent if it has at least one solution, and inconsistent if it has no solutions. A consistent system of equations is called definite if it has exactly one solution, and indefinite if it has more than one solution.

Cramer's theorem: Let Δ be the determinant of the matrix A composed of the coefficients of the variables, and Δj the determinant of the matrix obtained from A by replacing the j-th column of this matrix with the column of free terms. Then, if Δ ≠ 0, the system has a unique solution determined by the formulas xj = Δj/Δ (j = 1, 2, …, n). These equations are called Cramer's formulas.
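Cramer's formulas can be sketched directly from the statement above; the 2×2 system below is a hypothetical illustration, not one of the text's examples.

```python
def det(m):
    # Recursive Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([r[:j] + r[j+1:] for r in m[1:]]) for j in range(len(m)))

def cramer(a, b):
    d = det(a)
    if d == 0:
        raise ValueError("determinant is 0: Cramer's formulas do not apply")
    # x_j = d_j / d, where d_j is d with the j-th column replaced by b.
    return [det([row[:j] + [b[i]] + row[j+1:] for i, row in enumerate(a)]) / d
            for j in range(len(a))]

# Hypothetical system: x + y = 3, x - y = 1  ->  x = 2, y = 1
print(cramer([[1, 1], [1, -1]], [3, 1]))  # [2.0, 1.0]
```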

Example. Solve systems of equations using Cramer's formulas:

Answers: (4, 2, 1), (1, 2, 3), (1, −2, 0).

The Gauss method, the method of successive elimination of variables, consists in reducing the system of equations, by means of elementary transformations, to an equivalent system of stepped (or triangular) form, from which all the variables are then found successively, starting from the last.

Example: Solve systems of equations using the Gaussian method.

Answers: (1, 1, 1), (1, −1, 2, 0), (1, 1, 1).
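The elimination-then-back-substitution scheme described above can be sketched as follows; the system solved here is hypothetical.

```python
def solve_gauss(a, b):
    # Forward elimination to triangular (stepped) form,
    # then back-substitution starting from the last variable.
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))  # partial pivoting
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

# Hypothetical system: 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(solve_gauss([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```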

For consistent systems of linear equations, the following statements are true:

· if the rank of the matrix of a consistent system is equal to the number of variables, i.e. r = n, then the system of equations has a unique solution;

· if the rank of the matrix of a consistent system is less than the number of variables, i.e. r < n, then the system is indefinite and has infinitely many solutions.

2.4.3. Technology for performing operations on matrices in the EXCEL environment.

Let's consider some aspects of working with the Excel spreadsheet processor, which allow us to simplify the calculations necessary to solve optimization problems. A spreadsheet processor is a software product designed to automate the processing of data in a tabular form.

Working with formulas. In spreadsheet programs, formulas are used to perform many different calculations. Using Excel, you can quickly create a formula. A formula has three main parts:

the equal sign;

operands (values or cell references);

operators.

Using functions in formulas. To make entering formulas easier, you can use Excel functions. Functions are formulas built into Excel. To activate a particular function, choose Insert, then Function. In the Function Wizard window that appears, a list of function categories is shown on the left; after selecting a category, a list of the functions themselves appears on the right. A function is chosen by clicking its name.

When performing operations on matrices, solving systems of linear equations, and solving optimization problems, you can use the following Excel functions:

MMULT - matrix multiplication;

TRANSPOSE - matrix transposition;

MDETERM - calculation of the determinant of a matrix;

MINVERSE - calculation of the inverse matrix.

(In the Russian locale these functions are named МУМНОЖ, ТРАНСП, МОПРЕД and МОБР.)

The button is on the toolbar. Functions for performing operations with matrices are in the category Mathematical.

Matrix multiplication with the MMULT function. The MMULT function returns the product of two matrices (stored in array 1 and array 2). The result is an array with the same number of rows as array 1 and the same number of columns as array 2.

Example. Find the product of two matrices A and B in Excel (see Figure 2.9):

; .

Enter matrices A in cells A2:C3 and B in cells E2:F4.

Select the range of cells for the multiplication result - H2:I3 (a 2×3 matrix times a 3×2 matrix gives a 2×2 result).

Enter the formula for matrix multiplication =MMULT(A2:C3, E2:F4).

Press CTRL+SHIFT+ENTER.
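The dimension rule that MMULT enforces (rows of the first array × columns of the second) is easy to mirror outside Excel; a Python sketch with hypothetical data:

```python
def mmult(a, b):
    # Same contract as Excel's MMULT: the result has as many rows as
    # array 1 and as many columns as array 2.
    assert len(a[0]) == len(b), "inner dimensions must agree"
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# A 2x3 array times a 3x2 array gives a 2x2 result,
# matching the A2:C3 * E2:F4 layout in the example.
A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8], [9, 10], [11, 12]]
print(mmult(A, B))  # [[58, 64], [139, 154]]
```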

Inverse matrix calculation using the MINVERSE function.

The MINVERSE function returns the inverse of the matrix stored in an array. Syntax: MINVERSE(array). Fig. 2.10 shows the solution of the example in the Excel environment.

Example. Find the matrix inverse to the given one:

.

Figure 2.9. Initial data for matrix multiplication.

.
Lecture 6
4.6 Determinant of the product of two square matrices.

The product of two square matrices of order n is always defined. Here the following theorem is of great importance.

Theorem. The determinant of the product matrix is equal to the product of the determinants of the factor matrices: det(AB) = det A · det B.

Proof. Let

and
,

.

Compose an auxiliary determinant

.

By the corollary of Laplace's theorem, we have:

.

So, Δ = det A · det B; we will show that Δ = det(AB) as well. To do this, we transform the determinant as follows: the first n columns, multiplied respectively by b11, …, bn1, are added to the (n+1)-th column; then the first n columns, multiplied respectively by b12, …, bn2, are added to the (n+2)-th column, and so on. At the last step, the first n columns, multiplied respectively by b1n, …, bnn, are added to the (2n)-th column. As a result, we get the determinant

.

Expanding the resulting determinant using the Laplace theorem in terms of the last n columns, we find:



Thus, we have proved the equalities Δ = det A · det B and Δ = det(AB), from which it follows that det(AB) = det A · det B.
4.7 Inverse matrix

Definition 1. Let a square matrix A of order n be given. A square matrix A⁻¹ of the same order is called the inverse of the matrix A if A·A⁻¹ = A⁻¹·A = E, where E is the identity matrix of order n.

Statement. If a matrix inverse to the matrix A exists, then it is unique.

Proof. Assume that A⁻¹ is not the only matrix inverse to the matrix A, and take another inverse matrix B. Then A·A⁻¹ = A⁻¹·A = E and A·B = B·A = E.

Consider the product A⁻¹·A·B. We have the equalities

A⁻¹·(A·B) = A⁻¹·E = A⁻¹ and (A⁻¹·A)·B = E·B = B,

from which it follows that A⁻¹ = B. Thus, the uniqueness of the inverse matrix is proved.

When proving the theorem on the existence of an inverse matrix, we will need the concept of the adjoint matrix.

Definition 2. The matrix C, whose elements are the algebraic complements of the elements of the matrix A (written in transposed order), is called the adjoint matrix of the matrix A.

Note that in order to construct the adjoint matrix C one must replace the elements of A with their algebraic complements, and then transpose the resulting matrix.

Definition 3. A square matrix A is called nondegenerate if det A ≠ 0.

Theorem. For the matrix A to have an inverse matrix A⁻¹, it is necessary and sufficient that the matrix A be nondegenerate. In this case, the inverse matrix is determined by the formula

A⁻¹ = (1/det A)·C,   (1)

where C is the adjoint matrix composed of the algebraic complements Aij of the elements of the matrix A.

Proof. Let the matrix A have an inverse matrix A⁻¹. Then the conditions A·A⁻¹ = A⁻¹·A = E are satisfied, which imply det(A·A⁻¹) = det E = 1. From the last equality we get that the determinants det A ≠ 0 and det A⁻¹ ≠ 0; these determinants are related by det A⁻¹ = 1/det A. The matrices A and A⁻¹ are nondegenerate, since their determinants are nonzero.

Now let the matrix A be nondegenerate. Let us prove that the matrix A has an inverse matrix determined by formula (1). For this, consider the product A·C

of the matrices A and C.

By the rule of matrix multiplication, the (i, j)-th element of the product of the matrices A and C has the form ai1·Aj1 + ai2·Aj2 + … + ain·Ajn. Since the sum of the products of the elements of the i-th row and the algebraic complements of the corresponding elements of the j-th row is zero for i ≠ j and equals the determinant det A for i = j, we consequently get

A·C = det A · E,

where E is the identity matrix of order n. Similarly, C·A = det A · E. In this way, A·(C/det A) = (C/det A)·A = E, which means that the matrix (1/det A)·C is the inverse of the matrix A. Therefore, the nonsingular matrix A has an inverse matrix, which is determined by formula (1).

Corollary 1. The determinants of the matrices A and A⁻¹ are related by det A⁻¹ = 1/det A.

Corollary 2. The main property of the adjoint matrix C of the matrix A is expressed by the equalities

A·C = C·A = det A · E.

Corollary 3. The determinant of a nondegenerate matrix A of order n and of its adjoint matrix C are related by the equality det C = (det A)^(n−1).

Corollary 3 follows from the equality A·C = det A · E and the property of determinants according to which, when a matrix of order n is multiplied by a number, its determinant is multiplied by the n-th power of this number. In this case

det A · det C = det(det A · E) = (det A)^n,

whence it follows that det C = (det A)^(n−1).

Example. Find the matrix inverse to the matrix A:

.

Solution. The determinant of the matrix

is different from zero. Therefore, the matrix A has an inverse. To find it, we first calculate the algebraic complements:

,
,
,

,
,
,


,
.

Now, using formula (1), we write the inverse matrix

.
4.8. Elementary transformations over matrices. Gauss algorithm.

Definition 1. By elementary transformations of a matrix of size m × n we mean the following operations.


  1. Multiplication of any row (column) of a matrix by any non-zero number.

  2. Adding to any i-th row of the matrix any of its j-th rows multiplied by an arbitrary number.

  3. Adding to any i-th column of the matrix any of its j-th columns multiplied by an arbitrary number.

  4. Permutation of rows (columns) of a matrix.
Definition 2. Matrices A and B will be called equivalent if one of them can be transformed into the other by elementary transformations. We will write A ~ B.

Matrix equivalence has the following properties: reflexivity (A ~ A), symmetry (if A ~ B, then B ~ A), and transitivity (if A ~ B and B ~ C, then A ~ C).


Definition 3. A matrix A is called stepped if it has the following properties:

1) if the i-th row is zero, i.e. consists only of zeros, then the (i+1)-th row is also zero;

2) if the first non-zero elements of the i-th and (i+1)-th rows are placed in columns with numbers k and l respectively, then k < l.

Example. The matrices

and

are stepped, and the matrix

is not stepped.

Let us show how, using elementary transformations, one can reduce a matrix A to stepped form.

Gauss algorithm. Consider a matrix A of size m × n. Without loss of generality, we may assume that a11 ≠ 0. (If the matrix A has at least one non-zero element, then by interchanging the rows and then the columns one can ensure that this element falls at the intersection of the first row and the first column.) Add to the second row of the matrix A the first row multiplied by −a21/a11, to the third row the first row multiplied by −a31/a11, and so on.

As a result, we get

.

The elements of the last m − 1 rows of the resulting matrix are defined by the formulas

a′ij = aij − (ai1/a11)·a1j, i = 2, …, m; j = 2, …, n.

Consider the matrix formed by the last m − 1 rows and last n − 1 columns of the resulting matrix.

If all elements of this matrix are equal to zero, then the resulting matrix is already an equivalent stepped matrix. If at least one of its elements is nonzero, then we may assume without loss of generality that its leading element is nonzero (this can be achieved by rearranging the rows and columns of the matrix). Transforming this matrix in the same way as the matrix A, we get, respectively,

Continuing in the same way, after finitely many steps we reduce the matrix A to stepped form.

Definition. The rank of a matrix A is the number r such that the matrix has a non-zero minor of order r, while all minors of order above r are equal to zero. The rank of a matrix will be denoted by the symbol rank A.

The rank of a matrix can be calculated by the method of bordering minors.
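The definition behind this method can be checked by brute force: the rank is the largest order for which some minor is non-zero. The sketch below searches all minors (the bordering-minors method prunes this search, but inspects the same kind of determinants); the test matrices are hypothetical.

```python
from itertools import combinations

def det(m):
    # Recursive Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([r[:j] + r[j+1:] for r in m[1:]]) for j in range(len(m)))

def rank_by_minors(a):
    rows, cols = len(a), len(a[0])
    for k in range(min(rows, cols), 0, -1):   # try the largest order first
        for rs in combinations(range(rows), k):
            for cs in combinations(range(cols), k):
                if det([[a[i][j] for j in cs] for i in rs]) != 0:
                    return k
    return 0

print(rank_by_minors([[1, 2], [2, 4]]))                   # 1
print(rank_by_minors([[1, 0, 2], [0, 1, 3], [1, 1, 5]]))  # 2
```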


Example. Calculate the rank of a matrix using the bordering-minors method

.

Solution.


The above method is not always convenient, because it involves calculating a large number of determinants.

Statement. The rank of a matrix does not change under elementary transformations of its rows and columns.

The stated statement indicates the second way to calculate the rank of a matrix. It is called method of elementary transformations . To find the rank of a matrix, it is necessary to bring it to a stepped form using the Gaussian method, and then select the maximum nonzero minor. Let's explain this with an example.

Example. Using elementary transformations, calculate the rank of a matrix

.

Solution. Let's perform a chain of elementary transformations in accordance with the Gauss method. As a result, we obtain a chain of equivalent matrices:

  • 5. The theorem on multiplying a row of the determinant matrix by a number. A determinant with two proportional rows.
  • 6. The theorem on the decomposition of the determinant into a sum of determinants and its consequences.
  • 7. The theorem on the decomposition of the determinant in terms of the elements of the row (column) and the consequences from it.
  • 8. Operations on matrices and their properties. Prove one of them.
  • 9. Matrix transposition operation and its properties.
  • 10. Definition of the inverse matrix. Prove that every invertible matrix has only one inverse.
  • 13. Block matrices. Addition and multiplication of block matrices. Theorem on the determinant of a quasi-triangular matrix.
  • 14. The theorem on the determinant of the product of matrices.
  • 15. The theorem on the existence of an inverse matrix.
  • 16. Determining the rank of a matrix. The basic minor theorem and its corollary.
  • 17. The concept of linear dependence of rows and columns of a matrix. Matrix rank theorem.
  • 18. Methods for calculating the rank of a matrix: the method of bordering minors, the method of elementary transformations.
  • 19. Applying elementary transformations of only rows (only columns) to finding the inverse matrix.
  • 20. Systems of linear equations. The criterion of compatibility and the criterion of certainty.
  • 21. Solution of a joint system of linear equations.
  • 22. Homogeneous systems of linear equations. Theorem on the existence of a fundamental system of solutions.
  • 23. Linear operations on vectors and their properties. Prove one of them.
  • 24. Determination of the difference of two vectors. Prove that for any vectors and the difference exists and is unique.
  • 25. Definition of the basis, the coordinates of the vector in the basis. Theorem on the expansion of a vector in terms of a basis.
  • 26. Linear dependence of vectors. Properties of the concept of linear dependence, prove one of them.
  • 28. Cartesian coordinate systems in space, on a plane and on a straight line. The theorem on a linear combination of vectors and consequences from it.
  • 29. Derivation of formulas expressing the coordinates of a point in one Cartesian coordinate system through the coordinates of the same point in another.
  • 30. Scalar product of vectors. Definition and basic properties.
  • 31. Vector product of vectors. Definition and basic properties.
  • 32. Mixed product of vectors. Definition and basic properties.
  • 33. Double cross product of vectors. Definition and formula for calculation (without proof).
  • 34. Algebraic lines and surfaces. Order invariance (invariance) theorems.
  • 35. General equations of the plane and the straight line.
  • 36. Parametric equations of the line and plane.
  • 37. Transition from the general equations of the plane and the straight line on the plane to their parametric equations. The geometric meaning of the coefficients a, b, c (a, c) in the general equation of the plane (straight line on the plane).
  • 38. Exclusion of a parameter from parametric equations on a plane (in space), canonical equations of a straight line.
  • 39. Vector equations of a straight line and a plane.
  • 40. General equations of a straight line in space, reduction to canonical form.
  • 41. Distance from a point to a plane. The distance from a point to a line. Other problems about lines and planes.
  • 42. Definition of an ellipse. Canonical equation of an ellipse. Parametric equations of an ellipse. Ellipse eccentricity.
  • 44. Definition of a parabola. Derivation of the canonical parabola equation.
  • 45. Curves of the second order and their classification. The main theorem about second-order curves.
  • 46. Surfaces of the second order and their classification. The main theorem about second-order surfaces. Surfaces of revolution.
  • 47. Definition of a linear space. Examples.
  • 49. Definition of Euclidean space. The length of the vector. Angle between vectors. Cauchy-Bunyakovsky inequality. Example.
  • 50. Definition of Euclidean space. Pythagorean theorem. Triangle Inequality Example.
  • 14. The theorem on the determinant of the product of matrices.

    Theorem:

    Proof: Let square matrices A and B of order n be given. Based on the theorem on the determinant of a quasi-triangular matrix, we have a block determinant whose order is 2n. Without changing the determinant, we perform the following transformations on this matrix of order 2n: to the first row we add the rows of the second block multiplied by the corresponding elements of the first row of A. As a result of such a transformation, the first n positions of the first row will all be 0, and the second n positions (in the second block) will contain the sums of the products of the elements of the first row of matrix A with the columns of matrix B. Having done the same transformations with rows 2, …, n, we get the following equality:

    To bring the right-hand determinant to quasi-triangular form, we swap columns 1 and n+1, 2 and n+2, …, n and 2n in it. As a result, we get the equality:

    Comment: It is clear that the theorem is valid for any finite number of matrices. In particular, det(A1·A2·…·Ak) = det A1 · det A2 · … · det Ak.

    15. The theorem on the existence of an inverse matrix.

    Definition: If det A ≠ 0, the matrix is called nonsingular (nondegenerate). If det A = 0, then the matrix is called degenerate (singular).

    Consider an arbitrary square matrix A. From the algebraic complements of the elements of this matrix, we compose a matrix and transpose it. We get the matrix C, which is called the adjoint matrix of A. Calculating the products A·C and C·A, we get A·C = C·A = det A · E. Consequently, A⁻¹ = (1/det A)·C, and thus A⁻¹ exists if det A ≠ 0.

    Thus, the existence of A⁻¹ follows from the nonsingularity of the matrix A. On the other hand, if A has an inverse A⁻¹, then the matrix equation AX = E is solvable; consequently, det A · det X = det E = 1 and det A ≠ 0. Combining the obtained results, we get the statement:

    Theorem: A square matrix over a field P has an inverse if and only if it is nonsingular. If the inverse matrix exists, then it is found by the formula A⁻¹ = (1/det A)·C, where C is the adjoint matrix.

    Comment:



    16. Determining the rank of a matrix. The basic minor theorem and its corollary.

    Definition: The k-th order minor of a matrix A is the k-th order determinant with elements lying at the intersection of any k rows and any k columns.

    Definition: The rank of the matrix A is the highest order of the non-zero minors of this matrix. It is denoted r(A). Clearly, 0 ≤ r(A) ≤ min(m, n). Thus, if r(A) = r, then among the minors of the matrix A there is a minor of order r different from 0, while all minors of order r + 1 and higher are equal to 0.

    Definition: Any minor of the matrix other than 0 whose order is equal to the rank of the matrix is called a basis minor of this matrix. It is clear that a matrix can have several basis minors. The columns and rows that form a basis minor are called basis columns and rows.

    Theorem: In an arbitrary matrix A = (aij)m,n, each column is a linear combination of the basis columns in which the basis minor is located (and similarly for the rows).

    Proof: Let r(A) = r. We choose one basis minor of the matrix. For simplicity, assume that the basis minor is located in the upper left corner of the matrix, i.e. in the first r rows and first r columns; call it Mr. We need to prove that any column of the matrix A is a linear combination of the first r columns, in which the basis minor is located, i.e. that there are numbers λ1, …, λr such that for any k-th column of the matrix A the equality

    ask = λ1·as1 + λ2·as2 + … + λr·asr   (1)

    holds for every row index s.

    Let us add some k-th column and s-th row to the basis minor and denote the resulting determinant of order r + 1 by D. Then D = 0: if the added row or column is among the basis ones, then D = 0 as a determinant with two identical rows (columns); if a new row (column) is added, then D = 0 according to the definition of the rank of the matrix. Expanding the determinant D by the elements of the bottom row, we get

    ask·Ask + as1·As1 + … + asr·Asr = 0,

    from which we obtain equality (1), where λ1, …, λr do not depend on the index s, because the algebraic complements Asj do not depend on the elements of the added s-th row. Equality (1) is the equality we need. Q.E.D.

    Corollary: If A is a square matrix and det A = 0, then one of the columns of the matrix is a linear combination of the remaining columns, and one of the rows is a linear combination of the remaining rows.

    Proof: If the determinant of the matrix A is 0, then the rank of this matrix is at most n − 1, where n is the order of the matrix. Therefore, at least one row or one column is not among the basis ones. This row (column) is linearly expressed through the rows (columns) in which the basis minor is located, and hence is linearly expressed through the remaining rows (columns).

    For det A = 0 it is necessary and sufficient that at least one row (column) be a linear combination of its other rows (columns).

    Theorem. Let A and B be two square matrices of order n. Then the determinant of their product is equal to the product of the determinants, i.e.

    | AB | = | A| | B|.

    Proof. Let A = (aij)n×n, B = (bij)n×n. Consider the determinant d2n of order 2n. By the theorem on the determinant of a quasi-triangular matrix,

    d2n = |A|·|B|·(−1)^(1+…+n+1+…+n) = |A|·|B|.

    If we show that the determinant d2n is equal to the determinant of the matrix C = AB, then the theorem will be proved.

    In d2n we perform the following transformations: to the 1st row we add the (n+1)-th row multiplied by a11, the (n+2)-th row multiplied by a12, and so on, up to the (2n)-th row multiplied by a1n. In the resulting determinant, the first n elements of the first row will be zero, and the other n elements will become:

    a11·b11 + a12·b21 + … + a1n·bn1 = c11;

    a11·b12 + a12·b22 + … + a1n·bn2 = c12;

    …

    a11·b1n + a12·b2n + … + a1n·bnn = c1n.

    Similarly, we get zeros in rows 2, …, n of the determinant d2n, and the last n elements in each of these rows become the corresponding elements of the matrix C. As a result, the determinant d2n is transformed into the equal determinant

    d2n = |C|·(−1)^(1+…+n+…+2n) = |AB|.

    Corollary. The determinant of the product of a finite number of square matrices is equal to the product of their determinants.

    The proof is by induction: |A1·…·Aj+1| = |A1·…·Aj|·|Aj+1| = … = |A1|·…·|Aj+1|. This chain of equalities is true by the theorem.

    INVERSE MATRIX.

    Let A = (aij) (n x n) be a square matrix over the field P.

    Definition 1. Matrix A will be called degenerate if its determinant is equal to 0. Matrix A will be called nondegenerate otherwise.

    Definition 2. Let A ∈ Pn. A matrix B ∈ Pn will be called inverse to A if AB = BA = E.

    Theorem (criterion for matrix invertibility). Matrix A is invertible if and only if it is nondegenerate.

    Proof. Let A have an inverse matrix. Then A·A⁻¹ = E and, applying the theorem on the multiplication of determinants, we get |A|·|A⁻¹| = |E|, or |A|·|A⁻¹| = 1. Consequently, |A| ≠ 0.

    Conversely, let |A| ≠ 0. We must show that there exists a matrix B such that AB = BA = E. As B we take the following matrix:

    where Aij is the algebraic complement of the element aij. Then

    the result will be the identity matrix (it suffices to use Corollaries 1 and 2 of Laplace's theorem), i.e. AB = E. Similarly, it is shown that BA = E.

    Example. For matrix A, find the inverse matrix, or prove that it does not exist.

    det A = −3 ≠ 0, so the inverse matrix exists. Now we calculate the algebraic complements.

    A11 = −3, A21 = 0, A31 = 6

    A12 = 0, A22 = 0, A32 = −3

    A13 = 1, A23 = −1, A33 = −1

    So, the inverse matrix looks like: B =

    Algorithm for finding the inverse matrix for a matrix

    1. Calculate det A.

    2. If det A = 0, then the inverse matrix does not exist. If det A ≠ 0, calculate the algebraic complements.

    3. Place the algebraic complements in the appropriate positions.

    4. Divide all elements of the resulting matrix by det A.

    SYSTEMS OF LINEAR EQUATIONS.

    Definition 1. An equation of the form a1x1 + … + anxn = b, where a1, …, an are numbers and x1, …, xn are unknowns, is called a linear equation with n unknowns.

    A set of s such equations with n unknowns is called a system of s linear equations with n unknowns, i.e.

    (1)
    The matrix A composed of the coefficients of the unknowns of system (1) is called the matrix of system (1).

    If we add a column of free terms to matrix A, then we get the extended matrix of system (1).

    X is the column of unknowns; B is the column of free terms.

    In matrix form, the system has the form: AX=B (2).

    A solution of system (1) is an ordered set of n numbers (α1, …, αn) such that if we substitute x1 = α1, x2 = α2, …, xn = αn into (1), we obtain numerical identities.

    Definition 2. System (1) is called consistent if it has solutions, and inconsistent otherwise.

    Definition 3. Two systems are called equivalent if the sets of their solutions are the same.

    There is a universal way to solve system (1) - the Gauss method (the method of successive elimination of unknowns)

    Let us consider in more detail the case when s = n. There is a Cramer method for solving such systems.

    Let d = det A,

    and let dj be the determinant obtained from d by replacing the j-th column with the column of free terms.

    CRAMER'S RULE

    Theorem (Cramer's rule). If the determinant of the system d ≠ 0, then the system has a unique solution obtained from the formulas:

    x1 = d1 / d …xn = dn / d

    The idea of the proof is to rewrite system (1) in the form of a matrix equation. We put



    and consider the equation AX = B (2) with an unknown column matrix X. Since A, X, B are matrices of dimensions n × n, n × 1, n × 1 respectively, the product of the rectangular matrices AX is defined and has the same dimensions as the matrix B. Thus, equation (2) makes sense.

    The connection between system (1) and equation (2) is that an ordered set (α1, …, αn) is a solution of the system if and only if

    the column composed of these numbers is a solution of equation (2).

    Indeed, this statement means that the equality AX = B holds.

    The last equality, as an equality of matrices, is equivalent to the system of equalities (1),

    which means that (α1, …, αn) is a solution of system (1).

    Thus, the solution of system (1) is reduced to the solution of the matrix equation (2). Since the determinant d of the matrix A is nonzero, A has an inverse matrix A⁻¹. Then AX = B ⇒ A⁻¹(AX) = A⁻¹B ⇒ (A⁻¹A)X = A⁻¹B ⇒ EX = A⁻¹B ⇒ X = A⁻¹B (3). Therefore, if equation (2) has a solution, then it is given by formula (3). On the other hand, A(A⁻¹B) = (AA⁻¹)B = EB = B.

    Therefore, X = A⁻¹B is the only solution of equation (2).

    Since A^-1 = (1/d) · (A ij)^T, where A ij is the algebraic complement of the element a ij in the determinant d, the equality X = A^-1 B gives

    x j = (1/d) · (b 1 A 1j + b 2 A 2j + … + b n A nj) (4).

    In equality (4), the expression in parentheses is the expansion along the elements of the j-th column of the determinant d j , which is obtained from d by replacing the j-th column with the column of free terms. Therefore, x j = d j / d. £

    Corollary. If a homogeneous system of n linear equations in n unknowns has a nonzero solution, then the determinant of this system is equal to zero.
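    As an illustration, Cramer's rule can be sketched in a few lines of code. This is a minimal sketch, not part of the lecture: the function names and the 3x3 system below are chosen for illustration only.

```python
# A minimal sketch of Cramer's rule for a 3x3 system, standard library only.
from copy import deepcopy

def det3(m):
    # Determinant of a 3x3 matrix by the triangle (Sarrus) rule.
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0]
            + m[0][2]*m[1][0]*m[2][1] - m[0][2]*m[1][1]*m[2][0]
            - m[0][0]*m[1][2]*m[2][1] - m[0][1]*m[1][0]*m[2][2])

def cramer(a, b):
    # x_j = d_j / d, where d_j replaces column j of A with the free terms b.
    d = det3(a)
    if d == 0:
        raise ValueError("determinant is zero: Cramer's rule does not apply")
    xs = []
    for j in range(3):
        aj = deepcopy(a)
        for i in range(3):
            aj[i][j] = b[i]          # column j <- column of free terms
        xs.append(det3(aj) / d)
    return xs

# Illustrative system: x + y + z = 6, 2y + 5z = -4, 2x + 5y - z = 27.
A = [[1, 1, 1], [0, 2, 5], [2, 5, -1]]
B = [6, -4, 27]
print(cramer(A, B))  # [5.0, 3.0, -2.0]
```

    Note that each determinant d j is obtained from the same matrix, so the matrix is copied before the column is replaced.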

    Theorem. Let A and B be two square matrices of order n. Then the determinant of their product is equal to the product of their determinants, i.e.

    | AB | = | A| | B|.

    ¢ Let A = (a ij) n x n , B = (b ij) n x n . Consider the determinant d 2n of order 2n whose upper left n x n block is A, upper right block is the zero matrix, lower left block is -E, and lower right block is B. Expanding d 2n by Laplace's theorem along its first n rows, we get

    d 2n = | A | · | B | · (-1)^((1 + ... + n) + (1 + ... + n)) = | A | · | B |.

    If we show that the determinant d 2n is equal to the determinant of the matrix C = AB, then the theorem will be proved.

    Let us perform the following transformations in d 2n: to row 1 add row (n+1) multiplied by a 11 , row (n+2) multiplied by a 12 , and so on up to row 2n multiplied by a 1n . In the resulting determinant, the first n elements of the first row become zero, and the other n elements become:

    a 11 b 11 + a 12 b 21 + ... + a 1n b n1 = c 11 ;

    a 11 b 12 + a 12 b 22 + ... + a 1n b n2 = c 12 ;

    . . .

    a 11 b 1n + a 12 b 2n + ... + a 1n b nn = c 1n .

    Similarly, we obtain zeros in rows 2, ..., n of the determinant d 2n , and the last n elements in each of these rows become the corresponding elements of the matrix C. As a result, the determinant d 2n is transformed into the equal determinant:

    d 2n = | C | · | -E | · (-1)^((1 + ... + n) + ((n+1) + ... + 2n)) = | C | = | AB |,

    since | -E | = (-1)^n and the sign factor is (-1)^(n(2n+1)), so their product is 1. £

    Corollary. The determinant of the product of a finite number of square matrices is equal to the product of their determinants.

    ¢ The proof is by induction on the number of factors: | A 1 ... A i+1 | = | A 1 ... A i | · | A i+1 | = ... = | A 1 | · ... · | A i+1 |. Each step in this chain of equalities is valid by the theorem. £
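    The multiplicativity of the determinant is easy to check numerically. The sketch below is illustrative: it computes determinants by cofactor expansion along the first row (Laplace's theorem) on two small integer matrices chosen for the example.

```python
# Numerical check of |AB| = |A| * |B| on small integer matrices.
def det(m):
    # Cofactor expansion along the first row (Laplace's theorem).
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def matmul(a, b):
    # Product of two square matrices of the same order.
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
B = [[1, 0, 2], [2, 1, 1], [0, 3, 1]]
print(det(matmul(A, B)), det(A) * det(B))  # 80 80
```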

    Inverse matrix.

    Let A = (a ij) n x n be a square matrix over the field P.

    Definition 1. A matrix A is called degenerate if its determinant is equal to 0, and non-degenerate otherwise.

    Definition 2. Let A ∈ P n . A matrix B ∈ P n is called inverse to A if AB = BA = E.

    Theorem (criterion for matrix invertibility). A matrix A is invertible if and only if it is non-degenerate.

    ¢ Suppose A has an inverse matrix. Then AA^-1 = E and, applying the theorem on multiplication of determinants, we obtain | A | · | A^-1 | = | E |, i.e. | A | · | A^-1 | = 1. Therefore, | A | ≠ 0.

    Conversely, let | A | ≠ 0. We must show that there exists a matrix B such that AB = BA = E. As B we take the matrix B = (1/| A |) · (A ij)^T, where A ij is the algebraic complement of the element a ij . Then

    in the product AB the entry in row i and column j equals (1/| A |) · (a i1 A j1 + ... + a in A jn); by Corollaries 1 and 2 of Laplace's theorem (§ 6) this sum equals | A | for i = j and 0 for i ≠ j, so AB = E. Similarly, it is shown that BA = E. £

    Example. For the matrix A = (rows: 1 2 0; 1 1 3; 0 1 0), find the inverse matrix, or prove that it does not exist.

    Since det A = -3 ≠ 0, the inverse matrix exists. Now we compute the algebraic complements.

    A 11 = -3   A 21 = 0   A 31 = 6

    A 12 = 0   A 22 = 0   A 32 = -3

    A 13 = 1   A 23 = -1   A 33 = -1



    So, the inverse matrix is: B = (1/(-3)) · (rows: -3 0 6; 0 0 -3; 1 -1 -1) = (rows: 1 0 -2; 0 0 1; -1/3 1/3 1/3).

    Algorithm for finding the inverse of a matrix A:

    1. Calculate det A.

    2. If det A = 0, the inverse matrix does not exist. If det A ≠ 0, compute the algebraic complements A ij .

    3. Place each algebraic complement A ij in row j, column i (i.e. transpose the matrix of algebraic complements).

    4. Divide all elements of the resulting matrix by det A.
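    The four steps above can be sketched as follows. This is a minimal illustration using exact fractions; the sample matrix and all names are chosen for the example only.

```python
# Sketch of the cofactor algorithm for the inverse matrix, with exact
# fractions so that no rounding occurs.
from fractions import Fraction

def det(m):
    # Cofactor expansion along the first row.
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(n))

def inverse(a):
    n = len(a)
    d = det(a)                                   # step 1: det A
    if d == 0:
        raise ValueError("degenerate matrix: no inverse")  # step 2
    def cof(i, j):
        # Algebraic complement A_ij: signed minor without row i, column j.
        minor = [row[:j] + row[j+1:] for k, row in enumerate(a) if k != i]
        return (-1) ** (i + j) * det(minor)
    # steps 3-4: transpose the cofactor matrix and divide by det A.
    return [[Fraction(cof(j, i), d) for j in range(n)] for i in range(n)]

A = [[1, 2, 0], [1, 1, 3], [0, 1, 0]]
print(inverse(A))   # rows of A^-1 as exact fractions
```

    Transposing before dividing is exactly step 3 of the algorithm: the complement A ij lands in row j, column i.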

    Exercise 1. Show that the inverse matrix, if it exists, is unique.

    Exercise 2. Let the elements of the matrix A be integers. Will the elements of the inverse matrix also be integers?

    Systems of linear equations.

    Definition 1. An equation of the form a 1 x 1 + ... + a n x n = b, where a 1 , ... , a n are numbers and x 1 , ... , x n are unknowns, is called a linear equation in n unknowns.

    A set of s such equations is called a system of s linear equations in n unknowns, i.e.

    The matrix A, composed of the coefficients of the unknowns of system (1), is called the matrix of system (1).

    If we add the column of free terms to the matrix A, we obtain the extended matrix of system (1).

    X is the column of unknowns; B is the column of free terms.

    In matrix form, system (1) is written as: AX = B (2).
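    The matrix form can be sketched directly: multiplying the system matrix by a candidate solution column must reproduce the column of free terms. The 2x2 system below is illustrative, not from the text.

```python
# Sketch of the matrix form AX = B: a column X solves the system
# exactly when the product AX equals the column of free terms B.
def matvec(a, x):
    # Product of a matrix and a column (row-by-column rule).
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in a]

A = [[2, 1], [1, -1]]   # matrix of the system: 2x + y = 5, x - y = 1
B = [5, 1]              # column of free terms
X = [2, 1]              # candidate solution: x = 2, y = 1
print(matvec(A, X) == B)  # True: (2, 1) solves the system
```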


    THEME 3. Polynomials in one variable.