Lemma 1. If at least one row (column) of an n × n matrix is zero, then the rows (columns) of the matrix are linearly dependent.

Proof: Let the first row be the zero row. Then

$a_1 A_1 + a_2 A_2 + \dots + a_n A_n = 0$ with $a_1 = 1$, $a_2 = \dots = a_n = 0$,

where $a_1 \neq 0$, i.e. the rows admit a non-trivial linear combination equal to zero. Which is what was required.

Definition: A matrix whose elements below the main diagonal are equal to zero is called triangular:

$a_{ij} = 0$ for $i > j$.

Lemma 2: The determinant of a triangular matrix is equal to the product of the elements of the main diagonal.

The proof is easy to carry out by induction on the dimension of the matrix.
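
As a quick numerical illustration of Lemma 2 (a minimal sketch assuming numpy is available; the particular matrix is an arbitrary example introduced here, not one from the text):

```python
import numpy as np

# An upper-triangular matrix: all entries below the main diagonal are zero.
T = np.array([[2.0, 5.0, -1.0],
              [0.0, 3.0,  4.0],
              [0.0, 0.0, -2.0]])

# Lemma 2: the determinant equals the product of the main-diagonal entries.
print(np.prod(np.diag(T)))    # -12.0
print(np.linalg.det(T))       # -12.0 (up to floating-point rounding)
```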

Theorem on the linear dependence and independence of the rows (columns) of a square matrix: the rows (columns) of an n × n matrix A are linearly dependent if and only if its determinant D equals zero (equivalently, they are linearly independent if and only if D ≠ 0).

a) Necessity: if the columns are linearly dependent, then D = 0.

Proof: Let the columns be linearly dependent, that is, there exist numbers $a_j$, $j = 1, \dots, n$, not all equal to zero, such that

$a_1 A_1 + a_2 A_2 + \dots + a_n A_n = 0,$

where $A_j$ are the columns of the matrix A. Let, for example, $a_n \neq 0$.

Put $a_j^* = a_j / a_n$ for $j \le n-1$; then $a_1^* A_1 + a_2^* A_2 + \dots + a_{n-1}^* A_{n-1} + A_n = 0$.

Replace the last column of the matrix A by

$A_n^* = a_1^* A_1 + a_2^* A_2 + \dots + a_{n-1}^* A_{n-1} + A_n = 0.$

By the property of the determinant proved above (the determinant does not change if another column, multiplied by a number, is added to any column of the matrix), the determinant of the new matrix is equal to the determinant of the original one. But the new matrix has a zero column, so expanding the determinant along this column we get D = 0, Q.E.D.

b) Sufficiency: an n × n matrix with linearly independent rows can always be reduced to triangular form by transformations that do not change the absolute value of the determinant. The independence of the rows of the original matrix then implies that its determinant is non-zero.

1. If in an n × n matrix with linearly independent rows the element $a_{11}$ is equal to zero, swap the first column with a column containing an element $a_{1j} \neq 0$; by Lemma 1 such an element exists (otherwise the first row would be zero and the rows would be linearly dependent). The determinant of the transformed matrix can differ from the determinant of the original one only in sign.

2. From each row with number $i > 1$ subtract the first row multiplied by $a_{i1} / a_{11}$. As a result, the entries of the first column in the rows with $i > 1$ become zero.

3. Expand the determinant of the resulting matrix along the first column. Since all of its elements except the first are zero,

$D^{\text{new}} = a_{11}^{\text{new}} \, (-1)^{1+1} D_{11}^{\text{new}},$

where $D_{11}^{\text{new}}$ is the determinant of the smaller matrix obtained by deleting the first row and the first column.

Next, to compute the determinant $D_{11}$, repeat steps 1, 2, 3 until the last determinant is that of a 1 × 1 matrix. Since step 1 only changes the sign of the determinant of the matrix being transformed, and step 2 does not change the value of the determinant at all, up to sign we eventually obtain the determinant of the original matrix. Moreover, since the linear independence of the rows of the original matrix guarantees that step 1 is always feasible, all elements of the main diagonal turn out to be non-zero. Thus, the determinant obtained by the above algorithm equals the product of non-zero elements of the main diagonal. Therefore, the determinant of the original matrix is not equal to zero. Q.E.D.
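
The triangularization argument of steps 1–3 can be turned into a small computation. Below is a minimal sketch (assuming numpy; the function name det_by_triangularization and the sample matrix are illustrative choices, not part of the original text): column swaps flip the sign of the determinant, row subtractions leave it unchanged, and the result is the signed product of the diagonal.

```python
import numpy as np

def det_by_triangularization(matrix):
    """Compute the determinant by the elimination steps 1-3 described above.

    Step 1: if the pivot a_kk is zero, swap in a column with a non-zero entry
            in row k (this only changes the sign of the determinant).
    Step 2: subtract multiples of row k from the rows below it, producing
            zeros in column k (this does not change the determinant).
    Step 3: the determinant is the product of the diagonal entries, restored
            to the correct sign.
    """
    a = np.array(matrix, dtype=float)
    n = a.shape[0]
    sign = 1.0
    for k in range(n):
        if np.isclose(a[k, k], 0.0):
            # find a column to the right with a non-zero entry in row k
            cols = np.nonzero(~np.isclose(a[k, k:], 0.0))[0]
            if cols.size == 0:
                return 0.0  # the rest of row k is zero: the rows are dependent
            j = k + cols[0]
            a[:, [k, j]] = a[:, [j, k]]
            sign = -sign
        for i in range(k + 1, n):
            a[i, :] -= a[i, k] / a[k, k] * a[k, :]
    return sign * np.prod(np.diag(a))

A = [[0.0, 2.0, 1.0],
     [1.0, 1.0, 0.0],
     [3.0, 0.0, 2.0]]
print(det_by_triangularization(A), np.linalg.det(A))  # both give -7.0
```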


Appendix 2

Below we give several criteria for the linear dependence and, accordingly, the linear independence of systems of vectors.

Theorem. (A necessary and sufficient condition for the linear dependence of vectors.)

A system of vectors is linearly dependent if and only if one of the vectors of the system is linearly expressed in terms of the other vectors of this system.

Proof. Necessity. Let the system $a_1, a_2, \dots, a_n$ be linearly dependent. Then, by definition, it represents the zero vector non-trivially, i.e. there is a non-trivial linear combination of this system of vectors equal to the zero vector:

$\alpha_1 a_1 + \alpha_2 a_2 + \dots + \alpha_n a_n = 0,$

where at least one of the coefficients of this linear combination is not equal to zero. Let $\alpha_k \neq 0$.

Divide both sides of the previous equality by this non-zero coefficient (i.e. multiply by $1/\alpha_k$):

$a_k = -\frac{\alpha_1}{\alpha_k} a_1 - \dots - \frac{\alpha_{k-1}}{\alpha_k} a_{k-1} - \frac{\alpha_{k+1}}{\alpha_k} a_{k+1} - \dots - \frac{\alpha_n}{\alpha_k} a_n.$

Denote $\beta_j = -\alpha_j / \alpha_k$ for $j \neq k$; then $a_k = \beta_1 a_1 + \dots + \beta_{k-1} a_{k-1} + \beta_{k+1} a_{k+1} + \dots + \beta_n a_n$,

i.e. one of the vectors of the system is linearly expressed in terms of the other vectors of this system, as required.

Sufficiency. Let one of the vectors of the system be linearly expressed in terms of the other vectors of this system:

$a_k = \beta_1 a_1 + \dots + \beta_{k-1} a_{k-1} + \beta_{k+1} a_{k+1} + \dots + \beta_n a_n.$

Move the vector $a_k$ to the other side of this equality:

$\beta_1 a_1 + \dots + \beta_{k-1} a_{k-1} - a_k + \beta_{k+1} a_{k+1} + \dots + \beta_n a_n = 0.$

Since the coefficient of the vector $a_k$ is $-1 \neq 0$, we have a non-trivial representation of zero by the system of vectors, which means that this system of vectors is linearly dependent, as required.

The theorem has been proven.

Corollary.

1. A system of vectors in a vector space is linearly independent if and only if none of the vectors of the system is linearly expressed in terms of other vectors of this system.

2. A system of vectors containing a zero vector or two equal vectors is linearly dependent.

Proof.

1) Necessity. Let the system be linearly independent. Assume the opposite: there is a vector of the system that is linearly expressed in terms of the other vectors of this system. Then, by the theorem, the system is linearly dependent, and we arrive at a contradiction.

Sufficiency. Let none of the vectors of the system be expressed in terms of the others. Assume the opposite: the system is linearly dependent. But then it follows from the theorem that there is a vector of the system that is linearly expressed in terms of the other vectors of this system, and we again arrive at a contradiction.

2a) Let the system contain a zero vector. Assume for definiteness that it is the first vector: $a_1 = 0$. Then the equality

$a_1 = 0 \cdot a_2 + 0 \cdot a_3 + \dots + 0 \cdot a_n$

holds, i.e. one of the vectors of the system is linearly expressed in terms of the other vectors of this system. It follows from the theorem that such a system of vectors is linearly dependent, as required.

Note that this fact can also be proved directly from the definition of a linearly dependent system of vectors.

Since $a_1 = 0$, the following equality is obvious:

$1 \cdot a_1 + 0 \cdot a_2 + \dots + 0 \cdot a_n = 0.$

This is a non-trivial representation of the zero vector, which means that the system is linearly dependent.

2b) Let the system contain two equal vectors. Assume for definiteness that $a_1 = a_2$. Then the equality

$a_1 = 1 \cdot a_2 + 0 \cdot a_3 + \dots + 0 \cdot a_n$

holds, i.e. the first vector is linearly expressed in terms of the other vectors of the same system. It follows from the theorem that this system is linearly dependent, as required.

As before, this assertion can also be proved directly from the definition of a linearly dependent system: the system represents the zero vector non-trivially,

$1 \cdot a_1 - 1 \cdot a_2 + 0 \cdot a_3 + \dots + 0 \cdot a_n = 0,$

whence the linear dependence of the system follows.

The corollary is proved.

Corollary. A system consisting of one vector is linearly independent if and only if this vector is non-zero.

The functions $y_1(x), y_2(x), \dots, y_n(x)$ are called linearly independent if

$\alpha_1 y_1(x) + \alpha_2 y_2(x) + \dots + \alpha_n y_n(x) \equiv 0$ only when $\alpha_1 = \alpha_2 = \dots = \alpha_n = 0$

(only the trivial linear combination of the functions is identically equal to zero). In contrast to the linear independence of vectors, here the linear combination must be identically zero, not merely equal to zero at a single point: the equality of the linear combination to zero must hold for every value of the argument.

The functions $y_1(x), y_2(x), \dots, y_n(x)$ are called linearly dependent if there exists a set of constants, not all equal to zero, such that $\alpha_1 y_1(x) + \alpha_2 y_2(x) + \dots + \alpha_n y_n(x) \equiv 0$ (there exists a non-trivial linear combination of the functions that is identically equal to zero).
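
For instance (a small sketch assuming sympy; the particular functions are chosen only for illustration), the functions sin²x, cos²x and 1 are linearly dependent, because the non-trivial combination sin²x + cos²x − 1 is identically zero:

```python
import sympy as sp

x = sp.symbols('x')
f1, f2, f3 = sp.sin(x)**2, sp.cos(x)**2, sp.Integer(1)

# A non-trivial linear combination (coefficients 1, 1, -1) that vanishes identically.
combination = 1*f1 + 1*f2 - 1*f3
print(sp.simplify(combination))  # 0 for every x, not just at isolated points
```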

Theorem. For the functions to be linearly dependent it is necessary and sufficient that one of them be linearly expressed in terms of the others (i.e. represented as their linear combination).

Prove this theorem yourself; it is proved in the same way as the analogous theorem on the linear dependence of vectors.

The Wronskian determinant.

The Wronskian determinant of the functions $y_1(x), \dots, y_n(x)$ is defined as the determinant whose columns consist of the derivatives of these functions from order zero (the functions themselves) up to order $n-1$:

$W(x) = \begin{vmatrix} y_1 & y_2 & \dots & y_n \\ y_1' & y_2' & \dots & y_n' \\ \dots & \dots & \dots & \dots \\ y_1^{(n-1)} & y_2^{(n-1)} & \dots & y_n^{(n-1)} \end{vmatrix}.$
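
A small computational sketch of this definition, assuming sympy (the helper function and the sample functions 1, x, x² are illustrative assumptions, not taken from the text):

```python
import sympy as sp

x = sp.symbols('x')

def wronskian(functions, x):
    """Determinant of the matrix whose j-th column contains y_j and its
    derivatives of orders 0 through n-1, as in the definition above."""
    n = len(functions)
    rows = [[sp.diff(f, x, k) for f in functions] for k in range(n)]
    return sp.Matrix(rows).det()

# Example: W(1, x, x^2) = 2, a non-zero constant.
print(sp.simplify(wronskian([sp.Integer(1), x, x**2], x)))
```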

Theorem. If the functions $y_1(x), \dots, y_n(x)$ are linearly dependent, then $W(x) \equiv 0$.

Proof. Since the functions are linearly dependent, one of them is linearly expressed in terms of the rest, for example

$y_1(x) = \alpha_2 y_2(x) + \dots + \alpha_n y_n(x).$

This identity can be differentiated, so

$y_1^{(k)}(x) = \alpha_2 y_2^{(k)}(x) + \dots + \alpha_n y_n^{(k)}(x), \qquad k = 1, \dots, n-1.$

Then the first column of the Wronskian determinant is linearly expressed in terms of the remaining columns, so the Wronskian determinant is identically equal to zero.

Theorem. For solutions $y_1(x), \dots, y_n(x)$ of a linear homogeneous differential equation of the n-th order to be linearly dependent, it is necessary and sufficient that $W(x) \equiv 0$.

Proof. Necessity follows from the previous theorem.

Sufficiency. Fix some point $x_0$. Since $W(x_0) = 0$, the columns of the determinant computed at this point are linearly dependent vectors,

i.e. there exist constants $\alpha_1, \dots, \alpha_n$, not all equal to zero, such that the relations

$\alpha_1 y_1^{(k)}(x_0) + \alpha_2 y_2^{(k)}(x_0) + \dots + \alpha_n y_n^{(k)}(x_0) = 0, \qquad k = 0, 1, \dots, n-1,$

hold. Since a linear combination of solutions of a linear homogeneous equation is again a solution, we can introduce the solution

$y(x) = \alpha_1 y_1(x) + \alpha_2 y_2(x) + \dots + \alpha_n y_n(x),$

a linear combination of the solutions with these same coefficients.

Note that this solution satisfies zero initial conditions at $x_0$; this follows from the system of equations written above. But the trivial solution of the linear homogeneous equation also satisfies the same zero initial conditions. Therefore, by the Cauchy theorem, the introduced solution is identically equal to the trivial one, hence

$\alpha_1 y_1(x) + \alpha_2 y_2(x) + \dots + \alpha_n y_n(x) \equiv 0$ with not all $\alpha_i$ equal to zero,

so the solutions are linearly dependent.

Corollary. If the Wronskian determinant built on solutions of a linear homogeneous equation vanishes at least at one point, then it is identically equal to zero.

Proof. If $W(x_0) = 0$ at some point $x_0$, then by the previous theorem the solutions are linearly dependent, and therefore $W(x) \equiv 0$.

Theorem. 1. For the linear dependence of solutions it is necessary and sufficient that $W(x) \equiv 0$ (or that $W(x_0) = 0$ at some point $x_0$).

2. For the linear independence of solutions it is necessary and sufficient that $W(x)$ does not vanish (equivalently, that $W(x_0) \neq 0$ at least at one point).

Proof. The first assertion follows from the theorem proved above and the corollary. The second assertion is easily proved by contradiction.

Let the solutions be linearly independent. If $W(x_0) = 0$ at some point, then by the above the solutions are linearly dependent. Contradiction. Consequently, $W(x) \neq 0$ at every point.

Conversely, let $W(x) \neq 0$. If the solutions were linearly dependent, then $W(x) \equiv 0$, so $W(x)$ would vanish at every point, a contradiction. Therefore, the solutions are linearly independent.

Corollary. The vanishing of the Wronskian determinant at least at one point is a criterion for the linear dependence of solutions of a linear homogeneous equation.

The non-vanishing of the Wronskian determinant is a criterion for the linear independence of solutions of a linear homogeneous equation.
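
To illustrate the criterion on a concrete equation (an example assumed here, not taken from the text): for y'' + y = 0 the solutions cos x and sin x have a Wronskian that never vanishes, so they are linearly independent. A minimal check with sympy:

```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.cos(x), sp.sin(x)   # two solutions of y'' + y = 0

# Wronskian of the two solutions.
W = sp.Matrix([[y1, y2],
               [sp.diff(y1, x), sp.diff(y2, x)]]).det()
print(sp.simplify(W))  # 1: non-zero at every point, hence the solutions are independent
```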

Theorem. The dimension of the space of solutions of a linear homogeneous equation of the n-th order is equal to n.

Proof.

a) Let us show that there are n linearly independent solutions of a linear homogeneous differential equation of the n-th order. Consider solutions $y_1(x), y_2(x), \dots, y_n(x)$ satisfying the following initial conditions at a point $x_0$:

$y_1(x_0) = 1,\ y_1'(x_0) = 0,\ \dots,\ y_1^{(n-1)}(x_0) = 0;$

$y_2(x_0) = 0,\ y_2'(x_0) = 1,\ \dots,\ y_2^{(n-1)}(x_0) = 0;$

...........................................................

$y_n(x_0) = 0,\ y_n'(x_0) = 0,\ \dots,\ y_n^{(n-1)}(x_0) = 1.$

Such solutions exist. Indeed, by the Cauchy theorem, through the point $(x_0, 1, 0, \dots, 0)$ there passes a unique integral curve, the solution $y_1(x)$; through the point $(x_0, 0, 1, \dots, 0)$, the solution $y_2(x)$; and so on, through the point $(x_0, 0, 0, \dots, 1)$, the solution $y_n(x)$.

These solutions are linearly independent, since the matrix of their initial values at $x_0$ is the identity matrix, so $W(x_0) = 1 \neq 0$.

b) Let us show that any solution of a linear homogeneous equation is linearly expressed in terms of these solutions (is their linear combination).

Consider two solutions. One is an arbitrary solution $y(x)$ with initial conditions $y(x_0) = y_0,\ y'(x_0) = y_0',\ \dots,\ y^{(n-1)}(x_0) = y_0^{(n-1)}$; the other is the linear combination

$z(x) = y_0\, y_1(x) + y_0'\, y_2(x) + \dots + y_0^{(n-1)} y_n(x).$

The following relation holds: $z(x)$ satisfies the same initial conditions at $x_0$ as $y(x)$, so by the Cauchy theorem $y(x) \equiv z(x)$, i.e. every solution is a linear combination of the solutions $y_1(x), \dots, y_n(x)$.


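
A concrete sketch of part b) for the example equation y'' + y = 0 (the equation and the symbols y0, v0 are illustrative assumptions): the fundamental solutions chosen in part a) at x₀ = 0 are cos x and sin x, and the combination with coefficients equal to the initial data of an arbitrary solution reproduces that solution.

```python
import sympy as sp

x, y0, v0 = sp.symbols('x y0 v0')

# Fundamental system for y'' + y = 0 picked by the initial conditions of part a):
# y1(0) = 1, y1'(0) = 0  and  y2(0) = 0, y2'(0) = 1.
y1, y2 = sp.cos(x), sp.sin(x)

# Linear combination whose coefficients are the initial data of an arbitrary solution.
z = y0 * y1 + v0 * y2

print(sp.simplify(sp.diff(z, x, 2) + z))       # 0: z satisfies the equation
print(z.subs(x, 0), sp.diff(z, x).subs(x, 0))  # y0, v0: z matches the initial data
# By the uniqueness part of the Cauchy theorem, z coincides with that arbitrary solution.
```
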
The concepts of linear dependence and independence of a system of vectors are very important in the study of vector algebra, since the concepts of dimension and basis of a space are based on them. In this article we give the definitions, consider the properties of linear dependence and independence, obtain an algorithm for studying a system of vectors for linear dependence, and analyze examples in detail.


Definition of linear dependence and linear independence of a system of vectors.

Consider a set of p n-dimensional vectors; denote them $a_1, a_2, \dots, a_p$. Compose a linear combination of these vectors with arbitrary numbers $\lambda_1, \lambda_2, \dots, \lambda_p$ (real or complex): $\lambda_1 a_1 + \lambda_2 a_2 + \dots + \lambda_p a_p$. Based on the definition of operations on n-dimensional vectors, as well as the properties of addition of vectors and multiplication of a vector by a number, it can be asserted that the written linear combination is some n-dimensional vector $b$, that is, $b = \lambda_1 a_1 + \lambda_2 a_2 + \dots + \lambda_p a_p$.

So we came to the definition of the linear dependence of the system of vectors.

Definition.

If the linear combination can equal the zero vector when among the numbers $\lambda_1, \lambda_2, \dots, \lambda_p$ there is at least one different from zero, then the system of vectors $a_1, a_2, \dots, a_p$ is called linearly dependent.

Definition.

If the linear combination equals the zero vector only when all the numbers $\lambda_1, \lambda_2, \dots, \lambda_p$ are equal to zero, then the system of vectors $a_1, a_2, \dots, a_p$ is called linearly independent.

Properties of linear dependence and independence.

Based on these definitions, we formulate and prove properties of linear dependence and linear independence of a system of vectors.

    If several vectors are added to a linearly dependent system of vectors, then the resulting system will be linearly dependent.

    Proof.

    Since the system of vectors $a_1, a_2, \dots, a_p$ is linearly dependent, the equality $\lambda_1 a_1 + \lambda_2 a_2 + \dots + \lambda_p a_p = 0$ is possible with at least one non-zero number among $\lambda_1, \lambda_2, \dots, \lambda_p$. Let $\lambda_1 \neq 0$.

    Let us add s more vectors $a_{p+1}, \dots, a_{p+s}$ to the original system of vectors, obtaining the system $a_1, \dots, a_p, a_{p+1}, \dots, a_{p+s}$. Then the linear combination of the vectors of this system of the form

    $\lambda_1 a_1 + \dots + \lambda_p a_p + 0 \cdot a_{p+1} + \dots + 0 \cdot a_{p+s}$

    is the zero vector, while $\lambda_1 \neq 0$. Therefore, the resulting system of vectors is linearly dependent.

    If several vectors are excluded from a linearly independent system of vectors, then the resulting system will be linearly independent.

    Proof.

    Assume that the resulting system is linearly dependent. Adding all the discarded vectors back to this system, we get the original system of vectors. By hypothesis it is linearly independent, yet by the previous property it would have to be linearly dependent. We have arrived at a contradiction, hence our assumption is wrong.

    If a system of vectors has at least one zero vector, then such a system is linearly dependent.

    Proof.

    Let the vector $a_k$ of this system of vectors be zero. Assume that the original system of vectors is linearly independent. Then the vector equality $\lambda_1 a_1 + \dots + \lambda_p a_p = 0$ is possible only when $\lambda_1 = \lambda_2 = \dots = \lambda_p = 0$. However, if we take any $\lambda_k \neq 0$ and all the other coefficients equal to zero, the equality still holds, since $\lambda_k \cdot 0 = 0$. Therefore, our assumption is wrong, and the original system of vectors is linearly dependent.

    If a system of vectors is linearly dependent, then at least one of its vectors is linearly expressed in terms of the others. If the system of vectors is linearly independent, then none of the vectors can be expressed in terms of the others.

    Proof.

    Let us first prove the first assertion.

    Let the system of vectors $a_1, a_2, \dots, a_p$ be linearly dependent; then there is at least one non-zero number $\lambda_k$ such that the equality $\lambda_1 a_1 + \dots + \lambda_p a_p = 0$ is true. This equality can be solved for $a_k$, since $\lambda_k \neq 0$; in this case we have

    $a_k = -\frac{\lambda_1}{\lambda_k} a_1 - \dots - \frac{\lambda_{k-1}}{\lambda_k} a_{k-1} - \frac{\lambda_{k+1}}{\lambda_k} a_{k+1} - \dots - \frac{\lambda_p}{\lambda_k} a_p.$

    Consequently, the vector $a_k$ is linearly expressed in terms of the remaining vectors of the system, which was to be proved.

    Now we prove the second assertion.

    Since the system of vectors is linearly independent, the equality $\lambda_1 a_1 + \dots + \lambda_p a_p = 0$ is possible only for $\lambda_1 = \lambda_2 = \dots = \lambda_p = 0$.

    Suppose that some vector of the system is expressed linearly in terms of the others. Let this vector be $a_k$; then $a_k = \lambda_1 a_1 + \dots + \lambda_{k-1} a_{k-1} + \lambda_{k+1} a_{k+1} + \dots + \lambda_p a_p$. This equality can be rewritten as $\lambda_1 a_1 + \dots + \lambda_{k-1} a_{k-1} - a_k + \lambda_{k+1} a_{k+1} + \dots + \lambda_p a_p = 0$; on its left-hand side there is a linear combination of the vectors of the system, and the coefficient of the vector $a_k$ is $-1 \neq 0$, which indicates a linear dependence of the original system of vectors. So we have come to a contradiction, and the property is proved.

An important statement follows from the last two properties:
if a system of vectors contains vectors $a$ and $\lambda a$, where $\lambda$ is an arbitrary number, then it is linearly dependent.

Study of the system of vectors for linear dependence.

Let us state the problem: we need to establish the linear dependence or linear independence of a system of vectors $a_1, a_2, \dots, a_p$.

The logical question is: “how to solve it?”

Something useful from a practical point of view can be derived from the above definitions and properties of linear dependence and independence of a system of vectors. These definitions and properties allow us to establish the linear dependence of a system of vectors in the simplest cases: when the system contains a zero vector, or when one of the vectors is visibly a linear combination of the others (for example, when the system contains both $a$ and $\lambda a$).

What about in other cases, which are the majority?

Let's deal with this.

Recall the formulation of the theorem on the rank of a matrix that we cited earlier.

Theorem.

Let r be the rank of a matrix A of order p by n, $r \le \min(p, n)$. Let M be a basis minor of the matrix A. All rows (all columns) of the matrix A that do not participate in the formation of the basis minor M are linearly expressed in terms of the rows (columns) of the matrix that generate the basis minor M.

And now let us explain the connection of the theorem on the rank of a matrix with the study of a system of vectors for a linear dependence.

Let us compose a matrix A whose rows are the vectors of the system under study.

What will the linear independence of the system of vectors mean?

From the fourth property of the linear independence of a system of vectors, we know that none of the vectors of the system can be expressed in terms of the others. In other words, no row of the matrix A will be linearly expressed in terms of other rows, therefore, linear independence of the system of vectors will be equivalent to the condition Rank(A)=p.

What will the linear dependence of the system of vectors mean?

Everything is very simple: at least one row of the matrix A will be linearly expressed in terms of the rest, therefore the linear dependence of the system of vectors is equivalent to the condition Rank(A) < p.

So, the problem of studying a system of vectors for a linear dependence is reduced to the problem of finding the rank of a matrix composed of the vectors of this system.

It should be noted that for p>n the system of vectors will be linearly dependent.

Comment: when compiling matrix A, the system vectors can be taken not as rows, but as columns.
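
A minimal computational sketch of this rank criterion (assuming numpy; the helper name and the sample vectors are illustrative and are not the vectors of the worked examples below):

```python
import numpy as np

def is_linearly_dependent(vectors):
    """Rank test described above: stack the vectors as rows of A and compare
    Rank(A) with the number of vectors p."""
    A = np.array(vectors, dtype=float)
    p = A.shape[0]
    return np.linalg.matrix_rank(A) < p

print(is_linearly_dependent([[1, 2, 3], [2, 4, 6]]))             # True: the second row is twice the first
print(is_linearly_dependent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # False: the unit vectors are independent
```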

Algorithm for studying a system of vectors for a linear dependence.

The algorithm is as follows: compose the matrix A from the vectors of the system, find its rank (for example, by elementary transformations or by the method of bordering minors), and compare the rank with the number of vectors p: if Rank(A) < p, the system is linearly dependent; if Rank(A) = p, it is linearly independent. Let us analyze the algorithm with examples.

Examples of studying a system of vectors for linear dependence.

Example.

Given a system of vectors. Examine it for linear dependence.

Solution.

Since the vector c is zero, the original system of vectors is linearly dependent due to the third property.

Answer:

The system of vectors is linearly dependent.

Example.

Examine the system of vectors for linear dependence.

Solution.

It is not difficult to see that the coordinates of the vector c are equal to the corresponding coordinates of another vector of the system multiplied by 3, i.e. c is a scalar multiple of that vector. Therefore, the original system of vectors is linearly dependent.

Definition 1. A system of vectors is called linearly dependent if one of the system's vectors can be represented as a linear combination of the rest of the system's vectors, and linearly independent otherwise.

Definition 1´. A system of vectors is called linearly dependent if there are numbers $c_1, c_2, \dots, c_k$, not all equal to zero, such that the linear combination of the vectors with these coefficients is equal to the zero vector: $c_1 a_1 + c_2 a_2 + \dots + c_k a_k = \bar 0$; otherwise the system is called linearly independent.

Let us show that these definitions are equivalent.

Let Definition 1 be satisfied, i.e. one of the vectors of the system is equal to a linear combination of the rest:

$a_k = c_1 a_1 + c_2 a_2 + \dots + c_{k-1} a_{k-1}.$

Then $c_1 a_1 + c_2 a_2 + \dots + c_{k-1} a_{k-1} + (-1) a_k = \bar 0$: a linear combination of the system of vectors is equal to the zero vector, and not all coefficients of this combination are equal to zero (the coefficient of $a_k$ is $-1$), i.e. Definition 1´ holds.

Let Definition 1´ be satisfied: $c_1 a_1 + c_2 a_2 + \dots + c_k a_k = \bar 0$, and not all coefficients of the combination are equal to zero; for example, the coefficient of the vector $a_k$ is $c_k \neq 0$. Then

$a_k = -\frac{c_1}{c_k} a_1 - \frac{c_2}{c_k} a_2 - \dots - \frac{c_{k-1}}{c_k} a_{k-1}.$

We have represented one of the vectors of the system as a linear combination of the rest, i.e. Definition 1 is fulfilled.

Definition 2. A unit vector, or ort, of n-dimensional space is a vector $e_i$ whose i-th coordinate is equal to one and whose remaining coordinates are zero.

$e_1 = (1, 0, 0, \dots, 0),$

$e_2 = (0, 1, 0, \dots, 0),$

...

$e_n = (0, 0, 0, \dots, 1).$

Theorem 1. The distinct unit vectors of n-dimensional space are linearly independent.

Proof. Suppose that a linear combination of these vectors with coefficients not all equal to zero is the zero vector:

$c_1 e_1 + c_2 e_2 + \dots + c_n e_n = (c_1, c_2, \dots, c_n) = \bar 0.$

It follows from this equality that all the coefficients are equal to zero. We have obtained a contradiction.

Every vector $\bar a = (a_1, a_2, \dots, a_n)$ of n-dimensional space can be represented as a linear combination of the unit vectors with coefficients equal to the coordinates of the vector: $\bar a = a_1 e_1 + a_2 e_2 + \dots + a_n e_n$.

Theorem 2. If the system of vectors contains a zero vector, then it is linearly dependent.

Proof. Let a system of vectors $a_1, a_2, \dots, a_k$ be given and let one of the vectors be zero, for example $a_1 = \bar 0$. Then from the vectors of this system one can compose a linear combination equal to the zero vector in which not all coefficients are zero:

$1 \cdot a_1 + 0 \cdot a_2 + \dots + 0 \cdot a_k = \bar 0.$

Therefore, the system is linearly dependent.

Theorem 3. If some subsystem of a system of vectors is linearly dependent, then the entire system is linearly dependent.

Proof. Let a system of vectors $a_1, a_2, \dots, a_k$ be given, and let its subsystem $a_1, a_2, \dots, a_r$ ($r < k$) be linearly dependent, i.e. there are numbers $c_1, c_2, \dots, c_r$, not all equal to zero, such that $c_1 a_1 + c_2 a_2 + \dots + c_r a_r = \bar 0$. Then

$c_1 a_1 + \dots + c_r a_r + 0 \cdot a_{r+1} + \dots + 0 \cdot a_k = \bar 0.$

Thus a linear combination of the vectors of the entire system is equal to the zero vector, and not all coefficients of this combination are equal to zero. Therefore, the system of vectors is linearly dependent.

Corollary. If a system of vectors is linearly independent, then any of its subsystems is also linearly independent.

Proof.

Assume the opposite, i.e. some subsystem is linearly dependent. It follows from the theorem that the entire system is linearly dependent. We have come to a contradiction.

Theorem 4 (Steinitz's theorem). If each of the vectors $b_1, b_2, \dots, b_m$ is a linear combination of the vectors $a_1, a_2, \dots, a_n$ and $m > n$, then the system of vectors $b_1, b_2, \dots, b_m$ is linearly dependent.

Corollary. In any system of n-dimensional vectors there cannot be more than n linearly independent vectors.

Proof. Each n-dimensional vector is expressed as a linear combination of n unit vectors. Therefore, if the system contains m vectors and m>n, then, by the theorem, this system is linearly dependent.
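
A quick numerical check of this corollary (a sketch assuming numpy; the randomly generated vectors are purely illustrative): any m = 4 vectors of R³ have rank at most 3 < 4, so they are linearly dependent.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.integers(-5, 6, size=(4, 3)).astype(float)  # four vectors in R^3

rank = np.linalg.matrix_rank(B)
print(rank)               # at most 3 = n
print(rank < B.shape[0])  # True: more than n vectors of an n-dimensional space are dependent
```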