1. The set Pn(x) of polynomials of degree at most n.

2. The set of n-term sequences (with termwise addition and multiplication by a scalar).

3. The set C[a, b] of functions continuous on [a, b], with pointwise addition and multiplication by a scalar.

4. The set of functions defined on [a, b] that vanish at some fixed interior point c, i.e. f(c) = 0, with the pointwise operations of addition and multiplication by a scalar.

5. The set ℝ₊ of positive reals with x ⊕ y = xy and λ ⊙ x = x^λ (a numerical sketch of this example is given below).
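As an illustration of example 5 (an addition, not part of the original text), here is a minimal Python sketch that spot-checks several of the linear-space axioms for ℝ₊ with x ⊕ y = xy and λ ⊙ x = x^λ; the sample values are arbitrary.

```python
# Sketch: the positive reals with x (+) y = x*y and lam (.) x = x**lam
# form a linear space. We spot-check a few axioms on sample values.

def add(x, y):          # vector addition:  x (+) y = x * y
    return x * y

def scale(lam, x):      # scalar multiplication:  lam (.) x = x ** lam
    return x ** lam

x, y, lam, mu = 2.0, 5.0, 3.0, -1.5   # arbitrary positive "vectors", real scalars

assert add(x, y) == add(y, x)                       # commutativity
assert add(x, 1.0) == x                             # neutral element is 1
assert abs(add(x, scale(-1.0, x)) - 1.0) < 1e-12    # opposite of x is 1/x
assert abs(scale(lam, add(x, y)) - add(scale(lam, x), scale(lam, y))) < 1e-9
assert abs(scale(lam + mu, x) - add(scale(lam, x), scale(mu, x))) < 1e-9
assert scale(1.0, x) == x                           # unit scalar acts trivially
print("all spot checks passed")
```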

§8. Definition of a subspace

Let W be a subset of the linear space V (W ⊂ V) such that

a) ∀ x, y ∈ W: x ⊕ y ∈ W;

b) ∀ x ∈ W and any scalar λ: λ ⊙ x ∈ W.

The operations of addition and scalar multiplication here are the same as in the space V (they are said to be induced by the space V).

Such a set W is called a subspace of the space V.
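To make the criterion concrete, the following Python sketch (an illustration with an arbitrarily chosen subset, not part of the original text) checks conditions a) and b) on random samples for W = {v ∈ ℝ³ : v1 + v2 + v3 = 0} with the usual operations of ℝ³.

```python
import numpy as np

# W = { v in R^3 : v1 + v2 + v3 = 0 }, a candidate subspace of R^3.
def in_W(v, tol=1e-12):
    return abs(np.sum(v)) < tol

rng = np.random.default_rng(0)
for _ in range(100):
    # Build random elements of W by projecting out the (1,1,1) direction.
    x = rng.normal(size=3); x -= x.mean()
    y = rng.normal(size=3); y -= y.mean()
    lam = rng.normal()
    assert in_W(x) and in_W(y)
    assert in_W(x + y)        # condition a): closed under addition
    assert in_W(lam * x)      # condition b): closed under scalar multiplication
print("W passes the subspace criterion on random samples")
```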

7. A subspace W is itself a linear space.

◀ It suffices to prove the existence of a neutral element and an opposite element. The equalities 0 ⊙ x = θ and (−1) ⊙ x = −x establish what is required.

The subspace consisting only of the neutral element (θ) and the subspace coinciding with the space V itself are called trivial subspaces of the space V.

§9. Linear combination of vectors. Linear span of a system of vectors

Let e1, e2, …, en ∈ V be vectors and α1, α2, …, αn be scalars.

The vector x = α1e1 + α2e2 + … + αnen is called a linear combination of the vectors e1, e2, …, en with coefficients α1, α2, …, αn.

If all coefficients of a linear combination are zero, the linear combination is called trivial.
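A minimal sketch of computing a linear combination in the arithmetic space ℝ³; the vectors and coefficients below are arbitrary examples (an illustration added to the text).

```python
import numpy as np

e = [np.array([1.0, 0.0, 2.0]),    # e1
     np.array([0.0, 1.0, 1.0]),    # e2
     np.array([1.0, 1.0, 0.0])]    # e3
alpha = [2.0, -1.0, 0.5]           # coefficients

# x = alpha1*e1 + alpha2*e2 + alpha3*e3
x = sum(a * v for a, v in zip(alpha, e))
print(x)                           # [ 2.5 -0.5  3. ]

# The trivial combination (all coefficients zero) gives the zero vector.
assert np.allclose(sum(0.0 * v for v in e), np.zeros(3))
```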

The set of all possible linear combinations of the vectors e1, e2, …, en is called the linear span of this system of vectors and is denoted

ℒ(e1, e2, …, en) = {α1e1 + α2e2 + … + αnen : α1, α2, …, αn are scalars}.

8. ℒ(e1, e2, …, en) is a linear space.

◀ Closure under addition and multiplication by a scalar follows from the fact that ℒ(e1, e2, …, en) is the set of all possible linear combinations. The neutral element is the trivial linear combination. For an element x = α1e1 + … + αnen the opposite element is −x = (−α1)e1 + … + (−αn)en. The axioms that the operations must satisfy hold as well. Thus ℒ(e1, e2, …, en) is a linear space.
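In an arithmetic space, membership x ∈ ℒ(e1, …, en) can be tested numerically: x is a linear combination of the ei exactly when appending x to them does not raise the rank. A sketch with example vectors (the helper in_span and the tolerance are illustrative choices, not from the original text):

```python
import numpy as np

def in_span(x, vectors, tol=1e-10):
    """True if x lies in the linear span of `vectors` (rows of a matrix)."""
    A = np.vstack(vectors)
    return np.linalg.matrix_rank(np.vstack([A, x]), tol=tol) == \
           np.linalg.matrix_rank(A, tol=tol)

e1, e2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
print(in_span(3 * e1 - 2 * e2, [e1, e2]))             # True: a combination
print(in_span(np.array([0.0, 0.0, 1.0]), [e1, e2]))   # False: outside the plane
```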

In general, a linear space contains infinitely many other linear spaces (subspaces), in particular linear spans.

In what follows, we will try to answer the following questions:

1) When do the linear spans of different systems of vectors consist of the same vectors (i.e., coincide)?

2) What is the minimum number of vectors that defines a given linear span?

3) Is the original space a linear span of some system of vectors?

§10. Complete systems of vectors

If in the space V there is a finite set of vectors e1, e2, …, en such that ℒ(e1, e2, …, en) = V, then this system of vectors is called a complete system in V, and the space is said to be finite-dimensional. Thus, a system of vectors e1, e2, …, en ∈ V is called a complete system in V if

∀ x ∈ V ∃ α1, α2, …, αn such that x = α1e1 + α2e2 + … + αnen.

If the space V contains no finite complete system (a complete system always exists: for example, the set of all vectors of V), then the space V is called infinite-dimensional.
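For the arithmetic space ℝⁿ, completeness of a finite system is equivalent to the rank of its matrix being n. A small sketch (the helper is_complete and the sample vectors are illustrative, not from the original text):

```python
import numpy as np

def is_complete(vectors, n):
    """True if the system spans R^n, i.e. its rank equals n."""
    return np.linalg.matrix_rank(np.vstack(vectors)) == n

print(is_complete([np.array([1., 0.]), np.array([0., 1.]),
                   np.array([1., 1.])], n=2))   # True: redundant but complete
print(is_complete([np.array([1., 2.]), np.array([2., 4.])], n=2))  # False
```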

9. If e1, e2, …, en is a complete system of vectors in V and y ∈ V, then (e1, e2, …, en, y) is also a complete system.

◀ It suffices, in the linear combinations, to take the coefficient of y equal to 0.

Let A be a system of vectors from a vector space V over a field P.

Definition 2: The linear span L of a system A is the set of all linear combinations of vectors of the system A; it is denoted L(A).

It can be shown that, for any two systems A and B:

A is linearly expressed through B if and only if L(A) ⊆ L(B). (1)

A is equivalent to B if and only if L(A) = L(B). (2)

The proof of (2) follows from the previous property (1).

Property 3: The linear span of any system of vectors is a subspace of the space V.

Proof

Take any two vectors x, y ∈ L(A) with the following expansions in the vectors from A: x = α1a1 + … + αkak, y = β1a1 + … + βkak. Let us check the fulfillment of conditions 1) and 2) of the subspace criterion:

1) x + y = (α1 + β1)a1 + … + (αk + βk)ak ∈ L(A), since it is a linear combination of the vectors of the system A.

2) λx = (λα1)a1 + … + (λαk)ak ∈ L(A), since it is also a linear combination of the vectors of the system A. ■

Consider now a matrix A with m rows and n columns. The linear span of the rows of the matrix A is called the row space of the matrix and is denoted Lr(A). The linear span of the columns of the matrix A is called the column space and is denoted Lc(A). Note that the row space and the column space of the matrix A are subspaces of different arithmetic spaces, Pⁿ and Pᵐ respectively. Using statement (2), we can come to the following conclusion:

Theorem 3: If one matrix is obtained from another by a chain of elementary row transformations, then the row spaces of such matrices coincide.
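Theorem 3 can be observed numerically: if Lr(A) = Lr(B), stacking A on B does not raise the rank. The sketch below (with an arbitrary example matrix, added as an illustration) applies two elementary row transformations and compares row spaces this way.

```python
import numpy as np

def same_row_space(A, B):
    """Row spaces coincide iff each matrix's rows add nothing to the other's."""
    r = np.linalg.matrix_rank
    return r(np.vstack([A, B])) == r(A) == r(B)

A = np.array([[1., 2., 0.],
              [0., 1., 3.]])
B = A.copy()
B[0] += 5 * B[1]          # elementary transformation: add a multiple of a row
B[1] *= -2.0              # elementary transformation: scale a row by -2
print(same_row_space(A, B))   # True: the row space is unchanged
```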

Sum and intersection of subspaces

Let L and M be two subspaces of a space R.

The sum L + M is the set of vectors x + y, where x ∈ L and y ∈ M. Obviously, any linear combination of vectors from L + M belongs to L + M; consequently, L + M is a subspace of the space R (it may coincide with the space R).

The intersection L ∩ M of the subspaces L and M is the set of vectors that belong simultaneously to the subspaces L and M (it may consist of the null vector alone).

Theorem 6.1. For arbitrary subspaces L and M of a finite-dimensional linear space R, the sum of their dimensions is equal to the sum of the dimension of the sum of these subspaces and the dimension of their intersection:

dim L+dim M=dim(L+M)+dim(L∩M).

Proof. Denote F = L + M and G = L ∩ M. Let G be a g-dimensional subspace, and choose a basis c1, …, cg in it. Since G ⊆ L and G ⊆ M, this basis can be extended to a basis c1, …, cg, a1, …, al of the subspace L and to a basis c1, …, cg, b1, …, bm of the subspace M. Let us show that the vectors

c1, …, cg, a1, …, al, b1, …, bm (6.1)

form a basis of F = L + M. For the vectors (6.1) to form a basis of the space F, they must be linearly independent, and every vector of the space F must be representable as a linear combination of the vectors (6.1).



Let us prove the linear independence of the vectors (6.1). Suppose the null vector of F is represented by a linear combination of the vectors (6.1) with some coefficients:

α1c1 + … + αgcg + β1a1 + … + βlal + γ1b1 + … + γmbm = 0, (6.2)

or, equivalently,

α1c1 + … + αgcg + β1a1 + … + βlal = −γ1b1 − … − γmbm. (6.3)

The left side of (6.3) is a vector of the subspace L, and the right side is a vector of the subspace M. Hence the vector

v = −γ1b1 − … − γmbm (6.4)

belongs to the subspace G = L ∩ M. On the other hand, the vector v can be represented as a linear combination of the basis vectors of the subspace G:

v = δ1c1 + … + δgcg. (6.5)

From equations (6.4) and (6.5) we have δ1c1 + … + δgcg + γ1b1 + … + γmbm = 0. But the vectors c1, …, cg, b1, …, bm form a basis of the subspace M, hence they are linearly independent, and all the δi and γj are zero. Then (6.2) takes the form

α1c1 + … + αgcg + β1a1 + … + βlal = 0.

By the linear independence of the basis of the subspace L we have α1 = … = αg = β1 = … = βl = 0.

Since all the coefficients in equation (6.2) turned out to be zero, the vectors

c1, …, cg, a1, …, al, b1, …, bm (6.10)

are linearly independent. Moreover, any vector z from F (by definition of the sum of subspaces) can be represented as a sum x + y, where x ∈ L, y ∈ M. In turn, x is a linear combination of the vectors c1, …, cg, a1, …, al, and y is a linear combination of the vectors c1, …, cg, b1, …, bm. Hence the vectors (6.10) generate the subspace F. We have found that the vectors (6.10) form a basis of F = L + M.

Counting the basis vectors of the subspaces L and M and the basis (6.10) of F = L + M, we have: dim L = g + l, dim M = g + m, dim(L + M) = g + l + m, dim(L ∩ M) = g. Consequently:

dim L + dim M − dim(L ∩ M) = dim(L + M). ■
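A numerical sanity check of Theorem 6.1 on example subspaces of ℝ⁴ given by spanning rows; dim(L ∩ M) is computed independently of the formula by solving u·A = v·B (the helper functions and the example subspaces are illustrative choices, not from the original text):

```python
import numpy as np

def dim_span(M):
    return np.linalg.matrix_rank(M)

def dim_intersection(A, B):
    """dim(rowspace(A) ∩ rowspace(B)): x = u@A = v@B means
    [A.T | -B.T] @ [u; v] = 0; count independent vectors x = u@A."""
    C = np.hstack([A.T, -B.T])
    _, s, Vt = np.linalg.svd(C)
    null = Vt[int(np.sum(s > 1e-10)):]   # rows spanning the null space
    if null.size == 0:
        return 0
    X = null[:, :A.shape[0]] @ A         # the corresponding vectors x = u@A
    return np.linalg.matrix_rank(X, tol=1e-8)

A = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])         # L = span{e1, e2}
B = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.]])         # M = span{e2, e3}

lhs = dim_span(A) + dim_span(B)
rhs = dim_span(np.vstack([A, B])) + dim_intersection(A, B)
print(lhs, rhs)   # 4 4: dim L + dim M = dim(L+M) + dim(L∩M)
```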

Direct sum of subspaces

Definition 6.2. A space F is the direct sum of subspaces L and M if each vector x of the space F can be represented in exactly one way as a sum x = y + z, where y ∈ L and z ∈ M.



The direct sum is denoted L ⊕ M. If F = L ⊕ M, one says that F decomposes into the direct sum of its subspaces L and M.

Theorem 6.2. For an n-dimensional space R to be the direct sum of subspaces L and M, it suffices that the intersection of L and M contain only the zero element and that the dimension of R be equal to the sum of the dimensions of the subspaces L and M.

Proof. Choose a basis a1, …, al in the subspace L and a basis b1, …, bm in the subspace M. Let us prove that

a1, …, al, b1, …, bm (6.11)

is a basis of the space R. By the hypothesis of the theorem, the dimension n of the space R is equal to the sum of the dimensions of the subspaces L and M (n = l + m). It therefore suffices to prove the linear independence of the vectors (6.11). Suppose the null vector of the space R is represented by a linear combination of the vectors (6.11) with some coefficients:

α1a1 + … + αlal + β1b1 + … + βmbm = 0, (6.12)

or, equivalently,

α1a1 + … + αlal = −β1b1 − … − βmbm. (6.13)

Since the left side of (6.13) is a vector of the subspace L, the right side is a vector of the subspace M, and L ∩ M = {0}, both sides equal the null vector:

α1a1 + … + αlal = 0, β1b1 + … + βmbm = 0. (6.14)

But the vectors a1, …, al and b1, …, bm are bases of the subspaces L and M respectively, hence they are linearly independent. Then

α1 = … = αl = 0, β1 = … = βm = 0. (6.15)

We have established that (6.12) holds only under condition (6.15), and this proves the linear independence of the vectors (6.11). Hence they form a basis in R.

Let x ∈ R. Expand it in the basis (6.11):

x = α1a1 + … + αlal + β1b1 + … + βmbm. (6.16)

From (6.16) we have x = x1 + x2, where

x1 = α1a1 + … + αlal ∈ L, (6.17)

x2 = β1b1 + … + βmbm ∈ M. (6.18)

From (6.17) and (6.18) it follows that any vector from R can be represented as the sum of vectors x1 ∈ L and x2 ∈ M. It remains to prove that this representation is unique. Suppose that besides the representation x = x1 + x2 there is another representation

x = y1 + y2, y1 ∈ L, y2 ∈ M. (6.19)

Subtracting (6.19) from the representation given by (6.17) and (6.18), we obtain

x1 − y1 = y2 − x2. (6.20)

Since x1 − y1 ∈ L, y2 − x2 ∈ M, and L ∩ M = {0}, we have x1 − y1 = 0 and y2 − x2 = 0. Hence x1 = y1 and x2 = y2. ■
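A numerical illustration of Theorem 6.2 on example subspaces of ℝ³ (the subspaces and the test vector are arbitrary choices, not from the original text): the two conditions are verified via ranks, and the unique decomposition x = y + z is computed by solving a linear system in the joint basis.

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [0., 1., 0.]])    # L = span{e1, e2}
B = np.array([[1., 1., 1.]])    # M = span{(1,1,1)}

r = np.linalg.matrix_rank
dimL, dimM, dimSum = r(A), r(B), r(np.vstack([A, B]))
# Conditions of Theorem 6.2: dim R = dim L + dim M, and L ∩ M = {0}
# (the latter follows here since dim(L+M) = dim L + dim M).
assert dimSum == dimL + dimM == 3

# Unique decomposition x = y + z with y in L, z in M.
x = np.array([2., 3., 5.])
coeffs = np.linalg.solve(np.vstack([A, B]).T, x)
y = coeffs[:2] @ A              # component in L
z = coeffs[2:] @ B              # component in M
print(y, z)                     # [-3. -2.  0.] [5. 5. 5.]
assert np.allclose(y + z, x)
```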

Theorem 8.4 (on the dimension of the sum of subspaces). If L1 and L2 are subspaces of a finite-dimensional linear space V, then the dimension of the sum of the subspaces is equal to the sum of their dimensions minus the dimension of their intersection (Grassmann's formula):

dim(L1 + L2) = dim L1 + dim L2 − dim(L1 ∩ L2). (8.13)

Indeed, let c1, …, cr be a basis of the intersection L1 ∩ L2. Supplement it with an ordered set of vectors u1, …, us up to a basis of the subspace L1 and with an ordered set of vectors v1, …, vt up to a basis of the subspace L2. Such an extension is possible by Theorem 8.2. From these three sets of vectors compose the ordered set c1, …, cr, u1, …, us, v1, …, vt. Let us show that these vectors are generators of the space L1 + L2. Indeed, any vector of this space can be represented as a linear combination of vectors from this ordered set; consequently, L1 + L2 is their linear span. It remains to prove that the generators are linearly independent and are therefore a basis of the space L1 + L2: compose a linear combination of these vectors and equate it to the zero vector; arguing as in the proof of Theorem 6.1, one finds that all coefficients of this expansion are zero. Counting the basis vectors now yields (8.13). ■

The orthogonal complement of a subspace L of a vector space with a bilinear form is the set of all vectors orthogonal to each vector from L. This set is a vector subspace, which is usually denoted by L⊥.
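For the standard bilinear form (the dot product) on ℝⁿ, the orthogonal complement of the row space of a matrix A is exactly the null space of A, so L⊥ can be computed from an SVD. A sketch with an example plane in ℝ³ (the helper and tolerance are illustrative choices):

```python
import numpy as np

def orthogonal_complement(A, tol=1e-10):
    """Basis (rows) of the orthogonal complement of rowspace(A) in R^n,
    w.r.t. the standard dot product: all x with A @ x = 0."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:]              # rows orthogonal to every row of A

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])      # L = a plane in R^3
C = orthogonal_complement(A)
print(C)                          # one row, proportional to (1, 1, -1)
assert np.allclose(A @ C.T, 0)    # every vector of L is orthogonal to L⊥
assert np.linalg.matrix_rank(np.vstack([A, C])) == 3  # L ⊕ L⊥ = R^3
```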

A vector (or linear) space is a mathematical structure: a set of elements, called vectors, for which operations of addition and of multiplication by a number (a scalar) are defined. These operations are subject to eight axioms. The scalars may be elements of the real, complex, or any other number field. A special case of such a space is the usual three-dimensional Euclidean space, whose vectors are used, for example, to represent physical forces. Note that a vector, as an element of a vector space, does not have to be given as a directed segment. Generalizing the concept of a "vector" to an element of a vector space of any nature not only avoids confusion of terms but also makes it possible to understand, or even anticipate, a number of results valid for spaces of arbitrary nature.

Vector spaces are the subject of study of linear algebra. One of the main characteristics of a vector space is its dimension. The dimension is the maximum number of linearly independent elements of the space, that is, resorting to a rough geometric interpretation, the number of directions that cannot be expressed through one another using only the operations of addition and multiplication by a scalar. A vector space can be endowed with additional structures, such as a norm or an inner product. Such spaces appear naturally in calculus, predominantly as infinite-dimensional function spaces, where the vectors are functions. Many problems in analysis require determining whether a sequence of vectors converges to a given vector. Such questions can be considered in vector spaces with additional structure, in most cases a suitable topology, which allows one to define the concepts of proximity and continuity. Such topological vector spaces, in particular Banach and Hilbert spaces, allow for deeper study.

The first works that anticipated the introduction of the concept of a vector space date back to the 17th century; it was then that analytic geometry, the theory of matrices, systems of linear equations, and Euclidean vectors received their development.

Definition

A linear or vector space V(F) over a field F is an ordered quadruple (V, F, +, ·), where

  • V is a non-empty set of elements of arbitrary nature, which are called vectors;
  • F is a field, whose elements are called scalars;
  • an operation of addition of vectors V × V → V is defined, which assigns to each pair of elements x, y of the set V a unique element of the set V, called their sum and denoted x + y;
  • an operation of multiplication of vectors by scalars F × V → V is defined, which assigns to each element λ of the field F and each element x of the set V a unique element of the set V, denoted λ·x or λx.

Vector spaces defined on the same set of elements but over different fields are different vector spaces (for example, the set of pairs of real numbers ℝ² can be a two-dimensional vector space over the field of real numbers or a one-dimensional one over the field of complex numbers).

The simplest properties

  1. A vector space is an abelian group under addition.
  2. The neutral element 0 ∈ V is unique, which follows from the group properties.
  3. 0·x = 0 for any x ∈ V.
  4. For any x ∈ V, the opposite element −x ∈ V is unique, which follows from the group properties.
  5. 1·x = x for any x ∈ V.
  6. (−α)·x = α·(−x) = −(αx) for any α ∈ F and x ∈ V.
  7. α·0 = 0 for any α ∈ F.

Related definitions and properties

Subspace

Algebraic definition: a linear subspace or vector subspace is a non-empty subset K of a linear space V such that K is itself a linear space with respect to the operations of addition and multiplication by a scalar defined in V. The set of all subspaces is usually denoted Lat(V). For a subset to be a subspace it is necessary and sufficient that:

  1. K be non-empty;
  2. for any vector x ∈ K and any scalar α ∈ F, the vector αx belong to K;
  3. for any vectors x, y ∈ K, the vector x + y belong to K.

The last two statements are equivalent to the following:

For any vectors x, y ∈ K, the vector αx + βy also belongs to K for any α, β ∈ F.

In particular, a vector space consisting of the zero vector alone is a subspace of any space, and any space is a subspace of itself. Subspaces that do not coincide with these two are called proper, or non-trivial.

Subspace Properties

Linear Combinations

A finite sum of the form

α1x1 + α2x2 + … + αnxn

is called a linear combination of the elements x1, x2, …, xn with coefficients α1, α2, …, αn. A linear combination is called trivial if all its coefficients are zero, and non-trivial otherwise.

Basis. Dimension

Vectors x1, x2, …, xn are called linearly dependent if there exists a non-trivial linear combination of them whose value is equal to zero; that is,

α1x1 + α2x2 + … + αnxn = 0

with some coefficients α1, α2, …, αn ∈ F, at least one of which, αi, is different from zero.

Otherwise, these vectors are called linearly independent.

This definition admits the following generalization: an infinite set of vectors from V is called linearly dependent if some finite subset of it is linearly dependent, and linearly independent if any finite subset of it is linearly independent.
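In ℝⁿ, linear dependence of a finite system reduces to a rank computation: the system is independent iff the rank of its matrix equals the number of vectors. A sketch with example vectors (an illustration added to the text):

```python
import numpy as np

def linearly_independent(vectors):
    """True iff no non-trivial combination of the vectors equals zero."""
    M = np.vstack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(linearly_independent([np.array([1., 0., 0.]),
                            np.array([1., 1., 0.])]))     # True
print(linearly_independent([np.array([1., 2., 3.]),
                            np.array([2., 4., 6.])]))     # False: x2 = 2*x1
```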

Basis properties:

Any vector x of the space can be uniquely represented as a linear combination of the basis vectors:

x = α1x1 + α2x2 + … + αnxn.
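Given a basis of ℝⁿ, the coefficients αi of this expansion are found by solving a linear system whose columns are the basis vectors. A sketch (the basis and x are arbitrary examples, not from the original text):

```python
import numpy as np

basis = [np.array([1., 1., 0.]),
         np.array([0., 1., 1.]),
         np.array([1., 0., 1.])]
x = np.array([2., 3., 5.])

# Columns of B are the basis vectors; solve B @ alpha = x.
B = np.column_stack(basis)
alpha = np.linalg.solve(B, x)
print(alpha)                                   # coordinates of x in this basis
assert np.allclose(sum(a * v for a, v in zip(alpha, basis)), x)
```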

Linear span

The linear span of a subset X of a linear space V is the intersection of all subspaces of V containing X.

The linear span is a subspace of V.

The linear span is also called the subspace generated by X. It is also said that the linear span 𝒱(X) is the space spanned by the set X.

The rest of this section describes the basics of linear algebra: the linear space and its properties, the concept of a basis, the dimension of a space, the linear span, and the relationship between linear spaces and the rank of matrices.

Linear space

A set L is called a linear space if for all its elements the operations of addition of two elements and of multiplication of an element by a number are defined and satisfy group I of Weyl's axioms. The elements of a linear space are called vectors. This is the complete definition; more briefly, we can say that a linear space is a set of elements for which the operations of adding two elements and multiplying an element by a number are defined.

Weyl's axioms.

Hermann Weyl proposed that in geometry we have two types of objects (vectors and points), whose properties are described by the following axioms, which became the basis of this branch of linear algebra. The axioms are conveniently divided into 3 groups.

Group I

  1. for any vectors x and y the equality x + y = y + x is satisfied;
  2. for any vectors x, y, and z, x + (y + z) = (x + y) + z;
  3. there is a vector o such that for any vector x the equality x + o = x is true;
  4. for any vector x there is a vector (−x) such that x + (−x) = o;
  5. for any vector x the equality 1x = x takes place;
  6. for any vectors x and y and any number λ, the equality λ(x + y) = λx + λy holds;
  7. for any vector x and any numbers λ and μ we have the equality (λ + μ)x = λx + μx;
  8. for any vector x and any numbers λ and μ, the equality λ(μx) = (λμ)x holds.

Group II

Group I defines the concepts of a linear combination of vectors, linear dependence, and linear independence. This allows us to formulate two more axioms:

  1. there are n linearly independent vectors;
  2. any (n+1) vectors are linearly dependent.

For planimetry n=2, for stereometry n=3.

Group III

This group assumes that there is a scalar product operation that assigns to a pair of vectors x and y a number (x, y), with the following properties (a numerical spot-check of them is sketched after the list):

  1. for any vectors x and y the equality (x, y) = (y, x) holds;
  2. for any vectors x, y, and z the equality (x + y, z) = (x, z) + (y, z) holds;
  3. for any vectors x and y and any number λ, the equality (λx, y) = λ(x, y) holds;
  4. for any vector x, the inequality (x, x) ≥ 0 holds, and (x, x) = 0 if and only if x = 0.
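A minimal Python sketch (with random sample vectors, as an illustration only) checking that the standard dot product on ℝ³ satisfies the Group III axioms:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y, z = (rng.normal(size=3) for _ in range(3))
lam = rng.normal()

dot = np.dot
assert np.isclose(dot(x, y), dot(y, x))                    # axiom 1: symmetry
assert np.isclose(dot(x + y, z), dot(x, z) + dot(y, z))    # axiom 2: additivity
assert np.isclose(dot(lam * x, y), lam * dot(x, y))        # axiom 3: homogeneity
assert dot(x, x) >= 0 and np.isclose(dot(np.zeros(3), np.zeros(3)), 0)  # axiom 4
print("Group III axioms hold for the standard dot product")
```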

Linear space properties

For the most part, the properties of a linear space are based on Weyl's axioms:

  1. The vector o, whose existence is guaranteed by axiom 3, is uniquely determined;
  2. The vector (−x), whose existence is guaranteed by axiom 4, is uniquely determined;
  3. For any two vectors a and b belonging to the space L, there exists a unique vector x, also belonging to the space L, which is a solution of the equation a + x = b and is called the difference b − a.

Definition. A subset L′ of a linear space L is called a linear subspace of the space L if it is itself a linear space in which the sum of vectors and the product of a vector by a number are defined in the same way as in L.

Definition. The linear span L(x1, x2, x3, …, xk) of the vectors x1, x2, x3, …, xk is the set of all linear combinations of these vectors. About the linear span we can say that:

– the linear span is a linear subspace;

– the linear span is the minimal linear subspace containing the vectors x1, x2, x3, …, xk.

Definition. A linear space is called n-dimensional if it satisfies group II of the system of Weyl's axioms. The number n is called the dimension of the linear space, and one writes dim L = n.

A basis is any ordered system of n linearly independent vectors of the space. The meaning of a basis is that any vector of the space can be described in terms of the vectors that make up the basis.

Theorem. Any n linearly independent vectors in the space L form a basis.

Isomorphism.

Definition. Linear spaces L and L′ are called isomorphic if a one-to-one correspondence x ↔ x′ can be established between their elements such that:

  1. if x ↔ x′ and y ↔ y′, then x + y ↔ x′ + y′;
  2. if x ↔ x′, then λx ↔ λx′.

Such a correspondence is called an isomorphism. Isomorphism allows us to make the following assertions (a small sketch illustrating the second one follows the list):

  • if two spaces are isomorphic, then their dimensions are equal;
  • any two linear spaces over the same field and of the same dimension are isomorphic.
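As an illustration of the second assertion (an addition to the text), here is a sketch of the coordinate isomorphism between the space of polynomials of degree at most 2 and ℝ³: the polynomial a0 + a1·t + a2·t² corresponds to the triple (a0, a1, a2), and the correspondence respects both operations.

```python
import numpy as np

# Coordinate isomorphism P_2 <-> R^3: a0 + a1*t + a2*t^2  <->  (a0, a1, a2).
def to_coords(poly):                  # a polynomial given by its coefficients
    return np.array(poly, dtype=float)

p = [1.0, 2.0, 0.0]     # 1 + 2t
q = [0.0, -1.0, 3.0]    # -t + 3t^2
lam = 2.5

# Addition of polynomials corresponds to addition of coordinate vectors ...
assert np.allclose(to_coords([a + b for a, b in zip(p, q)]),
                   to_coords(p) + to_coords(q))
# ... and scalar multiplication corresponds to scalar multiplication.
assert np.allclose(to_coords([lam * a for a in p]), lam * to_coords(p))
print("the correspondence preserves both operations")
```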