A dual basis to a basis (e_1, ..., e_n) of a module E relative to a form f is a basis (c_1, ..., c_n) of E such that

$$f(e_i, c_j) = \delta_{ij},$$

where E is a free K-module over a commutative ring K with identity and f is a non-singular bilinear form on E.

Let E* be the module dual to E, and let (e_1^*, ..., e_n^*) be the basis of E* dual to the original basis of E: e_i^*(e_i) = 1, e_i^*(e_j) = 0 for j ≠ i. Then to each bilinear form f on E there correspond mappings φ_f, ψ_f : E → E* defined by the equalities

$$(\varphi_f x)(y) = f(x, y), \qquad (\psi_f x)(y) = f(y, x).$$

If f is non-singular, then each of the mappings φ_f, ψ_f is an isomorphism, and conversely. Moreover, the basis (c_1, ..., c_n) dual to (e_1, ..., e_n) is characterized by the property that ψ_f(c_j) = e_j^*, that is, f(e_i, c_j) = δ_{ij}.

E. N. Kuzmin.


Mathematical Encyclopedia. Moscow: Soviet Encyclopedia, ed. I. M. Vinogradov, 1977–1985.
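A minimal numerical sketch of this definition (Python with numpy; the Gram matrix F below is an arbitrary invertible example, not one fixed by the text): the coordinates of the dual basis relative to f are obtained by inverting the Gram matrix of f in the original basis.

```python
import numpy as np

# Gram matrix F[i, j] = f(e_i, e_j) of a non-singular bilinear form f
# in the basis (e_1, ..., e_n); an arbitrary invertible example.
F = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Let the columns of C hold the coordinates of the dual basis vectors:
# c_j = sum_k C[k, j] e_k.  Then f(e_i, c_j) = (F C)[i, j], so the
# defining property f(e_i, c_j) = delta_ij forces C = F^{-1}.
C = np.linalg.inv(F)
assert np.allclose(F @ C, np.eye(2))   # f(e_i, c_j) = delta_ij
```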


Dual basis

Often a tensor is represented as a multidimensional table d × d × ... × d (where d is the dimension of the vector space over which the tensor is defined, and the number of factors coincides with the valency of the tensor), filled with numbers (the tensor components).

Such a representation (except for tensors of valency zero, i.e. scalars) is possible only after a basis (or coordinate system) has been chosen; under a change of basis the components of the tensor change in a definite way, while the tensor itself, as a "geometric entity", does not depend on the choice of basis. This can be seen in the example of a vector, which is a special case of a tensor: the components of a vector change when the coordinate axes change, but the vector itself, whose visual image may simply be a drawn arrow, does not.
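A small numerical illustration of this point (a sketch in Python with numpy; the rotation angle and vector are arbitrary examples): the components of a vector change under a rotation of the axes, while a basis-independent quantity such as its length does not.

```python
import numpy as np

theta = np.pi / 6                        # rotate the axes by 30 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v_old = np.array([1.0, 2.0])             # components in the old basis
v_new = R.T @ v_old                      # components in the rotated basis

assert not np.allclose(v_old, v_new)     # the components change...
assert np.isclose(np.linalg.norm(v_old),
                  np.linalg.norm(v_new)) # ...but the length does not
```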

The term "tensor" is also often shortened for the term "tensor field", which is the study of tensor calculus.

Definitions

Modern definition

A tensor of rank (n, m) over a d-dimensional vector space V is an element of the tensor product of m copies of V and n copies of the dual space V* (that is, the space of linear functionals (1-forms) on V):

$$\tau \in \underbrace{V \otimes \dots \otimes V}_{m} \otimes \underbrace{V^* \otimes \dots \otimes V^*}_{n}.$$

The sum n + m is called the valency of the tensor (it is also often called the rank). A tensor of rank (n, m) is also said to be n times covariant and m times contravariant.

N.B.: the term rank is often used as a synonym for the term valency as defined here. The opposite also occurs, that is, the use of the term valency in the sense of rank as defined here.

Tensor as a multilinear function

Just as a covariant tensor of rank (1,0) can be represented as a linear functional, a tensor τ of rank (n,0) is conveniently thought of as a function of n vector arguments v_i that is linear in each argument (such functions are called multilinear); that is, for any constant c from the field F (over which the vector space is defined),

$$\tau(v_1, \dots, c\,v_i + v_i', \dots, v_n) = c\,\tau(v_1, \dots, v_i, \dots, v_n) + \tau(v_1, \dots, v_i', \dots, v_n).$$

In the same vein, a tensor τ of arbitrary rank (n, m) is represented by a multilinear functional of n vectors and m covectors:

$$\tau(v_1, \dots, v_n, w^1, \dots, w^m) \in F.$$
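As an illustration (a sketch in Python with numpy; the array T and the vectors are arbitrary examples), a rank-(2,0) tensor can be evaluated as a bilinear function of two vectors, and its linearity in each argument checked directly:

```python
import numpy as np

# A rank-(2, 0) tensor over R^3 as a bilinear function of two vectors:
# tau(u, v) = T[i, j] u^i v^j (summation over repeated indices).
T = np.random.rand(3, 3)
tau = lambda u, v: np.einsum('ij,i,j->', T, u, v)

u, v, w = np.random.rand(3), np.random.rand(3), np.random.rand(3)
c = 2.5
# Linearity in the first argument; the second is checked the same way.
assert np.isclose(tau(c * u + w, v), c * tau(u, v) + tau(w, v))
```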

Tensor components

Let us choose a basis $e_1, \dots, e_d$ in the space V, and accordingly the dual basis $e^1, \dots, e^d$ in the dual space V* (that is, $e^i(e_j) = \delta^i_j$, where $\delta^i_j$ is the Kronecker symbol).

Then in the tensor product of spaces a basis arises naturally:

$$e_{i_1} \otimes \dots \otimes e_{i_m} \otimes e^{j_1} \otimes \dots \otimes e^{j_n}.$$

If we define a tensor as a multilinear function, then its components are determined by the values of this function on the basis:

$$T^{i_1 \dots i_m}_{j_1 \dots j_n} = \tau(e_{j_1}, \dots, e_{j_n}, e^{i_1}, \dots, e^{i_m}).$$

After this, the tensor can be written as a linear combination of basis tensor products (with summation over repeated indices):

$$\tau = T^{i_1 \dots i_m}_{j_1 \dots j_n}\; e_{i_1} \otimes \dots \otimes e_{i_m} \otimes e^{j_1} \otimes \dots \otimes e^{j_n}.$$

The lower indices of tensor components are called covariant, and the upper ones contravariant. For example, the expansion of a doubly covariant tensor h is

$$h = h_{ij}\, e^i \otimes e^j.$$
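The relation between a multilinear function and its components can be checked numerically (a sketch in Python with numpy; the matrix G is an arbitrary example): the components h_ij are the values of h on pairs of basis vectors, and the expansion over basis tensor products reproduces h.

```python
import numpy as np

d = 3
e = np.eye(d)                    # basis vectors e_i in their own coordinates
G = np.random.rand(d, d)
h = lambda u, v: u @ G @ v       # a doubly covariant tensor as a bilinear map

# Components h_ij = h(e_i, e_j); for this h they recover the matrix G.
H = np.array([[h(e[i], e[j]) for j in range(d)] for i in range(d)])
assert np.allclose(H, G)

# The expansion h = h_ij e^i (x) e^j gives the same values on any vectors.
u, v = np.random.rand(d), np.random.rand(d)
assert np.isclose(np.einsum('ij,i,j->', H, u, v), h(u, v))
```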

About the classical definition

The classical approach to defining a tensor, more common in the physics literature, starts from the representation of tensors in components. A tensor is defined as a geometric object described by a multidimensional array, that is, a set of numbers indexed by several indices, or, in other words, a table (generally speaking an n-dimensional one, where n is the valency of the tensor; see above).

The main tensor operations are addition, which in this approach reduces to component-wise addition, as for vectors, and contraction (with vectors, with each other, and with themselves), generalizing matrix multiplication, the scalar product of vectors, and the trace of a matrix. Multiplication of a tensor by a number (a scalar) can, if desired, be regarded as a special case of contraction; it reduces to component-wise multiplication.

The numerical values in the array, the tensor components, depend on the coordinate system, while the tensor itself, as a geometric entity, does not. Manifestations of this geometric essence can be seen in many things: various scalar invariants, symmetry or antisymmetry of indices, relations between tensors, and more. For example, the scalar product and the lengths of vectors do not change when the axes are rotated, and the metric tensor always remains symmetric. Contractions of any tensors with themselves and/or other tensors (including vectors), if no free index remains in the result, are scalars, that is, invariants under a change of coordinates: this is a general way of constructing scalar invariants.

Under a change of coordinate system, the components of a tensor transform according to a certain linear law.

Knowing the components of a tensor in one coordinate system, you can always calculate its components in another if a coordinate transformation matrix is ​​given. Thus, the second approach can be summarized as a formula:

tensor = array of components + transformation law of the components under a change of basis

Note that this implies that all tensors over a given vector space, regardless of their rank (vectors included), transform through the same coordinate transformation matrix (and its inverse, if there are both superscripts and subscripts). The components of a tensor thus transform by the same law as the corresponding components of a tensor product of vectors (as many of them as the valency of the tensor), taking into account the covariance or contravariance of the components.

For example, the tensor components

$$T^i_{\ jk}$$

transform in the same way as the components of the tensor product of three vectors, that is, as the products

$$u^i v_j w_k$$

of the components of these vectors.

Since the transformation of vector components is known, this gives the simplest version of the classical definition of a tensor.
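A sketch of this transformation law in Python with numpy (the transition matrix A and the components T are arbitrary examples; the convention assumed here, which is common but not fixed by the text, is that the new basis vectors are the columns of A in old coordinates, so upper indices transform with A^{-1} and lower ones with A). A full contraction is checked to be invariant:

```python
import numpy as np

d = 3
A = np.random.rand(d, d) + d * np.eye(d)   # invertible transition matrix
B = np.linalg.inv(A)

T = np.random.rand(d, d, d)                # components T^i_{jk} in the old basis

# Once contravariant, twice covariant: the upper index transforms with
# B = A^{-1}, the lower indices with A, just like the product u^i v_j w_k.
T_new = np.einsum('ia,bj,ck,abc->ijk', B, A, A, T)

# Contracting with a covector a and vectors u, w leaves no free index,
# so the result is a scalar invariant: the same number in both frames.
a, u, w = np.random.rand(d), np.random.rand(d), np.random.rand(d)
s_old = np.einsum('ijk,i,j,k->', T, a, u, w)
s_new = np.einsum('ijk,i,j,k->', T_new, A.T @ a, B @ u, B @ w)
assert np.isclose(s_old, s_new)
```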

Examples

As follows from the definition, under a coordinate transformation the components of a tensor must change in a definite way, in sync with the components of vectors of the space on which it is defined. Therefore, not every table or indexed quantity that looks like a tensor actually represents one.

  • A simple, if somewhat artificial, example of such a table not representing a tensor is a table whose components are a set of arbitrary numbers that do not change at all under arbitrary coordinate transformations. Such an object is not a tensor, or at any rate not a tensor on the linear space in which the coordinate transformation took place. Thus, a set of three numbers does not represent a three-dimensional vector unless these numbers transform in a very specific way when the coordinates are changed.
  • Also, in general, a subset of the components of a tensor of higher rank is not a tensor of lower rank.
  • Nor is an object a tensor if all of its components are zero in at least one coordinate system (with a non-degenerate, full basis) while at least one component is non-zero in another. This fact is a consequence of the (multi)linearity of tensors.

There are objects that not only resemble tensors but for which tensor operations (contraction with other tensors, in particular with vectors) are defined and have a reasonable, correct meaning, yet which are not tensors:

  • First of all, the Jacobian matrices of a coordinate transformation (a special case of a diffeomorphism between two manifolds), by means of which the classical definition of a tensor is introduced, are not tensors, although in many of their properties they resemble one. For them, too, one can introduce upper and lower indices and the operations of multiplication, addition and contraction. However, unlike a tensor, whose components depend only on the coordinates on a given manifold, the components of a Jacobian matrix also depend on the coordinates on the image manifold. This difference is obvious when Jacobian matrices of a diffeomorphism between two arbitrary manifolds are considered; for a mapping of a manifold into itself it can be overlooked, since the tangent spaces of the image and the preimage are isomorphic (though not canonically), but it persists nonetheless. The analogy between Jacobian matrices and tensors can be developed by considering arbitrary vector bundles over a manifold and their products, rather than just the tangent and cotangent bundles.

Tensor operations

Tensors admit the following algebraic operations:

  • Multiplication by a scalar, as for a vector or a scalar (which are special cases of tensors);
  • Addition of tensors of the same valency and index composition (the sum is computed component-wise, as for vectors);
    • The presence of multiplication by a scalar and addition of tensors makes the space of tensors of the same type a linear space.
  • Tensor product of tensors of arbitrary valency. The components of a tensor product are the products of the corresponding components of the factors, for example:

$$(a \otimes b)^{ij} = a^i b^j.$$
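These operations are easy to exhibit numerically (a sketch in Python with numpy; all arrays are arbitrary examples):

```python
import numpy as np

a, b = np.random.rand(3), np.random.rand(3)   # vectors
M = np.random.rand(3, 3)                      # a rank-2 tensor

ab = np.einsum('i,j->ij', a, b)   # tensor product: (a (x) b)^{ij} = a^i b^j
S  = 2.0 * M + M                  # scalar multiple and sum, component-wise
tr = np.einsum('ii->', M)         # contraction of M with itself: the trace
Mb = np.einsum('ij,j->i', M, b)   # contraction with a vector: matrix action
```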

Symmetries

In various applications, tensors often arise with a certain symmetry property.

A tensor is called symmetric with respect to two co- (or contra-)variant indices if it satisfies the requirement

$$\tau(\dots, v_i, \dots, v_j, \dots) = \tau(\dots, v_j, \dots, v_i, \dots),$$

or in components

$$T_{\dots i \dots j \dots} = T_{\dots j \dots i \dots}.$$
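Numerically, symmetry over two indices means the component array equals itself with those indices swapped; a tensor can also be symmetrized (a sketch in Python with numpy):

```python
import numpy as np

H = np.random.rand(4, 4)
H_sym = (H + H.T) / 2               # symmetrization over the two indices
assert np.allclose(H_sym, H_sym.T)  # h_ij = h_ji
```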

Tensors in physics

Linear operators of quantum mechanics can, of course, also be interpreted as tensors over certain abstract spaces (state spaces), but the term tensor is traditionally almost never used this way, just as it is used extremely rarely for linear operators on infinite-dimensional spaces in general. In physics the term tensor tends to be applied only to tensors over ordinary physical 3-dimensional space or 4-dimensional spacetime, or at most over the simplest and most direct generalizations of these spaces, although the possibility in principle of applying it in more general cases is no secret.

Examples of tensors in physics are:

  • a metric tensor over a pseudo-Riemannian 4-dimensional manifold, which in general relativity is a development of the concept of Newtonian gravitational potential.
  • the Riemann curvature tensor expressed through it, and its contractions, associated in the same theory with the energy of the gravitational field and entering directly into the main equation of the theory.
  • the electromagnetic field tensor over Minkowski space, containing the electric and magnetic field strengths and being the main object of classical electrodynamics in 4-dimensional notation. In particular, Maxwell's equations are written using it in the form of a single 4-dimensional equation.
  • stresses and deformations in the theory of elasticity are described by tensors over 3-dimensional Euclidean space. The same applies to quantities such as elastic moduli.
  • most of the quantities that are scalar characteristics of a substance in the isotropic case are tensors in the case of an anisotropic substance. More specifically, this applies to material coefficients relating vector quantities or standing in front of products (in particular, squares) of vectors. Examples include electrical conductivity (and its inverse, resistivity), thermal conductivity, dielectric susceptibility and permittivity, the speed of sound (depending on direction), and so on.
  • in rigid body mechanics, the most important role is played by the inertia tensor, which connects the angular velocity with the angular momentum and the kinetic energy of rotation. This tensor differs from most other tensors in physics, which are generally speaking tensor fields, in that a single tensor characterizes a single absolutely rigid body, completely determining, together with the mass, its inertia.
  • tensors included in the multipole expansion have a similar property: a single tensor entirely represents the moment of the charge distribution of the corresponding order at a given time.
  • often useful in physics is the Levi-Civita pseudotensor, which enters, for example, into the coordinate notation of the vector and mixed products of vectors. Its components are always written almost identically (up to a scalar factor depending on the metric), and in a right orthonormal basis they are exactly the same (each equal to 0, +1 or −1).

It is easy to notice that most tensors in physics (leaving aside scalars and vectors) have just two indices. Tensors of higher valency (such as the Riemann tensor in general relativity) occur, as a rule, only in theories considered rather complex, and even then they often appear mainly in the form of their lower-valency contractions. Most are symmetric or antisymmetric.

The simplest illustration that allows one to understand the physical (and partly the geometric) meaning of tensors, more precisely of symmetric tensors of second rank, is probably the (specific) electrical conductivity tensor σ. Intuitively, an anisotropic medium, such as a crystal or even some specially manufactured artificial material, will in general not conduct current equally easily in all directions (for example, because of the shape and orientation of molecules, atomic layers, or some supramolecular structures; one can imagine, say, thin wires of a highly conductive metal, all oriented the same way and fused into a poorly conducting medium). For simplicity and concreteness, take the latter model (well-conducting wires in a poorly conducting medium). The conductivity along the wires will be large; call it σ_1. Across them it will be small; call it σ_2. (In the general case, for example when the wires are flattened in cross-section and this flattening is oriented the same way for all wires, the conductivity σ_3 will differ from σ_2; for round, evenly distributed wires σ_2 = σ_3, though neither equals σ_1.) The fact, quite non-trivial in general but quite obvious in our example, is that there are three mutually perpendicular directions for which the current density vector and the electric field strength causing it are related simply by a numerical factor (in our example the first direction is along the wires, the second along their flattening, and the third perpendicular to the first two). But any vector can be decomposed into components along these convenient directions:

$$\mathbf{E} = E_1 \mathbf{e}_1 + E_2 \mathbf{e}_2 + E_3 \mathbf{e}_3,$$

and then for each component we can write

$$j_1 = \sigma_1 E_1, \quad j_2 = \sigma_2 E_2, \quad j_3 = \sigma_3 E_3.$$

We see that for any direction not coinciding with directions 1, 2 and 3, the vector j no longer coincides in direction with E, unless at least two of σ_1, σ_2 and σ_3 are equal.

Passing to arbitrary Cartesian coordinates that do not coincide with these distinguished directions, we are forced to include a rotation matrix in the coordinate transformation, so that in an arbitrary coordinate system the relationship between j and E looks like this:

$$j_i = \sum_k \sigma_{ik} E_k,$$

that is, the electrical conductivity tensor is represented by a symmetric matrix σ_ik.
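This example is easy to reproduce numerically (a sketch in Python with numpy; the principal conductivities and the rotation angle are arbitrary illustrative values):

```python
import numpy as np

s1, s2, s3 = 10.0, 1.0, 1.0                  # principal conductivities
sigma_principal = np.diag([s1, s2, s3])

theta = np.pi / 5                            # pass to rotated Cartesian axes
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
sigma = R @ sigma_principal @ R.T            # sigma' = R sigma R^T
assert np.allclose(sigma, sigma.T)           # symmetric in any frame

E = np.array([1.0, 1.0, 0.0])                # a generic field direction
j = sigma @ E                                # j = sigma E
cos_angle = j @ E / (np.linalg.norm(j) * np.linalg.norm(E))
print(cos_angle)   # < 1: j is not parallel to E off the principal axes
```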

Definition 10.1. A mapping f: L → R defined on a linear space L and taking real values is called a linear function (also a linear form or linear functional) if it satisfies two conditions:

a) f(x + y) = f(x) + f(y), x, y ∈ L;

b) f(λx) = λf(x), x ∈ L, λ ∈ R.

Comparing this definition with Definition 4.1 of a linear operator, we see much in common. If we regard the set of real numbers as a one-dimensional linear space, we may say that a linear function is a linear operator whose image space is one-dimensional.

Let us choose some basis e = (e_1 ... e_n) in the linear space L. Then for any vector x ∈ L with coordinates x = (x_1; ...; x_n)^T

f(x) = f(x_1 e_1 + ... + x_n e_n) = x_1 f(e_1) + ... + x_n f(e_n) = a_1 x_1 + ... + a_n x_n = ax,

where a = (a_1 ... a_n), a_i = f(e_i), i = 1, ..., n. Thus a linear function is uniquely determined by its values on the basis vectors. Conversely, if the function f(x) is expressed through the coordinate column x of the vector x in the form f(x) = ax, then this function is linear, and the row a is composed of the values of the function on the basis vectors. In this way a one-to-one correspondence is established between the set of linear forms defined on the linear space L and rows of length n.
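A sketch of this correspondence in Python with numpy (the row a is an arbitrary example): the linear form is the dot product with its row of values on the basis vectors, and conditions a) and b) then hold automatically.

```python
import numpy as np

a = np.array([3.0, -1.0, 2.0])   # row of values a_i = f(e_i)
f = lambda x: a @ x              # f(x) = a_1 x_1 + ... + a_n x_n

x, y = np.random.rand(3), np.random.rand(3)
lam = 4.2
assert np.isclose(f(x + y), f(x) + f(y))   # condition a)
assert np.isclose(f(lam * x), lam * f(x))  # condition b)
```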

Linear forms can be added and multiplied by real numbers according to the rules:

(f + g)(x)=f(x)+g(x), (λf)(x) = λf(x).

The operations so introduced turn the set of linear forms on the space L into a linear space. This linear space is called the conjugate (dual) space of the linear space L and is denoted L*.

Using the basis e chosen in the space L, let us construct a basis in the dual space L*. For each vector e_i of the basis e, consider the linear form f_i for which f_i(e_i) = 1 and f_i(e_j) = 0 for all vectors e_j except e_i. We obtain a system of linear forms f_1, ..., f_n ∈ L*. Let us show that this system is linearly independent. Suppose some linear combination of these forms equals the zero linear form: f = α_1 f_1 + ... + α_n f_n = 0. The form f takes zero values on all basis vectors. But

$$f(e_i) = \alpha_1 f_1(e_i) + \dots + \alpha_n f_n(e_i) = \alpha_i.$$

Zero values of f on the basis vectors are therefore equivalent to the equalities α_i = 0, i = 1, ..., n, so the system of linear forms f_1, ..., f_n is linearly independent.

The system of linear forms f_1, ..., f_n is a basis of the dual space. Indeed, since the system is linearly independent, it suffices to prove that every linear form in L* is a linear combination of them. Choose an arbitrary linear form f from L* and let a_1, ..., a_n be the values of f on the basis vectors. These values uniquely determine the linear form. But the linear combination f' = a_1 f_1 + ... + a_n f_n is also a linear form, and it takes the same values a_1, ..., a_n on the basis vectors. Hence the two linear forms coincide, and we obtain the equality f = f' = a_1 f_1 + ... + a_n f_n, i.e. the expansion of an arbitrarily chosen linear form with respect to the system of forms f_1, ..., f_n.

The above reasoning shows that the dual space L* has the same dimension as L. The basis f_1, ..., f_n we constructed depends on the choice of the basis e in the space L.

Definition 10.2. Bases e_1, ..., e_n and f_1, ..., f_n of a linear space L and of the dual space L* are called biorthogonal, or mutual, if

$$f_i(e_j) = \begin{cases} 1, & i = j, \\ 0, & i \neq j. \end{cases}$$

If the bases e_1, ..., e_n and f_1, ..., f_n are mutual, then the coordinates of an arbitrary form f in the basis f_1, ..., f_n are the values of this form on the vectors of the mutual basis e_1, ..., e_n. When the linear space L and the dual space L* are considered together, the elements of both spaces are called vectors, but the elements of the dual space L* are called covariant vectors (covectors), while the elements of the linear space L are called contravariant vectors (or simply vectors). The coordinates of both are usually taken in mutual bases, with the index of the coordinates of contravariant vectors written as a superscript and that of covariant vectors as a subscript.

The notation f(x) can be viewed in two ways. Fixing the form f and varying the vector x, we obtain all possible values of the linear form. But if we fix the vector x and vary the linear form f, we obtain a function defined on the dual space L*. It is easy to verify that this function is linear, since, by the definition of the sum of linear forms and the product of a linear form by a number,

(f + g)(x) = f(x) + g(x), (λf)(x) = λf(x)

Thus each vector x ∈ L corresponds to a linear form on the dual space L*, that is, an element of the double dual space (L*)* = L**. We obtain a mapping φ: L → L**. It is easy to verify that this mapping is linear and injective. Injectivity implies dim im φ = dim L = n. But the dual space L* has the same dimension as L, and dim L** = dim L* = dim L. Thus the dimension of the linear subspace im φ in L** coincides with the dimension of the whole double dual space. Hence im φ = L** and the mapping φ is an isomorphism. Note that this isomorphism does not depend on the choice of any basis. It is therefore natural to identify the linear forms defined on L* with the elements of the space L. This means that the double dual space coincides with the original linear space, L** = L: if L* is dual to L, then L is dual to L*.

The reciprocity between a linear space and its dual points to the symmetry of the relationship between vectors and covectors. It is therefore convenient, instead of writing f(x), to use a symmetric notation: (f, x). We will also now denote linear forms in bold italics: (f, x). This notation resembles that of the scalar product, but unlike the latter, the arguments here are taken from different spaces. The expression (f, x) can itself be regarded as a mapping defined on the set L* × L that assigns a real number to a pair consisting of a covector and a vector; this mapping is linear in each of its arguments.

Theorem 10.1. Let b and c be two bases of an n-dimensional linear space L, and let U be the transition matrix from b to c. Then the bases b* and c* of the dual space L*, mutual to the bases b and c respectively, are related by

$$c^* = b^*\,(U^T)^{-1}, \qquad b^* = c^*\,U^T.$$

The coordinates f_c = (f_{c,1} ... f_{c,n}) of the linear form f in the basis c* are the values of this form on the basis vectors c = (c_1 ... c_n). Let us find out how the coordinates of the form f in the two bases c* and b* are related.

The bases b and c are connected through the transition matrix by the matrix relation c = bU (see 1.8). This relation is an equality of rows of length n composed of vectors. From the equality of the rows of vectors it follows that the rows of values of the linear form f on these vectors are also equal:

$$((f, c_1)\ \dots\ (f, c_n)) = ((f, b_1)\ \dots\ (f, b_n))\,U, \qquad \text{that is,}\quad f_c = f_b U,$$

where f_b and f_c denote the coordinate rows of the form f in the bases b* and c* respectively. Transposing this equality, we obtain the usual form of the relation between the coordinates of elements of a linear space, in which coordinates are written as columns:

$$(f_c)^T = U^T (f_b)^T.$$

This relation means that the matrix U^T is the transition matrix from the basis c*, which plays the role of the old basis in the formula, to the basis b*, which plays the role of the new one. Consequently b* = c* U^T, and multiplying by the matrix (U^T)^{-1} we obtain c* = b* (U^T)^{-1}.
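Theorem 10.1 can be verified numerically (a sketch in Python with numpy, taking b to be the standard basis, so that the vectors of c = bU are the columns of U and each form is represented by its coordinate row):

```python
import numpy as np

n = 3
U = np.random.rand(n, n) + n * np.eye(n)   # transition matrix from b to c

C = U                                      # columns: the vectors c_j
# c* = b*(U^T)^{-1}: with b* represented by the rows of the identity,
# the coordinate rows of the forms c*_i come out as the rows of U^{-1}.
C_star = np.linalg.inv(U)                  # rows: the forms c*_i

assert np.allclose(C_star @ C, np.eye(n))  # biorthogonality c*_i(c_j) = delta_ij
```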

If the linear space L is Euclidean, then the scalar product generates an isomorphism between L and L* that is independent of the basis, which allows us to identify a Euclidean space with its dual. Indeed, for any vector a ∈ L the mapping x → (a, x) is a linear form on L, since the scalar product is linear in its second argument. This yields a mapping ψ that assigns to the vector a ∈ L the linear form f_a(x) = (a, x). This mapping is linear by the properties of the scalar product, and it is injective: if (a, x) = 0 for every x ∈ L, then (a, a) = 0, i.e. a = 0. Since the linear spaces L and L* are finite-dimensional and of the same dimension, the mapping ψ is bijective and realizes an isomorphism of these spaces. So for a Euclidean space L* = L; in this sense a Euclidean space is a "self-adjoint" space.
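As a final sketch (Python with numpy; the orthonormal basis Q is generated here via a QR factorization of a random matrix): for an orthonormal basis, the mutual basis of forms is represented by the same rows, which is exactly the identification of L* with L described above.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # orthonormal basis, columns q_i

# The mutual basis consists of the forms f_i with f_i(q_j) = delta_ij,
# i.e. the rows of Q^{-1}.  Orthonormality gives Q^{-1} = Q^T, so f_i is
# the scalar-product form x -> (q_i, x): the map psi identifies a with f_a.
F = np.linalg.inv(Q)
assert np.allclose(F, Q.T)
assert np.allclose(F @ Q, np.eye(3))
```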