1 Fundamentals of Tensor Theory

This chapter summarizes the definitions and results of the tensor operations that are used in plate theory. It can be divided into two parts:
1. Tensor algebra, where only algebraic operations such as addition and multiplication come into play.
2. Tensor analysis, which also involves the concept of derivatives.
The results are reviewed here without proofs. For a detailed presentation, the reader is referred to mathematical works dedicated to tensor theory.

1.1. Tensor algebra

Let us consider a 3-dimensional Euclidean vector space $E$, endowed with the usual scalar product $(\mathbf{a},\mathbf{b}) \mapsto \mathbf{a}.\mathbf{b}$ and the Euclidean norm $\|\cdot\|$. A basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$, not necessarily orthonormal, is chosen beforehand for $E$.

1.1.1. Contravariant and covariant components of a vector

Let $\mathbf{u}$ be a vector in $E$. The components of $\mathbf{u}$ in the basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ are denoted by $u^1, u^2, u^3$ and we write $\mathbf{u} = u^i \mathbf{g}_i$, using the Einstein summation convention over any repeated index; here, the index $i$ varies from 1 to 3. As the basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ is fixed, the vector $\mathbf{u}$ is determined by the coefficients $u^1, u^2, u^3$. On the other hand, the vector $\mathbf{u}$ is also determined by the three coefficients $u_i \equiv \mathbf{u}.\mathbf{g}_i$, $i \in \{1,2,3\}$. Indeed, we have
\[ \forall i \in \{1,2,3\}, \quad u_i \equiv \mathbf{u}.\mathbf{g}_i = (u^j \mathbf{g}_j).\mathbf{g}_i = u^j\, (\mathbf{g}_i.\mathbf{g}_j) \tag*{[1.1]} \]
By writing
\[ \forall i,j \in \{1,2,3\}, \quad g_{ij} \equiv \mathbf{g}_i.\mathbf{g}_j \tag*{[1.2]} \]
we can rewrite equation [1.1] in matrix form:
\[ \begin{Bmatrix} u_1 \\ u_2 \\ u_3 \end{Bmatrix} = \begin{bmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{bmatrix} \begin{Bmatrix} u^1 \\ u^2 \\ u^3 \end{Bmatrix} \tag*{[1.3]} \]
The $3 \times 3$ matrix $[g_{..}]$ with components $g_{ij}$, $i,j \in \{1,2,3\}$, is symmetric. It is invertible because $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ is a basis; therefore, either of the triplets $(u^1,u^2,u^3)$ or $(u_1,u_2,u_3)$ determines the other.
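To make these relations concrete, here is a minimal numerical sketch (not part of the original text; a hypothetical oblique basis is chosen for illustration, and NumPy is assumed available) that builds the matrix $[g_{..}]$ and converts between the two kinds of components via [1.3]:

```python
import numpy as np

# Hypothetical oblique (non-orthonormal) basis of R^3; rows are g_1, g_2, g_3.
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.3, 1.0]])

g_cov = G @ G.T            # metric matrix [g..]: g_ij = g_i . g_j, Eq. [1.2]

u_contra = np.array([2.0, -1.0, 3.0])   # contravariant components u^i
u_cov = g_cov @ u_contra                # covariant components u_i = g_ij u^j, Eq. [1.3]

# [g..] is symmetric and invertible, so either triplet determines the other:
assert np.allclose(np.linalg.solve(g_cov, u_cov), u_contra)
```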
Definitions. [1.4]
– The contravariant components of the vector $\mathbf{u}$ in the basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ are the components $u^1, u^2, u^3$ in this basis. They are such that $\mathbf{u} = u^i \mathbf{g}_i$.
– The covariant components of the vector $\mathbf{u}$ in the basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ are the coefficients $u_1, u_2, u_3$ defined by $u_i \equiv \mathbf{u}.\mathbf{g}_i$.
The notation convention with superscripts and subscripts (upper and lower indices) is systematically adopted in tensor theory. The advantage of this convention, as will be seen later on, is that it allows formulae to be read and written easily and systematically.
Let us illustrate the concept of contravariant and covariant components in the two-dimensional space $\mathbb{R}^2$. We choose a basis $(\mathbf{g}_1,\mathbf{g}_2)$ for $\mathbb{R}^2$, formed of two unit vectors ($\|\mathbf{g}_1\| = \|\mathbf{g}_2\| = 1$), and we consider an arbitrary vector $\mathbf{u}$. In Fig. 1.1:
– the contravariant components $u^1, u^2$ of the vector $\mathbf{u}$ are its oblique components along $\mathbf{g}_1$ and $\mathbf{g}_2$;
– the covariant components $u_1, u_2$ are the measures of the orthogonal projections of $\mathbf{u}$ along $\mathbf{g}_1$ and $\mathbf{g}_2$.
Figure 1.1: Illustration in $\mathbb{R}^2$ of the contravariant and covariant components of a vector $\mathbf{u}$

Using this example, we can see that the contravariant and covariant components are usually distinct. According to Eq. [1.3], the necessary and sufficient condition for them to be identical is that the matrix $[g_{..}]$ be equal to the identity matrix, that is, that $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ be orthonormal.
Theorem. Let $\mathbf{u}$ be a vector with contravariant components $u^i$ and covariant components $u_i$; let $\mathbf{v}$ be a vector with contravariant components $v^i$ and covariant components $v_i$. The scalar product of $\mathbf{u}$ and $\mathbf{v}$ is expressed by
\[ \mathbf{u}.\mathbf{v} = u^i v_i = u_i v^i \tag*{[1.5]} \]
1.1.2. Dual basis

Notation. The components of the inverse of the matrix $[g_{..}]$ are denoted by $g^{ij}$:
\[ \forall i,j \in \{1,2,3\}, \quad g^{ij} \equiv \big([g_{..}]^{-1}\big)_{ij} = g^{ji} \quad\Leftrightarrow\quad g^{ik} g_{kj} = \delta^i_j \ \text{ and } \ g_{ik} g^{kj} = \delta_i^j \]
where $\delta^i_j$ (also written $\delta_{ij}$) is Kronecker's symbol:
\[ \delta^i_j = 1 \ \text{if } i = j, \qquad \delta^i_j = 0 \ \text{otherwise.} \]
Theorem and definition. The family of vectors denoted by $(\mathbf{g}^1,\mathbf{g}^2,\mathbf{g}^3)$ and defined by
\[ \mathbf{g}^i \equiv g^{ij}\, \mathbf{g}_j \quad\Leftrightarrow\quad \mathbf{g}_i \equiv g_{ij}\, \mathbf{g}^j \tag*{[1.6]} \]
is a basis of $E$, called the dual basis of $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$, as opposed to the basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ itself, which is called the primal basis.
It must be pointed out that the dual basis is constructed via the following chain:
\[ \text{primal basis } (\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3) \ \to\ \text{matrix } [g_{..}] \ \to\ \text{inverse matrix } [g^{..}] \ \to\ \text{dual basis } (\mathbf{g}^1,\mathbf{g}^2,\mathbf{g}^3) \]
The following theorem gives another characterization of the dual basis, in addition to definition [1.6].
Theorem.
– The vectors of the primal and dual bases satisfy the orthogonality relations
\[ \forall i,j \in \{1,2,3\}, \quad \mathbf{g}^i.\mathbf{g}_j = \delta^i_j \]
– Conversely, any triplet of vectors $(\mathbf{a}^1,\mathbf{a}^2,\mathbf{a}^3)$ which verifies $\mathbf{g}_i.\mathbf{a}^j = \delta_i^j$ is identical to the dual basis: $\forall i \in \{1,2,3\}, \ \mathbf{a}^i = \mathbf{g}^i$.
The following relationship is homologous to [1.2]:
Theorem. $\mathbf{g}^i.\mathbf{g}^j = g^{ij}$
In general, the dual basis differs from the primal basis, except in the following special case:
Theorem. The primal basis is orthonormal $\Leftrightarrow$ the dual basis is identical to the primal basis.

1.1.3. Different representations of a vector

Theorem.
– The following relationships hold between the contravariant and covariant components of a vector $\mathbf{u}$:
\[ \forall i \in \{1,2,3\}, \quad u_i = g_{ij}\, u^j, \quad \text{and conversely} \quad u^i = g^{ij}\, u_j \]
We thus lower or raise the indices using the coefficients $g_{ij}$ and $g^{ij}$.
– The following relationship is homologous to $u_i \equiv \mathbf{u}.\mathbf{g}_i$:
\[ \forall i \in \{1,2,3\}, \quad u^i = \mathbf{u}.\mathbf{g}^i \]
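Continuing the numerical sketch started in section 1.1.1 (same hypothetical basis `G` and metric `g_cov`), the dual basis and the component relations above can be checked directly:

```python
g_contra = np.linalg.inv(g_cov)   # [g^..]
G_dual = g_contra @ G             # rows are the dual vectors g^i = g^ij g_j, Eq. [1.6]

# Characterization of the dual basis: g^i . g_j = delta^i_j
assert np.allclose(G_dual @ G.T, np.eye(3))
# g^i . g^j = g^ij
assert np.allclose(G_dual @ G_dual.T, g_contra)

# Raising an index: u^i = g^ij u_j, and also u^i = u . g^i
u_vec = G.T @ u_contra            # the vector u itself, u = u^i g_i
assert np.allclose(g_contra @ u_cov, u_contra)
assert np.allclose(G_dual @ u_vec, u_contra)
```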
Theorem and definition. A vector can be expressed either in the primal basis or in the dual basis, as follows:
\[ \mathbf{u} \equiv u^i\, \mathbf{g}_i = u_i\, \mathbf{g}^i \tag*{[1.7]} \]
These two forms are called the contravariant and the covariant representations of $\mathbf{u}$. From the previous theorem, we can also write $\mathbf{u} \equiv (\mathbf{u}.\mathbf{g}^i)\,\mathbf{g}_i = (\mathbf{u}.\mathbf{g}_i)\,\mathbf{g}^i$.
Theorem. The scalar product between vectors $\mathbf{u}$ and $\mathbf{v}$ may be written in the following different forms:
\[ \mathbf{u}.\mathbf{v} = u^i v_i = u_i v^i = g_{ij}\, u^i v^j = g^{ij}\, u_i v_j \tag*{[1.8]} \]
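A quick numerical check of [1.7] and [1.8], continuing the same sketch:

```python
v_contra = np.array([1.0, 0.0, -2.0])
v_cov = g_cov @ v_contra

# The two representations of u give the same vector, Eq. [1.7]:
assert np.allclose(G.T @ u_contra, G_dual.T @ u_cov)

# The four expressions of the scalar product, Eq. [1.8]:
dot = (G.T @ u_contra) @ (G.T @ v_contra)   # plain Euclidean u . v
for value in (u_contra @ v_cov, u_cov @ v_contra,
              u_contra @ g_cov @ v_contra, u_cov @ g_contra @ v_cov):
    assert np.isclose(dot, value)
```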
1.1.4. Results related to the orientation of the 3D space

The earlier results, written in three-dimensional space, can be generalized to a space of $n$ dimensions ($n$ finite) with obvious notation changes. On the contrary, the results discussed in this section apply only to a 3-dimensional space.
As the space $E$ is 3-dimensional, we can orient it and define a vector product (cross product) in it. We then obtain the following results related to the vector product and the mixed product, the latter denoted $(\mathbf{a},\mathbf{b},\mathbf{c}) \equiv (\mathbf{a}\times\mathbf{b}).\mathbf{c}$.

Theorem.
\[ \mathbf{g}_1 \times \mathbf{g}_2 = (\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)\,\mathbf{g}^3 \qquad \mathbf{g}_2 \times \mathbf{g}_3 = (\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)\,\mathbf{g}^1 \qquad \mathbf{g}_3 \times \mathbf{g}_1 = (\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)\,\mathbf{g}^2 \]
Conversely:
\[ \mathbf{g}^1 \times \mathbf{g}^2 = (\mathbf{g}^1,\mathbf{g}^2,\mathbf{g}^3)\,\mathbf{g}_3 \qquad \mathbf{g}^2 \times \mathbf{g}^3 = (\mathbf{g}^1,\mathbf{g}^2,\mathbf{g}^3)\,\mathbf{g}_1 \qquad \mathbf{g}^3 \times \mathbf{g}^1 = (\mathbf{g}^1,\mathbf{g}^2,\mathbf{g}^3)\,\mathbf{g}_2 \tag*{[1.9]} \]
The vectors $\mathbf{g}_1, \mathbf{g}_2$ are orthogonal to the vector $\mathbf{g}^3$, but they are not, in general, orthogonal to the vector $\mathbf{g}_3$, see Fig. 1.2.

Figure 1.2: Vector product of two vectors of the primal basis
Theorem.
\[ (\mathbf{g}^1,\mathbf{g}^2,\mathbf{g}^3) = \frac{1}{(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)} \tag*{[1.10]} \]
Therefore, the primal and dual bases have the same orientation.
Theorem. Hypothesis: the basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ is right-handed (from [1.10], this amounts to assuming that the basis $(\mathbf{g}^1,\mathbf{g}^2,\mathbf{g}^3)$ is right-handed). Then
\[ (\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3) = \sqrt{g} \quad\text{and}\quad (\mathbf{g}^1,\mathbf{g}^2,\mathbf{g}^3) = \frac{1}{\sqrt{g}}, \quad\text{where } g \equiv \det[g_{..}] \tag*{[1.11]} \]
By combining [1.9] and [1.11], we obtain:
Theorem.
\[ \mathbf{g}_1 \times \mathbf{g}_2 = \sqrt{g}\,\mathbf{g}^3 \qquad \mathbf{g}_2 \times \mathbf{g}_3 = \sqrt{g}\,\mathbf{g}^1 \qquad \mathbf{g}_3 \times \mathbf{g}_1 = \sqrt{g}\,\mathbf{g}^2 \]
Conversely:
\[ \mathbf{g}^1 \times \mathbf{g}^2 = \frac{\mathbf{g}_3}{\sqrt{g}} \qquad \mathbf{g}^2 \times \mathbf{g}^3 = \frac{\mathbf{g}_1}{\sqrt{g}} \qquad \mathbf{g}^3 \times \mathbf{g}^1 = \frac{\mathbf{g}_2}{\sqrt{g}} \tag*{[1.12]} \]
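Relations [1.11] and [1.12] are also easy to verify numerically with the same hypothetical basis (which is right-handed, since det G > 0):

```python
sqrt_g = np.sqrt(np.linalg.det(g_cov))

# Mixed product (g_1, g_2, g_3) = sqrt(g), Eq. [1.11]:
assert np.isclose(np.linalg.det(G), sqrt_g)

# g_1 x g_2 = sqrt(g) g^3  and  g^1 x g^2 = g_3 / sqrt(g), Eq. [1.12]:
assert np.allclose(np.cross(G[0], G[1]), sqrt_g * G_dual[2])
assert np.allclose(np.cross(G_dual[0], G_dual[1]), G[2] / sqrt_g)
```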
1.1.5. Tensor

Definition. By definition, a tensor of order $p$, where $p$ is a nonzero integer, is a multilinear form of order $p$ over $E^p$. [1.13] More precisely, if the form
\[ T : E \times \cdots \times E \to \mathbb{R}, \qquad (\mathbf{u}_1, \cdots, \mathbf{u}_p) \mapsto T(\mathbf{u}_1, \cdots, \mathbf{u}_p) \]
is a tensor of order $p$, it satisfies the following $p$-linearity properties: $\forall (\mathbf{u}_1,\cdots,\mathbf{u}_p) \in E^p$, $\forall i \in [1,p]$, $\forall \lambda \in \mathbb{R}$, $\forall \mathbf{v} \in E$,
\[ T(\mathbf{u}_1, \cdots, \lambda\mathbf{u}_i, \cdots, \mathbf{u}_p) = \lambda\, T(\mathbf{u}_1, \cdots, \mathbf{u}_i, \cdots, \mathbf{u}_p) \]
\[ T(\mathbf{u}_1, \cdots, \mathbf{u}_i + \mathbf{v}, \cdots, \mathbf{u}_p) = T(\mathbf{u}_1, \cdots, \mathbf{u}_i, \cdots, \mathbf{u}_p) + T(\mathbf{u}_1, \cdots, \mathbf{v}, \cdots, \mathbf{u}_p) \tag*{[1.14]} \]
Tensor algebra is, thus, multilinear algebra. Let us adopt the following generic system of notations:
– a 1st-order tensor is denoted by a letter with a bar over it, for example $\bar{\mathbf{a}}$;
– a 2nd-order tensor is usually denoted by a letter with two bars over it (for instance, $\bar{\bar{T}}$); however, in this book we will use bold-type symbols (as for vectors), for instance $\mathbf{T}$;
– a tensor of any order $\ge 3$ is usually denoted by a letter with as many bars over it as the order number; however, to simplify the writing, we will use a letter with double lines, for instance $\mathbb{T}$.
Definition [1.13] is intrinsic in that it does not call upon any basis of $E$. In the following section, we will give the image of a tensor by means of the basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ (and its dual basis $(\mathbf{g}^1,\mathbf{g}^2,\mathbf{g}^3)$).
Theorem and definition. Let $\bar{\mathbf{a}}$ be a 1st-order tensor. We have, for every vector $\mathbf{u} \in E$,
\[ \bar{\mathbf{a}}(\mathbf{u}) = a_i\, u^i = a^i\, u_i \tag*{[1.15]} \]
where
– the coefficients $a_i \equiv \bar{\mathbf{a}}(\mathbf{g}_i)$, $i \in \{1,2,3\}$, are called the covariant components of the 1st-order tensor $\bar{\mathbf{a}}$,
– and the coefficients $a^i \equiv \bar{\mathbf{a}}(\mathbf{g}^i)$, $i \in \{1,2,3\}$, are called the contravariant components of the 1st-order tensor $\bar{\mathbf{a}}$.
The notations $a_i, a^i$ used here are consistent with those of definition [1.4]. This is because, as will be seen later, these are also the covariant and contravariant components of a vector $\mathbf{a}$. The covariant and contravariant components are related through $\forall i \in \{1,2,3\}, \ a_i = g_{ij}\, a^j$. The tensor $\bar{\mathbf{a}}$ is completely defined when all the components $a_i$ or $a^i$ are known.
The statement for a 2nd-order tensor is similar:
Theorem and definition. Let $\mathbf{T}$ be a 2nd-order tensor. We have, for all vectors $\mathbf{u}, \mathbf{v} \in E$,
\[ \mathbf{T}(\mathbf{u},\mathbf{v}) = T_{ij}\, u^i v^j = T^{ij}\, u_i v_j = T_i{}^j\, u^i v_j = T^i{}_j\, u_i v^j \tag*{[1.16]} \]
where
– the coefficients $T_{ij} \equiv \mathbf{T}(\mathbf{g}_i,\mathbf{g}_j)$ are called the 2-covariant components of $\mathbf{T}$,
– the coefficients $T^{ij} \equiv \mathbf{T}(\mathbf{g}^i,\mathbf{g}^j)$ are called the 2-contravariant components of $\mathbf{T}$,
– and the coefficients $T_i{}^j \equiv \mathbf{T}(\mathbf{g}_i,\mathbf{g}^j)$, $T^i{}_j \equiv \mathbf{T}(\mathbf{g}^i,\mathbf{g}_j)$ are called the 1-covariant 1-contravariant or mixed components of $\mathbf{T}$.
These components are related by, $\forall i,j \in \{1,2,3\}$:
\[ T_{ij} = g_{ik}\, T^k{}_j = T_i{}^k\, g_{kj} = g_{ik}\, T^{k\ell}\, g_{\ell j}, \qquad T^{ij} = g^{ik}\, T_k{}^j = T^i{}_k\, g^{kj} = g^{ik}\, T_{k\ell}\, g^{\ell j}, \quad \text{etc.} \tag*{[1.17]} \]
The tensor $\mathbf{T}$ is completely defined when we know all the components $T_{ij}$, or $T^{ij}$, or $T_i{}^j$, or $T^i{}_j$.
The previous statement is easily generalized to tensors of any higher order. For example, for a 3rd-order tensor we have:
Theorem and definition. Let $\mathbb{T}$ be a 3rd-order tensor. We have, for all vectors $\mathbf{u}, \mathbf{v}, \mathbf{w} \in E$,
\[ \mathbb{T}(\mathbf{u},\mathbf{v},\mathbf{w}) = T_{ijk}\, u^i v^j w^k = T_{ij}{}^k\, u^i v^j w_k = \cdots \]
where the coefficients $T_{ij}{}^k \equiv \mathbb{T}(\mathbf{g}_i,\mathbf{g}_j,\mathbf{g}^k)$, for instance, are called mixed components of $\mathbb{T}$. We move from one component type to another using relationships such as $T_{ijk} = g_{im}\, T^m{}_{jk}$.
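As an illustration, the passage between component types translates directly into matrix operations; a sketch continuing the running example, with arbitrary made-up 2-covariant components:

```python
# Hypothetical 2-covariant components T_ij (an arbitrary matrix):
rng = np.random.default_rng(0)
T_cov = rng.random((3, 3))

T_mixed = g_contra @ T_cov               # T^i_j  = g^ik T_kj
T_contra = g_contra @ T_cov @ g_contra   # T^ij   = g^ik T_kl g^lj, Eq. [1.17]

# Lowering the indices again recovers the 2-covariant components:
assert np.allclose(g_cov @ T_contra @ g_cov, T_cov)
```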
The different types of components of a tensor differ simply in the upper or lower position of their indices. As a general rule, we lower or raise the indices of the components of a tensor using the coefficients $g_{ij}$ and $g^{ij}$:
– to lower a contravariant index, we use $g_{ij}$: $T^{..}{}_{i}{}^{..} = g_{ij}\, T^{..\,j\,..}$;
– to raise a covariant index, we use $g^{ij}$: $T^{..\,i\,..} = g^{ij}\, T^{..}{}_{j}{}^{..}$.
Theorem. The set of tensors of order $p$ (a given integer), with the internal law 'addition of mappings' and the external law 'multiplication of a mapping by a scalar', is a vector space.
• We will adopt two language conventions that will prove very useful.
Convention. By convention, we say that a scalar is a 0-order tensor. [1.18]
This convention is an abuse of language, as a '0-times linear form' has no meaning, contrary to a linear or bilinear form, both of which are perfectly defined. However, as will be seen in section 1.1.9, this convention allows us to say, for example, that the doubly-contracted product $\mathbf{S} : \mathbf{T}$ of two 2nd-order tensors $\mathbf{S}$ and $\mathbf{T}$ is a tensor of order $2 + 2 - 2 \times 2 = 0$, that is, that $\mathbf{S} : \mathbf{T}$ is a scalar.
The second convention is based on the following result:
Theorem and definition. [1.19]
(a) For any vector $\mathbf{a}$, there exists one and only one linear form, denoted by $\bar{\mathbf{a}}$, that verifies $\forall \mathbf{u} \in E, \ \mathbf{a}.\mathbf{u} = \bar{\mathbf{a}}(\mathbf{u})$. This form $\bar{\mathbf{a}}$ is called the linear form associated with the vector $\mathbf{a}$.
(b) For any linear form $\bar{\mathbf{a}}$, there exists one and only one vector, denoted by $\mathbf{a}$, that verifies $\forall \mathbf{u} \in E, \ \bar{\mathbf{a}}(\mathbf{u}) = \mathbf{a}.\mathbf{u}$. This vector is $\mathbf{a} = \bar{\mathbf{a}}(\mathbf{g}_i)\, \mathbf{g}^i$ and it is called the vector associated with the linear form $\bar{\mathbf{a}}$.
In the preceding statement, it is legitimate to use the same letter $a$ for the vector $\mathbf{a}$ as well as for the 1st-order tensor $\bar{\mathbf{a}}$. Indeed:
– from definition [1.4], $a_i \equiv \mathbf{a}.\mathbf{g}_i$ is the $i$-th covariant component of the vector $\mathbf{a}$,
– according to the definition following [1.15], $a_i \equiv \bar{\mathbf{a}}(\mathbf{g}_i)$ is the $i$-th covariant component of the tensor $\bar{\mathbf{a}}$,
and we know that $\mathbf{a}.\mathbf{g}_i = \bar{\mathbf{a}}(\mathbf{g}_i)$.
Theorem [1.19] leads us to adopt the following convention:
Convention. A 1st-order tensor $\bar{\mathbf{a}}$ will be designated, by abuse of language, by its associated vector $\mathbf{a}$. Conversely, a vector $\mathbf{a}$ will be called a 1st-order tensor. [1.20]
This is why we write $\mathbf{a}(\mathbf{u})$ instead of $\bar{\mathbf{a}}(\mathbf{u})$: the vector $\mathbf{a}$ is regarded as the linear form $E \ni \mathbf{u} \mapsto \mathbf{a}.\mathbf{u} \in \mathbb{R}$ and we have the equality $\mathbf{a}(\mathbf{u}) = \mathbf{a}.\mathbf{u}$, where the $\mathbf{a}$ on the left-hand side is understood as a linear form while the $\mathbf{a}$ on the right-hand side is a vector in $E$.
• In order to go further, we need the following theorem:
Theorem. A tensor of order $p$ maps $q$ given vectors ($q \le p$) to a tensor of order $p - q$, this tensor depending linearly on each of the $q$ vectors. [1.21]
As an application of this theorem, let us show that 2nd-order tensors can be regarded as linear mappings. Consider a 2nd-order tensor $\mathbf{T}$.
– By definition, $\mathbf{T}$ is the bilinear form
\[ \mathbf{T} : E \times E \to \mathbb{R}, \qquad (\mathbf{u},\mathbf{v}) \mapsto \mathbf{T}(\mathbf{u},\mathbf{v}) \tag*{[1.22]} \]
which maps each pair of vectors $(\mathbf{u},\mathbf{v})$ to the scalar $\mathbf{T}(\mathbf{u},\mathbf{v})$.
– Let us also consider the mapping from $E$ to $E$:
\[ E \to E, \qquad \mathbf{v} \mapsto \mathbf{T}(\,\cdot\,, \mathbf{v}) \tag*{[1.23]} \]
(the first variable of $\mathbf{T}$, symbolized by the dot, is left free; the second variable takes the value $\mathbf{v}$). According to theorem [1.21], for every vector $\mathbf{v}$, $\mathbf{T}(\,\cdot\,,\mathbf{v})$ is a tensor of order $2 - 1 = 1$, that is, a vector, by convention [1.20]. In addition, the bilinearity of $\mathbf{T}$ implies that the vector $\mathbf{T}(\,\cdot\,,\mathbf{v})$ depends linearly on $\mathbf{v}$. Thus, [1.23] is a linear mapping from $E$ to $E$ which maps every vector $\mathbf{v}$ to a vector $\mathbf{T}(\,\cdot\,,\mathbf{v})$ depending linearly on $\mathbf{v}$.
From this analysis, we can view the 2nd-order tensor $\mathbf{T}$ in either of the two following ways:
– as the bilinear form [1.22] (we operate on $E \times E$ and arrive in $\mathbb{R}$);
– or as the linear mapping [1.23], $\mathbf{T} : \mathbf{v} \mapsto$ a vector depending linearly on $\mathbf{v}$ (we operate on $E$ and arrive in $E$).
This double point of view is specific to 2nd-order tensors. It enables us to consider the terms 'second-order tensor' and 'linear mapping' as synonymous.

1.1.6. Metric tensor

Definition. The metric tensor, denoted by $\mathbf{g}$, is the 2nd-order tensor defined by
\[ \forall \mathbf{u}, \mathbf{v} \in E, \quad \mathbf{g}(\mathbf{u},\mathbf{v}) \equiv \mathbf{u}.\mathbf{v} \tag*{[1.24]} \]
The notation $\mathbf{g}$ is consistent with the notations introduced earlier in the chapter. Indeed:
– according to notation [1.2], we have $g_{ij} \equiv \mathbf{g}_i.\mathbf{g}_j$,
– from the definition following [1.16], the image of the two vectors $\mathbf{g}_i, \mathbf{g}_j$ under the bilinear form $\mathbf{g}$ is $\mathbf{g}(\mathbf{g}_i,\mathbf{g}_j)$, which is equal to the 2-covariant component $g_{ij}$ of $\mathbf{g}$,
– and we know that $\mathbf{g}(\mathbf{g}_i,\mathbf{g}_j) = \mathbf{g}_i.\mathbf{g}_j$.
It will be seen in the sequel that the metric tensor is equal to the 2nd-order identity tensor, denoted by $\mathbf{I}$.
1.1.7. Tensor product

We will look at two types of algebraic operations carried out on tensors: the tensor product and the contracted product. For 2nd-order tensors, we will add two more operations: transposition and inversion.
The concept of tensor product will be discussed using the example of tensors $\mathbf{S}$, $\mathbb{T}$ and $\mathbf{U}$ of orders 2, 3 and 2, respectively, knowing that the argument generalizes to any number of tensors of any orders.
Definition. The tensor product of $\mathbf{S}$ with $\mathbb{T}$, denoted by $\mathbf{S} \otimes \mathbb{T}$, is the tensor of order $2 + 3 = 5$ defined by
\[ \forall\ \mathbf{u},\mathbf{v},\mathbf{w},\mathbf{x},\mathbf{y} \in E, \quad (\mathbf{S} \otimes \mathbb{T})(\mathbf{u},\mathbf{v},\mathbf{w},\mathbf{x},\mathbf{y}) = \mathbf{S}(\mathbf{u},\mathbf{v})\ \mathbb{T}(\mathbf{w},\mathbf{x},\mathbf{y}) \tag*{[1.25]} \]
(The mapping $\mathbf{S} \otimes \mathbb{T}$ thus defined is a multilinear form of order 5.)
Theorem.
(a) The tensor product is associative: $(\mathbf{S} \otimes \mathbb{T}) \otimes \mathbf{U} = \mathbf{S} \otimes (\mathbb{T} \otimes \mathbf{U})$, which makes it possible to write $\mathbf{S} \otimes \mathbb{T} \otimes \mathbf{U}$ without parentheses. The image under the product $\mathbf{S} \otimes \mathbb{T} \otimes \mathbf{U}$ is given by $(\mathbf{S} \otimes \mathbb{T} \otimes \mathbf{U})(\mathbf{s},\mathbf{t},\mathbf{u},\mathbf{v},\mathbf{w},\mathbf{x},\mathbf{y}) = \mathbf{S}(\mathbf{s},\mathbf{t})\, \mathbb{T}(\mathbf{u},\mathbf{v},\mathbf{w})\, \mathbf{U}(\mathbf{x},\mathbf{y})$.
(b) The tensor product is (left- and right-) distributive over addition: $\mathbb{T} \otimes (\mathbf{S} + \mathbf{U}) = \mathbb{T} \otimes \mathbf{S} + \mathbb{T} \otimes \mathbf{U}$ and $(\mathbf{S} + \mathbf{U}) \otimes \mathbb{T} = \mathbf{S} \otimes \mathbb{T} + \mathbf{U} \otimes \mathbb{T}$.
(c) The tensor product is not commutative: $\mathbf{S} \otimes \mathbb{T} \ne \mathbb{T} \otimes \mathbf{S}$.
By writing the preceding theorem for 1st-order tensors (that is, for vectors) we obtain: $\forall$ vectors $\mathbf{a},\mathbf{b},\mathbf{c}$, $(\mathbf{a} \otimes \mathbf{b}) \otimes \mathbf{c} = \mathbf{a} \otimes (\mathbf{b} \otimes \mathbf{c})$, which makes it possible to write $\mathbf{a} \otimes \mathbf{b} \otimes \mathbf{c}$ without parentheses. The image under the product $\mathbf{a} \otimes \mathbf{b} \otimes \mathbf{c}$ is given by $(\mathbf{a} \otimes \mathbf{b} \otimes \mathbf{c})(\mathbf{u},\mathbf{v},\mathbf{w}) = (\mathbf{a}.\mathbf{u})(\mathbf{b}.\mathbf{v})(\mathbf{c}.\mathbf{w})$. In particular: $(\mathbf{g}_i \otimes \mathbf{g}_j \otimes \mathbf{g}_k)(\mathbf{u},\mathbf{v},\mathbf{w}) = u_i v_j w_k$.
Remark. According to convention [1.20], $\mathbf{a} \otimes \mathbf{b}$ in fact designates $\bar{\mathbf{a}} \otimes \bar{\mathbf{b}}$, where $\bar{\mathbf{a}}$ and $\bar{\mathbf{b}}$ are the 1st-order tensors associated with $\mathbf{a}$ and $\mathbf{b}$!
The associativity property generalizes to a tensor product involving several vectors, and we write, without parentheses, $\mathbf{a}_i \otimes \mathbf{a}_j \otimes \cdots \otimes \mathbf{a}_q$.
The following theorem gives the components of a tensor product relative to the basis vectors $\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3$ and their duals $\mathbf{g}^1,\mathbf{g}^2,\mathbf{g}^3$.
Theorem. The components of a tensor product are the products of the components of each tensor:
\[ (\mathbf{S} \otimes \mathbb{T})_{ijk\ell m} = S_{ij}\, T_{k\ell m}, \qquad (\mathbf{S} \otimes \mathbb{T})_{ij}{}^{k\ell}{}_m = S_{ij}\, T^{k\ell}{}_m, \quad \text{etc.} \]
\[ (\mathbf{S} \otimes \mathbb{T} \otimes \mathbf{U})_{ijk\ell mnp} = S_{ij}\, T_{k\ell m}\, U_{np}, \qquad (\mathbf{S} \otimes \mathbb{T} \otimes \mathbf{U})_{ij}{}^{k\ell m}{}_n{}^p = S_{ij}\, T^{k\ell m}\, U_n{}^p, \quad \text{etc.} \]
Applying the preceding theorem to 1st-order tensors gives
\[ (\mathbf{a} \otimes \mathbf{b})_{ij} = a_i b_j, \quad (\mathbf{a} \otimes \mathbf{b})^{ij} = a^i b^j, \quad (\mathbf{a} \otimes \mathbf{b} \otimes \mathbf{c})_{ijk} = a_i b_j c_k, \quad (\mathbf{a} \otimes \mathbf{b} \otimes \mathbf{c})^i{}_j{}^k = a^i b_j c^k, \quad \text{etc.} \]
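The component rule for tensor products is conveniently expressed with np.einsum; a small sketch (the vectors a and b below are made up for illustration):

```python
a_contra = np.array([1.0, 2.0, 0.0])   # contravariant components a^i
b_cov = np.array([0.0, -1.0, 1.0])     # covariant components b_j

# Mixed components of a tensor product: (a ⊗ b)^i_j = a^i b_j
ab = np.einsum('i,j->ij', a_contra, b_cov)
assert np.allclose(ab, np.outer(a_contra, b_cov))

# For three vectors the components form a rank-3 array: (a ⊗ b ⊗ a)^i_j^k = a^i b_j a^k
abc = np.einsum('i,j,k->ijk', a_contra, b_cov, a_contra)
assert abc[0, 1, 0] == a_contra[0] * b_cov[1] * a_contra[0]
```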
In particular, for basis vectors: $(\mathbf{g}_i \otimes \mathbf{g}_j \otimes \mathbf{g}_k)^{\ell mn} = \delta_i^\ell\, \delta_j^m\, \delta_k^n$.

1.1.8. Tensor basis - Different representations of a tensor

In [1.7] we saw that a vector $\mathbf{u}$ may be written in two ways: $\mathbf{u} = u^i \mathbf{g}_i = u_i \mathbf{g}^i$, the last two sides being called the contravariant and covariant representations of $\mathbf{u}$. The following theorem shows that a 2nd-order tensor may be decomposed in different ways in bases that are called tensor bases.
Theorem and definition.
1. The $3^2$ tensors $\mathbf{g}^i \otimes \mathbf{g}^j$, $i,j \in \{1,2,3\}$, form a basis, called the 2-contravariant tensor basis, of the ($3^2$-dimensional) vector space of 2nd-order tensors. Any 2nd-order tensor $\mathbf{T}$ can be decomposed over this basis in a unique manner, as follows:
\[ \mathbf{T} = T_{ij}\, \mathbf{g}^i \otimes \mathbf{g}^j \]
recalling that the $T_{ij}$ are the 2-covariant components of $\mathbf{T}$: $T_{ij} \equiv \mathbf{T}(\mathbf{g}_i,\mathbf{g}_j)$. In other words, the components of $\mathbf{T}$ in the basis $\mathbf{g}^i \otimes \mathbf{g}^j$ are the 2-covariant components of $\mathbf{T}$.
2. Similarly, the $3^2$ tensors $\mathbf{g}_i \otimes \mathbf{g}^j$, $i,j \in \{1,2,3\}$, form a basis called the 1-covariant 1-contravariant tensor basis. Any 2nd-order tensor $\mathbf{T}$ can be uniquely decomposed in this basis:
\[ \mathbf{T} = T^i{}_j\, \mathbf{g}_i \otimes \mathbf{g}^j \quad\text{where } T^i{}_j \equiv \mathbf{T}(\mathbf{g}^i,\mathbf{g}_j) \]
3. The $3^2$ tensors $\mathbf{g}^i \otimes \mathbf{g}_j$, $i,j \in \{1,2,3\}$, form a basis called the 1-contravariant 1-covariant tensor basis:
\[ \mathbf{T} = T_i{}^j\, \mathbf{g}^i \otimes \mathbf{g}_j \quad\text{where } T_i{}^j \equiv \mathbf{T}(\mathbf{g}_i,\mathbf{g}^j) \]
4. Finally, the $3^2$ tensors $\mathbf{g}_i \otimes \mathbf{g}_j$, $i,j \in \{1,2,3\}$, form a basis called the 2-covariant tensor basis:
\[ \mathbf{T} = T^{ij}\, \mathbf{g}_i \otimes \mathbf{g}_j \quad\text{where } T^{ij} \equiv \mathbf{T}(\mathbf{g}^i,\mathbf{g}^j) \]
5. To summarize:
\[ \mathbf{T} = T_{ij}\, \mathbf{g}^i \otimes \mathbf{g}^j = T^i{}_j\, \mathbf{g}_i \otimes \mathbf{g}^j = T_i{}^j\, \mathbf{g}^i \otimes \mathbf{g}_j = T^{ij}\, \mathbf{g}_i \otimes \mathbf{g}_j \tag*{[1.26]} \]
These four decompositions of the tensor $\mathbf{T}$ are called the four representations (or representatives) of $\mathbf{T}$. They are, respectively, the 2-covariant, 1-contravariant 1-covariant, 1-covariant 1-contravariant, and 2-contravariant representations.
The preceding theorem generalizes to tensors of any order greater than 2. For example, the statement for 3rd-order tensors reads:
Theorem. The $3^3$ tensors $\mathbf{g}^i \otimes \mathbf{g}^j \otimes \mathbf{g}^k$, $i,j,k \in \{1,2,3\}$ (or $\mathbf{g}_i \otimes \mathbf{g}^j \otimes \mathbf{g}^k$, etc.), form a basis of the ($3^3$-dimensional) vector space of 3rd-order tensors. The $2^3$ representations of a tensor $\mathbb{T}$ are
\[ \mathbb{T} = T_{ijk}\, \mathbf{g}^i \otimes \mathbf{g}^j \otimes \mathbf{g}^k = T^i{}_{jk}\, \mathbf{g}_i \otimes \mathbf{g}^j \otimes \mathbf{g}^k = T_{ij}{}^k\, \mathbf{g}^i \otimes \mathbf{g}^j \otimes \mathbf{g}_k = \text{etc.} \]
The representations of a tensor differ only in the position (superscript or subscript) of the indices.
Definition. Let $\mathcal{T}$ be a tensor of order $p$ and let $\mathcal{T} = T^{ijk}{}_{\ell}{}^{m\cdots}{}_{\cdots}\, \mathbf{g}_i \otimes \mathbf{g}_j \otimes \mathbf{g}_k \otimes \mathbf{g}^\ell \otimes \mathbf{g}_m \cdots$ be one of its possible representations, such that the components $T^{ijk}{}_{\ell}{}^{m\cdots}{}_{\cdots}$ include $q$ superscript indices and $r$ subscript indices ($q + r = p$). The values $q$ and $r$ are called the variances of $\mathcal{T}$. A representation of the tensor may be $q$-times contravariant, $r$-times covariant, or, more briefly, have the variance $(q,r)$:
– if $q = 0$, the representation is said to be purely covariant,
– if $r = 0$, the representation is said to be purely contravariant,
– if $q \ne 0$ and $r \ne 0$, the representation is said to be mixed.
For instance, a vector has two representations, of variance $(1,0)$ and $(0,1)$, respectively.
Theorem. The four representations of the metric tensor $\mathbf{g}$ defined in [1.24] are
\[ \mathbf{g} = g_{ij}\, \mathbf{g}^i \otimes \mathbf{g}^j = g^{ij}\, \mathbf{g}_i \otimes \mathbf{g}_j = \delta^i_j\, \mathbf{g}_i \otimes \mathbf{g}^j = \delta_i^j\, \mathbf{g}^i \otimes \mathbf{g}_j \tag*{[1.27]} \]
• Basis of a tensor product. As a tensor product is a tensor, let us determine its tensor basis. As an example, consider the product $\mathbf{S} \otimes \mathbb{T}$ of two tensors and the product $\mathbf{S} \otimes \mathbb{T} \otimes \mathbf{U}$ of three tensors, where $\mathbf{S}$, $\mathbb{T}$ and $\mathbf{U}$ are of orders 2, 3 and 2, respectively.
Theorem. Let $\mathbf{S} = S_{ij}\, \mathbf{g}^i \otimes \mathbf{g}^j = S^{ij}\, \mathbf{g}_i \otimes \mathbf{g}_j = \cdots$ and $\mathbb{T} = T_{k\ell m}\, \mathbf{g}^k \otimes \mathbf{g}^\ell \otimes \mathbf{g}^m = T^{k\ell m}\, \mathbf{g}_k \otimes \mathbf{g}_\ell \otimes \mathbf{g}_m = \cdots$ be two tensors. The representations of the tensor product $\mathbf{S} \otimes \mathbb{T}$ are
\[ \mathbf{S} \otimes \mathbb{T} = S_{ij}\, T_{k\ell m}\, \mathbf{g}^i \otimes \mathbf{g}^j \otimes \mathbf{g}^k \otimes \mathbf{g}^\ell \otimes \mathbf{g}^m = S^{ij}\, T_{k\ell m}\, \mathbf{g}_i \otimes \mathbf{g}_j \otimes \mathbf{g}^k \otimes \mathbf{g}^\ell \otimes \mathbf{g}^m = \text{etc.} \tag*{[1.28]} \]
Thus, the product $\mathbf{S} \otimes \mathbb{T}$ is decomposed over the tensor products of the basis tensors of $\mathbf{S}$ and $\mathbb{T}$: the basis of the tensor product is the tensor product of the bases.
Applying this result to two vectors $\mathbf{a}, \mathbf{b}$ yields
\[ \mathbf{a} \otimes \mathbf{b} = a_i b_j\, \mathbf{g}^i \otimes \mathbf{g}^j = a^i b^j\, \mathbf{g}_i \otimes \mathbf{g}_j = \cdots \]
In other words: $\forall i,j \in \{1,2,3\}$, $(\mathbf{a} \otimes \mathbf{b})_{ij} = a_i b_j$, $(\mathbf{a} \otimes \mathbf{b})^i{}_j = a^i b_j$, etc.
Theorem [1.28] easily generalizes to the tensor product of three tensors:
Theorem. Let us consider three tensors $\mathbf{S}$, $\mathbb{T}$ and $\mathbf{U}$, where $\mathbf{S}$, $\mathbb{T}$ are decomposed as in theorem [1.28] and $\mathbf{U} = U_{np}\, \mathbf{g}^n \otimes \mathbf{g}^p = U_n{}^p\, \mathbf{g}^n \otimes \mathbf{g}_p = \cdots$. The representations of the tensor product $\mathbf{S} \otimes \mathbb{T} \otimes \mathbf{U}$ are
\[ \mathbf{S} \otimes \mathbb{T} \otimes \mathbf{U} = S_{ij}\, T_{k\ell m}\, U_{np}\ \mathbf{g}^i \otimes \mathbf{g}^j \otimes \mathbf{g}^k \otimes \mathbf{g}^\ell \otimes \mathbf{g}^m \otimes \mathbf{g}^n \otimes \mathbf{g}^p = S_{ij}\, T_{k\ell m}\, U_n{}^p\ \mathbf{g}^i \otimes \mathbf{g}^j \otimes \mathbf{g}^k \otimes \mathbf{g}^\ell \otimes \mathbf{g}^m \otimes \mathbf{g}^n \otimes \mathbf{g}_p = \text{etc.} \tag*{[1.29]} \]
Applying this result to three vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$ gives $\mathbf{a} \otimes \mathbf{b} \otimes \mathbf{c} = a_i b_j c_k\, \mathbf{g}^i \otimes \mathbf{g}^j \otimes \mathbf{g}^k = a^i b^j c^k\, \mathbf{g}_i \otimes \mathbf{g}_j \otimes \mathbf{g}_k = \cdots$

1.1.9. Contraction - Contracted product

Contraction is an algebraic operation commonly carried out on tensors. We will explain how it works using the example of a 4th-order tensor and then generalize the concept to a tensor of any order.
Theorem and definition. Let us consider a 4th-order tensor [1.30]
\[ \mathbb{T} : E \times E \times E \times E \to \mathbb{R}, \qquad (\mathbf{u},\mathbf{a},\mathbf{v},\mathbf{b}) \mapsto \mathbb{T}(\mathbf{u},\mathbf{a},\mathbf{v},\mathbf{b}) \]
The mapping $\mathbb{T}^c$ defined by
\[ \mathbb{T}^c : E \times E \to \mathbb{R}, \qquad (\mathbf{u},\mathbf{v}) \mapsto \mathbb{T}^c(\mathbf{u},\mathbf{v}) = \mathbb{T}(\mathbf{u}, \mathbf{g}^k, \mathbf{v}, \mathbf{g}_k) = \mathbb{T}(\mathbf{u}, \mathbf{g}_k, \mathbf{v}, \mathbf{g}^k) \quad \text{(summation over } k\text{)} \]
is independent of the choice of the basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ of $E$ (and of its dual basis $(\mathbf{g}^1,\mathbf{g}^2,\mathbf{g}^3)$); it is a 2nd-order tensor. The covariant components of this tensor, for example, are $(T^c)_{ij} = T_i{}^k{}_{jk} = T_{ikj}{}^k$. This tensor is called the tensor contracted from $\mathbb{T}$ over the vector arguments number 2 and 4, or over the indices 2 and 4. The operation described is called the contraction of $\mathbb{T}$ over the indices 2 and 4.
Two important rules must be kept in mind regarding contraction:
1. Contraction can only be carried out over contrasting indices (one upper index and one lower index, or vice versa).
2. The pair of indices over which the operation is carried out must be explicitly stated, as there are several different possibilities. For instance, we can have two different contractions of the same tensor $\mathbb{T}$:
– with the representation $\mathbb{T} = T^{ij}{}_{k\ell}\, \mathbf{g}_i \otimes \mathbf{g}_j \otimes \mathbf{g}^k \otimes \mathbf{g}^\ell$, contraction over the indices (2,3) yields the tensor $\mathbb{T}^c = T^{ij}{}_{j\ell}\, \mathbf{g}_i \otimes \mathbf{g}^\ell$,
– with the representation $\mathbb{T} = T_i{}^j{}_{k\ell}\, \mathbf{g}^i \otimes \mathbf{g}_j \otimes \mathbf{g}^k \otimes \mathbf{g}^\ell$, contraction over the indices (1,2) yields another tensor, $\mathbb{T}^c = T_i{}^i{}_{k\ell}\, \mathbf{g}^k \otimes \mathbf{g}^\ell$.
It can be verified that theorem [1.30] extends to tensors of any order, and we arrive at the following general definitions: [1.31]
– The contraction of a tensor of order $p$ is defined in a manner similar to [1.30]. It leads to a tensor of order $p - 2$.
– The double contraction of a tensor of order $p$ consists of carrying out two successive contractions over two couples of indices. This yields a tensor of order $p - 2 \times 2$.
– Higher-order contractions follow the same logic. Clearly, the maximum order of contraction is limited by the number of remaining available indices.
– The total contraction of a tensor of even order $2p$ is obtained by contracting $p$ times over $p$ pairs of indices. The pairs of indices over which the contraction is carried out must be specified. The result is a 0-order tensor, i.e. a scalar.
• The contracted product of two tensors is a concept derived from the tensor product and the contraction of this product.
Definitions. Let us consider two tensors $\mathcal{S}$ and $\mathcal{T}$ of orders $p$ and $q$, respectively. [1.32]
– The (singly-)contracted product of $\mathcal{S}$ and $\mathcal{T}$, denoted by $\mathcal{S}.\mathcal{T}$, is the tensor of order $p+q-2$ which results from contracting the tensor product $\mathcal{S} \otimes \mathcal{T}$ over the indices of ranks $p$ and $p+1$, while choosing compatible representations of $\mathcal{S}$ and $\mathcal{T}$, that is, representations where the indices involved are contrasting (an upper index and a lower index).
– The doubly-contracted product of $\mathcal{S}$ and $\mathcal{T}$, denoted by $\mathcal{S} : \mathcal{T}$, is the tensor of order $p+q-4$ which results from doubly contracting the tensor product $\mathcal{S} \otimes \mathcal{T}$: over the last index of $\mathcal{S}$ and the first index of $\mathcal{T}$, then over the penultimate index of $\mathcal{S}$ and the second index of $\mathcal{T}$, while choosing compatible representations of $\mathcal{S}$ and $\mathcal{T}$.
– The same method works for the $x$-times contracted product of $\mathcal{S}$ and $\mathcal{T}$ (denoted with $x$ dots). The rule can be summarized as follows: successive contractions of neighboring indices, choosing two compatible representations of $\mathcal{S}$ and $\mathcal{T}$. The maximum order of contraction is, clearly, limited by the number of remaining available indices. The $x$-times contracted product of $\mathcal{S}$ and $\mathcal{T}$ is a tensor of order $p+q-2x$.
– The totally contracted product of two tensors $\mathcal{S}$ and $\mathcal{T}$ of the same order $p$ is the total contraction of $\mathcal{S} \otimes \mathcal{T}$. We then obtain a scalar.
Contracted products are very widely used in tensor calculations. Here are some frequently used relationships for these operations that are useful to know. They are obtained by simply applying definition [1.32].
The contracted product of two vectors $\mathbf{u}$ and $\mathbf{v}$ is the scalar
\[ \mathbf{u}.\mathbf{v} = u^i v_i = u_i v^i \]
Remark. The 'dot' symbol used to denote the contracted product of two tensors is consistent with the same symbol used for the scalar product of two vectors. Indeed, the operation $\mathbf{u}.\mathbf{v}$ may be interpreted in two equivalent ways:
– it may be taken to be the scalar product of the two vectors $\mathbf{u}$ and $\mathbf{v}$, which can be expressed as $\mathbf{u}.\mathbf{v} = u^i v_i$ according to [1.5];
– it may also be understood as the contracted product of two 1st-order tensors $\mathbf{u}$ and $\mathbf{v}$, which, as seen above, is also written $\mathbf{u}.\mathbf{v} = u^i v_i$.
The contracted product of a 2nd-order tensor $\mathbf{T}$ and a vector $\mathbf{u}$ is expressed as follows:
\[ \mathbf{T}.\mathbf{u} = T^{ij} u_j\, \mathbf{g}_i = T^i{}_j u^j\, \mathbf{g}_i = T_{ij} u^j\, \mathbf{g}^i = T_i{}^j u_j\, \mathbf{g}^i \]
Remark. By virtue of the point of view [1.23], $\mathbf{T}$ is a linear mapping and its image $\mathbf{T}(\mathbf{x})$ of a vector $\mathbf{x}$ of $E$ is a vector of $E$. The concept of contracted product makes it possible to rewrite this linear mapping in the form
\[ \mathbf{T} : E \to E, \qquad \mathbf{x} \mapsto \mathbf{T}.\mathbf{x} \tag*{[1.33]} \]
The equality $\mathbf{T}(\mathbf{x}) = \mathbf{T}.\mathbf{x}$ conforms to the typical usage for linear mappings in mathematics: the image of a vector $\mathbf{x}$ under a linear mapping $f$ is denoted by $f.\mathbf{x}$ instead of $f(\mathbf{x})$, using the same 'dot' symbol as in a scalar product.
We have the associativity property:
\[ \forall\ \mathbf{u},\mathbf{v} \in E, \ \forall\ \text{2nd-order tensor } \mathbf{T}, \quad \mathbf{v}.(\mathbf{T}.\mathbf{u}) = (\mathbf{v}.\mathbf{T}).\mathbf{u} \tag*{[1.34]} \]
This enables us to write $\mathbf{v}.\mathbf{T}.\mathbf{u}$ without parentheses. It can easily be verified that
\[ \mathbf{v}.\mathbf{T}.\mathbf{u} = v_i T^{ij} u_j = v^i T_{ij} u^j = v_i T^i{}_j u^j = v^i T_i{}^j u_j \]
Another example involving associativity: $\forall$ vectors $\mathbf{u},\mathbf{v}$, $\forall$ 2nd-order tensors $\mathbf{S},\mathbf{T}$, $\mathbf{v}.(\mathbf{S}.\mathbf{T}).\mathbf{u} = \mathbf{v}.\mathbf{S}.(\mathbf{T}.\mathbf{u})$, such that we can write $\mathbf{v}.\mathbf{S}.\mathbf{T}.\mathbf{u}$ without any ambiguity.
The following equality makes it possible to transform the image $\mathbf{T}(\mathbf{u},\mathbf{v})$ under the bilinear form $\mathbf{T}$ into a more operational expression containing contracted products: $\forall \mathbf{u},\mathbf{v} \in E$, $\mathbf{T}(\mathbf{u},\mathbf{v}) = (\mathbf{T}.\mathbf{v}).\mathbf{u}$, which can be rewritten without parentheses thanks to the associativity property [1.34]:
\[ \forall \mathbf{u},\mathbf{v} \in E, \quad \mathbf{T}(\mathbf{u},\mathbf{v}) = \mathbf{u}.\mathbf{T}.\mathbf{v} \]
Hence the following expressions for the components of the tensor $\mathbf{T}$, generalizing those for a vector $\mathbf{u}$ ($u_i = \mathbf{u}.\mathbf{g}_i$, $u^i = \mathbf{u}.\mathbf{g}^i$):
– taking $\mathbf{u} = \mathbf{g}_i$, $\mathbf{v} = \mathbf{g}_j$, we obtain $T_{ij} \equiv \mathbf{T}(\mathbf{g}_i,\mathbf{g}_j) = \mathbf{g}_i.\mathbf{T}.\mathbf{g}_j$,
– similarly, taking $\mathbf{u} = \mathbf{g}^i$, $\mathbf{v} = \mathbf{g}^j$, we obtain $T^{ij} = \mathbf{g}^i.\mathbf{T}.\mathbf{g}^j$,
– and other similar expressions for $T_i{}^j$, $T^i{}_j$.
The contracted product of a tensor product and a vector can easily be calculated, as in the following relationship:
\[ \forall \mathbf{a},\mathbf{b},\mathbf{c} \in E, \quad (\mathbf{a} \otimes \mathbf{b}).\mathbf{c} = \mathbf{a}\,(\mathbf{b}.\mathbf{c}) \]
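These contracted-product identities reduce to plain index sums; a sketch using np.einsum with the arrays from the previous examples:

```python
# T.u with mixed components: (T.u)^i = T^i_k u^k  (a vector)
Tu = np.einsum('ik,k->i', T_mixed, u_contra)

# u.T.v = u_i T^ij v_j; two of the four equivalent forms compared:
uTv = np.einsum('i,ij,j->', u_cov, T_contra, v_cov)
assert np.isclose(uTv, np.einsum('i,ij,j->', u_contra, T_cov, v_contra))

# (a ⊗ b).c = a (b.c), with contravariant a, b and covariant c:
b_contra = g_contra @ b_cov
c_cov = g_cov @ np.array([1.0, 1.0, 1.0])
lhs = np.einsum('ij,j->i', np.outer(a_contra, b_contra), c_cov)
assert np.allclose(lhs, a_contra * (b_contra @ c_cov))
```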
1.1.10. Results specific to 2nd-order tensors

Representative matrices of a 2nd-order tensor

Given that a 2nd-order tensor is either a bilinear form or a linear mapping, we can talk about its representative matrices in a given basis.
Notations. Given a basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ of $E$, we represent a 2nd-order tensor $\mathbf{T}$ by one of the three $3 \times 3$ matrices below, called the representative matrices of $\mathbf{T}$ (in the considered basis):
1. The matrix denoted by $[T_{..}]$ contains the 2-covariant components of $\mathbf{T}$; the component in row $i$ and column $j$ is $T_{ij}$. In other words, the first index is the row number and the second index is the column number:
\[ [T_{..}] = \begin{bmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{bmatrix} \tag*{[1.35]} \]
2. The matrix denoted by $[T^{..}]$ contains the 2-contravariant components of $\mathbf{T}$; the component in row $i$ and column $j$ is $T^{ij}$:
\[ [T^{..}] = \begin{bmatrix} T^{11} & T^{12} & T^{13} \\ T^{21} & T^{22} & T^{23} \\ T^{31} & T^{32} & T^{33} \end{bmatrix} \tag*{[1.36]} \]
3. The matrix denoted by $[T^{.}{}_{.}]$ contains the mixed components of $\mathbf{T}$; the component in row $i$ and column $j$ is $T^i{}_j$. In other words, the upper index is the row number and the lower index the column number (the so-called 'uprow-lowcol' convention):
\[ [T^{.}{}_{.}] = \begin{bmatrix} T^1{}_1 & T^1{}_2 & T^1{}_3 \\ T^2{}_1 & T^2{}_2 & T^2{}_3 \\ T^3{}_1 & T^3{}_2 & T^3{}_3 \end{bmatrix} \tag*{[1.37]} \]
A 2nd-order tensor is completely determined by any one of its representative matrices in a given basis, and vice versa.

Transpose of a 2nd-order tensor

Definition. By definition, the transpose of a 2nd-order tensor $\mathbf{T}$, denoted by $\mathbf{T}^T$, is the 2nd-order tensor that verifies
\[ \forall \mathbf{x},\mathbf{y} \in E, \quad \mathbf{x}.\mathbf{T}.\mathbf{y} = \mathbf{y}.\mathbf{T}^T.\mathbf{x} \tag*{[1.38]} \]
Before stating the next theorem, let us recall the four possible representations [1.26] of a 2nd-order tensor $\mathbf{T}$:
\[ \mathbf{T} = T_{ij}\, \mathbf{g}^i \otimes \mathbf{g}^j = T^i{}_j\, \mathbf{g}_i \otimes \mathbf{g}^j = T_i{}^j\, \mathbf{g}^i \otimes \mathbf{g}_j = T^{ij}\, \mathbf{g}_i \otimes \mathbf{g}_j \]
Theorem. The four representations of the transpose of $\mathbf{T}$ are
\[ \mathbf{T}^T = (T^T)_{ij}\, \mathbf{g}^i \otimes \mathbf{g}^j = (T^T)^i{}_j\, \mathbf{g}_i \otimes \mathbf{g}^j = (T^T)_i{}^j\, \mathbf{g}^i \otimes \mathbf{g}_j = (T^T)^{ij}\, \mathbf{g}_i \otimes \mathbf{g}_j \]
with
\[ (T^T)_{ij} = T_{ji}, \qquad (T^T)^i{}_j = T_j{}^i, \qquad (T^T)_i{}^j = T^j{}_i, \qquad (T^T)^{ij} = T^{ji} \tag*{[1.39]} \]
These relationships show how to obtain the components of $\mathbf{T}^T$ from those of $\mathbf{T}$:
– the upper or lower position of the indices is maintained,
– the order of the indices (which comes first and which comes last) is switched.
The representative matrix (containing the 2-covariant or 2-contravariant components) of the transposed tensor $\mathbf{T}^T$ in a given basis is the transpose of the corresponding matrix of $\mathbf{T}$ in the same basis. This justifies the terminology 'transpose'.
The following result can easily be verified:
Theorem.
\[ \forall \mathbf{a},\mathbf{b} \in E, \quad (\mathbf{a} \otimes \mathbf{b})^T = \mathbf{b} \otimes \mathbf{a} \tag*{[1.40]} \]

Symmetric and antisymmetric tensors

Definitions.
1. A tensor $\mathbf{T}$ is symmetric if it is equal to its transpose: $\mathbf{T} = \mathbf{T}^T$.
2. A tensor $\mathbf{T}$ is antisymmetric if it is equal to the negative of its transpose: $\mathbf{T}^T = -\mathbf{T}$.
According to this definition and relationship [1.38], we have the following results:
\[ \mathbf{T} \text{ is symmetric} \ \Leftrightarrow\ \forall \mathbf{x},\mathbf{y} \in E, \ \mathbf{x}.\mathbf{T}.\mathbf{y} = \mathbf{y}.\mathbf{T}.\mathbf{x} \]
\[ \mathbf{T} \text{ is antisymmetric} \ \Leftrightarrow\ \forall \mathbf{x},\mathbf{y} \in E, \ \mathbf{x}.\mathbf{T}.\mathbf{y} = -\mathbf{y}.\mathbf{T}.\mathbf{x} \tag*{[1.41]} \]
Theorem. Translation of the symmetry of a 2nd-order tensor in terms of components. A 2nd-order tensor $\mathbf{T}$ is symmetric $\Leftrightarrow$ $\forall i,j \in \{1,2,3\}$:
\[ T^{ij} = T^{ji} \ \text{(symmetry of the 2-contravariant components)}, \quad T_{ij} = T_{ji} \ \text{(symmetry of the 2-covariant components)}, \quad T^i{}_j = T_j{}^i \ \text{(symmetry of the mixed components)} \tag*{[1.42]} \]
Let us consider a symmetric tensor $\mathbf{T}$. As the components $T^i{}_j$ and $T_j{}^i$ are equal, we can simply write them $T^i_j$, without specifying the order in which the indices $i, j$ are arranged; similarly, the representative matrix $[T^{.}{}_{.}]$ of [1.37] is simply denoted by $[T^{.}_{.}]$. The representative matrices $[T_{..}]$ and $[T^{..}]$ are symmetric. On the contrary, the matrix $[T^{.}_{.}]$ is not, in general, symmetric.

Identity tensor

Theorem. There exists one and only one 2nd-order tensor, called the (2nd-order) unit or identity tensor and denoted by $\mathbf{I}$, such that, for every 2nd-order tensor $\mathbf{T}$,
\[ \mathbf{T}.\mathbf{I} = \mathbf{I}.\mathbf{T} = \mathbf{T} \]
This tensor is the metric tensor $\mathbf{g}$ defined in [1.24]:
\[ \mathbf{I} = \mathbf{g} = g_{ij}\, \mathbf{g}^i \otimes \mathbf{g}^j = g^{ij}\, \mathbf{g}_i \otimes \mathbf{g}_j = \delta^i_j\, \mathbf{g}_i \otimes \mathbf{g}^j = \delta_i^j\, \mathbf{g}^i \otimes \mathbf{g}_j \]
The identity tensor is symmetric.
Theorem. The image of any vector $\mathbf{x}$ under the unit tensor is the vector $\mathbf{x}$ itself:
\[ \forall \mathbf{x} \in E, \quad \mathbf{I}.\mathbf{x} = \mathbf{x} \tag*{[1.43]} \]
The representative matrix (containing the mixed components [1.37]) of the identity tensor in the basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ is the $3 \times 3$ unit matrix.

Product of 2nd-order tensors

We can verify that the singly-contracted product $\mathbf{S}.\mathbf{T}$ of two 2nd-order tensors $\mathbf{S}$ and $\mathbf{T}$, defined in [1.32], is equal to the composition of the linear mappings $\mathbf{S}$ and $\mathbf{T}$; the usual symbol for the composition of functions, $\circ$, is replaced here by the dot. We have
\[ \mathbf{S}.\mathbf{T} = (S.T)^i{}_j\, \mathbf{g}_i \otimes \mathbf{g}^j = S^i{}_k\, T^k{}_j\, \mathbf{g}_i \otimes \mathbf{g}^j \]
We deduce that the representative matrix (containing the mixed components [1.37]) of the product $\mathbf{S}.\mathbf{T}$ in the basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ is the product of the representative matrices of $\mathbf{S}$ and $\mathbf{T}$ in the same basis, which justifies the term 'product'.

Inverse of a 2nd-order tensor

We can easily verify the following theorem:
Theorem and definition. Let $\mathbf{T}$ be a 2nd-order tensor. If there exists a tensor $\mathbf{S}$ such that $\mathbf{S}.\mathbf{T} = \mathbf{T}.\mathbf{S} = \mathbf{I}$, then this tensor $\mathbf{S}$ is unique. We call it the inverse of the tensor $\mathbf{T}$ and denote it by $\mathbf{T}^{-1}$.
The inverse of the tensor $\mathbf{T}$ is the inverse linear mapping of $\mathbf{T}$. The representative matrix (containing the mixed components [1.37]) of the inverse tensor $\mathbf{T}^{-1}$ in the basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ is the inverse of the matrix of $\mathbf{T}$ in the same basis.

Orthogonal tensor

Definition. A tensor $\mathbf{T}$ is said to be orthogonal if its inverse is equal to its transpose: $\mathbf{T}^{-1} = \mathbf{T}^T$.
An orthogonal tensor is a vector isometry; in mechanics, an orthogonal tensor with determinant $+1$ represents a rotation.
1.1.11. Results specific to 4th-order tensors

4th-order identity tensor

Definition. The 4th-order identity tensor, denoted by $\mathbb{I}$, is, by definition, the tensor such that
\[ \forall\ \text{4th-order tensor } \mathbb{T}, \quad \mathbb{I} : \mathbb{T} = \mathbb{T} : \mathbb{I} = \mathbb{T} \tag*{[1.44]} \]
Theorem.
\[ \mathbb{I} \equiv \mathbf{g}_i \otimes \mathbf{g}_j \otimes \mathbf{g}^j \otimes \mathbf{g}^i = \mathbf{g}^i \otimes \mathbf{g}^j \otimes \mathbf{g}_j \otimes \mathbf{g}_i = \mathbf{g}_i \otimes \mathbf{g}^j \otimes \mathbf{g}_j \otimes \mathbf{g}^i = \mathbf{g}^i \otimes \mathbf{g}_j \otimes \mathbf{g}^j \otimes \mathbf{g}_i \tag*{[1.45]} \]
Hence the 4-covariant components of the identity tensor $\mathbb{I}$:
\[ \forall i,j,k,\ell \in \{1,2,3\}, \quad I_{ijk\ell} = g_{i\ell}\, g_{jk} \tag*{[1.46]} \]
If we work with an orthonormal basis $(\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3)$, we have very simply $\mathbb{I} = \mathbf{e}_i \otimes \mathbf{e}_j \otimes \mathbf{e}_j \otimes \mathbf{e}_i$.

Transposes of a 4th-order tensor

Definition. Let $\mathbb{T}$ be a 4th-order tensor.
– The major transpose of $\mathbb{T}$, denoted by $\mathbb{T}^T$, is the 4th-order tensor whose 4-covariant components are
\[ \forall i,j,k,\ell \in \{1,2,3\}, \quad (T^T)_{ijk\ell} \equiv T_{k\ell ij} \tag*{[1.47]} \]
– The first minor transpose of $\mathbb{T}$, denoted by $\mathbb{T}^{T_1}$, is the 4th-order tensor defined by
\[ \forall i,j,k,\ell \in \{1,2,3\}, \quad (T^{T_1})_{ijk\ell} \equiv T_{jik\ell} \tag*{[1.48]} \]
– The second minor transpose of $\mathbb{T}$, denoted by $\mathbb{T}^{T_2}$, is the 4th-order tensor defined by
\[ \forall i,j,k,\ell \in \{1,2,3\}, \quad (T^{T_2})_{ijk\ell} \equiv T_{ij\ell k} \tag*{[1.49]} \]
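A self-contained numerical check of [1.44]-[1.49] in an orthonormal basis (so that the metric is the identity); a minimal sketch with a made-up 4th-order tensor:

```python
import numpy as np

# 4th-order identity in an orthonormal basis: I = e_i ⊗ e_j ⊗ e_j ⊗ e_i,
# i.e. I_ijkl = delta_il delta_jk.
I4 = np.einsum('il,jk->ijkl', np.eye(3), np.eye(3))

# Doubly-contracted product per [1.32]: last index of I with first of T,
# penultimate of I with second of T.
T4 = np.random.default_rng(1).random((3, 3, 3, 3))
IT = np.einsum('ijkl,lkmn->ijmn', I4, T4)
assert np.allclose(IT, T4)                 # I : T = T, Eq. [1.44]

# Major and minor transposes, Eqs. [1.47]-[1.49]:
T_major = np.transpose(T4, (2, 3, 0, 1))   # (T^T)_ijkl  = T_klij
T_minor1 = np.transpose(T4, (1, 0, 2, 3))  # (T^T1)_ijkl = T_jikl
T_minor2 = np.transpose(T4, (0, 1, 3, 2))  # (T^T2)_ijkl = T_ijlk
```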
1.2. Tensor analysis

So far, we have worked with a vector space and a single basis; we have defined the concepts of contravariant and covariant components and of the dual basis, all of these concepts being related to the chosen basis. We now move on to tensor analysis, where we work with an affine space (instead of a vector space) and introduce a basis at each point of this space. This basis may vary from one point to another.
Let us consider, then, a Euclidean three-dimensional affine space $\mathcal{E}$, with associated vector space $E$, endowed with the usual scalar product $(\mathbf{a},\mathbf{b}) \mapsto \mathbf{a}.\mathbf{b}$ and the Euclidean norm $\|\cdot\|$. An arbitrary point of $\mathcal{E}$ is denoted by $Q$ (this notation comes from plate theory, where $Q$ designates the current position of an arbitrary particle of the plate; the letter $P$, which would be more natural, is reserved for a particle located on the mid-surface of the plate).
1.2.1. Curvilinear coordinates

Definition.
– A curvilinear coordinate system is, by definition, a diffeomorphism $\psi$ (that is, a differentiable bijection whose inverse is also differentiable) defined over an open set $\mathcal{O}$ of $\mathbb{R}^3$ and whose codomain is a domain $\Omega$ of $\mathcal{E}$ (Fig. 1.3):
\[ \psi : \mathcal{O} \subset \mathbb{R}^3 \to \Omega \subset \mathcal{E}, \qquad (\xi^1,\xi^2,\xi^3) \mapsto Q \tag*{[1.50]} \]
– The scalars $\xi^1, \xi^2, \xi^3$ are called curvilinear coordinates.
– The $i$-th coordinate line in $\mathcal{O}$, $i \in \{1,2,3\}$, is the set
\[ \{(\xi^1,\xi^2,\xi^3) \in \mathcal{O} \mid \xi^i \text{ is variable, the other two coordinates are fixed}\} \]
The image under $\psi$ of the $i$-th coordinate line in $\mathcal{O}$ is called the $i$-th coordinate line (in $\mathcal{E}$).
– The coordinate surface $\xi^i = \mathrm{const}$ in $\mathcal{O}$ (where $i \in \{1,2,3\}$ and const designates a given constant) is the set
\[ \{(\xi^1,\xi^2,\xi^3) \in \mathcal{O} \mid \xi^i = \mathrm{const}, \text{ the other two coordinates are variable}\} \]
The image under $\psi$ of the coordinate surface $\xi^i = \mathrm{const}$ in $\mathcal{O}$ is called the coordinate surface $\xi^i = \mathrm{const}$ (in $\mathcal{E}$).
Figure 1.3: Curvilinear coordinate system (the figure is in 2D for an easy overview)

1.2.2. Natural basis - Natural frame

Definition.
– The natural basis at a point $Q$ is the basis defined by the vectors
\[ \mathbf{g}_i \equiv \frac{\partial Q}{\partial \xi^i}, \quad i \in \{1,2,3\} \tag*{[1.51]} \]
(these vectors do form a basis, since $\psi$ is a diffeomorphism).
– The natural local frame at point $Q$ is, by definition, $(Q; \mathbf{g}_1, \mathbf{g}_2, \mathbf{g}_3)$.
The natural basis defined in [1.51] will henceforth play the role of the basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ considered in tensor algebra. By definition, the vectors of the natural basis are tangent to the coordinate lines, see Fig. 1.3. In general, the natural basis is neither orthogonal nor normed, and its vectors vary from one point $Q$ to another, except in the case of Cartesian coordinates.

1.2.3. Derivatives of the natural basis vectors - Christoffel symbols

Definition. For all $i,j \in \{1,2,3\}$, we decompose the vector
$\dfrac{\partial \mathbf{g}_i}{\partial \xi^j}$ in the basis $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$ as follows:
\[ \frac{\partial \mathbf{g}_i}{\partial \xi^j} \equiv \gamma^k_{ij}\, \mathbf{g}_k \tag*{[1.52]} \]
The coefficients $\gamma^k_{ij}$, functions of $(\xi^1,\xi^2,\xi^3)$, are called the Christoffel symbols. They are denoted by $\gamma^k_{ij}$ (lowercase gamma) because the vectors $\mathbf{g}_i, \mathbf{g}_k$ in [1.52] are defined at a point $Q$ in the current configuration of the body. In section 5.2, we will denote them by $\Gamma^k_{ij}$ (uppercase gamma) as a reminder that we are working there in the initial configuration of the plate.
Property. The Christoffel symbols are symmetric with respect to their lower indices:
\[ \forall i,j,k \in \{1,2,3\}, \quad \gamma^k_{ij} = \gamma^k_{ji} \tag*{[1.53]} \]
Theorem. Recalling that $(\mathbf{g}^1,\mathbf{g}^2,\mathbf{g}^3)$ designates the dual basis of $(\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3)$, we have
\[ \forall i,j \in \{1,2,3\}, \quad \frac{\partial \mathbf{g}^i}{\partial \xi^j} = -\gamma^i_{jk}\, \mathbf{g}^k \tag*{[1.54]} \]
Theorem. With the notation $g \equiv \det[g_{..}]$ of [1.11], we have
\[ \forall i \in \{1,2,3\}, \quad \frac{1}{\sqrt{g}} \frac{\partial \sqrt{g}}{\partial \xi^i} = \frac{1}{2g} \frac{\partial g}{\partial \xi^i} = \gamma^j_{ij} \tag*{[1.55]} \]
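As a concrete illustration (not from the original text), the Christoffel symbols of plane polar coordinates can be computed symbolically; a sketch assuming SymPy is available:

```python
import sympy as sp

# Plane polar coordinates: Q(r, theta) = (r cos(theta), r sin(theta)).
r, th = sp.symbols('r theta', positive=True)
xi = (r, th)
Q = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])

g = [Q.diff(x) for x in xi]                           # natural basis, Eq. [1.51]
g_cov = sp.Matrix(2, 2, lambda i, j: g[i].dot(g[j]))  # metric: diag(1, r^2)
g_contra = g_cov.inv()
g_dual = [sum((g_contra[k, m] * g[m] for m in range(2)), sp.zeros(2, 1))
          for k in range(2)]                          # dual basis g^k

# gamma^k_ij = (d g_i / d xi^j) . g^k, from Eq. [1.52]:
gamma = [[[sp.simplify(g[i].diff(xi[j]).dot(g_dual[k]))
           for k in range(2)] for j in range(2)] for i in range(2)]

assert sp.simplify(gamma[1][1][0] + r) == 0        # gamma^1_22 = -r
assert sp.simplify(gamma[0][1][1] - 1 / r) == 0    # gamma^2_12 = 1/r

# Trace identity [1.55]: gamma^j_ij = (d sqrt(g) / d xi^i) / sqrt(g)
sqrt_g = sp.sqrt(g_cov.det())                      # = r
for i in range(2):
    trace = sum(gamma[i][j][j] for j in range(2))
    assert sp.simplify(trace - sqrt_g.diff(xi[i]) / sqrt_g) == 0
```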
1.2.4. Covariant derivative

Let us consider a vector field $\mathbf{v}$, function of the point $Q$; this field is a composite function of the coordinates $(\xi^1,\xi^2,\xi^3)$ via the coordinate system [1.50]:
\[ (\xi^1,\xi^2,\xi^3) \mapsto Q = Q(\xi^1,\xi^2,\xi^3) \mapsto \mathbf{v}(Q) = \mathbf{v}\big(Q(\xi^1,\xi^2,\xi^3)\big) \]
We decompose $\mathbf{v}$ in the natural basis, $\mathbf{v} = v^i \mathbf{g}_i = v_i \mathbf{g}^i$, and adopt the shorthand notation $\bullet_{,i} \equiv \partial\bullet/\partial\xi^i$.
Theorem and definition. We have
\[ \forall j \in \{1,2,3\}, \quad \mathbf{v}_{,j} = v^i{}_{|j}\, \mathbf{g}_i = v_{i|j}\, \mathbf{g}^i \tag*{[1.56]} \]
where the coefficient $v^i{}_{|j}$ (resp. $v_{i|j}$), called the covariant derivative of the contravariant component $v^i$ (resp. of the covariant component $v_i$) of $\mathbf{v}$, is defined by
\[ v^i{}_{|j} \equiv v^i{}_{,j} + \gamma^i_{jk}\, v^k, \qquad v_{i|j} \equiv v_{i,j} - \gamma^k_{ij}\, v_k \tag*{[1.57]} \]
Thus, the coefficients $v^i{}_{|j}$ (resp. $v_{i|j}$), $i \in \{1,2,3\}$, are the contravariant (resp. covariant) components of the vector $\mathbf{v}_{,j}$.
An important property of covariant derivatives is that they follow the same rules as the classical derivation of products. For instance, consider the scalar product of two vectors $\mathbf{u}$ and $\mathbf{v}$. By agreeing that the covariant derivative of a scalar function coincides with its classical derivative, we have
\[ (u^i v_i)_{,j} = (u^i v_i)_{|j} = u^i{}_{|j}\, v_i + u^i\, v_{i|j} \tag*{[1.58]} \]
• We now extend the concept of covariant derivative to a 2nd-order tensor field $\mathbf{T}$, function of the point $Q$ and therefore a composite function of the coordinates $(\xi^1,\xi^2,\xi^3)$:
\[ (\xi^1,\xi^2,\xi^3) \mapsto Q = Q(\xi^1,\xi^2,\xi^3) \mapsto \mathbf{T}(Q) = \mathbf{T}\big(Q(\xi^1,\xi^2,\xi^3)\big) \]
We decompose $\mathbf{T}$ in the natural basis in accordance with [1.26]: $\mathbf{T} = T^{ij}\, \mathbf{g}_i \otimes \mathbf{g}_j = T_{ij}\, \mathbf{g}^i \otimes \mathbf{g}^j = T^i{}_j\, \mathbf{g}_i \otimes \mathbf{g}^j = T_i{}^j\, \mathbf{g}^i \otimes \mathbf{g}_j$.
Theorem and definition. We have
\[ \forall k \in \{1,2,3\}, \quad \mathbf{T}_{,k} = T^{ij}{}_{|k}\, \mathbf{g}_i \otimes \mathbf{g}_j = T_{ij|k}\, \mathbf{g}^i \otimes \mathbf{g}^j = T^i{}_{j|k}\, \mathbf{g}_i \otimes \mathbf{g}^j = T_i{}^j{}_{|k}\, \mathbf{g}^i \otimes \mathbf{g}_j \tag*{[1.59]} \]
where the coefficient $T^{ij}{}_{|k}$ (resp. $T_{ij|k}$, $T^i{}_{j|k}$, $T_i{}^j{}_{|k}$), called the covariant derivative of the contravariant component $T^{ij}$ (resp. covariant $T_{ij}$, mixed $T^i{}_j$, $T_i{}^j$) of $\mathbf{T}$, is defined by
\[ T^{ij}{}_{|k} \equiv T^{ij}{}_{,k} + T^{\ell j}\, \gamma^i_{k\ell} + T^{i\ell}\, \gamma^j_{k\ell} \]
\[ T_{ij|k} \equiv T_{ij,k} - T_{\ell j}\, \gamma^\ell_{ik} - T_{i\ell}\, \gamma^\ell_{jk} \]
\[ T^i{}_{j|k} \equiv T^i{}_{j,k} + T^\ell{}_j\, \gamma^i_{k\ell} - T^i{}_\ell\, \gamma^\ell_{jk} \]
\[ T_i{}^j{}_{|k} \equiv T_i{}^j{}_{,k} - T_\ell{}^j\, \gamma^\ell_{ik} + T_i{}^\ell\, \gamma^j_{k\ell} \tag*{[1.60]} \]
The covariant derivative of the product of a 2nd-order tensor $\mathbf{T}$ and a vector $\mathbf{v}$ can be calculated using a formula similar to [1.58]:
\[ (T^{ij} v_j)_{|k} = T^{ij}{}_{|k}\, v_j + T^{ij}\, v_{j|k} \tag*{[1.61]} \]
The covariant derivatives of any other type of product are obtained in a similar way.
1.2.5. Expressions for differential operators in curvilinear coordinates

Let $\mathcal{T}$ generically denote a tensor field of order 0, 1 or 2 (scalar field $f$, vector field $\mathbf{v}$ or 2nd-order tensor field $\mathbf{T}$), depending on the point $Q$ and defined over a domain $\Omega$ of $\mathcal{E}$:
\[ \mathcal{T} : \Omega \subset \mathcal{E} \to \mathbb{R} \text{ or } E \text{ or } E \otimes E, \qquad Q \mapsto \mathcal{T}(Q) \]
We consider the tensor field $\mathcal{T}$ as a composite function of the curvilinear coordinates $(\xi^1,\xi^2,\xi^3)$, obtained by composing the preceding mapping with [1.50]; this composite function is again denoted by $\mathcal{T}$:
\[ \mathcal{T} : \mathcal{O} \subset \mathbb{R}^3 \to \Omega \subset \mathcal{E} \to \mathbb{R} \text{ or } E \text{ or } E \otimes E, \qquad (\xi^1,\xi^2,\xi^3) \mapsto Q \mapsto \mathcal{T}(Q) \equiv \mathcal{T}(\xi^1,\xi^2,\xi^3) \]
It will be assumed that the components of any tensor considered here are differentiable with respect to the coordinates $\xi^1,\xi^2,\xi^3$ of the point $Q$.
Theorem. Let $f$ be a scalar field; the gradient of $f$ is expressed by
\[ \mathrm{grad}_Q f = \frac{\partial f}{\partial \xi^i}\, \mathbf{g}^i \tag*{[1.62]} \]
The index $Q$ in $\mathrm{grad}_Q f$ recalls the fact that the function $f$ depends on the point $Q$; this is useful in plate theory, where the functions under consideration may depend on different types of points (a current point in the volume of the body or a current point on the mid-surface).
Theorem. Let $\mathbf{v}$ be a vector field; the gradient tensor of $\mathbf{v}$ is expressed by
\[ \mathrm{grad}_Q \mathbf{v} = \frac{\partial \mathbf{v}}{\partial \xi^i} \otimes \mathbf{g}^i \tag*{[1.63]} \]
By combining this result with [1.56], we obtain other expressions for the gradient tensor:
\[ \mathrm{grad}_Q \mathbf{v} = v^i{}_{|j}\, \mathbf{g}_i \otimes \mathbf{g}^j = v_{i|j}\, \mathbf{g}^i \otimes \mathbf{g}^j \tag*{[1.64]} \]
Theorem. Let $\mathbf{v} = v^i \mathbf{g}_i = v_i \mathbf{g}^i$ be a vector field; the divergence of $\mathbf{v}$ is expressed by
\[ \mathrm{div}_Q\, \mathbf{v} = v^i{}_{|i} = \frac{1}{\sqrt{g}} \frac{\partial}{\partial \xi^i}\big( \sqrt{g}\, v^i \big) \tag*{[1.65]} \]
recalling that $g \equiv \det[g_{..}]$ ($[g_{..}]$ is the $3 \times 3$ matrix with components $g_{ij}$).
Theorem. Let $\mathbf{T} = T^{ij}\, \mathbf{g}_i \otimes \mathbf{g}_j = T_{ij}\, \mathbf{g}^i \otimes \mathbf{g}^j = T^i{}_j\, \mathbf{g}_i \otimes \mathbf{g}^j = T_i{}^j\, \mathbf{g}^i \otimes \mathbf{g}_j$ be a 2nd-order tensor field. The divergence vector of $\mathbf{T}$ is
\[ \mathrm{div}_Q\, \mathbf{T} = \frac{\partial \mathbf{T}}{\partial \xi^j}.\mathbf{g}^j = T^{ij}{}_{|j}\, \mathbf{g}_i = \big( T^{ij}{}_{,j} + T^{ij}\, \gamma^k_{jk} + T^{jk}\, \gamma^i_{jk} \big)\, \mathbf{g}_i = \Big[ \frac{1}{\sqrt{g}} \frac{\partial}{\partial \xi^j}\big( \sqrt{g}\, T^{ij} \big) + T^{jk}\, \gamma^i_{jk} \Big]\, \mathbf{g}_i \tag*{[1.66]} \]
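To close, a sketch (continuing the polar-coordinate SymPy example of section 1.2.3; the vector field below is generic and made up) that checks the divergence formula [1.65] against the component-wise definition [1.57]:

```python
# Contravariant components v^1(r, theta), v^2(r, theta) of a generic field:
v1 = sp.Function('v1')(r, th)
v2 = sp.Function('v2')(r, th)
v = [v1, v2]

# Covariant derivatives v^i_|j = v^i_,j + gamma^i_jk v^k, Eq. [1.57]:
v_cd = [[v[i].diff(xi[j]) + sum(gamma[j][k][i] * v[k] for k in range(2))
         for j in range(2)] for i in range(2)]

# div v = v^i_|i must equal (1/sqrt(g)) d(sqrt(g) v^i)/d xi^i, Eq. [1.65]:
div_trace = v_cd[0][0] + v_cd[1][1]
div_sqrtg = sum((sqrt_g * v[i]).diff(xi[i]) for i in range(2)) / sqrt_g
assert sp.simplify(div_trace - div_sqrtg) == 0
# Both give dv1/dr + v1/r + dv2/dtheta, the familiar polar divergence
# written in terms of natural (non-unit) basis components.
```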