There are two very different ways to introduce the notion of a tensor. One is in terms of differential forms, especially the definition of the total differential. This form is ultimately the most useful (and we will dwell upon it below for this reason) but it is also algebraically and intuitively the most complicated. The other way is by contemplating the outer product of two vectors, otherwise known as a dyad.
We will introduce the dyad in a two dimensional Euclidean space with
Cartesian unit vectors, but it is a completely general idea and can be
used in an arbitrary $n$-manifold within a locally Euclidean patch.
Suppose one has a vector $\vec{A} = A_x\hat{x} + A_y\hat{y}$
and another vector $\vec{B} = B_x\hat{x} + B_y\hat{y}$.
If one simply multiplies these two vectors
together as an outer product (ordinary multiplication with the
distribution of the terms) one obtains the following result:
$$\vec{A}\vec{B} = A_x B_x\,\hat{x}\hat{x} + A_x B_y\,\hat{x}\hat{y} + A_y B_x\,\hat{y}\hat{x} + A_y B_y\,\hat{y}\hat{y} \qquad (6.1)$$
A dyad is an interesting object. Each term appears to be formed out of the ordinary multiplicative product of two numbers (which we can easily and fully compute and understand) followed by a pair of unit vectors that are juxtaposed. What, exactly, does this juxtaposition of unit vectors mean? We can visualize (sort of) what $\hat{x}$ by itself is: it is a unit vector in the $x$ direction that we can scale to turn into all possible vectors that are aligned with the $x$-axis (or into components of general vectors in the two dimensional space). It is not so simple to visualize what a dyad is in this way.
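A minimal numerical sketch may help make the dyad concrete. The following uses NumPy (not part of the text) with illustrative component values: the outer product of two 2-component vectors is a $2\times 2$ array whose entry at row $i$, column $j$ is the coefficient attached to the unit dyad $\hat{x}_i\hat{x}_j$ in Eq. (6.1).

```python
import numpy as np

# Illustrative components (not from the text): A = A_x xhat + A_y yhat, etc.
A = np.array([2.0, 3.0])   # (A_x, A_y)
B = np.array([5.0, 7.0])   # (B_x, B_y)

# The outer product A B: AB[i, j] = A[i] * B[j].
AB = np.outer(A, B)
print(AB)

# The row/column indices label the juxtaposed unit vectors, so AB[0, 1]
# is the coefficient of the unit dyad xhat yhat, i.e. A_x * B_y = 14.0.
print(AB[0, 1])
```

Note that nothing is "lost" in forming the dyad: all four products of components survive, each tagged by its own pair of directions.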
The function of such a product becomes more apparent when we define how
it works. Suppose we take the inner product (or scalar product,
or contraction) of our vector $\vec{C} = C_x\hat{x} + C_y\hat{y}$
with the elementary dyad $\hat{x}\hat{x}$.
We can do this in either order (from either side):
$$\vec{C} \cdot \hat{x}\hat{x} = (\vec{C} \cdot \hat{x})\,\hat{x} = C_x\hat{x} \qquad (6.2)$$
$$\hat{x}\hat{x} \cdot \vec{C} = \hat{x}\,(\hat{x} \cdot \vec{C}) = C_x\hat{x} \qquad (6.3)$$
What about the product of other dyads with $\vec{C}$?
$$\vec{C} \cdot \hat{x}\hat{y} = (\vec{C} \cdot \hat{x})\,\hat{y} = C_x\hat{y} \qquad (6.4)$$
$$\hat{x}\hat{y} \cdot \vec{C} = \hat{x}\,(\hat{y} \cdot \vec{C}) = C_y\hat{x} \qquad (6.5)$$
Note well! Some of the dyads (e.g. $\hat{x}\hat{x}$) commute with respect to an inner product of the dyad with a vector; others (e.g. $\hat{x}\hat{y}$) do not! Our generalized dyadic multiplication produces what appear to be ``intrinsically'' non-commutative results when contracted with vectors on the left or the right respectively.
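The non-commutativity is easy to check numerically. Below is a NumPy sketch (illustrative values, not from the text) in which the unit dyad $\hat{x}\hat{y}$ is represented as the outer product of the basis column vectors; contracting from the left and from the right then gives different vectors, exactly as in Eqs. (6.4) and (6.5).

```python
import numpy as np

xhat = np.array([1.0, 0.0])
yhat = np.array([0.0, 1.0])
C = np.array([3.0, 4.0])    # C_x = 3, C_y = 4 (illustrative)

# The unit dyad xhat yhat, represented as a matrix.
xy = np.outer(xhat, yhat)

left  = C @ xy   # C . (xhat yhat) = (C . xhat) yhat = C_x yhat = [0, 3]
right = xy @ C   # (xhat yhat) . C = xhat (yhat . C) = C_y xhat = [4, 0]

print(left, right)   # the two contractions differ
```

Each contraction projects out one scalar component of $\vec{C}$ and attaches it to the *other* unit vector of the dyad, which is precisely why the order matters.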
This is in fact a break point: if we pursued this product in one direction we could easily motivate and introduce Geometric Algebra, in terms of which Maxwell's equations can be written in a compact and compelling form. However, even without doing this, we can arrive at such a compelling form (one that is, in fact, quaternionic), so we will restrain ourselves and only learn enough about tensors to be able to pursue the usual tensor form without worrying about whether or how it can be decomposed in a division algebra.
The thing to take out of the discussion so far is that in general the
inner product of a dyad with a vector serves to project out the scalar amplitude of the vector on the left or the right and reconstruct
a possibly new vector out of the remaining unit vector. Very
shortly we are going to start writing relations that sum over basis
vectors where the basis is not necessarily orthonormal (as this isn't
really necessary or desirable when discussing curvilinear coordinate
systems). To do this, I will introduce at this point the Einstein
summation convention, where writing a product with repeated indices
implies summation over those indices:
$$\vec{A} = \sum_i A_i \hat{x}_i = A_i \hat{x}_i \qquad (6.6)$$
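The summation convention maps directly onto NumPy's `einsum`, where the repeated-index rule is made explicit by the subscript string. The sketch below (illustrative, not from the text) reassembles a vector from its components and an orthonormal basis by summing $A_i \hat{x}_i$ over the repeated index $i$.

```python
import numpy as np

basis = np.eye(2)             # row i is the basis vector xhat_i
A_i = np.array([2.0, 3.0])    # the components A_1, A_2 (illustrative)

# A_i xhat_i with i repeated and hence summed, per the Einstein convention.
A = np.einsum('i,ij->j', A_i, basis)
print(A)
```

The same subscript notation extends to contractions of higher-adic objects, which is exactly the bookkeeping the convention is designed to automate.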
Note that we can form general dyadic forms directly from the unit dyads without the intermediate step of taking the outer product of particular vectors, producing terms like $A_{xy}\,\hat{x}\hat{y}$. We can also take another outer product from the left or right with all of these forms, producing tryads with terms like $\hat{x}\hat{x}\hat{x}$ or $\hat{x}\hat{y}\hat{x}$ (eight terms total in two dimensions). Furthermore we can repeat all of the arguments above in higher dimensional spaces, e.g. in three dimensions.
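In array terms a tryad is simply a rank-3 array, built by one more outer product. A NumPy sketch (illustrative, not from the text): in two dimensions there are $2^3 = 8$ unit tryads, matching the count of eight terms above, and each unit tryad has a single nonzero component.

```python
import numpy as np

xhat = np.array([1.0, 0.0])
yhat = np.array([0.0, 1.0])

# The unit tryad xhat yhat xhat as a triple outer product (rank-3 array).
xyx = np.einsum('i,j,k->ijk', xhat, yhat, xhat)

print(xyx.shape)      # (2, 2, 2): 8 slots, one per unit tryad
print(xyx[0, 1, 0])   # 1.0 -- the single nonzero component
```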
There is a clear one-to-one correspondence of these monad unit vectors
to specific column vectors, e.g.:
$$\hat{x} \leftrightarrow \begin{pmatrix} 1 \\ 0 \end{pmatrix} \qquad (6.7)$$
$$\hat{y} \leftrightarrow \begin{pmatrix} 0 \\ 1 \end{pmatrix} \qquad (6.8)$$
$$\vec{A} = A_x\hat{x} + A_y\hat{y} \leftrightarrow \begin{pmatrix} A_x \\ A_y \end{pmatrix} \qquad (6.9)$$
This correspondence continues through the various unit dyads and
tryads, e.g.:
$$\hat{x}\hat{x} \leftrightarrow \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \hat{x}\hat{y} \leftrightarrow \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \; \ldots \qquad (6.10)$$
$$\vec{A}\vec{B} \leftrightarrow \begin{pmatrix} A_x B_x & A_x B_y \\ A_y B_x & A_y B_y \end{pmatrix} \qquad (6.11)$$
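The matrix representation of the unit dyads follows mechanically from the column-vector representation of the unit monads: each unit dyad is the outer product of the corresponding columns. A NumPy check (illustrative values, not from the text):

```python
import numpy as np

xhat = np.array([1.0, 0.0])
yhat = np.array([0.0, 1.0])

# Unit dyads as matrices: each has a single 1 in the slot its
# pair of directions labels.
print(np.outer(xhat, xhat))   # xhat xhat -> [[1, 0], [0, 0]]
print(np.outer(xhat, yhat))   # xhat yhat -> [[0, 1], [0, 0]]

# A general dyad A B lands in the matrix with entries A_i B_j.
A = np.array([2.0, 3.0])
B = np.array([5.0, 7.0])
print(np.outer(A, B))         # [[A_x B_x, A_x B_y], [A_y B_x, A_y B_y]]
```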
We will call all of these unit monads, dyads, tryads, and so on, as well as the quantities formed by multiplying them by ordinary numbers and summing them according to similar $n$-adic type, tensors. As we can see, there are several ways of representing tensors that all lead to identical algebraic results, where one of the most compelling is the matrix representation illustrated above. Note well that the feature that differentiates tensors from ``ordinary'' matrices is that the components correspond to particular $n$-adic combinations of coordinate directions in some linear vector space; tensors will change, as a general rule, when the underlying coordinate description is changed. Let us define some of the terms we will commonly use when working with tensors.
The dimension of the matrix in a matrix representation of a tensor quantity we call its rank. We have special (and yet familiar) names for the first few tensor ranks: a rank 0 tensor is a scalar, a rank 1 tensor is a vector, and a rank 2 tensor is representable as an ordinary matrix.
Using an indicial form with the Einstein summation convention is very powerful, as we shall see, and permits us to fairly simply represent forms that would otherwise involve a large number of nested summations over all coordinate indices. To understand precisely how to go about it, however, we have to first examine coordinate transformations.