
The Dyad and $N$-adic Forms

There are two very different ways to introduce the notion of a tensor. One is in terms of differential forms, especially the definition of the total differential. This form is ultimately the most useful (and we will dwell upon it below for this reason) but it is also algebraically and intuitively the most complicated. The other way is by contemplating the outer product of two vectors, otherwise known as a dyad.

We will introduce the dyad in a two dimensional Euclidean space with Cartesian unit vectors, but it is a completely general idea and can be used in an arbitrary $n$-manifold within a locally Euclidean patch. Suppose one has a vector $\vA = A_x \hx + A_y \hy$ and another vector $\vB = B_x \hx + B_y \hy$. If one simply multiplies these two vectors together as an outer product (ordinary multiplication with the distribution of the terms) one obtains the following result:

\begin{displaymath}
\vA \vB = A_x B_x \hx\hx + A_x B_y \hx\hy + A_y B_x \hy\hx + A_y B_y \hy\hy
\end{displaymath}

This product of vectors is called a dyadic, and each pair of unit vectors within it is called a dyad.
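As a concrete numerical illustration, here is a minimal sketch using NumPy (an assumption of this aside, as are the sample component values). The outer product produces exactly the four coefficients $A_i B_j$ of the four dyads above:

\begin{verbatim}
import numpy as np

A = np.array([2.0, 3.0])   # A = A_x xhat + A_y yhat (sample values)
B = np.array([5.0, 7.0])   # B = B_x xhat + B_y yhat

AB = np.outer(A, B)        # dyadic AB: entry (i, j) is A_i B_j,
print(AB)                  # the coefficient of the dyad xhat_i xhat_j
# [[10. 14.]
#  [15. 21.]]
\end{verbatim}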

A dyad is an interesting object. Each term appears to be formed out of the ordinary multiplicative product of two numbers (which we can easily and fully compute and understand) followed by a pair of unit vectors that are juxtaposed. What, exactly, does this juxtaposition of unit vectors mean? We can visualize (sort of) what $\hx$ by itself is: it is a unit vector in the $x$ direction that we can scale to turn into all possible vectors that are aligned with the $x$-axis (or into components of general vectors in the two dimensional space). It is not so simple to visualize what a dyad $\hx\hx$ is in this way.

The function of such a product becomes more apparent when we define how it works. Suppose we take the inner product (or scalar product, or contraction) of our vector $\vA$ with the elementary dyad $\hx\hx$. We can do this in either order (from either side):

\begin{displaymath}
\vA \cdot (\hx\hx) = (\vA \cdot \hx)\hx = A_x \hx
\end{displaymath}

or

\begin{displaymath}
(\hx\hx) \cdot \vA = \hx (\hx \cdot \vA) = A_x \hx
\end{displaymath}

We see that the inner product of a unit dyad $\hx\hx$ with a vector serves to project out the vector that is the $x$-component of $\vA$ (the full vector $A_x \hx$, not just the scalar magnitude $A_x$). The inner product of a dyad with a vector is a vector.
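In the matrix representation developed further below, both contractions become matrix-vector products. A minimal sketch (NumPy assumed, with sample component values):

\begin{verbatim}
import numpy as np

A = np.array([2.0, 3.0])      # A_x = 2, A_y = 3 (sample values)
xx = np.outer([1.0, 0.0],     # the unit dyad xhat xhat as a matrix
              [1.0, 0.0])

print(A @ xx)   # A . (xhat xhat) = A_x xhat  ->  [2. 0.]
print(xx @ A)   # (xhat xhat) . A = A_x xhat  ->  [2. 0.]
\end{verbatim}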

What about the product of other dyads with $\vA$?

\begin{displaymath}
(\hx\hy) \cdot \vA = \hx (\hy \cdot \vA) = A_y \hx
\end{displaymath}

\begin{displaymath}
\vA \cdot (\hx\hy) = (\vA \cdot \hx) \hy = A_x \hy
\end{displaymath}

which are not equal. In fact, these terms seem to create the new vector components that might result from the interchange of the $x$ and $y$ components of the vector $\vA$, as do $(\hy\hx) \cdot \vA = A_x \hy$ etc.

Note well! Some of the dyads commute with respect to an inner product of the dyad with a vector, others (e.g. $ \hx\hy$ ) do not! Our generalized dyadic multiplication produces what appear to be ``intrinsically'' non-commutative results when contracted with vectors on the left or the right respectively.
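A sketch of this non-commutativity in the same illustrative NumPy terms as above (again an assumption, not part of the formal development):

\begin{verbatim}
import numpy as np

A = np.array([2.0, 3.0])      # A_x = 2, A_y = 3 (sample values)
xy = np.outer([1.0, 0.0],     # the unit dyad xhat yhat as a matrix
              [0.0, 1.0])

print(xy @ A)   # (xhat yhat) . A = A_y xhat  ->  [3. 0.]
print(A @ xy)   # A . (xhat yhat) = A_x yhat  ->  [0. 2.]
\end{verbatim}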

This is in fact a break point - if we pursued this product in one direction we could easily motivate and introduce Geometric Algebra, in terms of which Maxwell's equations can be written in a compact and compelling form. However, even without doing this, we can arrive at a compelling form (one that is, in fact, quaternionic), so we will restrain ourselves and only learn enough about tensors to be able to pursue the usual tensor form without worrying about whether or how it can be decomposed in a division algebra.

The thing to take away from the discussion so far is that in general the inner product of a dyad with a vector serves to project out the scalar amplitude of the vector on the left or the right and reconstruct a possibly new vector out of the remaining unit vector. Very shortly we are going to start writing relations that sum over basis vectors where the basis is not necessarily orthonormal (as orthonormality is neither necessary nor desirable when discussing curvilinear coordinate systems). To do this, I will introduce at this point the Einstein summation convention, where writing a product with repeated indices implies summation over those indices:

$\displaystyle \vA = \sum_i A_i \hx_i = A_i \hx_i$ (4.1)

You can see how the summation symbol is in some sense redundant unless for some reason we wish to focus on a single term in the sum. In tensor analysis this is almost never the case, so it is easier to just specify the exceptions.
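NumPy's einsum function (invoked here purely as an illustrative assumption) implements precisely this convention: indices repeated in its subscript string are summed over, while free indices are not:

\begin{verbatim}
import numpy as np

A = np.array([2.0, 3.0])
B = np.array([5.0, 7.0])

print(np.einsum('i,i', A, B))   # repeated i: A_i B_i = A . B = 31.0
print(np.einsum('i,j', A, B))   # free i, j: the dyadic A_i B_j (no sum)
\end{verbatim}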

Note that we can form general dyadic forms directly from the unit dyads without the intermediate step of taking the outer product of particular vectors, producing terms like $\{\hx\hx, \hx\hy, \hy\hx, \hy\hy\}$. We can also take another outer product from the left or right with all of these forms, producing tryads, terms like $\{\hx\hx\hx, \hx\hy\hx, \ldots, \hy\hx\hy, \hy\hy\hy\}$ (eight terms total). Furthermore we can repeat all of the arguments above in higher dimensional spaces, e.g. $\{\hx\hx, \hx\hy, \hx\hz, \ldots, \hz\hz\}$.
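A sketch of this counting (NumPy assumed, as before): a tryad is one more outer product, and a general tryadic form in two dimensions is an array of shape (2, 2, 2) with eight components, one per unit tryad:

\begin{verbatim}
import numpy as np

xhat = np.array([1.0, 0.0])
yhat = np.array([0.0, 1.0])

xyx = np.einsum('i,j,k', xhat, yhat, xhat)  # the unit tryad xhat yhat xhat
print(xyx.shape)      # (2, 2, 2) -- eight slots in all
print(xyx[0, 1, 0])   # 1.0, its single nonzero component
\end{verbatim}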

There is a clear one-to-one correspondence of these monad unit vectors to specific column vectors, e.g.:

\begin{displaymath}
\hx = \left (
\begin{array}{c}
1 \\
0 \\
0
\end{array}\right ) \qquad
\hy = \left (
\begin{array}{c}
0 \\
1 \\
0
\end{array}\right ) \qquad
\hz = \left (
\begin{array}{c}
0 \\
0 \\
1
\end{array}\right )
\end{displaymath}

This correspondence continues through the various unit dyads, tryads:

\begin{displaymath}
\hx\hx = \left (
\begin{array}{c c c}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{array}\right ) \qquad
\hx\hy = \left (
\begin{array}{c c c}
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{array}\right )
\end{displaymath}

and so on.

We will call all of these unit monads, dyads, tryads, and so on, as well as the quantities formed by multiplying them by ordinary numbers and summing them according to similar $N$-adic type, tensors. As we can see, there are several ways of representing tensors that all lead to identical algebraic results, of which one of the most compelling is the matrix representation illustrated above. Note well that the feature that differentiates tensors from ``ordinary'' matrices is that the components correspond to particular $N$-adic combinations of coordinate directions in some linear vector space; tensors will change, as a general rule, when the underlying coordinate description is changed. Let us define some of the terms we will commonly use when working with tensors.

The number of indices required to specify a component in the matrix representation of a tensor quantity we call its rank. We have special (and yet familiar) names for the first few tensor ranks:

0th rank tensor
or scalar. This is an ``ordinary number'', which may at the very least be real or complex, and possibly could be a number associated with a geometric algebra of higher grade. Its characteristic defining feature is that it is invariant under transformations of the underlying coordinate system. All of the following are algebraic examples of scalar quantities: $x, 1.182, \pi, A_x, \Vec{A}\cdot\Vec{B}, \ldots$
1st rank tensor
or vector. This is a set of scalar numbers, each an amplitude corresponding to a particular unit vector or monad, and inherits its transformational properties from those of the underlying unit vectors. Examples: $\Vec{A} = A_x \hx + A_y \hy, \{x_i\}, \{x^i\},$

\begin{displaymath}
\Vec{A} = \left (
\begin{array}{c}
A_x \\
A_y \\
A_z
\end{array}\right )
\end{displaymath}

where the $i$ in e.g. $x^i$ does not correspond to a power but is rather a coordinate index corresponding to a contravariant (ordinary) vector, where $x_i$ similarly corresponds to a covariant vector, and where covariance and contravariance will be defined below.
2nd rank tensor
or $ D\times D$ matrix (where $ D$ is the dimension of the space, so the matrix has $ D^2$ components). Examples: $ C_{xy}\hx\hy, \Vec{A}\Vec{B}, \Mat{C}, A_{ij}, A_i^j, A^{ij},$

\begin{displaymath}
\Mat{A} = \left (
\begin{array}{c c c}
A_{xx} & A_{xy} & A_{xz} \\
A_{yx} & A_{yy} & A_{yz} \\
A_{zx} & A_{zy} & A_{zz}
\end{array}\right )
\end{displaymath}

where again in matrix context the indices may be raised or lowered to indicate covariance or contravariance in the particular row or column.
3rd and higher rank tensors
are the $D \times D \times D \ldots$ matrices with a rank corresponding to the number of indices required to describe them. In physics we will occasionally use tensors through the fourth rank, and tensors through the third rank fairly commonly, although most of the physical quantities of interest will be tensors of rank 0-2. For examples we will simply generalize those above, using $\Mat{T}$ as a generic tensor form or (more often) explicitly indicating its indicial form as in $T_{111}, T_{112}, \ldots$ or $\epsilon_{ijk}$ (realized as an explicit array below).
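As promised, here is $\epsilon_{ijk}$ realized as an explicit rank-3 array (a NumPy sketch under the same assumption as the earlier ones), together with one familiar use: contracting two of its indices with vectors yields the cross product.

\begin{verbatim}
import numpy as np

eps = np.zeros((3, 3, 3))                          # epsilon_ijk
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0   # even permutations
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0  # odd permutations

a = np.array([1.0, 0.0, 0.0])                      # xhat
b = np.array([0.0, 1.0, 0.0])                      # yhat
print(np.einsum('ijk,j,k', eps, a, b))             # xhat x yhat = zhat
# [0. 0. 1.]
\end{verbatim}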

Using an indicial form with the Einstein summation convention is very powerful, as we shall see, and permits us to fairly simply represent forms that would otherwise involve a large number of nested summations over all coordinate indices. To understand precisely how to go about it, however, we have to first examine coordinate transformations.
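As a concrete preview, here is a minimal sketch (NumPy assumed, with a simple rotation standing in for the general transformation and sample component values): rotating the coordinate axes changes the components of a vector and of a dyadic, but leaves a scalar contraction untouched, exactly the rank-dependent behavior catalogued above.

\begin{verbatim}
import numpy as np

th = np.pi / 6                          # rotate axes by 30 degrees
R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])

A = np.array([2.0, 3.0])
B = np.array([5.0, 7.0])
T = np.outer(A, B)                      # the dyadic AB

print(R @ A)                            # rank 1: components change
print(R @ T @ R.T)                      # rank 2: components change
print(A @ B, (R @ A) @ (R @ B))         # rank 0: 31.0 either way
\end{verbatim}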


Robert G. Brown 2017-07-11