There are two very different ways to introduce the notion of a tensor.
One is in terms of differential forms, especially the definition of the
total differential. This form is ultimately the most useful (and we
will dwell upon it below for this reason) but it is also algebraically
and intuitively the most complicated. The other way is by contemplating
the *outer product* of two vectors, otherwise known as a *dyad*.

We will introduce the dyad in a two dimensional Euclidean space with
Cartesian unit vectors, but it is a completely general idea and can be
used in an arbitrary $n$-manifold within a locally Euclidean patch.
Suppose one has a vector $\vec{A} = A_x \hat{x} + A_y \hat{y}$ and
another vector $\vec{B} = B_x \hat{x} + B_y \hat{y}$. If one simply
multiplies these two vectors together as an *outer product* (ordinary
multiplication with the distribution of the terms) one obtains the
following result:

$$\vec{A}\vec{B} = A_x B_x \,\hat{x}\hat{x} + A_x B_y \,\hat{x}\hat{y} + A_y B_x \,\hat{y}\hat{x} + A_y B_y \,\hat{y}\hat{y}$$

This product of vectors is called a *dyadic*, and each pair of unit
vectors within it is called a *dyad*.
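The outer product is easy to compute concretely. Here is a minimal
sketch (assuming NumPy; the vectors and their component values are made
up for illustration) showing that the entries of the outer product
matrix are exactly the four coefficients $A_x B_x$, $A_x B_y$,
$A_y B_x$, $A_y B_y$ in the expansion above:

```python
import numpy as np

A = np.array([2.0, 3.0])  # A = A_x xhat + A_y yhat, with A_x = 2, A_y = 3
B = np.array([5.0, 7.0])  # B = B_x xhat + B_y yhat, with B_x = 5, B_y = 7

# Dyadic as a matrix: AB[i, j] = A_i * B_j
AB = np.outer(A, B)
print(AB)
# [[10. 14.]
#  [15. 21.]]
```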

A dyad is an interesting object. Each term appears to be formed out of
the ordinary multiplicative product of two numbers (which we can easily
and fully compute and understand) followed by a pair of unit vectors
that are juxtaposed. What, exactly, does this juxtaposition of unit
vectors *mean*? We can visualize (sort of) what $\hat{x}$ by itself is:
it is a unit vector in the $x$ direction that we can scale to turn into
all possible vectors that are aligned with the $x$-axis (or into
components of general vectors in the two dimensional space). It is not
so simple to visualize what a dyad such as $\hat{x}\hat{x}$ is in this
way.

The function of such a product becomes more apparent when we define how
it works. Suppose we take the *inner product* (or scalar product, or
contraction) of our vector $\vec{A}$ with the elementary dyad
$\hat{x}\hat{x}$. We can do this in either order (from either side):

$$\vec{A} \cdot (\hat{x}\hat{x}) = (\vec{A} \cdot \hat{x})\hat{x} = A_x \hat{x}$$

or

$$(\hat{x}\hat{x}) \cdot \vec{A} = \hat{x}(\hat{x} \cdot \vec{A}) = A_x \hat{x}$$

We see that the inner product of a unit dyad $\hat{x}\hat{x}$ with a
vector serves to project out the *vector* that is the $x$-component of
$\vec{A}$ (not the scalar magnitude of that vector, $A_x$). The inner
product of a dyad with a vector is a vector.
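As a quick check of this projection property, a sketch (again assuming
NumPy, with made-up components): representing the unit dyad
$\hat{x}\hat{x}$ as a matrix, contraction from either side reproduces
$A_x \hat{x}$:

```python
import numpy as np

xhat = np.array([1.0, 0.0])
xx = np.outer(xhat, xhat)  # unit dyad xhat xhat as a matrix
A = np.array([2.0, 3.0])   # A_x = 2, A_y = 3

print(A @ xx)              # [2. 0.] = A_x xhat
print(xx @ A)              # [2. 0.] = A_x xhat (same; xx is symmetric)
```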

What about the product of other dyads with $\vec{A}$?

$$(\hat{x}\hat{y}) \cdot \vec{A} = \hat{x}(\hat{y} \cdot \vec{A}) = A_y \hat{x}$$

$$\vec{A} \cdot (\hat{x}\hat{y}) = (\vec{A} \cdot \hat{x})\hat{y} = A_x \hat{y}$$

which are *not* equal. In fact, these terms seem to create the *new*
vector components that might result from the interchange of the $x$ and
$y$ components of the vector $\vec{A}$, as do
$(\hat{y}\hat{x}) \cdot \vec{A}$ etc.

Note well! Some of the dyads commute with respect to an inner product of the dyad with a vector, others (e.g. $\hat{x}\hat{y}$) do not! Our generalized dyadic multiplication produces what appear to be "intrinsically" non-commutative results when contracted with vectors on the left or the right respectively.
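The matrix representation makes this non-commutativity explicit. In
this sketch (NumPy assumed, illustrative components) the unit dyad
$\hat{x}\hat{y}$ contracted from the left and from the right yields two
different vectors, matching the results derived above:

```python
import numpy as np

xhat = np.array([1.0, 0.0])
yhat = np.array([0.0, 1.0])
xy = np.outer(xhat, yhat)  # unit dyad xhat yhat (an asymmetric matrix)
A = np.array([2.0, 3.0])   # A_x = 2, A_y = 3

print(A @ xy)              # [0. 2.] = A_x yhat
print(xy @ A)              # [3. 0.] = A_y xhat -- not the same vector
```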

This is in fact a break point - if we pursue this product in one
direction we could easily motivate and introduce *Geometric
Algebra*, in terms of which Maxwell's equations can be written in a
compact and compelling form. However, even without doing this, we can
arrive at a compelling form (that is, in fact, quaternionic), so
we will restrain ourselves and only learn enough about tensors to be
able to pursue the usual tensor form without worrying about whether or
how it can be decomposed in a division algebra.

The thing to take out of the discussion so far is that in general the
inner product of a dyad with a vector serves to project out the *scalar amplitude* of the vector on the left or the right and reconstruct
a possibly *new* vector out of the remaining unit vector. Very
shortly we are going to start writing relations that sum over basis
vectors where the basis is not necessarily orthonormal (as this isn't
really necessary or desirable when discussing curvilinear coordinate
systems). To do this, I will introduce at this point the *Einstein
summation convention* where writing a product with repeated indices
implies summation over those indices:

$$\vec{A} = \sum_i A_i \hat{x}_i = A_i \hat{x}_i \qquad (4.1)$$

You can see how the summation symbol is in some sense redundant unless for some reason we wish to focus on a single term in the sum. In tensor analysis this is almost never the case, so it is easier to just specify the exceptions.
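NumPy's `einsum` follows exactly this convention: repeated indices in
its subscript string are summed over, so expressions like $A_i B_i$ or
$C_{ij} A_j$ transcribe directly. A small sketch (the array values are
illustrative):

```python
import numpy as np

A = np.array([2.0, 3.0])
B = np.array([5.0, 7.0])
C = np.outer(A, B)              # C_ij = A_i B_j

s = np.einsum('i,i->', A, B)    # A_i B_i: repeated i is summed (a scalar)
v = np.einsum('ij,j->i', C, A)  # C_ij A_j: repeated j is summed (a vector)
print(s)                        # 31.0
print(v)                        # [62. 93.]
```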

Note that we can form general dyadic forms directly from the unit dyads
without the intermediate step of taking the outer product of particular
vectors, producing terms like
$A_{xx}\,\hat{x}\hat{x} + A_{xy}\,\hat{x}\hat{y} + A_{yx}\,\hat{y}\hat{x} + A_{yy}\,\hat{y}\hat{y}$.
We can also take another outer product from the left or right with all
of these forms, producing *tryads*, terms like
$A_{xxx}\,\hat{x}\hat{x}\hat{x} + A_{xxy}\,\hat{x}\hat{x}\hat{y} + \ldots + A_{yyy}\,\hat{y}\hat{y}\hat{y}$
(eight terms total). Furthermore we can repeat all of the arguments
above in higher dimensional spaces, e.g. three dimensions with the
additional unit vector $\hat{z}$.

There is a clear one-to-one correspondence of these monad unit vectors to specific column vectors, e.g.:

$$\hat{x} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \qquad \hat{y} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \qquad \hat{z} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$

This correspondence *continues* through the various unit dyads,
tryads:

$$\hat{x}\hat{x} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad \hat{x}\hat{y} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

and so on.
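To see this correspondence in code, a sketch (NumPy assumed): the three
dimensional unit monads are the standard basis columns, each unit dyad
is the outer product of two of them (a matrix with a single 1), and a
unit tryad is the analogous three-index array:

```python
import numpy as np

xhat, yhat, zhat = np.eye(3)    # standard basis rows as unit vectors

print(np.outer(xhat, xhat))     # xhat xhat: 1 in the (0, 0) entry
print(np.outer(xhat, yhat))     # xhat yhat: 1 in the (0, 1) entry

# A unit tryad such as xhat yhat zhat is a rank-3 array of the same kind:
xyz = np.einsum('i,j,k->ijk', xhat, yhat, zhat)
print(xyz.shape, xyz[0, 1, 2])  # (3, 3, 3) 1.0
```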

We will call all of these unit monads, dyads, tryads, and so on, as well
as the quantities formed by multiplying them by ordinary numbers and
summing them according to similar $n$-adic type, *tensors*. As we can
see, there are several ways of *representing* tensors that all lead
to identical algebraic results, where one of the most compelling is the
matrix representation illustrated above. Note well that the feature
that differentiates tensors from "ordinary" matrices is that the
components correspond to particular $n$-adic combinations of
*coordinate directions* in some linear vector space; tensors will
*change*, as a general rule, when the underlying coordinate description
is changed. Let us define some of the terms we will commonly use when
working with tensors.

The dimensionality of the array in a matrix representation of a tensor
quantity (that is, the number of indices required to specify its
components) we call its *rank*. We have special (and yet familiar)
names for the first few tensor ranks:

**0th rank tensor** - or *scalar*. This is an "ordinary number", which
may at the very least be real or complex, and possibly could be numbers
associated with geometric algebras of higher grade. Its characteristic
defining feature is that it is *invariant* under transformations of the
underlying coordinate system. All of the following are algebraic
examples of scalar quantities: ...

**1st rank tensor** - or *vector*. This is a set of scalar numbers,
each an amplitude corresponding to a particular unit vector or monad,
and it inherits its transformational properties from those of the
underlying unit vectors. Examples: ..., where the index $i$ in e.g.
$x^i$ does *not correspond to a power* but is rather a coordinate index
corresponding to a *contra*variant (ordinary) vector, where $x_i$
similarly corresponds to a *co*variant vector, and where covariance and
contravariance will be defined below.

**2nd rank tensor** - or $D \times D$ *matrix* (where $D$ is the
dimension of the space, so the matrix has $D^2$ components).
Examples: ...

**3rd and higher rank tensors** - are the higher dimensional arrays,
with a rank corresponding to the number of indices required to describe
them. In physics we will occasionally have use for tensors through the
fourth rank, and fairly commonly through the third, although most of
the physical quantities of interest will be tensors of rank 0-2. For
examples we will simply generalize the examples above, using a symbol
such as $T$ as a generic tensor form or (more often) explicitly
indicating its indicial form as in $T_{ijk}$ or $T_{ijk\ell}$.

Using an indicial form with the Einstein summation convention is very powerful, as we shall see, and permits us to fairly simply represent forms that would otherwise involve a large number of nested summations over all coordinate indices. To understand precisely how to go about it, however, we have to first examine coordinate transformations.
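To make the payoff concrete, here is a sketch (NumPy assumed, random
data for illustration) of the double contraction $T_{ij} A_i B_j$
written once as explicit nested summations and once in indicial form
via `einsum`:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((3, 3))
A = rng.random(3)
B = rng.random(3)

# Explicit nested summations over all coordinate indices
total = 0.0
for i in range(3):
    for j in range(3):
        total += T[i, j] * A[i] * B[j]

# The same contraction in indicial form: repeated i and j are summed
print(np.isclose(total, np.einsum('ij,i,j->', T, A, B)))  # True
```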