A commonly occurring relation in many of the identities of interest - in particular the triple product - is the so-called epsilon-delta identity:

\[ \epsilon_{ijk}\epsilon_{ilm} = \delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl} \]
Note well that this is the *contraction* of two third rank tensors! The result has the
remaining four indices. Also note well that one can use this identity
when summing over two indices that do not ``line up'' according to this
basic identity by permuting the indices in a cyclic or anticyclic (with
an extra minus sign) way until they do. So one can evaluate:

by using and .
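For readers who like to check index identities numerically, here is a small sketch (my own addition, using NumPy, not part of the text) that builds the Levi-Civita symbol explicitly and verifies the epsilon-delta contraction by brute force:

```python
import numpy as np

# Build the rank-3 Levi-Civita symbol eps[i, j, k].
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # cyclic permutations of (1, 2, 3)
    eps[i, k, j] = -1.0  # anticyclic permutations pick up a minus sign

delta = np.eye(3)  # the Kronecker delta

# Contract over the first (repeated) index: sum_i eps_ijk eps_ilm.
lhs = np.einsum('ijk,ilm->jklm', eps, eps)
rhs = (np.einsum('jl,km->jklm', delta, delta)
       - np.einsum('jm,kl->jklm', delta, delta))
assert np.allclose(lhs, rhs)  # the epsilon-delta identity, all 81 components
```

All 81 components of the rank-4 result agree, which is a reassuring (if inelegant) confirmation of the identity.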

An example of how to use this follows. Suppose we wish to prove that:

\[ \vec{A} \cdot (\vec{B} \times \vec{C}) = \vec{B} \cdot (\vec{C} \times \vec{A}) = \vec{C} \cdot (\vec{A} \times \vec{B}) \]
Let's write the first term using our new notation:

\[ \vec{A} \cdot (\vec{B} \times \vec{C}) = \delta_{il} A_i (\epsilon_{ljk} B_j C_k) \]

where I left $(\vec{B} \times \vec{C})_l = \epsilon_{ljk} B_j C_k$ in parentheses to make it comparatively easy to track the conversion. We can now use the delta function to eliminate the $l$ in favor of the $i$:

\[ \vec{A} \cdot (\vec{B} \times \vec{C}) = A_i \epsilon_{ijk} B_j C_k = B_j \epsilon_{jki} C_k A_i \]

where we can now reorder terms and indices in the product freely as long as we follow the cyclic permutation rule above in the $\epsilon$ tensor when we alter the $\epsilon$ tensor connecting them. Finally, we re-insert a (redundant) $\delta$ function and parentheses:

\[ \vec{A} \cdot (\vec{B} \times \vec{C}) = B_j \delta_{jl} (\epsilon_{lki} C_k A_i) = \vec{B} \cdot (\vec{C} \times \vec{A}) \]
Obviously the third form follows just from applying this rule and renaming the vectors.
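As a quick numerical sanity check of the cyclic triple-product identity just proved (a NumPy sketch of my own; the random seed and vectors are arbitrary choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3))  # three arbitrary 3-vectors

# The scalar triple product is invariant under cyclic permutation.
t1 = np.dot(A, np.cross(B, C))
t2 = np.dot(B, np.cross(C, A))
t3 = np.dot(C, np.cross(A, B))
assert np.isclose(t1, t2) and np.isclose(t2, t3)
```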

This same approach can be used to prove the BAC-CAB rule, $\vec{A} \times (\vec{B} \times \vec{C}) = \vec{B}(\vec{A} \cdot \vec{C}) - \vec{C}(\vec{A} \cdot \vec{B})$. There are a number of equivalent paths through the algebra. We will leave the proof to the student, after giving them a small push start. First:

\[ \vec{A} \times (\vec{B} \times \vec{C}) \]

has

\[ \left(\vec{A} \times (\vec{B} \times \vec{C})\right)_i = \epsilon_{ijk} A_j (\epsilon_{klm} B_l C_m) \]
where the term in parentheses is the $k$th component of $\vec{B} \times \vec{C}$. We ignore the parentheses and permute the repeated index $k$ to the first slot:

\[ = \epsilon_{kij} \epsilon_{klm} A_j B_l C_m \]
Apply the identity above:

\[ = (\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}) A_j B_l C_m \]
We apply the delta function rules to eliminate all of the $l$ and $m$ combinations in favor of $i$ and $j$:

\[ \left(\vec{A} \times (\vec{B} \times \vec{C})\right)_i = B_i A_j C_j - C_i A_j B_j = B_i (\vec{A} \cdot \vec{C}) - C_i (\vec{A} \cdot \vec{B}) \]
which is true for all three components of the vectors represented on both sides, Q.E.D.
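The BAC-CAB rule itself is easy to verify numerically for arbitrary vectors. A minimal sketch (my own addition, using NumPy's built-in `cross` and `dot`):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = rng.standard_normal((3, 3))  # three arbitrary 3-vectors

lhs = np.cross(A, np.cross(B, C))
rhs = B * np.dot(A, C) - C * np.dot(A, B)  # "BAC minus CAB"
assert np.allclose(lhs, rhs)  # holds component by component
```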

In case this last step is obscure, note that one way to bring a unit vector into Einstein notation is to use a general symbol for unit vectors. A common one is $\hat{e}_i$, where $\hat{e}_1 = \hat{x}$, $\hat{e}_2 = \hat{y}$, $\hat{e}_3 = \hat{z}$, where one can see immediately the problem with using $(\hat{i}, \hat{j}, \hat{k})$ in any cartesian tensor theory where one plans to use Einstein summation - one of several reasons I do not care for them (they also can conflict with e.g. $i = \sqrt{-1}$ or the wave number $k$, where $\hat{e}_i$ is unambiguously associated with the coordinate $x_i$ or its index $i$). The last step can now be summed as:

\[ \vec{A} \times (\vec{B} \times \vec{C}) = \left(B_i (\vec{A} \cdot \vec{C}) - C_i (\vec{A} \cdot \vec{B})\right) \hat{e}_i = \vec{B} (\vec{A} \cdot \vec{C}) - \vec{C} (\vec{A} \cdot \vec{B}) \]
This general approach will prove very useful when one needs to prove the related vector differential identities later on. Without it, tracking and reordering indices is very tedious indeed.
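In fact, the index bookkeeping itself can be mechanized: `np.einsum` evaluates exactly the kind of repeated-index sums used in the proof above. The sketch below (my own addition; the seed and vectors are arbitrary) evaluates $\epsilon_{kij}\epsilon_{klm} A_j B_l C_m$ directly and compares it to the BAC-CAB form:

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

rng = np.random.default_rng(2)
A, B, C = rng.standard_normal((3, 3))

# (A x (B x C))_i = eps_kij eps_klm A_j B_l C_m, exactly as in the index proof;
# einsum sums over every repeated subscript (k, j, l, m) automatically.
lhs = np.einsum('kij,klm,j,l,m->i', eps, eps, A, B, C)
rhs = B * np.dot(A, C) - C * np.dot(A, B)
assert np.allclose(lhs, rhs)
```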

We have at this point covered several kinds of ``vector'' products, but have omitted what in some ways is the most obvious one: the outer product $\vec{A}\vec{B}$, where the $(i,j)$ component of the product of $\vec{A}$ and $\vec{B}$ is just $A_i B_j$, the same way the scalar product of $\vec{A}$ and $\vec{B}$ is $A_i B_i$. However, this form is difficult to interpret. What kind of object, exactly, is the quantity $\vec{A}\vec{B}$, two vectors just written next to each other?

It is a tensor, and it is time to learn just what a tensor is (while learning a bunch of new and very interesting things along the way).
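Concretely, the outer product of two 3-vectors is a $3 \times 3$ array of components $A_i B_j$, and contracting its indices with $\delta_{ij}$ recovers the familiar dot product. A minimal sketch (my own addition, using NumPy; the sample vectors are arbitrary):

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

T = np.outer(A, B)  # T[i, j] = A_i * B_j: a rank-2 tensor (3x3 matrix)
assert T.shape == (3, 3)

# Contracting with delta_ij (i.e. taking the trace) gives the scalar product.
assert np.trace(T) == np.dot(A, B)
```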