We seek (Lie) groups of continuous linear transformations,
$$x' = T_a x \quad \text{or} \quad x'^\mu = f^\mu(x; a),$$
for a set of continuously variable parameters $a = (a_1, \ldots, a_n)$. We require that the $a_u$ are real numbers (parameters) that characterize the transformation, and that their number $n$ be minimal (``essential'').
Examples of transformations of importance in physics (that you should
already be familiar with) include
$$x' = T_d x = x + d$$
where $d$ is a constant displacement. This is the (four-parameter) translation group in four dimensions. Also,
$$x'^i = R_{ij} x^j$$
where $\tilde{R} R = I$, $\det R > 0$, $i = 1,2,3$; this is the (three-parameter) rotation group.
An infinitesimal transformation in one of the parameters is defined by
$$T_{a(0) + \epsilon} = I_\epsilon + O(\epsilon^2).$$
In this definition, $a(0)$ are the ($n$-parameter) values associated with the identity transformation $I$. These can be chosen to be zero by suitably choosing the parameter coordinates. The infinitesimal parameters $\epsilon_u$ are taken to zero, so that $\epsilon_u \epsilon_u$ (summed) is negligible. Thus
$$I_\epsilon = I + \epsilon_u Q_u$$
where
$$Q_u = f^\mu_u(x) \frac{\partial}{\partial x^\mu} \quad\text{and}\quad f^\mu_u(x) = \left. \frac{\partial f^\mu(x,a)}{\partial a_u} \right|_{a = a(0)}.$$
Putting this all together,
$$x' = \left(T_{a(0)} + \epsilon\right) x = \left( I + \epsilon_u Q_u \right) x = Ix + \epsilon_u Q_u x = x + \epsilon_u \left. \frac{\partial f^\mu(x,a)}{\partial a_u} \right|_{a = a(0)} \frac{\partial x}{\partial x^\mu}$$
(summed over $\mu$ in four dimensional space-time and over $u$). Thus (unsurprisingly)
$$x'^\mu = x^\nu \delta^\mu_\nu + \epsilon_u \left. \frac{\partial f^\nu}{\partial a_u} \right|_{a = a(0)} g^\mu_\nu$$
which has the form of the first two terms of a Taylor series. This is characteristic of infinitesimal linear transformations.
One can easily verify that $I_\epsilon I_{\epsilon'} = I_{\epsilon'} I_\epsilon$ (infinitesimal transformations commute) and that $I_\epsilon^{-1} = I_{-\epsilon}$ (to order $\epsilon^2$). They thus have an identity and an inverse, and can be shown to be associative.
The continuous translation group (mentioned above) follows immediately from making $d$ (the displacement of coordinates) infinitesimal and finding finite displacements by integration. The rotation group (matrices) is a little trickier. Its infinitesimal elements are
$$I_\epsilon = I + S$$
where $\tilde{S} = -S$,
$$S_{ij} = \epsilon_k \left. \frac{\partial R_{ij}}{\partial a_k} \right|_{a = a(0)}.$$
The infinitesimal $S$ are antisymmetric and traceless (in 3D), so they have only three independent parameters (that are thus ``essential''). We can write them generally as
$$S_{ij} = \epsilon_{ijk}\, d\omega_k$$
where $d\omega_k$ is the infinitesimal parameter and $\epsilon_{ijk}$ is the antisymmetric unit tensor. Thus, if
$$dx_i = x_i' - x_i = S_{ij} x_j = \epsilon_{ijk} x_j\, d\omega_k$$
we see that
$$d\mathbf{x} = \mathbf{x} \times d\boldsymbol{\omega}.$$
A moment of thought should convince you that $d\boldsymbol{\omega}$ is the infinitesimal (vector) rotation angle, with direction that points along the axis of rotation.
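The relation $d\mathbf{x} = \mathbf{x} \times d\boldsymbol{\omega}$ is easy to spot-check numerically. A minimal sketch in Python/NumPy (the particular values of $d\boldsymbol{\omega}$ and $\mathbf{x}$ are of course arbitrary):

```python
import numpy as np

# Levi-Civita symbol in 3D, built explicitly from its nonzero entries.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

domega = np.array([1e-6, 2e-6, -3e-6])   # infinitesimal rotation "vector"
S = np.einsum('ijk,k->ij', eps, domega)  # S_ij = eps_ijk domega_k

x = np.array([1.0, 2.0, 3.0])
dx = S @ x                               # dx_i = S_ij x_j
print(np.allclose(dx, np.cross(x, domega)))  # dx = x cross domega -> True
print(np.allclose(S, -S.T))                  # S is antisymmetric   -> True
```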
To obtain the full rotation group we must show that every rotation can be obtained by integrating $I_\epsilon$. This follows from writing an arbitrary rotation or product of rotations as a single rotation about a fixed axis. For $d\boldsymbol{\omega}$ parallel to this axis the integration is straightforward, as I show next. Since any rotation can be written this way, the rotations indeed form a group.
The integration proceeds like:
$$R_\Omega = \lim_{\Delta\omega \to 0} \left(R_{\Delta\omega}\right)^{\Omega/\Delta\omega}$$
where $\Omega$ is the finite rotation angle and $\Delta\omega$ an infinitesimal step along the axis. We can parameterize this as
$$R_\Omega = \lim_{m \to \infty} \left( I + \frac{1}{m}\, \Omega\, S_0 \right)^m = e^{\Omega S_0}$$
where
$$(S_0)_{ij} = \epsilon_{ijk}\, \frac{\Delta\omega_k}{\Delta\omega}$$
(the generator for a unit step about the fixed axis). Believe it or not, this was one of the primary things we wanted to show in this aside. What it shows is that rotations about an arbitrary axis can be written as an exponential that can be thought of as the infinite product of a series of infinitesimal transformations where each transformation has various nice properties.
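One can watch this limit converge numerically. The sketch below (axis and angle chosen arbitrarily) builds a finite rotation about the $z$ axis as a large power of a near-infinitesimal rotation and compares it with the closed-form result $e^{\Omega S_0}$:

```python
import numpy as np

Omega = 0.7                          # finite rotation angle about the z axis
S0 = np.array([[0.0, 1.0, 0.0],      # (S0)_ij = eps_ijk n_k with n = z-hat
               [-1.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]])

# Infinite product of infinitesimal rotations: (I + (Omega/m) S0)^m
m = 1_000_000
R = np.linalg.matrix_power(np.eye(3) + (Omega / m) * S0, m)

# Closed form of exp(Omega S0): a rotation of the coordinate axes by Omega
R_exact = np.array([[np.cos(Omega), np.sin(Omega), 0.0],
                    [-np.sin(Omega), np.cos(Omega), 0.0],
                    [0.0, 0.0, 1.0]])
print(np.allclose(R, R_exact, atol=1e-5))   # True: the limit converges
```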
With these known results from simpler days recalled to mind, we return to the homogeneous, proper Lorentz group. Here we seek the infinitesimal linear transformations, etc., in four dimensions. Algebraically one proceeds almost identically to the case of rotation, but now in four dimensions and with the goal of preserving length in a different metric. A general infinitesimal transformation can be written compactly as:
$$I_\epsilon = I + g L$$
where (as before) $gL$ is antisymmetric (and hence $L$ is traceless), the parameters in $L$ are infinitesimal, and where $g$ is the usual metric tensor (that follows from all the annoying derivatives with respect to the parameters and coordinates).
Thus
$$A = \lim_{m \to \infty} \left( I + \frac{1}{m} L \right)^m = e^L$$
defines the form of a general transformation matrix associated with a given ``direction'' in the parameter space constructed from an infinite product of infinitesimal transformations, each of which is basically the leading term of a Taylor series of the underlying coordinate function transformation in terms of the parameters. This justifies the ``ansatz'' made by Jackson. The matrices $L$ are called the generators of the linear transformation.
Thus, whenever we write $A = e^L$, where the $L$'s are (to be) the generators of the Lorentz group transformations, we should remember what it stands for. Let's find the distinct $L$. Each one is a real, traceless $4\times4$ matrix that is (as we shall see) antisymmetric in the spatial part (since $gL$ is antisymmetric from the above).
To construct $A$ (and find the distinct components of $L$) we make use of its properties. Its determinant is
$$\det A = \det\left(e^L\right) = e^{\mathrm{Tr}\, L} = \pm 1.$$
(This follows from doing a similarity transformation to put $L$ in diagonal form; $e^L$ is necessarily then diagonal. Similarity transformations do not alter the determinant, because
$$\det\left(S^{-1} M S\right) = \det S^{-1}\, \det M\, \det S = \det M.$$
If $L$ is diagonal, then the last equation follows from the usual properties of the exponential and the definition of the exponential of a matrix.)
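The identity $\det(e^L) = e^{\mathrm{Tr}\, L}$ can be spot-checked for a random traceless $L$; a small sketch, in which the brute-force Taylor series stands in for a library matrix exponential:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=(4, 4))
L -= np.eye(4) * (np.trace(L) / 4.0)   # force Tr L = 0

# Matrix exponential by brute-force Taylor series: sum_k L^k / k!
A = np.zeros((4, 4))
term = np.eye(4)
for n in range(1, 80):
    A += term
    term = term @ L / n

print(np.isclose(np.linalg.det(A), 1.0))   # det(e^L) = e^{Tr L} = e^0 = +1
```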
If $L$ is real then $\det A = -1$ is excluded by this result. If $L$ is traceless (and only if, given that it is real), then $\det A = +1$, which is required to be true for proper Lorentz transformations (recall from last time). Making $L$ a real, traceless $4\times4$ matrix therefore suffices to ensure that we will find only proper Lorentz transformations.
Think back to the requirement that:
$$\tilde{A} g A = g$$
in order to preserve the invariant interval, where
$$g = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}$$
and $L$ is a real, traceless $4\times4$ matrix.
If we multiply from the right by $A^{-1}$ and from the left by $g$, this equation is equivalent to
$$g \tilde{A} g = A^{-1}.$$
Since $g^2 = I$, $\tilde{A} = e^{\tilde{L}}$, and $A^{-1} = e^{-L}$:
$$g \tilde{A} g = e^{g \tilde{L} g} = e^{-L}$$
or
$$g \tilde{L} g = -L.$$
(This can also easily be proven by considering the ``power series'' or product expansions of the exponentials of the associated matrices above, changing the sign/direction of the infinitesimal series.)
Finally, if we multiply both sides from the left by $g$ and express the left hand side as a transpose, we get
$$\widetilde{gL} = -gL.$$
From this we see that the matrix $gL$ is traceless and antisymmetric, as noted/expected from above. If we mentally factor out the $g$, we can without loss of generality write $L$ as:
$$L = \begin{pmatrix} 0 & L_{01} & L_{02} & L_{03} \\ L_{01} & 0 & L_{12} & L_{13} \\ L_{02} & -L_{12} & 0 & L_{23} \\ L_{03} & -L_{13} & -L_{23} & 0 \end{pmatrix}.$$
This matrix form satisfies all the constraints we deduced above for the generators. Any $L$ of this form will make an $A$ that preserves the invariant interval (length) of a four vector. There are exactly six essential parameters as expected. Finally, if we use our intuition, we would expect that the $L_{ij}$ for $i,j = 1,2,3$ form the rotation subgroup and describe physical rotations.
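Any $L$ with a symmetric time-space block and an antisymmetric space-space block should exponentiate to an $A$ obeying $\tilde{A} g A = g$. A quick numerical verification, with arbitrary values for the six parameters:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])

# General L: symmetric time-space part, antisymmetric space-space part.
a, b, c, d, e, f = 0.3, -0.2, 0.5, 0.1, -0.4, 0.25
L = np.array([[0.0,  a,    b,    c],
              [a,    0.0,  d,    e],
              [b,   -d,    0.0,  f],
              [c,   -e,   -f,    0.0]])

print(np.allclose((g @ L).T, -(g @ L)))   # gL is antisymmetric -> True

# A = e^L via a Taylor series
A = np.zeros((4, 4))
term = np.eye(4)
for n in range(1, 60):
    A += term
    term = term @ L / n

print(np.allclose(A.T @ g @ A, g))        # A preserves the interval -> True
```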
So this is just great. Let us now separate out the individual couplings for our appreciation and easy manipulation. To do that we define six fundamental matrices (called the generators of the group) from which we can construct an arbitrary $L$ and hence $A$. They are basically the individual matrices with unit or zero components that can be scaled by the six parameters $L_{\mu\nu}$. The particular choices for the signs make certain relations work out nicely:
$$S_1 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$
$$S_2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix}$$
$$S_3 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
$$K_1 = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
$$K_2 = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
$$K_3 = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}.$$
The $S_i$ generate rotations in the spatial part and the $K_i$ generate boosts. Note that the squares of these matrices are diagonal and either $+1$ or $-1$ in the submatrix involved:
$$S_1^2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}$$
and
$$K_1^2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$
etc. From this we can deduce that
$$S_i^3 = -S_i$$
$$K_i^3 = K_i.$$
Note that these relations are very similar to the multiplication rules for unit pure imaginary or pure real numbers.
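Written out as explicit NumPy arrays (in the Jackson-style conventions assumed here), the generators can be checked against these relations directly:

```python
import numpy as np

# The six 4x4 generators, entered element by element.
S1 = np.zeros((4, 4)); S1[2, 3], S1[3, 2] = -1.0, 1.0
S2 = np.zeros((4, 4)); S2[1, 3], S2[3, 1] = 1.0, -1.0
S3 = np.zeros((4, 4)); S3[1, 2], S3[2, 1] = -1.0, 1.0
K1 = np.zeros((4, 4)); K1[0, 1] = K1[1, 0] = 1.0
K2 = np.zeros((4, 4)); K2[0, 2] = K2[2, 0] = 1.0
K3 = np.zeros((4, 4)); K3[0, 3] = K3[3, 0] = 1.0

for S in (S1, S2, S3):
    assert np.allclose(S @ S @ S, -S)    # S_i^3 = -S_i
for K in (K1, K2, K3):
    assert np.allclose(K @ K @ K, K)     # K_i^3 = K_i

print(np.allclose(S1 @ S1, np.diag([0.0, 0.0, -1.0, -1.0])))  # S_1^2 -> True
print(np.allclose(K1 @ K1, np.diag([1.0, 1.0, 0.0, 0.0])))    # K_1^2 -> True
```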
The reason this is important is that if we form the dot product of a vector of these generators with a unit vector (effectively decomposing a vector parameter in terms of these matrices) in the exponential expansion, the following relations can be used to reduce powers of the generators:
$$(\mathbf{S} \cdot \hat{u})^3 = -\,\mathbf{S} \cdot \hat{u}$$
and
$$(\mathbf{K} \cdot \hat{u})^3 = \mathbf{K} \cdot \hat{u}.$$
In these expressions, $\hat{u}$ is an arbitrary unit vector, and these expressions effectively match up the generator axes (which were arbitrary) with the direction of the parameter vector for rotation or boost respectively. After the reduction (as we shall see below) the exponential is, in fact, a well-behaved and easily understood matrix!
It is easy (and important!) to determine the commutation relations of these
generators. They are:
$$[S_i, S_j] = \epsilon_{ijk} S_k$$
$$[S_i, K_j] = \epsilon_{ijk} K_k$$
$$[K_i, K_j] = -\,\epsilon_{ijk} S_k.$$
The first set are immediately recognizable. They tell us that ``two rotations performed in both orders differ by a rotation''. The second and third show that ``a boost and a rotation differ by a boost'' and ``two boosts differ by a rotation'', respectively. These are in quotes because the statements are somewhat oversimplified, but they get some of the idea across.
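These commutation relations can be verified by brute force from the explicit matrices (same generator conventions as in the earlier sketch; the script below is only a check, not part of the derivation):

```python
import numpy as np

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# Generators stacked as S[i], K[i]
S = np.zeros((3, 4, 4)); K = np.zeros((3, 4, 4))
S[0][2, 3], S[0][3, 2] = -1.0, 1.0
S[1][1, 3], S[1][3, 1] = 1.0, -1.0
S[2][1, 2], S[2][2, 1] = -1.0, 1.0
for i in range(3):
    K[i][0, i + 1] = K[i][i + 1, 0] = 1.0

def comm(A, B):
    return A @ B - B @ A

ok = True
for i in range(3):
    for j in range(3):
        ok &= np.allclose(comm(S[i], S[j]), np.einsum('k,kab->ab', eps[i, j], S))
        ok &= np.allclose(comm(S[i], K[j]), np.einsum('k,kab->ab', eps[i, j], K))
        ok &= np.allclose(comm(K[i], K[j]), -np.einsum('k,kab->ab', eps[i, j], S))
print(ok)   # True: all three sets of commutators hold
```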
These are the generators for the groups $SL(2,\mathbb{C})$ or $O(1,3)$. The latter is the group of relativity as we are currently studying it.
A question that has been brought up in class is ``where is the factor $i$ in the generators of rotation?'' (so that $S_i^2 = -1$, as we might expect from considering spin and angular momentum in other contexts). It is there, but subtly hidden, in the fact that $S_i^2 = -I$ in the $2\times2$ spatial sub-block of the rotation matrices only. Matrices appear to be a way to represent geometric algebras, as most readers of this text should already know from their study of the (quaternionic) Pauli spin matrices. We won't dwell on this here, but note well that the Pauli matrices are isomorphic to the unit quaternions via the mapping $1 \to I$, $\mathbf{i} \to -i\sigma_1$, $\mathbf{j} \to -i\sigma_2$, $\mathbf{k} \to -i\sigma_3$, as the reader can easily verify. Note well that
$$\sigma_3 \sigma_1 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$
is both real and, not at all coincidentally, the structure of an $S_i$ sub-block.
With these definitions in hand, we can easily decompose $L$ in terms of the $S$ and the $K$ matrices. We get:
$$L = -\,\boldsymbol{\omega} \cdot \mathbf{S} - \boldsymbol{\xi} \cdot \mathbf{K}$$
where $\boldsymbol{\omega}$ is a (finite) rotation around an axis in the direction $\hat{\boldsymbol{\omega}}$ and where $\boldsymbol{\xi}$ is a (finite) boost in the direction $\hat{\boldsymbol{\xi}}$. Thus the completely general form of $A$ is
$$A = e^{-\boldsymbol{\omega} \cdot \mathbf{S} - \boldsymbol{\xi} \cdot \mathbf{K}}.$$
The (cartesian) components of $\boldsymbol{\omega}$ and $\boldsymbol{\xi}$ are now the six free parameters of the transformation.
Let us see that these are indeed the familiar boosts and rotations we are used
to. After all, this exponential notation is not transparent. Suppose that
$$A = e^{-\xi K_1} = I - \xi K_1 + \frac{1}{2!}\left( \xi K_1 \right)^2 - \frac{1}{3!} \left( \xi K_1 \right)^3 + \ldots$$
$$= \left(I - K_1^2\right) - K_1\left(\xi + \frac{1}{3!} \xi^3 + \ldots\right) + K_1^2\left(1 + \frac{1}{2!} \xi^2 + \ldots\right)$$
$$= \left(I - K_1^2\right) - K_1 \sinh\xi + K_1^2 \cosh\xi$$
or (in matrix form)
$$A = \begin{pmatrix} \cosh\xi & -\sinh\xi & 0 & 0 \\ -\sinh\xi & \cosh\xi & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$
which (ohmygosh!) is our old friend the Lorentz transformation, just like we derived it a la kiddy-physics-wise. As an exercise, show that $A = e^{-\theta S_1}$ is a rotation through the angle $\theta$ around the $x$ axis. Note that the step of ``adding and subtracting'' $S_1^2$ is essential to reconstructing the series of the sine and cosine, just like the $K_1^2$ was above for cosh and sinh.
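As a sanity check, the series, the closed form, and the familiar $x'^0 = \gamma(x^0 - \beta x^1)$ all agree numerically (the particular rapidity and event coordinates below are arbitrary):

```python
import numpy as np

xi = 0.9                                     # rapidity
K1 = np.zeros((4, 4)); K1[0, 1] = K1[1, 0] = 1.0

# A = e^{-xi K1} summed as a Taylor series
A = np.zeros((4, 4))
term = np.eye(4)
for n in range(1, 40):
    A += term
    term = term @ (-xi * K1) / n

# Closed form: (I - K1^2) - K1 sinh(xi) + K1^2 cosh(xi)
K1sq = K1 @ K1
A_closed = (np.eye(4) - K1sq) - K1 * np.sinh(xi) + K1sq * np.cosh(xi)
print(np.allclose(A, A_closed))              # True

beta, gamma = np.tanh(xi), np.cosh(xi)       # beta = tanh(xi), gamma = cosh(xi)
x = np.array([1.0, 0.5, 0.0, 0.0])           # an event (x^0, x^1, x^2, x^3)
xp = A @ x
print(np.isclose(xp[0], gamma * (x[0] - beta * x[1])))   # True
```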
Now, a boost in an arbitrary direction is just
$$A = e^{-\boldsymbol{\xi} \cdot \mathbf{K}}.$$
We can certainly parameterize it by
$$\boldsymbol{\xi} = \hat{\boldsymbol{\beta}} \tanh^{-1}\beta$$
(since we know that $\beta = \tanh\xi$, inverting our former reasoning for $\xi$). Then
$$A(\boldsymbol{\beta}) = e^{-\hat{\boldsymbol{\beta}} \cdot \mathbf{K}\, \tanh^{-1}\beta}.$$
I can do no better than quote Jackson on the remainder:
``It is left as an exercise to verify that ...''
$$A(\boldsymbol{\beta}) = \begin{pmatrix} \gamma & -\gamma\beta_1 & -\gamma\beta_2 & -\gamma\beta_3 \\ -\gamma\beta_1 & 1 + \frac{(\gamma - 1)\beta_1^2}{\beta^2} & \frac{(\gamma - 1)\beta_1\beta_2}{\beta^2} & \frac{(\gamma - 1)\beta_1\beta_3}{\beta^2} \\ -\gamma\beta_2 & \frac{(\gamma - 1)\beta_1\beta_2}{\beta^2} & 1 + \frac{(\gamma - 1)\beta_2^2}{\beta^2} & \frac{(\gamma - 1)\beta_2\beta_3}{\beta^2} \\ -\gamma\beta_3 & \frac{(\gamma - 1)\beta_1\beta_3}{\beta^2} & \frac{(\gamma - 1)\beta_2\beta_3}{\beta^2} & 1 + \frac{(\gamma - 1)\beta_3^2}{\beta^2} \end{pmatrix}$$
which is just the explicit full matrix form of
$$x^{0\,\prime} = \gamma\left(x^0 - \boldsymbol{\beta} \cdot \mathbf{x}\right)$$
$$\mathbf{x}' = \mathbf{x} + \frac{(\gamma - 1)}{\beta^2}\left(\boldsymbol{\beta} \cdot \mathbf{x}\right)\boldsymbol{\beta} - \gamma \boldsymbol{\beta}\, x^0$$
from before.
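The closed-form matrix can be checked against the exponential definition for an arbitrary $\boldsymbol{\beta}$; a sketch (the particular velocity is of course arbitrary, and the series again stands in for a library exponential):

```python
import numpy as np

beta_vec = np.array([0.2, -0.3, 0.1])
beta = np.linalg.norm(beta_vec)
gamma = 1.0 / np.sqrt(1.0 - beta**2)
bhat = beta_vec / beta
xi = np.arctanh(beta)                        # rapidity: beta = tanh(xi)

K = np.zeros((3, 4, 4))
for i in range(3):
    K[i][0, i + 1] = K[i][i + 1, 0] = 1.0

# A = exp(-xi bhat.K) via Taylor series
L = -xi * np.einsum('i,iab->ab', bhat, K)
A = np.zeros((4, 4))
term = np.eye(4)
for n in range(1, 40):
    A += term
    term = term @ L / n

# Closed form: A_00 = gamma, A_0i = A_i0 = -gamma beta_i,
# A_ij = delta_ij + (gamma - 1) beta_i beta_j / beta^2
A_closed = np.eye(4)
A_closed[0, 0] = gamma
A_closed[0, 1:] = A_closed[1:, 0] = -gamma * beta_vec
A_closed[1:, 1:] += (gamma - 1.0) * np.outer(beta_vec, beta_vec) / beta**2
print(np.allclose(A, A_closed))              # True
```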
Now, we have enough information to construct the exact form of a simultaneous boost and rotation, but this presents a new difficulty: when we go to factorize the results (like above), the components of independent boosts and rotations do not commute! If you like,
$$A(\boldsymbol{\omega}, 0)\, A(0, \boldsymbol{\xi}) \ne A(0, \boldsymbol{\xi})\, A(\boldsymbol{\omega}, 0)$$
and we cannot say anything trivial like
$$A(\boldsymbol{\omega}, \boldsymbol{\xi}) = A(\boldsymbol{\omega}, 0)\, A(0, \boldsymbol{\xi}),$$
since the result depends on the order in which they were performed! Even worse, the product of two boosts is equal to a single boost and a rotation (if the boosts are not in the same direction)!
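Both failures are easy to exhibit numerically. In the sketch below (arbitrary angles and rapidities), note that a pure boost matrix $e^{-\boldsymbol{\xi}\cdot\mathbf{K}}$ is symmetric, so the antisymmetric part of a product of two non-collinear boosts directly exposes the hidden rotation:

```python
import numpy as np

# Generators (same conventions as the earlier sketches)
S = np.zeros((3, 4, 4)); K = np.zeros((3, 4, 4))
S[0][2, 3], S[0][3, 2] = -1.0, 1.0
S[1][1, 3], S[1][3, 1] = 1.0, -1.0
S[2][1, 2], S[2][2, 1] = -1.0, 1.0
for i in range(3):
    K[i][0, i + 1] = K[i][i + 1, 0] = 1.0

def expm(L, terms=60):
    """Matrix exponential by Taylor series."""
    A = np.zeros_like(L)
    term = np.eye(L.shape[0])
    for n in range(1, terms):
        A += term
        term = term @ L / n
    return A

boost_x = expm(-0.5 * K[0])     # boost along x
boost_y = expm(-0.5 * K[1])     # boost along y
rot_z = expm(-0.3 * S[2])       # rotation about z

# Boosts and rotations do not commute:
print(np.allclose(boost_x @ rot_z, rot_z @ boost_x))   # False

# Two non-collinear boosts are NOT a pure boost: a pure boost is a
# symmetric matrix, but this product has an antisymmetric part.
AB = boost_x @ boost_y
print(np.allclose(AB, AB.T))                           # False
```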
The worst part, of course, is the algebra itself. A useful exercise for the algebraically inclined might be to construct the general solution using, e.g., Mathematica.
This suggests that for rotating relativistic systems (such as atoms or orbits around neutron stars) we may need a kinematic correction to account for the successive frame changes as the system rotates.
The atom perceives itself as being ``elliptically deformed''. The consequences of this are observable. This is known as ``Thomas precession''.