
Integration By Parts in Electrodynamics

There is one essential theorem of vector calculus that is central to the development of multipoles - computing the dipole moment. In his book, Classical Electrodynamics, Jackson (for example) blithely integrates by parts (for a charge/current density with compact support) thusly:
\begin{displaymath}
\int_{\mathbb{R}^3} \Vec{J}\, d^3x = - \int_{\mathbb{R}^3} \Vec{x}\, (\deldot \Vec{J})\, d^3x
\end{displaymath}

Then, using the continuity equation and the fact that $ \rho$ and $ \Vec{J}$ are presumed harmonic with time dependence $ \exp(-i\omega t)$, we substitute $ \deldot \Vec{J} = -\partialdiv{\rho}{t} = i\omega \rho$ to obtain:
\begin{displaymath}
\int_{\mathbb{R}^3} \Vec{J}\, d^3x = -i\omega \int_{\mathbb{R}^3} \Vec{x}\, \rho(\Vec{x})\, d^3x = -i\omega \Vec{p}
\end{displaymath}
where $ \Vec{p}$ is the dipole moment of the Fourier component of the charge density distribution.
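Before deriving this result, it may help to see it hold in a concrete case. The following sympy sketch (my own illustrative check, not Jackson's; the Gaussian current density is an arbitrary choice standing in for compact support) verifies the whole chain $ \int \Vec{J}\, d^3x = -i\omega \Vec{p}$ for a harmonic source obeying the continuity equation:

\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
omega = sp.symbols('omega', positive=True)
R3 = [(x, -sp.oo, sp.oo), (y, -sp.oo, sp.oo), (z, -sp.oo, sp.oo)]

# Arbitrary rapidly decaying current density directed along z
# (a Gaussian envelope stands in for compact support):
Jz = sp.exp(-(x**2 + y**2 + z**2))

# Continuity with time dependence exp(-i*omega*t) gives
# del . J = i*omega*rho, and here del . J = dJz/dz only.
rho = sp.diff(Jz, z) / (sp.I * omega)

# Dipole moment component p_z = integral of z*rho over all space:
p_z = sp.integrate(z * rho, *R3)

# Jackson's claim: integral of J_z equals -i*omega*p_z.
lhs = sp.integrate(Jz, *R3)
print(sp.simplify(lhs - (-sp.I * omega * p_z)))  # -> 0
\end{verbatim}

Both sides evaluate to $ \pi^{3/2}$ for this particular choice.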

However, this leaves a nasty question: just how does this integration by parts work? Where does the first equation come from? After all, we can't rely on always being able to look up a result like this; we have to be able to derive it, and hence learn a method we can use when we have to do the same thing for a different functional form.

We might guess that deriving it will use the divergence theorem (or Green's theorem(s), if you like), but any naive attempt to make it do so will lead to pain and suffering. Let's see how it goes in this particularly nasty (and yet quite simple) case.

Recall that the idea behind integration by parts is to form the derivative of a product, distribute the derivative, integrate, and rearrange:
\begin{eqnarray*}
d(uv) & = & u\, dv + v\, du \\
\int_a^b d(uv) & = & \int_a^b u\, dv + \int_a^b v\, du \\
\int_a^b u\, dv & = & (uv)\Big|_a^b - \int_a^b v\, du
\end{eqnarray*}
where, if the products $ u(a)v(a) = u(b)v(b) = 0$ (as will often be the case when $ a = -\infty$, $ b = \infty$ and $ u$ and $ v$ have compact support), the process ``throws the derivative from one function over to the other'':
\begin{displaymath}
\int_a^b u\, dv = - \int_a^b v\, du
\end{displaymath}
which is often extremely useful in evaluating integrals.
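Here is a quick sympy check of this one-dimensional statement (my own sketch; the two rapidly decaying functions are arbitrary illustrative choices whose boundary product vanishes at $ \pm\infty$):

\begin{verbatim}
import sympy as sp

t = sp.symbols('t', real=True)

# Arbitrary smooth functions that vanish at +/- infinity,
# so the boundary term (u*v) evaluated there is zero:
u = sp.exp(-t**2)
v = t * sp.exp(-t**2)

lhs = sp.integrate(u * sp.diff(v, t), (t, -sp.oo, sp.oo))
rhs = -sp.integrate(sp.diff(u, t) * v, (t, -sp.oo, sp.oo))
print(sp.simplify(lhs - rhs))  # -> 0
\end{verbatim}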

The exact same idea holds for vector calculus, except that the idea is to use the divergence theorem to form a surface integral instead of a boundary term. Recall that there are many forms of the divergence theorem, but they all map $ \del$ to $ \Hat{n}$ in the following integral form:
\begin{displaymath}
\int_V \del \ldots\, d^3x \to \oint_{S/V} \Hat{n} \ldots\, d^2x
\end{displaymath}
or in words: if we integrate any form involving the pure gradient operator applied to a (possibly tensor) functional form indicated by the ellipsis in this equation, we can convert the result into an integral over the surface that bounds this volume, where the gradient operator is replaced by an outward directed normal but the functional form of the expression is otherwise preserved. So while the divergence theorem is:
\begin{displaymath}
\int_V \deldot \Vec{A}\, d^3x = \oint_{S/V} \Hat{n} \cdot \Vec{A}\, d^2x
\end{displaymath}
there is a ``gradient theorem'':
\begin{displaymath}
\int_V \del f\, d^3x = \oint_{S/V} \Hat{n}\, f\, d^2x
\end{displaymath}
and so on.
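As a concrete sanity check of the divergence theorem itself (an illustrative sketch of mine, with $ \Vec{A} = \Vec{x}$ on the unit ball chosen purely for convenience), both sides come out to $ 4\pi$:

\begin{verbatim}
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', nonnegative=True)

# Take A = x (the position vector), so del . A = 3 everywhere
# and n-hat . A = r on a sphere of radius r.
volume_side = sp.integrate(3 * r**2 * sp.sin(theta),
                           (r, 0, 1), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))

# On the unit sphere n-hat . A = 1, and d^2x = sin(theta) dtheta dphi:
surface_side = sp.integrate(sp.sin(theta),
                            (theta, 0, sp.pi), (phi, 0, 2*sp.pi))

print(volume_side, surface_side)  # both 4*pi
\end{verbatim}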

To prove Jackson's expression we might therefore try to find a suitable product whose divergence contains $ \Vec{J}$ as one term. This isn't too easy, however. The problem is finding the right tensor form. Let us look at the following divergence:
\begin{eqnarray*}
\deldot(x \Vec{J}) & = & (\del x) \cdot \Vec{J} + x\, (\deldot \Vec{J}) \\
& = & J_x + x\, (\deldot \Vec{J})
\end{eqnarray*}
This looks promising; it is the $ x$-component of a result we might use. However, if we try to apply this to a matrix dyadic form in what looks like it might be the right way:
\begin{eqnarray*}
\deldot(\Vec{x} \Vec{J}) & = & (\deldot \Vec{x})\, \Vec{J} + \Vec{x}\, (\deldot \Vec{J}) \\
& = & 3 \Vec{J} + \Vec{x}\, (\deldot \Vec{J})
\end{eqnarray*}
we get the wrong answer.
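The scalar identity, at least, is easy to confirm symbolically (a sketch of mine; the test field $ \Vec{J}$ below is an arbitrary made-up smooth field):

\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

# Arbitrary smooth test field:
J = sp.Matrix([sp.sin(x*y), sp.cos(y*z), sp.exp(-x**2 - y**2 - z**2)])

def div(F):
    # Divergence of a 3-component field in Cartesian coordinates.
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

# del . (x J) versus J_x + x (del . J):
lhs = div(x * J)
rhs = J[0] + x * div(J)
print(sp.simplify(lhs - rhs))  # -> 0
\end{verbatim}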

To assemble the right answer, we have to sum over the three separate statements:
\begin{eqnarray*}
\left(\deldot(x \Vec{J})\right) \Hat{x} & = & \left(J_x + x\, (\deldot \Vec{J})\right) \Hat{x} \\
+ \left(\deldot(y \Vec{J})\right) \Hat{y} & = & + \left(J_y + y\, (\deldot \Vec{J})\right) \Hat{y} \\
+ \left(\deldot(z \Vec{J})\right) \Hat{z} & = & + \left(J_z + z\, (\deldot \Vec{J})\right) \Hat{z}
\end{eqnarray*}
or
\begin{displaymath}
\sum_i \Hat{x_i} \left(\deldot(x_i \Vec{J})\right) = \Vec{J} + \Vec{x}\, (\deldot \Vec{J})
\end{displaymath}
which is the sum of three divergences, not a divergence itself. If we integrate both sides over all space we get:
\begin{eqnarray*}
\int_{\mathbb{R}^3} \sum_i \Hat{x_i} \left(\deldot(x_i \Vec{J})\right) d^3x & = & \int_{\mathbb{R}^3} \Vec{J}\, d^3x + \int_{\mathbb{R}^3} \Vec{x}\, (\deldot \Vec{J})\, d^3x \\
\sum_i \Hat{x_i} \oint_{S(\infty)} \left(\Hat{n} \cdot (x_i \Vec{J})\right) d^2x & = & \int_{\mathbb{R}^3} \Vec{J}\, d^3x + \int_{\mathbb{R}^3} \Vec{x}\, (\deldot \Vec{J})\, d^3x \\
\sum_i \Hat{x_i}\, 0 & = & \int_{\mathbb{R}^3} \Vec{J}\, d^3x + \int_{\mathbb{R}^3} \Vec{x}\, (\deldot \Vec{J})\, d^3x \\
0 & = & \int_{\mathbb{R}^3} \Vec{J}\, d^3x + \int_{\mathbb{R}^3} \Vec{x}\, (\deldot \Vec{J})\, d^3x
\end{eqnarray*}
where we have used the fact that $ \Vec{J}$ (and $ \rho$) have compact support and are zero everywhere on a surface at infinity.
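The summed identity itself is also easy to verify with sympy (again a sketch of mine, on an arbitrary made-up field):

\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
xi = (x, y, z)

# Arbitrary smooth test field:
J = sp.Matrix([x*y*z, sp.sin(x + z), z*sp.exp(y)])

def div(F):
    # Divergence in Cartesian coordinates.
    return sum(sp.diff(F[i], xi[i]) for i in range(3))

# sum_i xhat_i [del . (x_i J)] versus J + x (del . J):
lhs = sp.Matrix([div(xi[i] * J) for i in range(3)])
rhs = J + sp.Matrix([x, y, z]) * div(J)
print(sp.simplify(lhs - rhs))  # -> zero vector
\end{verbatim}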

We rearrange this and get:
\begin{displaymath}
\int_{\mathbb{R}^3} \Vec{J}\, d^3x = - \int_{\mathbb{R}^3} \Vec{x}\, (\deldot \Vec{J})\, d^3x
\end{displaymath}
which is just what we needed to prove the conclusion.
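Finally, the integral identity we just proved can be checked end to end (an illustrative sketch; the current density below is again an arbitrary rapidly decaying choice standing in for compact support):

\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
R3 = [(x, -sp.oo, sp.oo), (y, -sp.oo, sp.oo), (z, -sp.oo, sp.oo)]

# Gaussian envelope standing in for compact support:
g = sp.exp(-x**2 - y**2 - z**2)
J = sp.Matrix([y*g, x*z*g, g])   # arbitrary decaying current density
divJ = sp.diff(J[0], x) + sp.diff(J[1], y) + sp.diff(J[2], z)

# Integral of J, component by component:
lhs = sp.Matrix([sp.integrate(J[i], *R3) for i in range(3)])
# Minus the integral of x * (del . J):
rhs = -sp.Matrix([sp.integrate(c * divJ, *R3) for c in (x, y, z)])

print(sp.simplify(lhs - rhs))  # -> zero vector
\end{verbatim}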

This illustrates one of the most difficult examples of using integration by parts in vector calculus. In general, seek out a tensor form that can be expressed as a pure vector derivative and that evaluates to two terms, one of which is the term you wish to integrate (but can't) and the other a term you could integrate if you could only throw the derivative over as above. Apply the generalized divergence theorem, throw out the boundary term (or not - if one keeps it one derives e.g. Green's theorem(s), which are nothing more than integration by parts performed in this manner), rearrange, and you're off to the races.

Note well that the tensor forms may not be trivial! Sometimes you do have to work a bit to find just the right combination to do the job.

