Algebra is a *reasoning process* that is one of
the fundamental cornerstones of mathematical reasoning. As far as we
are concerned, it consists of two things:

- Representing *numbers* of one sort or another (where we could without loss of generality assume that they are *complex* numbers, since real numbers are complex, rational and irrational numbers are real, integers are rational, and natural numbers are integers) with *symbols*. In physics this representation isn't only a matter of knowns and unknowns; we will often use algebraic symbols for numbers we know, or for parameters in problems, even when their value is actually given as part of the problem. In fact, with relatively few exceptions, we will prefer to use symbols as much as we can, to permit our algebraic manipulations to eliminate as much eventual *arithmetic* (computation involving actual numbers) as possible.
- Performing a sequence of *algebraic transformations* on a set of equations or inequalities to convert it from one form to another (desired) form. These transformations are generally based on the set of arithmetic operations defined (and permitted!) over the field(s) of the number type(s) being manipulated.
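As a small illustration (supplied here, not part of the original text) of working symbolically and deferring arithmetic, consider solving $x = \frac{1}{2}at^2$ for the time $t$ before inserting any numbers:

```latex
% Solve symbolically first, substitute numbers only at the very end.
% Given x = (1/2) a t^2, isolate t:
\[
  x = \frac{1}{2} a t^2
  \quad\Longrightarrow\quad
  t^2 = \frac{2x}{a}
  \quad\Longrightarrow\quad
  t = \sqrt{\frac{2x}{a}} .
\]
% Only now substitute numbers, e.g. x = 10 m and a = 5 m/s^2:
\[
  t = \sqrt{\frac{2\,(10~\mathrm{m})}{5~\mathrm{m/s^2}}} = 2~\mathrm{s} .
\]
```

One symbolic manipulation replaces what would otherwise be several steps of arithmetic, and the final formula can be reused for any values of $x$ and $a$.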

Note well that it isn't always a matter of solving for some unknown
variable. Algebra is just as often used to derive relations and hence
gain insight into a system being studied. Algebra is in some sense the
*language* of physics.
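For example (an illustration added here, not from the original text), eliminating time between the two standard constant-acceleration equations yields a relation that makes no reference to $t$ at all:

```latex
% Start from the constant-acceleration kinematic equations:
%   v = v_0 + a t   and   x = x_0 + v_0 t + (1/2) a t^2 .
% Solve the first for t and substitute into the second:
\[
  t = \frac{v - v_0}{a}
  \quad\Longrightarrow\quad
  x - x_0
  = v_0\,\frac{v - v_0}{a} + \frac{1}{2}\,a\left(\frac{v - v_0}{a}\right)^2
  = \frac{v^2 - v_0^2}{2a} ,
\]
\[
  \text{so}\qquad v^2 = v_0^2 + 2a\,(x - x_0) .
\]
```

No unknown was "solved for" here; the algebra simply exposed a relation between velocity and displacement that was implicit in the starting equations.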

The transformations of algebra applied to equalities (the most common case) can be summarized as follows (non-exhaustively). Given one or more equations involving a set of variables, one can:

- Add any scalar number or well-defined and consistent symbol to both sides of any equation. Note that in physics problems, symbols carry units, and it is necessary to add only symbols that *have the same units*: we cannot, for example, add seconds to kilograms and end up with a result that makes any sense!
- Subtract any scalar number or consistent symbol, ditto. This isn't really a separate rule, as subtraction is just adding a negative quantity.
- Multiply both sides of an equation by any scalar number or consistent symbol. In physics one *can* multiply symbols with different units, such as multiplying an equation with (net) units of meters by a symbol given in seconds.
- Divide both sides of an equation, ditto, save that one has to be careful when performing symbolic divisions to avoid points where division is not permitted or defined (e.g. dividing by zero, or by a variable that might take on the value of zero). Note that dividing one unit by another in physics is also permitted, so that one can sensibly divide a length in meters by a time in seconds.
- Take both sides of an equation to any power. Again some care must be exercised, especially if the equation can take on negative or complex values or has any sort of domain restrictions. For fractional powers, one may well have to specify the *branch* of the result (which of many possible roots one intends to use) as well.
- Place the two sides of any equality into *almost* any functional or algebraic form, either given or known, as if they are variables of that function. Here there are some serious caveats in both math and physics. In physics, the most important one is that if the functional form has a power-series expansion, then the quantity one substitutes in must be *dimensionless*. This is easy to understand. Suppose I know that $x$ is a length in meters. I *could* try to form the exponential of $x$, $e^x$, but if I expand this expression, $e^x = 1 + x + \frac{x^2}{2!} + \ldots$, which is *nonsense!* How can I add meters to meters-squared? I can only exponentiate $x$ if it is dimensionless. In mathematics one has to worry about the domain and range. Suppose I have the relation $x = y^2 + 2$, where $y$ is a real (dimensionless) expression, and I wish to take the $\cos^{-1}$ of both sides. Well, the *range* of cosine is only $-1$ to $1$, and my function $x$ is clearly at least 2 and cannot have an inverse cosine! This is obviously a powerful, but *dangerous*, tool.
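The last caveat can be made concrete with a short sketch (an illustration added here, not from the original text): a quantity of the form $y^2 + 2$ is at least 2 for any real $y$, so it lies outside $[-1, 1]$ and Python's `math.acos` simply refuses it.

```python
import math

# Illustrative sketch: y**2 + 2 is at least 2 for any real y,
# so it lies outside [-1, 1], the only interval on which the
# (real) inverse cosine is defined.
y = 1.5
x = y**2 + 2          # x = 4.25, certainly larger than 1

try:
    math.acos(x)      # raises ValueError: acos needs -1 <= x <= 1
    has_real_acos = True
except ValueError:
    has_real_acos = False

print(has_real_acos)  # prints False
```

The same discipline applies before exponentiating, taking logarithms, or inverting any trigonometric function of an expression: check that the expression is dimensionless and that it lies in the function's domain.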