
Fun with Logic: Contradictions and Null Results

The need for a null set $\mu$ isn't something I'm just making up. No matter what you call it, it has been around for a very long time in mathematics and logic. It has been around for a somewhat shorter time in computer languages, because computers themselves are relatively new, but (as pointed out by Jaynes) a ``robot'' (or computer) is an excellent model for logic because of the entirely practical constraints associated with engineering it to actually work.

Computers are nearly ideal embodiments of the raw mechanics of logic and arithmetic. Computers more or less directly implement a system of boolean logic as the underlying basis for their operation - all successful operations have an associated ``truth table'' and can be described by a set of permissible binary transformations of the representation. However, the representation itself is finite and discrete and can be self-referential - one can program a computer to program a computer and work other dark magic (some of which is covered from time to time by such luminaries as Martin Gardner in Scientific American).

Computer programmers typically learn in their very first class - the hard way - about the null set $\mu$ as the result of certain operations. Computers can easily be asked to take perfectly ordinary and reasonable results as input to operations that produce no meaningful output. That is, they have to deal with ``Not a Number'' (NaN) return conditions in addition to other (possibly domain specific) undefined or oscillatory conditions all the time.

For example, there is nothing to prevent algebraically well formed computer programs from attempting to divide by zero or take the inverse sine of 2.0, things that are algebraically sensible but that happen to be undefined for particular values or ranges of inputs. Indeed, it happens all the time, as people write a program without checking and the program manages to generate a forbidden value as input. Similarly it is easy to write a program that enters a loop such that the value that is tested to determine when to terminate the loop oscillates and hence never reaches the end condition with a well defined answer. One might even write such a program deliberately, if the program has some desired output that is incidentally generated along the way, and terminate the program only by killing it from ``outside the Universe'' of the program, with a Ctrl-C from the keyboard.
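To make this concrete, here is a minimal Python sketch (the snippet is mine, purely illustrative, not part of the original text) of exactly these domain failures:

\begin{verbatim}
import math

# Algebraically well formed expressions that fail for particular inputs:
try:
    math.asin(2.0)             # the inverse sine is only defined on [-1, 1]
except ValueError as err:
    print("domain failure:", err)

try:
    1 / 0                      # division is undefined at zero
except ZeroDivisionError as err:
    print("domain failure:", err)

# An oscillating "termination" test that never reaches its end condition.
# Without the iteration cap, this loop could only be stopped by a Ctrl-C
# from "outside the Universe" of the program.
x, steps = 1.0, 0
while x != 0.0 and steps < 10:
    x = -x                     # x bounces between 1.0 and -1.0 forever
    steps += 1
print("still oscillating after", steps, "steps; x =", x)
\end{verbatim}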

Once a NaN or indeterminate state is introduced into any computation by any means, all operations that include it in any way become equally undefined and result in NaN as well. In fact, I was very tempted to use NaE (Not an Element - of a set) instead of $\mu$ in the formulation above to draw careful attention to the correspondence.
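In IEEE 754 floating point arithmetic this contamination is automatic, as a quick sketch (again mine, not the author's) shows:

\begin{verbatim}
nan = float("nan")             # IEEE 754 "Not a Number"

# Once NaN enters a computation, everything that touches it is NaN too:
print(nan + 1.0)               # nan
print(nan * 0.0)               # nan
print((nan + 2.0) ** 0.5)      # nan

# NaN is not even equal to itself; it has no well defined truth value:
print(nan == nan)              # False
\end{verbatim}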

Nor is the abstraction of the null set unknown in formal logic. $(A\ \&\ !A)$ is the essential statement of contradiction.

If you bring a contradiction into a theory (however subtly) you can prove anything you like symbolically. The argument goes something like this: take as a premise that ``Robert Brown is a wise fool'' - that is, that Robert Brown is wise ($A$) and Robert Brown is a fool ($!A$, reading ``fool'' as ``not wise''). From $A$ alone, the disjunction ``$A$, or God exists and is a Penguin'' is true. But from $!A$, $A$ is false, so the other half of the disjunction must carry the truth: God exists and is a Penguin.

And you thought logic wasn't good for anything!

We've just proven that God exists and is a Penguin, all in one simple derivation! Bertrand Russell would be so proud...

Or rather, we've proven no such thing. The point is that the statement ``Robert Brown is a wise fool'' isn't true, and it isn't false. It is a premise, an axiom from which the argument proceeds. It happens to be an inconsistent axiom, but so what? Who would doubt that ``Robert Brown is wise'' is at least sometimes true, or somewhat true all of the time? Similarly, who could doubt that ``Robert Brown is a fool'' is at least sometimes true, or somewhat true all of the time? Socrates used to do this kind of thing to poor Thrasymachus in Plato's Republic, at about the same place that he offered to go shave lions that don't shave themselves.

In any event, if you use a contradiction implicitly, explicitly, accidentally or on purpose, or somehow insert any other undecidable Gödelian proposition in place of the contradiction in a way that performs an equivalent purpose, your final result isn't ``true'': it is a contradiction, or it is itself undecidable. In particular, both ``and'' and ``or'' operations involving a contradiction are a contradiction. A contradiction contaminates anything it touches, because asserting that it is ``true'' or ``false'' gets you into equal amounts of trouble; there are always clever ways of inserting it into logical statements that corrupt the downstream processing of the truth tables. It isn't just that the intersection of your final answer and the set of all true answers is either your final answer (shown true) or the empty set (from the law of exclusion, shown false). The law of exclusion is directly violated by the presumed condition, and the entire result is null.

Finally, just for fun (in case you, dear reader, have never seen it), consider the following algebra. Suppose that I have two variables, $x$ and $y$, that are equal. Then:


\begin{eqnarray*}
x & = & y \qquad (7.1) \\
xy & = & y^2 \qquad (7.2) \\
-xy & = & -y^2 \qquad (7.3) \\
x^2 - xy & = & x^2 - y^2 \qquad (7.4) \\
x(x - y) & = & (x + y)(x - y) \qquad (7.5) \\
x & = & (x + y) \qquad (7.6) \\
x & = & 2x \qquad (7.7) \\
1 & = & 2 \qquad (7.8)
\end{eqnarray*}

Nothing up my sleeve, right? Or is there? Every algebraic operation is perfectly lovely and valid without question but one, which has an undefined exception that has been delicately inserted into the proof. The trick is in the step where I cancel $(x - y)$ from both sides of equation (7.5). $x = y$, right? So $(x - y)$ is zero. I'm dividing by zero, an operation with an undefined answer, and it should come as no surprise that I can prove one undefined, contradictory, null result (that $1 = 2$) by using another implicitly in the algebra.
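Floating point arithmetic makes the same point with brutal economy. In the following sketch (mine, purely illustrative) we evaluate equation (7.5) at $x = y = 1$ and then ``cancel'' $(x - y)$ by dividing both sides by it - which is $0/0$, and the hardware dutifully answers NaN:

\begin{verbatim}
import numpy as np

x = y = np.float64(1.0)
lhs = x * (x - y)              # x(x - y)       = 0.0
rhs = (x + y) * (x - y)        # (x + y)(x - y) = 0.0

# "Cancelling" (x - y) means dividing both sides by zero: 0/0 is NaN.
with np.errstate(invalid="ignore"):
    print(lhs / (x - y), rhs / (x - y))   # nan nan
\end{verbatim}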

As is the case with computer programs, algebraic or logical or set operations that lead to NaE or $\mu$ are usually indicative not of a failure in the ``computational logic'' as embodied in the computer hardware, nor of a failure in the static syntactical validity of the program, but of a failure or mismatch between the program and its data - a domain failure. One way to view $(A\ \&\ !A)$ (and other $\mu$) is that they are bad data, data that cannot exist in the ``Universe'' of set (or logical, or algebraic, or computational) operations we are considering.

This is an intriguing viewpoint, but (of course) it is not original here - Jaynes points out that many of the difficulties with Aristotelian reasoning involving the Law of Contradiction and the Law of Excluded Middle (a.k.a. Western dualism), like the ones that I'm ranting about in this book, come about because one attempts to reason in the incorrect domain. To be more precise, he asserts that multivalued logical systems (systems with more than e.g. True exclusive or False, $A$ exclusive or $!A$, and so on) can always be reduced to two valued logic on a larger domain.

I'm not completely certain that I agree. If what he fundamentally means is that the truth values in e.g. a logical system based on true, false, unprovable, and contradictory (four possible values, all of which appear like they'd be very useful in analyzing a Gödelian proposition) can be mapped into a two bit binary system (or equivalently, that the base does not matter when you do arithmetic) then sure, I'd never argue with that. On the other hand, this alone does not make it a ``dualistic'' system, as not true does not automatically imply false - it just means what it says, not (proven to be) true. Also, although there is still a dualistic ``partitioning'' possible - true and not true, where not true includes false, unprovable and contradictory - the truth tables for any given partitioning seem like they'd be very different.
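To make the two-bit mapping concrete, here is a sketch of mine (using Belnap-style four-valued logic, which is not something Jaynes or the text introduces): encode each of the four values as a pair of ordinary Boolean bits - ``evidence for'' and ``evidence against'' - and the four-valued conjunction reduces to plain two-valued AND and OR on those bits:

\begin{verbatim}
from itertools import product

# Each value is a pair (t, f): evidence for, evidence against.
# T = true, F = false, B = both (contradictory), N = neither (unprovable).
VALUES = {"T": (1, 0), "F": (0, 1), "B": (1, 1), "N": (0, 0)}
NAMES = {v: k for k, v in VALUES.items()}

def conj(a, b):
    """Belnap conjunction: AND the 'for' bits, OR the 'against' bits."""
    return (a[0] & b[0], a[1] | b[1])

for x, y in product(VALUES, repeat=2):
    print(x, "&", y, "=", NAMES[conj(VALUES[x], VALUES[y])])
\end{verbatim}

Note, as the text suggests, how different this table is from the two-valued one: $B\ \&\ N = F$, for instance, even though neither operand is $F$.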

Jaynes also points out (doubtless correctly, as he was a careful sort of guy) that most of the candidate non-Aristotelian (multiple-valued) systems of reasoning are internally inconsistent or embody logical ``fallacies'' (logicspeak for ``bad logic''), and even argues that predicate logic (the logic of human language involving categorical propositions) can be rewritten using the ordinary expressions of algebra and mathematics - that it is really nothing but set theory analyzed with good old boolean/Aristotelian principles, in disguise.

However, there are two fairly clear examples where it is not at all obvious that this is true, and both are well worth mentioning here. The first is known as Intuitionism. Intuitionism basically rejects the notion that ``$A$ is true'' means anything other than ``$A$ can be proven''. For some $A$, the fact that it cannot be proven does not mean that $!A$ is proven. In this logical/mathematical system, then, the Law of Excluded Middle is rejected as an axiom. Nevertheless, Intuitionism (as a form of mathematical constructivism) appears to be a consistent form of non-Aristotelian logic, one in which one cannot use reductio ad absurdum - assuming $!p$ and showing that it leads to a contradiction, then concluding that $p$ must be proven. It is very Gödelian in this respect - a proposition might well be true (or false) but unprovable.
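The asymmetry is easy to exhibit in a modern proof assistant. A minimal Lean 4 sketch (my own illustration, not from the text): concluding $!p$ from ``$p$ leads to absurdity'' is constructively valid, while recovering $p$ from $!!p$ requires explicitly invoking a classical axiom:

\begin{verbatim}
-- Constructively valid: a proof that p yields absurdity *is* a proof
-- of ¬p, because ¬p is defined as (p → False).
theorem neg_intro (p : Prop) (h : p → False) : ¬p := h

-- Not constructively derivable: eliminating the double negation needs
-- the classical axiom (equivalent to the Law of Excluded Middle).
theorem dne (p : Prop) (h : ¬¬p) : p := Classical.byContradiction h
\end{verbatim}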

In real human affairs this is not an empty point - it is, in fact, part of the basis of the United States Constitution and common law. Just because I (or ``society'' or ``the district attorney'') cannot prove that John did not rob the bank does not mean that I have proven that John did rob the bank.

Even if it cannot be proven by anybody - possibly because there were no witnesses, John left no clues, and John himself is dead - it doesn't seem to follow that John is guilty because his innocence cannot be proven. Consequently we require a direct proof that John is guilty, and feel more than a bit uncomfortable with ``circumstantial'' evidence of his guilt or statements like ``John must have done it because nobody else could have done it''. Does that really mean anything more than that this particular District Attorney cannot imagine how anyone else might have done it, or cannot prove that anybody else did it and wants to pick on John? Both are logical fallacies again, even in Aristotelian logic. Intuitionism carries the avoidance of such fallacies to the level of the rules of the logic itself so there can be no mistake - one has to prove or disprove $A$ separately and independently of $!A$, and certain rules of inference are thereby formally altered.

There is a bit of an empirical cast to this example drawn from human affairs (instead of mathematics per se), which is fine with me, as the primary theme of this book is Hume's (not Aristotle's) empirical statement of knowledge: that our sensory stream is all that we ever know of the Universe. Consequently Aristotle's actually rather absurd pronouncement, offered as a basis for all knowledge, that for a thing to be known it must be provable by reason is - wait for it - absurd! It cannot even be used as a logical basis for concluding that we know nothing. As we shall see and beat half to death in the following chapters, although we cannot prove a single conclusion about the nature of that which we observe in our sensory stream using reason alone, we cannot deny that the sensory stream itself is known.

Jaynes acknowledges this in an indirect way, as his entire first chapter is devoted to the notion of ``plausibility'' - basically an axiomatic development of the theory of probability that gives us plausible grounds for concluding that John did rob the bank without the strict requirement of absolute logical rational proof. This too seems quite plausible to me - it describes the way real humans reason. However, it doesn't seem to properly appreciate the non-Aristotelian nature of Hume's empirical statement, and how it is a statement that describes an observation (not an argument) that any of us can make at any time, the fundamental truth of which cannot be doubted although it can never be rationally proven. To move beyond this requires axioms, and axioms cannot be proven rationally either.

Jaynes's discussion of plausibility is a very good thing, because this entire work is devoted to showing that axioms are the basis of all that we think that we know, in such a way that it cannot plausibly be doubted (although, as the proposition itself states, it cannot be proven). ``Plausibility'' seems to be a better description of human knowledge. We know nothing for certain about anything but our instantaneous existential sensory stream. This is true enough, but useless, and (as Hume himself observed) nobody lives that way - we live instead as if the Universe that we imagine and infer is a plausible truth based on some set of axioms that are then more or less logically developed.

As The Matrix movie series so aptly and convincingly demonstrated, the price we pay for the logical certainty of mathematical and logical reasoning is that it cannot permit any certain conclusion to be mathematically or logically drawn about Reality. Our sensory stream could in principle not reflect an actual external Reality at all, or it could reflect an actual external Reality but be mistaken in every respect as to its true nature. Indeed a great deal of our creative energy as a culture is devoted to making up or experiencing Realities that are superimposed ``weakly'' on our minds through our sensory streams - dreams, interior monologues, fantasies, hallucinations, movies, music, books, theater, role playing games - all mental experiences that nevertheless communicate to us a sensory simulation of a Reality via various means that generate (externally or internally) sensory impressions ``like'' those that our presumed Reality itself generates.

The Matrix and its literary predecessors - The Joy Makers by James Gunn, various other works by William Gibson, and others - are science fiction stories that speculate that eventually (due to advances in technology) it will become possible to make the simulation sufficiently precise, via direct neural stimulation directed by extremely powerful computers, that humans will be unable to tell the difference between the ``fiction'' being presented to them electronically and ``reality''. At this point the layers of unknowable abstraction between reality as a sensory stream and the sets of symbols that might be used as an objective basis for that stream become obvious - the experience of a thing does not imply its existence. This is not a purely theoretical argument, as the same thing happens all the time in our presumably real world. The use of certain psychoactive drugs, neurophysiological or neurochemical trauma, or plain old psychoses can all create significant deviations between what one experiences and an objective reality, sometimes transiently and sometimes permanently.

The second (and probably best known) example of a non-Aristotelian theory of reason (such as it is) is Alfred Korzybski's Science and Sanity and the umpteen derived works by the collective of philosophers and thinkers who are members of the Institute of General Semantics (http://www.general-semantics.org). This Institute, founded by Korzybski, promulgates a semantic overview of reason that in certain important respects resembles the viewpoint being advanced in this work. It is touted as being a non-Aristotelian (and non-Newtonian and non-Euclidean) system of reasoning.

There is a simple mantra for its primary axiom. ``The map is not the territory''. This means that the word for something is not the thing itself. Words are multivalent and categorical, things are unique; therefore ``whatever you say that a thing is, it is not''. Ouch! Seems like this approach has something to say about our efforts to chop up uncountably infinite sets like the real numbers into subsets (where the chopping can be done an infinite number of ways that cannot be specified by any compressed representation - one with less information in it than the points in the set).

However, General Semantics doesn't seem to go this way as information theory and random numbers aren't their thing. Also, this is pretty difficult to work through and this isn't a math text. To make this concrete and understandable, let us return to The Universe of Fruit (as a set theory).

If I wish to sort Fruit into boxes, apples in this one, oranges there, pineapples over there, I have to pick up a piece of fruit and make a decision about it. Unfortunately, every piece of fruit I pick up is unique! This piece that I pick up is a complex assortment of objects - sugar, starch, cellulose molecules, pigments in the skin, antioxidants, toxins, water, alcohol, and more - that are themselves complex assortments of objects - protons and neutrons and electrons, gluons and photons, quarks - in a complex and dynamic relationship that in the not distant past was definitely not a fruit and in the not distant future will definitely not be a fruit, but for a brief interval in time has come together into what we call ``a fruit''. It has other coordinates and properties that contribute to this categorization - its origin, its size, its genetic encoding - many of which are examples of still ``higher order'' structure. This piece of fruit is not only unique; every instant of its existence is independently unique.

So where do I get off calling it ``an apple'' and tossing it carelessly into the set-theoretic box at my feet? Even if it was an apple as it left my hands, quantum mechanics and thermodynamics conspire to more or less guarantee that it might not be an apple by the time that it lands, that it damn well won't be an apple by any standard at all after I eat it and excrete it, and that a year earlier my ``apple'' consisted of a myriad of elementary particle world lines that were gradually being carried from an initially random state into the highly organized and extremely transient state that is different from that of every other object on the planet that has ever been named apple.

Repeat ad nauseam (for every unique object ever given a name, since by your argument you aren't permitted to make general arguments because the things they apply to are dynamic and unique) and you too are now a Master of General Semantics!

Without disagreeing with a word of this (and in fact, duplicating some of the underlying reasoning here and there throughout this work for an entirely different purpose) it isn't, really, terribly relevant to logical systems and systems of knowledge. That is because my knowledge of the apple, unextended by the axioms of science and inference and inductive reasoning, unenlightened by language and categorization and analysis of structure, is appallingly shallow - it is limited to my instantaneous sensory stream in which sensory impressions that may or may not represent ``the apple'' occur. We are left with a profound paradox, not of the logical sort but of the experiential and rational sort.

If I see the Universe free of all categories, with every single sensory impression in my sensory stream being unique and disconnected from any sort of memory of previous impressions, with no inferred logical relations between any two parts of the instantaneous stream or the memory of the stream at different times, then there is no reason at all. The term doesn't apply. Experiencing ``life'' in this manner is entirely passive, and reason leads one to no conclusions whatsoever. There is no language, as language has no point, no mathematics because mathematics is not a sensory impression, there is arguably no consciousness, because consciousness itself seems (in my own mind at least) to involve a complicated feedback process involving the immediate memory of the past. Really treating each moment as a unique and disconnected experience of sensory data is like trying to comprehend a movie as a huge pile of its individual frames, each cut out and flashed up on a screen in a random, disconnected order, while taking drugs that interfere with the formation of long term memory. Chaos is the only word for it.

If I on the other hand use language and symbology, if I create imperfect maps, if I invent sets and methodologies for sorting out objects into sets and deducing empirical relationships between the sets, if I admit time, and memory, and causality, and physics, and all the other forms of science, then I'm bound to make mistakes because I casually reason with imperfectly formed rules for specifying sets - I throw an orange in with the apples, literally or metaphorically, or what I thought were parsley greens turn out to be aconite. It means that I'm bound to get all tied up in Gödelian knots when I try to reason about notions such as ``all men that don't shave themselves'', as this becomes a sort of category error - the error of putting anything into a category and trying to reason about it, especially a self-referential one.

Now, I personally am not absolutely certain that General Semantics qualifies as an actual system of non-Aristotelian reasoning, but it claims to be one, and there are well-reasoned analyses of the idea that sorting any sort of real objects (whatever those might be) properly into sets requires an ever-growing set of axioms, with a whole set associated with each object that one adds to a given set. For example, I need an axiom that says something like ``This particular sensory impression that I'm having, that my presumed memory and understanding inform me is made up of a myriad of tiny whirling particles interacting with invisible potentials to create a wave-like ensemble that represents its collective `state', interacting with a still more unknown outside Universe that constantly makes small interactive changes with the ensemble, will at the moment be presumed to be `an apple' because it has the following inferred properties...''

Only nobody ever says that. They implicitly use broader axioms and live (or die) with the inconsistencies, if any, that result. So sure, it is all nicely pointed out. So what? It seems to miss the two main points: first, that it isn't all about the identity of apples, it is about axioms, because without them we don't even have the ability to talk about not being able to call an apple an apple just because it is a unique apple and not exactly like any other apple; and second, that ultimately all the non-Aristotelian systems of reason are linked to their axioms every bit as much as Aristotelian ones are.

The issue isn't ``dualistic reasoning systems are always bad and multivalued ones are always good'' as dualistic logical reasoning systems work incredibly well once you've made the right axiomatic assumptions about the systems you're applying the logic to. It is ``you have to make a set of axiomatic assumptions as a necessary prior step to doing any sort of reasoning at all.'' Some of these axiomatic assumptions will lead to systems that work pretty well as a ``symbolic representation'' of the time-ordered, memory looped sensory impression that we call our ``awareness of external Reality''. Others will not. Some may be Aristotelian and work. Some may be Aristotelian and not work. Some may be non-Aristotelian and work or not work.

We are attempting the impossible, or asserting the impossible, or including the inconceivable in our language or symbology. We need either to restrict the range or domain of certain operations - to exclude metaphorical division by zero, undefined inverse sines, and the possibility of infinite non-convergent loops - or to reconsider the meaning of that which we are trying to compute, or prove, or discuss altogether. Perhaps the null set is just a symbolic indication that our Universe isn't well matched with our logical system and ``program''.

However, algebraically introducing these very simple operational definitions (not axioms) for a NaE or null set into a naive existential set theory very naturally eliminates all of the Cantor, Barber or Russell paradoxes, as the result of the operations proposed or requested is undefined, or NaE, or restricted away through closure - the paradoxes do not exist within the Universal set being considered. This is precisely what happens in computer science, where the computer gives you an error message that says, basically, ``This is a null result, you idiot; fix your code and try again...'' OK, maybe it leaves off the ``you idiot'' part, but I always imagine it anyway when debugging my code.

Hopefully by now we have clarified a longstanding puzzle, or word-trap, in the Laws of Thought. The ``state'' of non-being is not a state. The set of all $a$ that are not a set, where $a$ is permitted to be any object in the Universe $S$ including the empty set, is not a set (not even the empty set) in $S$; it is $\mu$. When the Laws of Thought refer to ``things'' not being, this is ``the set of all things that are not in the set algebra'', which is just as self-contradictory as $(A\ \&\ !A)$. With this in mind, let us reexamine the Laws of Thought, first as they are written above, then as they are usually applied as the basis for Logic.

The Law of Identity is perfectly understandable as the first grouping of set definitions above. All objects in the set theory live in their own private ``identity set'' which is disjoint from the rest of the objects in the set Universe. The entire set theory is made up of the union of all of these objects. These are the objects that ``have being'', where the empty set is considered to be a set object that ``has being'' within the set Universe.
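In the set notation used later in this section (this restatement is my own gloss, not a new axiom), the Law of Identity reads:

\begin{displaymath}
S = I_{\O} \bigcup I_a \bigcup I_b \ldots \quad {\rm with\ } I_a \bigcap I_b = \O {\rm\ for\ } a \neq b
\end{displaymath}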

The Law of Contradiction, which ordinary logic treats as if it involves the empty set (where it would be a vacuous result, as the empty set cannot contain ``things''), is now seen to refer to the null set. In fact, it is the statement:

\begin{displaymath}
\forall a \in S: I_a \bigcap \mu = \mu
\end{displaymath} (7.9)

where as usual, $a$ can be the empty object or any non-empty object and $\mu$ is now interpreted as the ``set of things that do not exist in the set theory''. The empty set is in the set theory, where it performs the mundane task of defining the intersection of disjoint sets within the theory; $\mu$ is not in the theory.

The Law of Contradiction as stated is not a statement about applying the ``not'' operation to sets within the theory as if it is implicitly followed by a set descriptor. It is not (for the set of all fruit) that something must ``be an apple'' or ``not be an apple''. It is (for the set of all fruit) that something must be a fruit or not be (in the set). If our entire Universe is fruit, non-fruit is null, not empty, because we defined the empty set to be in the set. We cannot then talk about cars as if they are ``non-fruit'', so that it is possible for some ``thing'' to be not a fruit - not even the candy-apple red T-Bird that I'll purchase with the vast profits from writing this book if they turn out to be vast enough - without implicitly creating a larger Universe for our set theory!
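Here is a toy Python sketch (the class and its names are my own invention, not notation from the text) of the distinction being drawn: operations whose operands lie outside the Universe of Fruit return a NULL sentinel - the $\mu$ of this chapter - that is distinct from the empty set and contaminates everything downstream, exactly as NaN does:

\begin{verbatim}
class _Null:
    def __repr__(self):
        return "NULL"

NULL = _Null()   # mu: "not a set in this Universe"; NOT frozenset()

class Universe:
    def __init__(self, *members):
        self.members = frozenset(members)

    def contains(self, s):
        return s is not NULL and s <= self.members

    def intersect(self, a, b):
        if a is NULL or b is NULL:
            return NULL        # mu contaminates anything it touches
        if not (self.contains(a) and self.contains(b)):
            return NULL        # operands from outside the Universe
        return a & b           # may legitimately be the empty set

fruit = Universe("apple", "orange", "pineapple")
apples, oranges = frozenset({"apple"}), frozenset({"orange"})
cars = frozenset({"T-Bird"})

print(fruit.intersect(apples, oranges))  # frozenset() - empty, but in S
print(fruit.intersect(apples, cars))     # NULL - cars are not fruit
print(fruit.intersect(NULL, apples))     # NULL - mu propagates
\end{verbatim}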

In mathematics this sort of thing is pretty obvious. In number theory, for example, Fermat's Last Theorem (one of my favorite propositions) states that $a^n + b^n = c^n$ has no solutions for integers $a>0,b>0,c>0$ for $n>2$. Of course it is trivial to find solutions for any $a>0,b>0,n>2$ if one relaxes the requirement that $a, b, c$ be integers. A whole different ballgame results if we let the numbers be negative (integer or not) and let $n$ be a real number - we are forced to conclude that solutions exist, but they might be complex. Yet another class of results follows if $n$ can be complex.
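A quick numerical sketch (mine, purely illustrative) of how the answer changes with the domain:

\begin{verbatim}
import cmath

a, b, n = 3.0, 4.0, 3
c = (a**n + b**n) ** (1.0 / n)   # fine once c need not be an integer
print(c, c**n)                   # c = 4.4979..., and c**n recovers 91.0

# Negative bases with a real (non-integer) exponent push us into the
# complex plane: here is a complex "2.5-th root" of -2.
z = cmath.exp(cmath.log(-2.0) / 2.5)
print(z, z**2.5)                 # z**2.5 is (approximately) -2 + 0j
\end{verbatim}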

So when Fermat asserted that this equation has no solutions, that solutions do not exist, he meant that they do not exist within the Universe of positive integers. The existence (or non-existence) of solutions within a different set of numbers - the rational numbers, the real numbers, the complex numbers - is irrelevant. They are $\mu$.

This is a much cleaner formulation. Rather than stating that nothing can be and not be (leaving the exact meaning of ``something'', ``be'' and ``not be'' ambiguous and embedded in the dynamics of being, with its implicit past and future tenses), we instead create a static observation in set theory: the ``set of things that do not exist within the Universe $S$'' (the null set) has no intersection with the ``set of things within the Universe $S$'', including the empty set - the intersection is not just empty, it does not exist within the Universe $S$. The intersection of the set of imaginary unicorns with my box of apples isn't ``no apples'', it is ``you are out of your mind considering imaginary unicorns to be a kind of fruit''. This is a stronger (and more accurate) statement of ``non-being'' than the usual Law of Contradiction.

Note that we ignore all lesser statements as being trivial tautologies that would never have even been written down in the first place if one conjoined $\{\}$ to all sets or subsets algebraically at the beginning, so that the empty set is considered a ``subset'' of all sets of zero or more members. In that case $S = I_{\O} \bigcup I_a \bigcup I_b \ldots$ leads to both the Law of Contradiction and the Law of Excluded Middle as an absolutely trivial application of the usual definitions of intersection and union, but one is left with no way to deal with the inconceivable, with the self-contradictory, with $\mu$.

The Law of Excluded Middle is more troublesome to write as even an approximately algebraic result. The difficulty is that $\mu$ has no members (in the set Universe) and isn't even an empty set there. So talking about ``everything'' being either within the Universe $S$ or not within the Universe $S$ only makes sense if there is a bigger Universe $U$ in which $S$ is embedded, $S \subset U$, where an object (such as a candy-apple red T-Bird) that exists but isn't in $S$, the Universe of Fruit, might be. This is, in fact, the way the Law of Excluded Middle is usually applied - something must either be a fruit or not a fruit (in which case it must be something else). However, Russell paradox sets cannot be in such a bigger Universe. The set of all sets that do not contain themselves in $S$ doesn't exist in some larger $U$; it isn't the empty set (which contains itself utterly trivially); it is self-contradictory and doesn't exist. It is not a set, in any Universe.

Instead let me define the Law of Excluded Middle backwards. Things that exist (that is to say, everything) are not in $\mu$. Since they exist, they must be in $S$ (by hypothesis, our entire Universe):

\begin{displaymath}
{\rm if\ } a \not\in \mu {\rm\ then\ } a \in S
\end{displaymath} (7.10)

We see that Excluded Middle is basically the existential constraint on the set Universe. This is the statement that effectively eliminates all possibility of paradox, explicitly, by construction!

Thus we eliminate imaginary unicorns and red T-Birds from the World of Fruit, integer numbers from the Universe of Pocket Change, complex numbers from the real line. We also eliminate the Universal Contradiction from set relationships - the metaphorical or real division by zero, the $(A\ \&\ !A)$, the inverse sine of 2, the set of all sets that do not include themselves, the set of all Universes where a male barber shaves all males that do not shave themselves - the inconceivable - these exist within no Universes.

Now, how does one get from set theory as the Mother of all Math to real mathematics, to physics, to the good stuff? At this point we (as a species, as represented by its brightest scientists and mathematicians) have a pretty good idea how to proceed at least for mathematics and physics and the other sciences. I'll devote considerable space to discussing this in detail in later chapters and you'll have to just keep reading and trust that I'll get to it eventually.

