Title:
Some-thing from No-thing: G. Spencer-Brown’s Laws of Form.
Abstract:
G. Spencer-Brown’s Laws of Form is summarized and the philosophical
implications examined. Laws of Form is a mathematical system which deals
with the emergence of anything out of the void. It traces how a single
distinction in a void leads to the creation of space, where space is
considered at its most primitive, without dimension. This in turn leads to two
seemingly self-evident “laws”. With those laws taken as axioms, first an
arithmetic is developed, then an algebra based on the arithmetic. The algebra
is formally equivalent to Boolean algebra, though it applies equally to any 2-valued
system. By following the implications of the algebra to its logical
conclusions, self-reference emerges within the system in the guise of re-entry
into the system. Spencer-Brown interprets this re-entry as creating time in
much the same way in which distinction created space. Finally the paper
considers the question of self-reference as seen in Francisco Varela’s Principles
of Biological Autonomy, which extended Spencer-Brown’s Laws of Form
to a 3-valued system.
SOME-THING FROM NO-THING:
G. SPENCER-BROWN’S LAWS OF FORM
The knowledge of the ancients was perfect. How so? At
first, they did not yet know there were things. That is the most perfect
knowledge; nothing can be added. Next, they knew that there were things, but
they did not yet make distinctions between them. Next they made distinctions,
but they did not yet pass judgements on them. But when the judgements were
passed, the Whole was destroyed. With the destruction of the Whole, individual
bias arose.
- Chuang Tzu.
Anyone who thinks deeply enough
about anything eventually comes to wonder about nothingness, and how something
(literally some-thing) ever emerges from nothing (no-thing). A mathematician,
G. Spencer-Brown (the G is for George), made a remarkable attempt to deal with
this question with the publication of Laws of Form in 1969. He showed
how the mere act of making a distinction creates space, then developed two
“laws” that emerge ineluctably from the creation of space. Further, by
following the implications of his system to their logical conclusion,
Spencer-Brown demonstrated how not only space, but time also emerges out of the
undifferentiated world that precedes distinctions. I propose that
Spencer-Brown’s distinctions create the most elementary forms from which
anything arises out of the void, most specifically how consciousness emerges.
In this paper I will introduce his ideas in order to explore the archetypal
foundations of consciousness. I’ll gradually unfold his discoveries by first
outlining some of the history of ideas that lie behind them.
George Boole’s Laws of Thought
Pure mathematics was discovered by Boole in a work
which he called The Laws of Thought.
- Bertrand Russell.
In the 1950s, Spencer-Brown left the safe confines of his duties as a mathematician and logician at Cambridge and Oxford to work for an engineering firm that specialized in electronic circuit networks, including those needed to support the British railway system.
Networks are composed of a series of branching possibilities: left or right,
this way or that way. At each junction, a choice must be made between several
possibilities. From a mathematical perspective, a choice between multiple
branches can be reduced to a series of choices between only two possibilities.
Network design thus involved virtually the same problems as logic, where one
constructs complex combinations of propositions, each of which can be either
true or false. Because of this, the firm hoped to find in Spencer-Brown a
logician who could help them design better networks. Spencer-Brown in turn
tried to apply a branch of mathematics known as Boolean algebra to their
problem, initially to little avail, as we will see. Before we present
Spencer-Brown’s ideas, we need to know a little about the first attempt by
mathematics to deal with the problems of opposites in the mind: Boolean
Algebra.
By the mid-19th century, mathematics
was undergoing a sea-change. Where previously mathematics had been considered
the “science of magnitude or number”, mathematicians were coming to realize
that their true domain was symbol manipulation, regardless of whether those
symbols might represent numbers. In 1854, the English educator and
mathematician George Boole [1815–1864] produced the first major formal system
embodying this new view of mathematics, an astonishing work: Laws of Thought.
His ambitious purpose was no less than capturing the actual mechanics of the
human mind. In Boole’s words: “The design of the following treatise is to
investigate the fundamental laws of those operations of the mind by which
reasoning is performed; to give expression to them in the symbolical language
of a Calculus, and upon this foundation to establish the science of Logic and
construct its method” (Boole, 1854/1958, p. 1).
With some degree of hyperbole,
philosopher and logician Bertrand Russell once said that “pure mathematics was
discovered by Boole in a work which he called The Laws of Thought”
(Boyer, 1985, p. 634). Boole himself, in contrast, was not only ambitious but realistic; even in the throes of his creation, he understood that there was more to mathematics than logic, and certainly more to the mind than logic. In a
pamphlet Boole’s wife wrote about her husband’s method, she said that he told
her that when he was 17, he had a flash of insight where he realized that we
not only acquire knowledge from sensory observation but also from “the
unconscious” (Bell, 1965, pp. 446-7). In this discrimination, Boole was
amazingly modern. He was intuiting a new approach to explore the fundamental
nature of archetypal reality at its most basic level. G. Spencer-Brown was to
bring that new approach to fruition.
Algebra vs. Arithmetic
To find the arithmetic of the algebra of logic, as it
is called, is to find the constant of which the algebra is an exposition of the
variables—no more, no less. Not just to find the constant, because that would
be, in terms of arithmetic of numbers, only to find the number. But to find how
they combine, and how they relate—and that is the arithmetic.
- G. Spencer-Brown (1973).
Spencer-Brown quickly discovered
that the complexity of real world problems far exceeded those he had studied in
an academic setting. He started out using traditional Boolean algebra, but
found he needed tools not available in Boolean algebra. In essence he needed an
arithmetic, which was a problem as Boolean algebra was commonly considered the
only algebra that doesn’t have an arithmetic. Now what is the difference
between arithmetic and algebra? Put most simply, arithmetic deals with
constants (the familiar numbers 1, 2, 3,…for the arithmetic we all grew up
learning to use), while algebra deals with variables. Again, if you cast your
mind back to the algebra you may have taken in junior high school, high school
or college, variables are simply symbols which can stand for unknown constants.
That is, an X or a Y or a Z might represent any number at all in an equation.
Boole had formed his logical algebra
by close analogy to the normal algebra of numbers, using the normal symbols for
addition, subtraction and multiplication, but giving them special meanings for
logical relationships. In his “algebra”, the equivalent of the numbers was simply the pair of conditions “true” and “false”. Just as the solution to an equation in
normal algebra is a number, the solution to an equation in Boolean algebra is
either “true” or “false”.
Boole’s decision to make his algebra almost exactly parallel to numerical algebra (in the symbolic form in which it was normally presented) made it easier for later mathematicians to understand and accept (though, as is unfortunately all too common, that had to await his death). But the symbol system most usual for numeric algebra isn’t
necessarily the best for logical algebra. In practice, complicated logical
statements lead to complicated Boolean equations which are difficult to
disentangle in order to determine whether or not they are true. And the absence
of an arithmetic underlying the algebra meant that one could never drop down
into arithmetic to solve a complex algebraic problem.
Since computers and other networks
deal with just such binary situations—yes or no, left or right, up or down—it
was natural to look to Boolean algebra for answers to network problems. But
because Boolean algebra had developed without an underlying arithmetic, it was
exceptionally difficult to find ways to deal with the problems.
Spencer-Brown was thus forced to work backwards, developing an arithmetic for Boolean algebra simply to have better tools
with which to work. As with so many of the hardest problems encountered in
mathematics, what he really needed was an easily manipulable symbol system for
formulating problems. Mathematicians had grown so used to Boole’s system, which
was developed as a variation on the normal algebra of numbers, that it never
occurred to them that a more elegant symbolism might be possible. What
Spencer-Brown finally developed, after much experimentation over time, is
seemingly the most basic symbol system possible, involving only the void and a
distinction in the void.
The Emergence of Some-thing from No-thing
Nothing is the same as fullness. In the endless state
fullness is the same as emptiness. The Nothing is both empty and full. One may
just as well state some other thing about the Nothing, namely that it is white
or that it is black or that it exists or that it exists not. That which is
endless and eternal has no qualities, because it has all qualities.
- C. G. Jung (1920/1983).
Try to imagine nothingness. Perhaps
you envision a great white expanse. But then you have to take away the quality
of white. Or perhaps you think of the vacuum of space. But first you have to
take away space itself. Whatever the void is, it has no definition, no
differentiation, no distinction. When all is the same, when all is one, there
is no-thing, nothing. Paradoxically, in Jung’s words: “nothing is the same as fullness.”
Now make a mark, a distinction,
within this void. As soon as that happens, there is a polarity. Where before
there was only a void, a no-thing, now there is the distinction (the mark) and
that which is not the distinction. Now we can speak of “nothing” as some-thing,
since it is defined by being other than the distinction.
Don’t throw up your hands in despair
at trying to understand the abstract nature of all this. Let’s bring it down to
earth with an example. For our void, our nothingness, imagine a flat sheet of
paper. Let’s imagine that it has no edges, that it keeps extending forever. In
mathematics this is called the plane. Of course, this infinitely extended piece
of paper isn’t really nothing, but it is undifferentiated—every part of it is
the same as every other part. So it can at least be a representation of
nothing. Now draw a circle in it, as below. You’ll have to imagine also that
this circle has no thickness at all. It simply separates two different states,
which we would normally think of as “inside” and “outside.” Following
Spencer-Brown’s terminology, we’ll call this the “first distinction.”
Where
before there was no-thing, drawing the circle creates two things: an inside and
an outside (of course, we could just as readily call the outside the inside and
vice versa. The names are arbitrary.) Let that which is enclosed be considered
the distinction, the mark, and what is outside “not the mark” (remember, the
circle has no thickness whatsoever). Now, of course, any distinction whatsoever
would do. Any difference one could make which would divide a unitary world into
two things would be a proper distinction. Freudians like to point to an infant’s
discovery that the breast is separate from itself as the first distinction that
leads to consciousness. For many early cultures, the first mythological
distinction was the separation of land and sea, or light from darkness. In
Jungian work, one first draws a circle, a mandala in potentia, into
which one projects emerging distinctions in one’s personality and
consciousness. But there are infinitely many distinctions possible within the
world.
Now let us flesh out this space we have created and discover its laws. Start by drawing a second circle beside the one we’ve already drawn. Imagine you are blind and wandering around this plane. You bump up against one
of the circles and pass inside. After wandering around inside a while, you come
up against the edge of the circle again and pass outside. Wandering some more,
you encounter the edge of the second circle and again pass inside, then later
outside. Is there any way you could possibly know that there were two circles,
not one? How could you know whether you had gone into one of the circles twice
or into both circles once? All you could know was that you had encountered what
you regarded as an inside and an outside. Hence for all practical purposes, two
distinctions (or three, or a million) of the same nature are
the same as one. Nothing (remember literally no-thing)
has been added. Spencer-Brown calls this the law of condensation; i.e.,
multiple distinctions of the same sort simply condense into a single
distinction.
Are there any other laws to be found in this strange two-state space? Bear with me; there is only one other
situation to consider. Let’s go back to our original circle, the “first
distinction.” Let us draw a second circle, but this time draw it around the
first, creating nested circles.
Once more imagine you are blind, wandering around the
plane. You encounter the edge of a circle and pass within, thus distinguishing
what you consider to be inside and outside. Once inside, you wander some more,
then again you encounter the edge of a circle and pass outside. Or did you?
Perhaps the edge you encountered was the edge of the inner circle and you
passed within it. You are not able to distinguish between the inside of the
inner circle and the outside of both circles. (I hate to keep reminding you that our circles have no thickness at all; they merely divide the world into two states.) In such a world, two insides make one outside.
Let’s
assume that the outer circle stretches farther and farther away from the inner
circle until you are no longer aware that it even exists. As far as you are
concerned there is only the single circle through which you pass inside or
outside. But a godlike observer who could see the whole plane would realize
that when you passed inside the inner circle, you were actually reentering the
space outside the outer circle. It all depends on how privileged your perspective is. Nested distinctions erase distinction; Spencer-Brown refers to this principle as the law of cancellation.
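For readers who like to see such things made concrete, here is a minimal sketch in Python (my own illustration, not Spencer-Brown’s notation) of the blind wanderer’s situation. All the wanderer can keep track of is a single two-state flag that flips at every boundary crossing, and both laws fall out of that simple parity.

    def wander(crossings):
        """Return the state the blind wanderer perceives after some number of boundary crossings."""
        inside = False                  # the wanderer starts in the unmarked state
        for _ in range(crossings):
            inside = not inside         # every crossing flips inside/outside
        return "inside" if inside else "outside"

    # Condensation: wandering into and out of two separate circles (4 crossings)
    # is indistinguishable from doing so with one circle (2 crossings).
    print(wander(2), wander(4))         # outside outside

    # Cancellation: crossing two nested boundaries inward (2 crossings) reads
    # the same as crossing in and straight back out: "two insides make one outside."
    print(wander(2))                    # outside

Nothing in the flag records how many boundaries were crossed, only the parity of the crossings, which is all a two-valued world can register.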
These two laws govern all two-valued
worlds. We recognize that the tension between conscious and unconscious is as
old as life itself. Even the simplest one-celled creature has to distinguish
between food, which it wants to eat, and danger, from which it needs to flee.
It is forced to make a Spencer-Brown distinction, to take one or the other of
two paths. Life began by first developing the skill to make distinctions, to
create boundaries, at the molecular level. Evolution progresses by making ever
more complex distinctions until the emergence of consciousness itself. From the
extension of Spencer-Brown’s perspective that we are presenting here, we could
say that consciousness itself is the progressive emergence of a
self-reflective, recursive cycle of ever more subtle distinctions. Mathematician
Norbert Wiener invented the term “cybernetics” to investigate the
self-reflective, informational dynamics of such distinctions. And consciousness
emerges ineluctably from the process of making distinctions.
Laws of Form
Although all forms, and thus all universes, are
possible, and any particular form is mutable, it becomes evident that the laws
relating such forms are the same in any universe.
- G. Spencer-Brown (1979, p. xxix).
These two laws are the only ones
possible within the space created by a distinction. No matter how many
distinctions we choose to make, they simply become combinations of paired or
nested distinctions.
These almost transparently obvious laws are all that Spencer-Brown
needed to develop first his full arithmetic, then his algebra. In proper
mathematical form, they are presented as axioms from which all else will be
derived, but there is something unique going on here. In a formal mathematical system, axioms are not themselves open to examination. Axioms are considered primitive assumptions beyond questions of truth or falsity. The remainder of a
system is then developed formally from these primitives. In contrast,
Spencer-Brown’s axioms seem to be indisputable conclusions about the deepest
archetypal nature of reality. They formally express the little we can say about
something and nothing.
This is one of several reasons why
Spencer-Brown’s Laws of Form has been either reviled or worshiped.
Mathematicians are deeply suspicious of any attempt to assert that axioms might
actually be assertions about reality, and with good reason. For over two
thousand years, the greatest minds believed that Euclid’s geometry was not only
a logically complete system, but one that could be checked by reference to
physical reality itself. Only with the development of non-Euclidean geometries
in the nineteenth century did it become apparent that Euclid’s axioms might be
merely arbitrary assumptions, and that a different set of assumptions could
lead to an equally complete and consistent geometry.
Once bitten, twice
shy—mathematicians became much more concerned with abstraction and formality.
They separated what they knew in their mathematical world from what scientists
asserted about the physical world. Mathematics was supposed to be the science
which dealt with the formal rules for manipulating meaningless signs.
Spencer-Brown’s attempt to develop axioms that asserted something important
about reality definitely went against the grain of modern mathematics.
The Dynamics of Spencer-Brown’s Archetypal Distinctions
Let’s consider the elegant symbol system Spencer-Brown used to express and manipulate distinctions. Instead of our example of a circle in a plane, let the distinction be represented by a mark drawn like the top and right-hand side of a square, which Spencer-Brown calls the cross.
Our two laws then become: a mark written beside a mark condenses to a single mark (condensation), and a mark written within a mark cancels, leaving no mark at all (cancellation).
Using only those two laws, the most complex combinations of marks can be reduced either to a mark or to no mark. Try using the two laws yourself on any combination of marks you care to write down; it will always come out as a mark or as nothing.
These two laws are the full and complete set of rules
for Spencer-Brown’s arithmetic. As we have already stressed, it’s a very
strange arithmetic in which the constants, comparable to 1, 2, 3, . . . in normal arithmetic, are simply the mark and the non-mark.
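A brief sketch may make the arithmetic concrete. In the Python fragment below (my own encoding, not Spencer-Brown’s), an expression is a tuple of crosses, each cross itself a tuple holding whatever stands inside it, and the empty tuple is the void. The function applies condensation and cancellation recursively, so any expression evaluates to either a single mark or nothing.

    VOID = ()          # the unmarked state: no crosses at all
    MARK = ((),)       # a single empty cross standing in the space

    def reduce_expr(expr):
        """Reduce a juxtaposition of crosses to MARK or VOID using the two laws."""
        for cross in expr:
            if reduce_expr(cross) == VOID:   # the cross contains nothing (or what cancels to nothing)...
                return MARK                  # ...so it is a mark, and condensation makes the whole a mark
        return VOID                          # every cross contained a mark, so each cancelled away

    print(reduce_expr(MARK + MARK) == MARK)  # condensation: mark beside mark = mark
    print(reduce_expr((MARK,)) == VOID)      # cancellation: mark within mark = void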
Though
any combination of marks, no matter how complex, can be reduced using this
simple arithmetic, Spencer-Brown found it useful to extend the arithmetic to an
algebra by allowing variables; i.e., alphabetic characters that stand for combinations
of marks. For example, the letters p or q or r might each stand for some
complex combination of marks. He then developed theorems involving combinations
of marks and variables which would be true no matter what the variable might
be. Since his whole point was to develop the arithmetic which underlay Boolean
algebra, of course the algebra he developed was equivalent to Boolean algebra.
But, as he points out, the great advantage is that since his arithmetic was
totally indifferent to what two-valued system it was applied to, the resulting
algebra is equally indifferent to its application. It can certainly be
interpreted as a Boolean algebra, but it can equally well be interpreted as an
algebra of network design, or any other two-valued system, a point which has
been either ignored or dismissed by critics.
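One common way to see the equivalence (a sketch of one standard reading, not the only one possible) is to take the unmarked state as false, the mark as true, juxtaposition as “or”, and enclosure within a cross as “not”. Under that reading the two laws become familiar logical facts, and ordinary connectives can be built from the cross alone:

    MARK, VOID = True, False

    def cross(*contents):
        """Enclose a juxtaposition of values in a cross: not (a or b or ...)."""
        return not any(contents)

    # The two laws under this reading:
    print((MARK or MARK) == MARK)        # condensation: mark beside mark = mark
    print(cross(MARK) == VOID)           # cancellation: mark within mark = void

    # Implication p -> q is read as the juxtaposition of cross(p) with q:
    for p in (VOID, MARK):
        for q in (VOID, MARK):
            print(p, q, cross(p) or q)   # false only when p is true and q is false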
Self-Reference, Imaginary Numbers, and Time
Space is what would be if there could be a
distinction. Time is what would be if there could be oscillation.
- G. Spencer-Brown (1973).
Spencer-Brown’s Laws of Form are an
examination of what happens when a distinction is made, when something emerges
from the unconscious into consciousness. Hopefully, the first of
Spencer-Brown’s two rather oracular statements above now makes sense. We have
seen how space emerges from the mere fact of making a distinction.
Neuro-biologist and cybernetics expert Francisco Varela has called the latter,
the creation of time, “in my opinion,
one of his most outstanding contributions” (1979, p. 138). Let’s see if we can
bring equal sense to it.
In solving many of the complex
network problems, Spencer-Brown (and his brother, who worked with him) used a
further mathematical trick which he avoided mentioning to his superiors, since
he couldn’t then justify its use. He had been working with his new techniques
for over six years and was in the process of writing the book that became Laws
of Form when it finally hit him that he had made use of the equivalent of
imaginary numbers within his system.
Imaginary numbers evolved in mathematics
because mathematicians kept running into equations where the only solution
involved something seemingly impossible: the square root of -1 (symbolized by √-1). If you will recall from your school days, squaring a number simply
means multiplying it by itself. Taking the square root means the opposite. For
example, the square of 5 is 25; inversely, the square root of 25 is 5. But we’ve ignored whether a number is positive or negative. Multiplying a positive number by a positive number yields a positive number; but multiplying a negative number by a negative number also yields a positive number. So the square
root of 25 might be either +5 or -5. But what then could the square root of a
negative number mean?
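The point is easy to check with a few lines of Python (an illustration of the arithmetic just described, nothing more): squaring discards the sign, so no real number, positive or negative, can square to -1.

    import math, cmath

    print((+5) ** 2, (-5) ** 2)    # 25 25 -- squaring loses the sign
    print(math.sqrt(25))           # 5.0 -- the positive square root

    # math.sqrt(-1) raises a ValueError: no real number squares to -1.
    # The complex square root does exist, and it is the imaginary unit i:
    print(cmath.sqrt(-1))          # 1j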
This was so puzzling to
mathematicians that they simply pretended such a thing could not happen. This
wasn’t the first time they had done this. Initially negative numbers were
viewed with the same uneasiness. The same thing happened with irrational
numbers such as the square root of 2 (an irrational number cannot be expressed
as the ratio of two integers). Finally, in the 16th century, an
Italian mathematician named Cardan had the temerity to use the square root of a
negative number as a solution for an equation. He quickly excused himself by
saying that, of course, such numbers could only be “imaginary.” The name stuck
as more and more mathematicians found the technique useful, and the symbol
for √-1 became i (short for imaginary).
Spencer-Brown had come up with an equivalent situation in solving network problems. Instead of the square root of a negative number, he found equations in which a variable was forced to refer to itself, re-entering its own expression. Remember that such a variable, call it f, has to stand for some combination of marks that ultimately reduces to either a mark or no mark. With the simpler of the two re-entrant equations he met there is no problem: it works equally well whether we substitute a mark or no mark for f. But in the second equation, if we assume that f equals the mark, the equation tells us that f equals no mark; similarly, if f equals no mark, then f equals the mark. That is, if the value of the function is a mark, then it’s not a mark; if the value is not a mark, then it is a mark. Just as with imaginary numbers, we are dealing with an impossibility, in this case caused by self-reference.
Spencer-Brown simply made use of these impossible
numbers in his calculations without understanding what they meant. With the
realization that these were equivalent to imaginary numbers, he not only
understood what they represented, but had an insight into how imaginary numbers
could be interpreted as well: both imaginary numbers and his self-referential
functions were “oscillations” in and out of the normal system. Let’s pause and
make that very clear. In the system created by Spencer-Brown’s Laws of Form,
there are only two possible solutions to an equation: the mark and no-mark. Yet
these self-referential equations have a third solution, one that
oscillates in and out of the system: first the solution is the mark, then it’s
not the mark, and so forth endlessly. Since this solution cannot be found
within the space created by the system, it has to be a movement in time.
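The oscillation is easy to exhibit. In the sketch below (my own rendering of the situation, with the cross acting as simple inversion), the troublesome equation is treated as a rule of re-entry: the value at each step is the cross of the value produced at the previous step. No constant satisfies the rule, but a sequence of values in time does.

    MARK, VOID = True, False

    def cross(x):
        """Crossing the boundary inverts the state."""
        return not x

    f = MARK
    for step in range(6):
        print(step, "mark" if f == MARK else "void")
        f = cross(f)        # re-entry: feed the value back into its own expression
    # prints mark, void, mark, void, ... -- the "third solution" is an oscillation in time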
Just as the space created by Laws of
Form has no dimensions, neither does the time created by it. You can’t refer to
it in seconds or minutes; it is more primitive than that. This concept of
dimensionless time as a resolution for problems of self-reference has become a
commonplace through the wide use of computers. Computer programmers use the
term “iteration” to describe the movement of a program from one state to
another. For example, computer programs commonly count the number of times a
sub-routine has run by adding an instruction like “n = n + 1”, then checking
the value of “n” to see if the sub-routine has run enough times. It is
understood that the “n” on the left side of the equation is a later stage than
the “n” on the right side. Time has entered the picture. But note that this
time is dimensionless. We can’t say that one “n” is a day or an hour or a
minute or a second later than the other “n”; all we know is that one state of
“n” is later than the other state. This is analogous to how we created a space
without dimension by the simple act of making a distinction.
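The counter the text describes looks like this in Python; the n on the left-hand side of the assignment names a later state of the same quantity than the n on the right, though nothing says how much later.

    n = 0
    while n < 3:          # has the sub-routine run enough times yet?
        # ... the body of the sub-routine would run here ...
        n = n + 1         # the new n is defined in terms of the old n
    print(n)              # 3 -- three dimensionless "later"s have passed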
Spencer-Brown realized that his simple but puzzling little equation brought time, in its simplest manifestation, into the timeless world of his Laws of Form. Such equations simply “oscillate” between one value and another, just as imaginary numbers provide the possibility of oscillating between values that lie first on the real number line, then off it, then on it again, and so forth.
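The parallel with the imaginary unit can itself be made concrete. Repeatedly multiplying by i steps a value off the real number line, back onto it, off again, and so on, in a four-beat cycle (a small illustration assuming nothing beyond ordinary complex arithmetic):

    z = 1 + 0j
    for k in range(8):
        place = "on the real line" if z.imag == 0 else "off the real line"
        print(k, z, place)
        z *= 1j            # 1, i, -1, -i, 1, ... an endless oscillation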
Where to Go Next
Paradox, however, lies beyond opinion. Unfortunately,
orthodox attempts to establish the orthodoxy of the orthodox result in paradox,
and, conversely, the appearance of paradox within the orthodox puts an end to
the orthodoxy of the orthodox. In other words, paradox is the apostle of
sedition in the kingdom of the orthodox.
- Richard Herbert Howe and Heinz von Foerster (1975, pp. 1-3).
The most logical way to advance past
Laws of Form is to start where it ends: with self-reference. Spencer-Brown
wisely finished his work at the point when self-reference entered the picture,
satisfied with the deep insight that self-reference introduces time. He left it
for others—non-mathematicians perhaps?—to think about the implications of what
happens when his timeless, dimensionless calculus enters the world of space,
time, and dimension in which we actually live.
A decade after the original
publication of Laws of Form in 1969, Francisco Varela’s Principles of
Biological Autonomy (1979) extended Spencer-Brown’s work from a 2-valued system to a 3-valued one in which self-reference joins the mark and the
not-mark as the three primary entities that constitute all reality. Varela was
attempting to find the simplest possible way to symbolize a reality which
explicitly includes self-reference, since self-reference, in his words “is the
nerve of the kind of dynamics we have been considering in living systems and
autopoiesis.” It’s important to realize that, while this extension provides a
way to extend Spencer-Brown’s calculus into biological systems, it in no way
resolves the paradoxical issues raised by the fact that a system as simple as
that in Laws of Form led inevitably to issues of self-reference that are undefinable
within the system. Rather, Varela admits self-reference as a distinction as valid as the primary distinction Spencer-Brown made, thus accepting it as part of physical reality without questioning what that means. This is not a failure to understand the issue presented by the appearance of time within Spencer-Brown’s calculus; it is instead an explicit creation of a new calculus
in which self-reference will be the core.
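To give a flavor of what admitting a third value involves, here is a toy table of my own (emphatically not Varela’s actual calculus): alongside the mark and the unmarked state, a third value stands for the re-entrant, oscillating state, and the cross leaves it where it is.

    MARK, VOID, SELF = "mark", "void", "self"   # SELF: the re-entrant, oscillating value

    def cross(x):
        if x == MARK:
            return VOID      # a mark within a mark cancels
        if x == VOID:
            return MARK      # crossing out of the void marks it
        return SELF          # the self-referential value re-enters unchanged

    for value in (MARK, VOID, SELF):
        print(value, "->", cross(value))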
A deeper understanding of
self-reference is necessary to escape from logical conundrums of the sort that
appeared when self-reference necessarily began to poke its head into science
and mathematics in the late nineteenth and early twentieth centuries. Varela
comments that:
it is, I suspect, only in a nineteenth-century social
science that the abstraction of the dialectics of opposites could have been
established. This also applies to the observer’s properties.…There is mutual
reflection between describer and description. But here again we have been used
to taking these terms as opposites: observer/observed, subject/object as
Hegelian pairs. From my point of view, these poles are not effectively opposed,
but moments of a larger unity that sits on a metalevel with respect to both
terms. (1979, p. 101).
Hegel’s version of the “dialectics
of opposites” was organic. First there was a thesis, which necessarily called
into existence its antithesis. Out of the interplay between thesis and
antithesis over time, ineluctably emerged a new synthesis of both. Then the
cycle would repeat with the emergent synthesis as a new thesis, which created a
new antithesis, and so on ad infinitum. The essentially nineteenth-century slant of the dialectic was its emphasis on organic evolution over time.
After Darwin, time could never again be ignored in considering such issues. But
note that effectively for Hegel, thesis and antithesis are related in a
self-referential loop, from which eventually a new synthesis emerges. It was
simply a little too early in Hegel’s time for the mathematics to emerge.
Let me just give one final extended
quote from Varela on this issue of self-reference, in this case under the
seemingly less fearful physical term of feedback. He comments that:
When [Norbert] Wiener brought the feedback idea to the
foreground, not only did it become immediately recognized as a fundamental
concept, but it also raised major philosophical questions as to the validity of
the cause-effect doctrine.…the nature of feedback is that it gives a mechanism,
which is independent of particular properties, of components, for constituting
a stable unit. And from this mechanism, the appearance of stability gives a
rationale to the observed purposive behavior of systems and a possibility of
understanding teleology.…Since Wiener, the analysis of various types of systems
has borne this same generalization: Whenever a whole is identified, its
interactions turn out to be circularly interconnected, and cannot be taken as
linear cause-effect relationships if one is not to lose the system’s
characteristics (1979, pp. 166-167).
There are several important
realizations within that statement. “Feedback…gives a mechanism, which is
independent of particular properties, of components, for constituting a stable
unit.” And consider the follow-up statement that “the appearance of stability
gives a rationale to the observed purposive behavior of systems and a
possibility of understanding teleology.” In other words, cause-and-effect
is perhaps an overly crude description of any reality that involves feedback.
Feedback enables systems to preserve a personal integrity over time, despite a
widely varying set of outer circumstances. Once that self-referential definition of a system is in place, the system is necessarily purposeful, and its evolution can be considered from a teleological as well as a causal viewpoint, since its definition of identity is more significant than the causal factors within which it functions.
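A crude sketch may help fix the idea (a generic, thermostat-style loop of my own devising, not anything from Varela): the unit repeatedly compares its present state with its own set point and feeds the correction back into itself, so it holds its identity against a varying environment, and its behavior is most naturally described by that purpose rather than by the individual disturbances.

    import random

    set_point = 20.0        # the state the unit maintains as "itself"
    state = 12.0
    for step in range(12):
        disturbance = random.uniform(-2.0, 2.0)   # widely varying outer circumstances
        error = set_point - state                 # the unit observes its own state
        state += 0.5 * error + disturbance        # the correction is fed back in
        print(step, round(state, 2))
    # the state settles near 20 and stays there, jiggling with the disturbances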
So we find that whenever we attempt
to describe sufficiently complex closed systems, self-reference is necessary in
order to explain how those systems remain closed. On the other side of the
coin, chaos theory also emerges when sufficiently complex, self-referential
open systems are considered. Self-reference is the common denominator that
underlies both organic closure and change through the stages of chaos.
Therefore, it’s easy to understand
why first Spencer-Brown, then Varela, wanted to isolate what distinguished
self-reference at its most basic. Though Spencer-Brown was dealing with one of
the purest (perhaps the single purest) mathematical systems ever developed, its
development led him inevitably to self-reference, and that led him to the
question of the relationship between form and time. These issues, introduced by
Spencer-Brown, extended by Varela, remain as central and, unfortunately, as
ignored or misunderstood as when Laws of Form was first published. It is
always difficult to interest the orthodoxy in questions that end in paradox.
[In order to advance further in
dealing with the self-referential issues presented by Laws of Form, we
have to turn to the work of mathematician Louis Kauffman, whose collaboration
with Varela on “Form Dynamics” (Kauffman and Varela, 1980) formed the core of [the discussion of wave forms in] chapter 12 of Principles of Biological Autonomy,
the chapter in which the mathematics of the 3-valued logic was presented.
Kauffman’s own work adds significantly to Spencer-Brown’s original work as it
moves back and forth between the place of “linguistic singularity”, as he terms the world of Spencer-Brown distinctions (Kauffman, 1998), and the outside world in which such self-referential
issues do not collapse into singularities. This work builds on his concept of
the “indicative shift” and culminates in his collaboration with James M. Flagg
on what Kauffman refers to as the “Flagg Resolution”. Kauffman has presented
this work to the readers of Cybernetics and Human Knowing in his column
“Virtual Logic.” I hope in the near future to complement this presentation with
a paper similar in format to the current paper on Laws of Form.]
References
Bell, E.T. (1965). Men of Mathematics. New
York: Simon and Schuster.
Boole, G. (1958). An Investigation of the Laws of
Thought: On Which are Founded the Mathematical Theories of Logic and
Probabilities. New York: Dover. (Original work published 1854).
Boyer, C. B. (1985). A History of Mathematics.
Princeton: Princeton University Press.
Howe, R. H., & von Foerster, H. (1975).
Introductory Comments to Francisco Varela’s Calculus for Self-Reference. International
Journal of General Systems, 2, 1-3.
Jung, C. G. (1982). VII Sermones ad Mortuos (S. A.
Hoeller, Trans.). In S. A. Hoeller, The Gnostic Jung and the Seven Sermons
to the Dead (pp. 44-58). Wheaton, Illinois: Theosophical Publishing House.
(Original work published without copyright or date, approximately 1920).
Kauffman,
L. H. (1998). Virtual Logic, Part 5. Cybernetics and Human Knowing, 5(1).
Kauffman, L. H., & Varela, F. (1980). Form dynamics. Journal of Social and Biological
Structures, 171-206.
Spencer-Brown, G. (1973). Conference on Laws of
Form (Cassette recordings #F23-1:8). Olema, CA: MEA. Conference held at
Esalen Institute.
Spencer-Brown, G. (1979). Laws of Form (rev.
ed.). New York: E. P. Dutton.
Varela, F. J. (1979). Principles of Biological Autonomy. New York: North Holland.