Science and the STOMPS principle

Table of Contents Part One Part Two Part Three Part Four Part Five Part Six Part Seven
Part Eight Part Nine Part Ten Part Eleven Part Twelve Part Thirteen Part Fourteen Conclusion



Top Left: A typical prokaryotic cell. Although a bacterium is a simple life form, its design is far more ingenious than anything that human scientists could possibly create. Image courtesy of Mariana Ruiz Villarreal ("Lady of Hats") and Wikipedia.
Bottom left: Schematic of a typical eukaryotic animal cell, showing subcellular components. Organelles: (1) nucleolus (2) nucleus (3) ribosome (4) vesicle (5) rough endoplasmic reticulum (ER) (6) Golgi apparatus (7) Cytoskeleton (8) smooth ER (9) mitochondria (10) vacuole (11) cytoplasm (12) lysosome (13) centrioles. Image courtesy of MesserWoland, Szczepan1990 and Wikipedia.
Right: Shanghai, the world's largest city proper. As we'll see below, its complexity is dwarfed by that of the eukaryotic cells found in plants, animals, fungi, slime moulds, protozoa and algae. Image courtesy of Dawvon and Wikipedia.


Highlights:
  • Intelligent Design supporters have often been accused by Professor Richard Dawkins of appealing to something called an API: an Argument from Personal Incredulity. The reasoning is supposed to go like this: I cannot imagine how complex structure X could have come about as a result of blind natural processes; therefore an intelligent being must have created it. In this post, I'd like to explain why Dawkins has got us pegged wrong.
  • My own conversion to Intelligent Design was based on something which I have decided to call the STOMPS Principle. STOMPS is an acronym for: Smarter Than Our Most Promising Scientists. The reasoning goes like this: if I observe a complex system which is capable of performing a task in a manner which is more ingenious than anything our best and most promising scientists could have ever designed, then it would be rational for me to assume that the system in question was also designed.
  • The eukaryotic cell is a perfect illustration of the STOMPS principle. Plants, animals, fungi, slime moulds, protozoa and algae are all made up of eukaryotic cells. A eukaryotic cell is built like a city out of a science fiction novel. For sheer ingenuity, the technology it employs is light years ahead of anything our best scientists can devise. I conclude that it was probably designed.
  • Atheists have a comeback to this line of argument: they cite Leslie Orgel's famous statement that "Evolution is cleverer than you are." Genetic algorithms are said to be capable of performing feats that intelligent human beings could never accomplish, over the course of time. In reply, I argue that while genetic algorithms are good at optimizing existing functions, they are not so good at creating new ones - especially when the transformation from the system's current state to a state where it could perform the desired function would require multiple changes to occur in parallel, or where the changes would require the co-ordination of multiple parts, or where the changes would have to occur within a limited amount of time (think: whale evolution). Changes like these can be accomplished far more easily through intelligent engineering than as a result of blind processes.
  • In this post, I describe how my own conversion to Intelligent Design was prompted by an article on the astonishing complexity of DNA, which convinced me that it had been engineered by an Intelligent Designer.
  • Atheists such as Professor Richard Dawkins are apt to counter Design arguments with the retort, "Who designed the designer?" They then argue that this Designer would have to be an even more complex entity than the universe He designed. In response, I shall argue that it is not always true that explanations have to be simpler than the phenomena they are invoked to explain. Designers are generally more complex than their creations: the carvers of Mount Rushmore were far more complex than the carvings they created, yet that does not prevent a design explanation from being a perfectly good explanation of some complex phenomena that we find in the world.
  • Finally, I put forward a simple proposal for modifying the principle of methodological naturalism: scientists should assume by default that natural events have a law-governed natural explanation, but scientists should also assume by default that a complex system exhibiting features whose ingenuity surpasses anything that a team of intelligent human scientists could have created, was produced by a superhuman intelligent agent.

ID supporters: smart or unreasonably incredulous? You decide.

ID supporters are often accused of appealing to something called an API: an Argument from Personal Incredulity. The acronym comes from Professor Richard Dawkins. The reasoning is supposed to go like this: I cannot imagine how complex structure X could have come about as a result of blind natural processes; therefore an intelligent being must have created it. This, Dawkins rightly points out, is not a rational argument. Certainly it has no place in a science classroom.

But my own conversion to Intelligent Design was not based on an API, but on something which I have decided to call the STOMPS Principle. STOMPS is an acronym for: Smarter Than Our Most Promising Scientists. The reasoning goes like this: if I observe a complex system which is capable of performing a task in a manner which is more ingenious than anything our best and most promising scientists could have ever designed, then it would be rational for me to assume that the system in question was also designed. That is not to say that nothing will shake my conviction, but if you claim otherwise, then I am going to set the bar quite high. If someone claims that a blind natural process could have done the job, then I am certainly going to demand a detailed, step-by-step account as to exactly how the process in question could have accomplished this stupendous feat. I won't be satisfied with mere "conceivability arguments" ("For all we know, it might have happened like this"). And if someone invokes long periods of time as an explanation ("In the long run, anything is possible"), I shall demand to see mathematical calculations showing that in the time available, the emergence of the system was reasonably probable (i.e. greater than 10^-120). Vague, hand-waving appeals to natural processes which are light on detail won't impress me, either; I shall demand a specification of a mechanism, and a demonstration that it is at least adequate to generate the complex system we are talking about, in the time available. To demand any less would be the height of irrationality.



The Eukaryotic Cell as an automated city: a perfect illustration of the STOMPS principle

There are two kinds of cells found in Nature: eukaryotic cells, which contain membrane-bound compartments, including a cell nucleus; and prokaryotic cells, which lack a nucleus. Plants, animals, fungi, slime moulds, protozoa and algae are all made up of eukaryotic cells, while prokaryotic cells are found in bacteria and archaea.

A typical prokaryotic cell (pictured above, top left) is remarkable enough, but a eukaryotic cell (pictured above, bottom left) is staggeringly complex. The evolutionary biologist Ernst Mayr, of Harvard University, once declared, "The evolution of the eukaryotic cell was the single most important event in the history of the organic world." And with good reason: for sheer ingenuity, the technology it employs is light years ahead of anything our best scientists can devise.

The following passage, which describes the complexity of a typical eukaryotic cell, is taken from pages 208-209 ("The Cell As An Automated City") of The Design of Life: Discovering Signs of Intelligence in Biological Systems by William A. Dembski and Jonathan Wells (Foundation for Thought and Ethics, Dallas, 2008). The authors gratefully acknowledge that the passage in their book is adapted from Evolution: A Theory in Crisis, by Michael Denton (Bethesda, Md.: Adler & Adler, 1985), pp. 328-329.

Magnified several hundred times with an ordinary microscope, as was available in Darwin's day, a living cell is a disappointing sight. It looks like a disordered collection of blobs and particles that unseen turbulent forces continually toss in all directions.

To grasp the reality of life as revealed by contemporary molecular biology, we need to magnify the cell a billion times. At that level of magnification, a typical eukaryotic cell (i.e., cell with a nucleus) is more than ten miles in diameter and resembles a giant spaceship large enough to engulf a sizable city. Here we see an object of unparalleled complexity and adaptive design.

On the surface are millions of openings, like the portholes of a ship, opening and closing to allow a continual stream of materials to flow in and out. As we enter one of these openings, we discover a world of supreme technology and bewildering complexity. We see endless highly organized corridors and conduits branching in every direction from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units.

The nucleus itself is a vast chamber a mile in diameter resembling a geodesic dome. Inside we see, all neatly stacked together in ordered arrays, coiled chains of DNA thousands and even millions of miles in length. This DNA serves as a memory bank to build the simplest functional components of the cell, the protein molecules. Yet proteins themselves are astonishingly complex pieces of molecular machinery. An average protein consists of several hundred precisely ordered amino acids arranged in a highly organized three-dimensional structure.

Robot-like machines working in synchrony shuttle a huge range of products and raw materials along the many conduits to and from all the various assembly plants in the outer regions of the cell. Everything is precisely choreographed. Indeed, the level of control implicit in the coordinated movement of so many objects down so many seemingly endless conduits, all in unison, is mind-boggling.

As we watch the strangely purposeful activities of these uncanny molecular machines, we quickly realize that despite all our accumulated knowledge in the natural and engineering sciences, the task of designing even the most basic components of the cell's molecular machinery, the proteins, is completely beyond our present capacity. Yet the life of the cell depends on the integrated activities of many different protein molecules, most of which work in integrated complexes with other proteins.

In touring the cell, we see that nearly every feature of our own advanced technologies has its analogue inside the cell.

Nanotechnology of this elegance and sophistication beggars all feats of human engineering.
(Bold emphases mine - VJT.)

Michael Denton, in his book Evolution: A Theory in Crisis (Bethesda, Md.: Adler & Adler, 1985, pp. 327-331), highlights two further unique features of the cell, which our current technology comes nowhere near replicating.

We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines...

However, it would be a factory which would have one capacity not equaled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours ...

Unlike our own pseudo-automated assembly plants, where external controls are being continually applied, the cell's manufacturing capability is entirely self-regulated ...
(Bold emphases mine - VJT.)


There's a popular saying that seeing is believing. I would invite readers to check out these short videos that will allow them to judge for themselves whether the cell is intelligently designed or not. For anyone who accepts the STOMPS principle, the inference to design is unmistakable.

Powering the Cell: Mitochondria (2:09; no voiceover)

Molecular Biology Animations – Demo Reel (1:43; no voiceover)

The ATP Synthase Enzyme – exquisite motor necessary for first life (1:26; voiceover)

Programming of Life – Protein Synthesis (2:51; voiceover)

DNA Molecular Biology Visualizations – Wrapping And DNA Replication (3:07; voiceover)

Astonishing Molecular Machines – Drew Berry (6:04; TED talk)

Bacterial Flagellum (7:36; voiceover)


Most college students with a science background will probably be under the impression that science has already explained the origin of eukaryotic cells, thanks to the endosymbiotic theory put forward by the late Professor Lynn Margulis. (See also Endosymbiosis and The Origin of Eukaryotes by Professor J. Kimball.) Problem solved, right? Not so fast. The theory assumes the existence of a proto-eukaryotic cell in the first place, and it does a good job of explaining how that cell came to acquire mitochondria and chloroplasts, by symbiotic incorporation of prokaryotic cells. However, the theory makes no attempt to explain the stunning choreography and advanced nanotechnology of the eukaryotic cell, and it fails to explain the origin of the information stored in the cell's nucleus. That is the real puzzle.

Zack, I put it to you that intelligent is as intelligent does. If a single cell in your body is more complex than the most intricate designs found in human technology, then it is reasonable to infer that it originally had a superhuman Designer.



Leslie Orgel's second law: "Evolution is cleverer than you are."

Occasionally, when Intelligent Design proponents point out the complexity of cells, Darwinists will cite what Daniel Dennett has termed Leslie Orgel's second law: "Evolution is cleverer than you are." The relevant question here is: cleverer at what?

A few years ago, there was much ado in the blogosphere about a genetic algorithm which beat a team of human designers at finding the shortest networks of straight-line segments connecting a given collection of fixed points. Over at Panda's Thumb, Dave Thomas wrote a triumphant post entitled Design Challenge Results: "Evolution is Smarter than You Are".

There are certain kinds of problems which genetic algorithms can solve better than a team of human designers. This should not surprise us, any more than the fact that computers can beat human beings at chess. But this has little relevance to the STOMPS principle which I formulated above: if I observe a complex system which can perform a task in a manner which is more ingenious than anything our best and most promising scientists could have ever designed, then it would be rational for me to assume that the system in question was also designed. The emphasis here is on functionality, rather than optimization. Optimization techniques which lack foresight (such as genetic algorithms) can certainly refine existing functionality very well, but they are very limited in their capacity to create new functionality, for several reasons.
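To make the distinction concrete, here is a minimal sketch of the kind of task a genetic algorithm handles well: refining candidates against a fixed, pre-specified fitness function. The toy problem below (maximizing the number of 1-bits in a string, often called "OneMax"), the parameters, and the helper names are my own illustrative choices, not anything from Thomas's post; the point is simply that the target function already exists before evolution begins.

```python
import random

def onemax(bits):
    """Fitness of a pre-existing, well-defined function: the count of 1-bits."""
    return sum(bits)

def evolve(n_bits=32, pop_size=40, generations=200, mut_rate=0.02, seed=0):
    """Tiny genetic algorithm: truncation selection, one-point crossover,
    per-bit mutation, with the parents retained (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=onemax, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with a small probability (mutation)
            child = [1 - g if rng.random() < mut_rate else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=onemax)

best = evolve()
print(onemax(best))
```

Over a few hundred generations this reliably drives the fitness close to the maximum of 32 - but only because `onemax` was handed to it in advance. The algorithm optimizes a given function; it does not invent the function.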

One reason is that functionality is usually a hit-or-miss affair: a system S either performs a given function (e.g. flying), or it doesn't. There is a fairly clear line between failure and success. What's more, the number of ways of failing typically dwarfs the number of ways of succeeding. There are many more ways not to fly than there are ways to fly. Odds are, then, that in the absence of foresight, any unguided change to a system that is currently incapable of performing a function (e.g. flying) will not be a helpful one.

A second reason is that there may not be a viable step-by-step path from a system's current state to the state it needs to be in, to perform a given function. It may be that the intermediate stages along the path are so maladaptive that the system would be destroyed if it ever tried out these stages.

A third reason is that the transformation from the system's current state to a state where it can perform the desired function may require multiple changes to occur in parallel - or at least roughly "in sync" with each other. As the number of changes that are required to occur "in sync" increases, the likelihood of the system evolving, within a given time interval, to a state in which it is capable of performing a given function, falls off dramatically.

A fourth and related reason is that a satisfactory solution (i.e. a system which performs the required function) may not be computable within the limited time available. For instance, if the function is a complex one, then algorithmic solutions, which do not employ long-range foresight, may take far too long to generate an answer. Darwinist mathematician Gregory Chaitin discussed this difficulty in a talk he gave at PPGC UFRGS (Portal do Programa de Pós-Graduação em Computação da Universidade Federal do Rio Grande do Sul), in Brazil, on 2 May 2011, entitled, Life as Evolving Software. Chaitin modeled three kinds of evolution:

(i) Intelligent Design, or evolution that is intelligently guided by an agent picking the sequence of mutations. This is the smartest possible kind of evolution, taking the shortest possible route to arrive at its goal in a time proportional to N, where N is the number of bits in the genome;

(ii) Darwinian evolution, or cumulative random evolution. This kind of evolution arrives at its goal in a time proportional to N^2 or N^3, where N is the number of bits in the genome;

(iii) exhaustive search, which proceeds by exploring all possibilities without remembering past changes, and which generates a new organism at random without ever being able to use any information from the current organism. This is much slower than either of the first two, arriving at its goal in a time exponential in N, i.e. proportional to e^N.

Chaitin recalls being very excited to find that Darwinian evolution was nearly as good as Intelligent Design, requiring "only" N^2 steps as opposed to N steps. However, Chaitin's excitement proved to be short-lived, as he informs us in his talk:

... I told a friend of mine ... about this result. He doesn't like Darwinian evolution, and he told me, “Well, you can look at this the other way if you want. This is actually much too slow to justify Darwinian evolution on planet Earth.” And if you think about it, he's right... If you make an estimate, the human genome is something on the order of a gigabyte of bits. So it's ... let's say a billion bits – actually 6 x 10^9 bits, I think it is, roughly – ... so we're looking at programs up to about that size [here he pointed to N^2 on the slide] in bits, and N is about of the order of a billion, 10^9, and the time, he said ... that's a very big number, and you would need this to be linear, for this to have happened on planet Earth, because if you take something of the order of 10^9 and you square it or you cube it, well ... forget it. There isn't enough time in the history of the Earth ... Even though it's fast theoretically, it's too slow to work. He said, "You really need something more or less linear." And he has a point... (Bold emphases mine - VJT.)
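The friend's objection is pure arithmetic, and the quoted figures can be checked in a few lines. The genome-size figure below comes from the quote itself; the assumed step rate is an arbitrary illustrative choice of mine (the comparison goes through for any plausible rate), not a biological estimate.

```python
import math

# Back-of-the-envelope check of the scaling claim quoted above.
N = 6 * 10**9                  # ~6 x 10^9 bits, the genome-size figure from the quote
age_of_earth_years = 4.5e9     # accepted age of the Earth, in years

linear = N                     # intelligently guided search: ~N steps
quadratic = N**2               # cumulative random (Darwinian) search: ~N^2 steps
# Exhaustive search takes ~e^N steps: far too large to evaluate directly,
# so we compare its logarithm instead (the number of decimal digits).
log10_exhaustive = N * math.log10(math.e)

steps_per_year = 1e6           # assumed step rate (illustrative only)
print(linear / steps_per_year)       # years needed for the linear model
print(quadratic / steps_per_year)    # years needed for the quadratic model
print(log10_exhaustive)              # decimal digits in the exhaustive step count
```

At the assumed rate, the linear model finishes in a few thousand years, the quadratic model needs on the order of 10^13 years (thousands of times the age of the Earth), and the exhaustive model's step count has billions of digits. The quadratic/linear gap is exactly the point the friend was pressing.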

A fifth reason is that if the system requires the co-ordination of multiple parts to perform a given function, a lot can go wrong. Darwinian evolution encounters a number of hurdles, which Professor William Dembski discusses in his essay, Irreducible Complexity Revisited (version 2.0, revised 2.23.04, pp. 30-31):

(1) Availability. Are the parts needed to evolve an irreducibly complex biochemical system ... even available?
(2) Synchronization. Are these parts available at the right time so that they can be incorporated when needed into the evolving structure?
(3) Localization. [C]an the parts break free of the systems in which they are currently integrated and be made available at the "construction site" of the evolving system?
(4) Interfering Cross-Reactions. Given that the right parts can be brought together at the right time in the right place, how can the wrong parts that would otherwise gum up the works be excluded from the "construction site" of the evolving system?
(5) Interface Compatibility. Are the parts that are being recruited for inclusion in an evolving system mutually compatible in the sense of meshing or interfacing tightly so that, once suitably positioned, the parts work together to form a functioning system?
(6) Order of Assembly. Even with all and only the right parts reaching the right place at the right time, and even with full interface compatibility, will they be assembled in the right order to form a functioning system?
(7) Configuration. Even with all the right parts slated to be assembled in the right order, will they be arranged in the right way to form a functioning system?

Finally, I would like to ask: if genetic algorithms are as all-powerful as their proponents claim, then why do we not see scientists using them to solve the following real-world problems:

(i) the design of a device which can store nuclear waste over the long-term as securely as possible;
(ii) the design of a device which can remove CO2 from the atmosphere, thereby reducing the threat of global warming;
(iii) the design of Star Wars systems which can rapidly identify and shoot down incoming missiles;
(iv) the design of a spaceship which can take us to the stars;
(v) the design of vaccines which can prevent diseases such as AIDS, which kill millions of people;
(vi) the design of an artificial ecosystem which could sustainably support a much larger population than we have now - say, one trillion people, instead of the current seven billion.

What is interesting to note is that nobody is even thinking of employing genetic algorithms to solve these problems. They all appear too "messy" to be solvable via some simple optimization technique which homes in on the best of a competing set of answers. Why, then, should we believe that these algorithms are capable of generating the diversity of life-forms that we find on Earth today?



My own conversion to Intelligent Design: a personal story

A few years ago, I came across an article by an Australian botanist (who is also a creationist) named Alex Williams, entitled, "Astonishing Complexity of DNA Demolishes Neo-Darwinism" (Journal of Creation, 21(3), 2007). At the time I knew very little about specified complexity and other terms in the Intelligent Design lexicon. I heartily dislike jargon, and I was having difficulty deciding whether there was any real scientific merit to the Intelligent Design movement's claim that certain biological systems must have been designed. But when I read Alex Williams' article, the case for Intelligent Design finally made sense to me. What impressed me most was that the coding in the cell was far, far more efficient than anything that our best scientists could have come up with. Here are some excerpts from the article:

The traditional understanding of DNA has recently been transformed beyond recognition. DNA does not, as we thought, carry a linear, one-dimensional, one-way, sequential code—like the lines of letters and words on this page... DNA information is overlapping-multi-layered and multi-dimensional; it reads both backwards and forwards... No human engineer has ever even imagined, let alone designed an information storage device anything like it...

(Bold emphasis mine - VJT.)

I'd like to make it clear that as someone who believes in a 13.7 billion-year-old universe and in common descent, I do not share Williams' creationist views. On these points, I think his arguments are questionable. But I do think that Williams is on solid scientific ground when he writes that no human engineer has ever even imagined, let alone designed an information storage device anything like DNA. Here we have an appeal to the STOMPS principle: DNA encodes information in a way which is far cleverer than anything that our most intelligent programmers could have designed, so it is reasonable to infer that DNA itself was designed by a superhuman intelligent agent. I would like to commend Alex Williams for conveying to the general reader, in accessible language, the sheer ingenuity of the way in which DNA encodes information.


I imagine that some readers will be reluctant to accept the scientific testimony of a creationist biologist like Alex Williams. So let me close with a quote from someone whose impartiality is not in doubt: Bill Gates, the founder of Microsoft Corporation, who is also an agnostic:

Biological information is the most important information we can discover, because over the next several decades it will revolutionize medicine. Human DNA is like a computer program but far, far more advanced than any software ever created.
(The Road Ahead, Penguin: London, Revised Edition, 1996, p. 228.)

If an agnostic like Bill Gates, who is an acknowledged expert on computing, thinks that the complexity of human DNA surpasses that of any human software design, then it is surely reasonable to infer that human DNA - or at the very least, its four-billion-year-old progenitor, the DNA in the first living cell - was originally designed by some superhuman Intelligence.


Modifying methodological naturalism - A simple proposal

So here's my question to you, Zack. Suppose we modify methodological naturalism by adding a second principle that can potentially over-ride it, like this:

1. By default, scientists should assume that a natural phenomenon has a natural explanation.
2. Principle 1 notwithstanding, scientists should assume by default that a complex system exhibiting features whose ingenuity surpasses anything that a team of intelligent human scientists could have created, was produced by a superhuman intelligent agent.

Would you agree that a combination of principles 1 and 2 would be methodologically sound from a scientific standpoint, Zack? If not, why not?

Zack, you may be wondering about the scientific legitimacy of inferring the existence of a supernatural agent. By itself, however, principle 2 will not get you to a supernatural agent, but only a superhuman intelligent agent. Only if there were limiting conditions which seem to preclude the possibility of the intelligent agent being a natural agent (e.g. if the mind-bogglingly complex system in question were the multiverse itself, which comprises the whole of Nature), would an inference to a supernatural agent be reasonable. At the present time, however, despite the fact that the constants of Nature - even the most fundamental ones, which a multiverse would still require - appear to be incredibly fine-tuned, no scientist has reported finding any large-scale structures at the cosmic level whose complexity matches or surpasses that of a city. The argument that the cosmos had a supernatural Designer cannot (at the present time) appeal to the STOMPS principle; it is an argument based on "inference to the best explanation," or what is known as abductive logic. Such an inference, while intellectually persuasive, is not as immediately compelling as the STOMPS principle I am appealing to here, which relies on the rational intuition that a complex system which is capable of performing a task in a manner which is more ingenious than anything our best and most promising scientists could have ever designed, is almost certainly the product of intelligent design.

You say Intelligent Design is a science stopper, Zack. I say it’s a science enabler. Over to you.

