In his remarkable new book, A New Kind of Science, Stephen Wolfram asserts that cellular automata operations -- simple programs that run repetitively -- underlie much of the real world. He even goes so far as to say the entire universe is a big cellular-automaton computer. But Raymond C. Kurzweil, author and president of Kurzweil Technologies, challenges the ability of these ideas to fully explain the complexities of life, intelligence, and physical phenomena. Edited excerpts of his comments and criticisms follow. The full review, "Reflections on Stephen Wolfram's 'A New Kind of Science'," is available at Kurzweil's Web site.
Stephen Wolfram's A New Kind of Science is an unusually wide-ranging book covering issues basic to biology, physics, perception, computation, and philosophy. It is also a remarkably narrow book, in that its 1,200 pages discuss a singular subject, that of cellular automata.
It's hard to know where to begin in reviewing Wolfram's treatise, so I'll start with his apparent hubris, as evidenced in the title. A new science would be bold enough, but Wolfram is presenting a new kind of science, one that should change our thinking about the whole enterprise of science. As he states, "I have come to view [my discovery] as one of the more important single discoveries in the whole history of theoretical science."
So what is the discovery that has so excited Wolfram? It is cellular automaton Rule 110 and its behavior. There are some other interesting automata rules, but Rule 110 makes the point well enough: It produces surprisingly complex patterns that do not repeat themselves. We see artifacts such as lines at various angles, aggregations of triangles, and other interesting configurations. The resulting pattern is neither regular nor completely random. It appears to have some order, but is never predictable.
Why is this important or interesting? Keep in mind that Rule 110 began with the simplest possible starting point: a single black cell. The process involves repetitive application of a very simple rule. From such a repetitive and deterministic process, one would expect repetitive and predictable behavior. But no such behavior appears.
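The process really is this simple. As a minimal sketch (the grid width, number of steps, and display symbols below are my own illustrative choices, not Wolfram's presentation): each cell's next value depends only on itself and its two neighbors, and the number 110, written in binary, encodes the outcome for all eight possible neighborhood patterns.

```python
# Rule 110: each cell's next state depends on (left neighbor, self, right
# neighbor). The rule number 110 = 0b01101110 encodes the output bit for
# each of the eight neighborhood patterns 111, 110, ..., 000.

def rule110_step(cells):
    """Apply one step of Rule 110 to a row of 0/1 cells (cells past the edges read as 0)."""
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        pattern = (left << 2) | (cells[i] << 1) | right
        nxt[i] = (110 >> pattern) & 1  # look up the pattern's output bit
    return nxt

# Start from a single black cell, as in the book's canonical picture.
width, steps = 31, 15
row = [0] * width
row[width // 2] = 1
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = rule110_step(row)
```

Running this for even a few steps shows the irregular, non-repeating triangles the review describes; the rule itself never changes.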
I do find the behavior of Rule 110 rather delightful. However, I am not entirely surprised by the idea that simple mechanisms can produce results more complicated than their starting conditions. We've seen this phenomenon in fractals (i.e., repetitive application of a simple rule to an image), chaos and complexity theory (i.e., the complex behavior derived from a large number of agents, each of which follows simple rules, an area of study to which Wolfram himself has made major contributions), and self-organizing systems (e.g., neural networks, which start with simple networks but organize themselves to produce apparently intelligent behavior). At a different level, we see it in the human brain, which starts with only 12 million bytes of data from the genome, yet ends up with a complexity that is millions of times greater.
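The chaos-theory case can be made concrete in one line of arithmetic. The logistic map is a textbook illustration (my example, not one drawn from the book): a deterministic quadratic rule whose iterates, for suitable parameter values, wander unpredictably.

```python
# Logistic map x -> r * x * (1 - x): a one-line deterministic rule whose
# orbit looks effectively random for r near 4 -- a standard chaos example.

def logistic_orbit(r, x0, n):
    """Return the first n iterates of the logistic map starting from x0."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

orbit = logistic_orbit(3.9, 0.5, 10)
print(orbit)
```

As with Rule 110, the behavior is fully predetermined yet shows no repeating pattern, which is exactly the flavor of complexity at issue here.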
Wolfram goes on to describe how simple computational mechanisms can produce all of the complexity we see and experience. He provides a myriad of examples, such as the pleasing designs of pigmentation on animals, the shape and markings of shells, and the patterns of turbulence (e.g., smoke in the air). This, according to Wolfram, is the true source of complexity in the world: "What I have come to believe is that many of the most obvious examples of complexity in biological systems actually have very little to do with adaptation or natural selection. And instead...they are mainly just another consequence of the very basic phenomenon that I have discovered...that in almost any kind of system many choices of underlying rules inevitably lead to behavior of great complexity."
THE COMPLEXITY QUESTION.
My own view is that this is only partly correct. I agree with Wolfram that computation is all around us, and that some of the patterns we see are created by the equivalent of cellular automata. But a key question is, Just how complex are the results of these automata?
There is a distinct limit to the complexity they produce. The many images in the book all have a similar look to them. Moreover, they do not continue to evolve into anything more complex, or develop new types of features. One could run these automata for trillions of iterations and the image would remain at the same limited level of complexity. They do not evolve into, say, insects, or humans, or Chopin preludes, or anything else that we might consider of a higher order of complexity than the streaks and intermingling triangles that we see in these images.
Complexity is a continuum. Wolfram regards all patterns that combine some recognizable features and unpredictable elements as effectively equivalent to one another, but he does not show how such an automaton can ever increase its complexity, let alone become a pattern as complex as a human being.
A DOSE OF EVOLUTION.
So how do we get from these interesting but limited patterns to those of insects or humans or Chopin preludes? One concept we need to add is conflict, i.e., evolution. If we add another simple concept to that of Wolfram's simple cellular automata, such as an evolutionary algorithm, we start to get far more interesting and more intelligent results. Wolfram would say that automata and an evolutionary algorithm are "computationally equivalent." But that is true only on what I would regard as the "hardware" level. On the software level, the order of the patterns produced is clearly different, and of a different order of complexity.
An evolutionary algorithm can start with randomly generated potential solutions to a problem. The solutions are encoded in a digital genetic code. We then have the solutions compete with each other in a simulated evolutionary battle. The better solutions survive and procreate -- in a simulated sexual reproduction that creates offspring solutions -- drawing their genetic code (i.e., solutions) from two parents. The process is run for many thousands of generations of simulated evolution, and at the end one is likely to find solutions that are of a distinctly higher order than the conditions at the start.
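The procedure just described can be sketched in a few dozen lines. The fitness function below (counting 1-bits, the classic "max-ones" teaching exercise) and every parameter -- population size, mutation rate, number of generations -- are illustrative assumptions, not a real application such as jet-engine design.

```python
import random

random.seed(0)  # deterministic run, for reproducibility
GENOME_LEN = 20

def fitness(genome):
    """Toy fitness: count of 1-bits. Real uses encode an actual design goal."""
    return sum(genome)

def crossover(a, b):
    """Simulated sexual reproduction: the offspring draws its genetic code from two parents."""
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):
    """Occasionally flip a bit, injecting variation."""
    return [1 - g if random.random() < rate else g for g in genome]

# Start with randomly generated potential solutions.
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(50)]

for generation in range(200):
    # The better solutions survive the simulated evolutionary battle...
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:25]
    # ...and procreate, producing offspring solutions from pairs of parents.
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(25)]
    pop = survivors + children

best = max(pop, key=fitness)
print(fitness(best))
```

After a couple hundred generations, the best genome is of a distinctly higher order than the random bit strings the run started with -- which is the whole point of the technique.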
The results of these evolutionary (sometimes called genetic) algorithms can be elegant, beautiful, and intelligent solutions to complex problems. They have been used, for example, to create artistic designs as well as for a wide range of practical assignments such as designing jet engines.
But something is still missing. Although genetic algorithms are a useful tool in solving specific problems, they have never achieved anything resembling the broad, deep, and subtle features of human intelligence, and in particular its powers of pattern recognition and command of language. We need to perform evolution on multiple levels. In addition to evolving better solutions, the genetic code itself needs to evolve. The rules of evolution need to evolve. Nature did not stay with a single chromosome, for example.
KEEPING IT SIMPLE.
For me, the most interesting part of the book is Wolfram's thorough treatment of computation as a simple and ubiquitous phenomenon. Of course, we've known for over half a century that computation is inherently simple. The "Turing Machine" -- Alan Turing's 1936 theoretical conception of a universal computer -- provides only seven very basic commands, yet can be organized to perform any possible computation.
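A Turing machine is simple enough to mimic in a few lines. Below is a minimal simulator sketch; the rule format, state names, and the binary-increment machine are my own illustrative choices, not Turing's original formalism or any machine from the book.

```python
# A minimal Turing machine simulator. A machine is a table mapping
# (state, read_symbol) -> (new_state, write_symbol, head_move).

def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
    """Run a Turing machine until it halts; head_move is -1, 0, or +1."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: increment a binary number. Scan right to the end of
# the input, then propagate the carry back to the left.
rules = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),  # hit the end; begin carrying
    ("carry", "1"): ("carry", "0", -1),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("halt", "1", 0),    # absorb the carry
    ("carry", "_"): ("halt", "1", 0),    # overflow: write a new leading 1
}

print(run_tm(rules, "1011"))
```

The machine's entire repertoire is read, write, move, and change state -- yet tables like this one suffice, in principle, for any computation.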
In what is perhaps the most impressive analysis in his book, Wolfram shows how a Turing Machine with only two states and five possible colors can be a Universal Turing Machine. For 40 years, we've thought that a Universal Turing Machine had to be more complex than this. Also impressive is Wolfram's demonstration that Cellular Automaton Rule 110 is capable of universal computation (given the right software).
The most controversial thesis in Wolfram's book is likely to be his treatment of physics, in which he postulates that the universe is a big cellular computer. Wolfram hypothesizes that there is a digital basis to the apparently analog phenomena and formulas in physics, and that we can model our understanding of physics as the simple transformations of a cellular automaton.
Others have postulated this possibility. Richard Feynman wondered about it in considering the relationship of information to matter and energy. Norbert Wiener heralded a fundamental change in focus from energy to information in his 1948 book Cybernetics and suggested the transformation of information, not energy, was the fundamental building block for the universe.
Perhaps the most enthusiastic proponent of an information-based theory of physics is Edward Fredkin, who in the early 1980s proposed what he called a new theory of physics based on the idea that the universe is composed ultimately of software. We should not think of ultimate reality as particles and forces, according to Fredkin, but rather as bits of data modified according to computation rules.
The complexity of casting all of physics in terms of computational transformations proved to be an immensely challenging project, but Fredkin has continued his efforts. Wolfram has devoted a considerable portion of his efforts over the past decade to this notion, apparently with only limited communication with some of the others in the physics community also pursuing the idea.
A cellular node representation of reality may have its greatest benefit in understanding some aspects of the phenomenon of quantum mechanics. It could provide an explanation for the apparent randomness that we find in quantum phenomena. Consider, for example, the sudden and apparently random creation of particle-antiparticle pairs. Such randomness could be the same sort we see in cellular automata. Although predetermined, the behavior cannot be anticipated (other than by running the cellular automaton) and is effectively random.
In summary, Wolfram's sweeping and ambitious treatise paints a compelling but ultimately overstated and incomplete picture. Wolfram has added to our knowledge of how patterns of information create the world we experience, and I look forward to a period of collaboration among Wolfram and his colleagues, so we can build a more robust vision of the ubiquitous role of algorithms in the world.
If Wolfram, or anyone else for that matter, succeeds in formulating physics in terms of cellular-automata operations and their patterns, then Wolfram's book will have earned its title. In any event, I believe the book to be an important work of ontology.