For me it all started back in the year 2000 when my wife gave me a copy of The Elegant Universe, Brian Greene’s excellent introduction to superstring theory. On reading the book, I was struck by how that theory had been developed: physicists had added extra dimensions to space until they obtained a coherent model that did not conflict with reality. This rang alarm bells in my head for a couple of reasons. First, these extra dimensions of space were so small as to be undetectable. As I discuss in a later article, I have a deep distrust of physical theories that depend on the undetectable. And second, the development process seemed a bit like fitting a polynomial curve to a finite set of data points–the more complicated you make your formula, the more closely you can fit your data. By adding enough terms to your formula you can always match your data exactly, even if it was actually generated by some other type of formula–or even if it was totally random. Likewise, by adding enough dimensions to your model of spacetime, you can incorporate all known physical processes, even if they might actually have a much simpler explanation.
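The overfitting analogy can be made concrete with a few lines of numerical code: a polynomial of degree n−1 passes exactly through any n data points, even points with no pattern at all. This is just an illustrative sketch (using numpy's `polyfit`), not anything from the original discussion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five data points whose y-values are purely random noise.
x = np.arange(5, dtype=float)
y = rng.random(5)

# A degree-4 polynomial has 5 coefficients, so it can match 5 points exactly.
coeffs = np.polyfit(x, y, deg=4)
fitted = np.polyval(coeffs, x)

# The "fit" is perfect even though the data contain no structure.
print(np.allclose(fitted, y))  # True
```

Adding terms until the formula matches every point tells you nothing about what generated the data, which is exactly the worry raised above about adding dimensions.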
I got the feeling that superstring theory may have fallen into this trap of “overfitting the data” and produced a model that is extremely powerful but unnecessarily complex. When I finished the book I was left with the impression that superstring theory is too general to be useful–it explains everything but predicts nothing.
But I did glean some useful ideas from reading Brian Greene’s book. In particular, the extra dimensions involved in superstring theory may be so small that they are limited to just a few bits of information; and some quantum properties seem to consist of pure information. Indeed, I soon realized that the “quantum uncertainty” we observe at the subatomic level could well be the result of trying to extract more information from a system than it contains. For example, measuring the direction of a single electron’s “spin” can produce only a single bit of information; when physicists try to infer any more, they just become confused.
These thoughts led me to reflect on the nature of information, and how it might relate to matter and energy. I started reading more about physics, and learned about the Pauli exclusion principle, which states that no two electrons in an atom can share the same quantum state. (A quantum state can be defined as a set of numbers.) Thus each possible quantum state would either be “on” (occupied by an electron) or “off” (unoccupied), like the binary digits of a computer’s memory. This rang a different sort of bell in my head, as I realized that an atom could be considered as a repository for information. Furthermore, this finite quantity of information seemed to specify the atom’s state completely.
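The picture of an atom as a small memory can be sketched as bit-flags: each possible quantum state corresponds to one bit of an integer, set when the state is occupied. This is a toy illustration only, and the state labels are invented for the example, not taken from physics notation in the text.

```python
# Toy sketch: each possible quantum state is one bit of an integer.
# The state labels below are invented purely for illustration.
STATES = ["1s up", "1s down", "2s up", "2s down"]

def occupancy_to_int(occupied):
    """Pack a set of occupied state labels into a single integer."""
    value = 0
    for i, label in enumerate(STATES):
        if label in occupied:
            value |= 1 << i  # set bit i: this state is "on"
    return value

# A helium-like configuration: both 1s states occupied -> bits 0 and 1 set.
print(occupancy_to_int({"1s up", "1s down"}))  # 3
```

The point of the sketch is only that a finite list of on/off states is literally a binary number, so the atom's configuration is a finite quantity of information.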
This led me to wonder about the following question:
Could the Universe be composed of pure information?
If so, that led me to the following chain of reasoning:
1. Cosmology tells us that the Universe contains only a finite amount of matter;
2. Physics tells us that all matter consists of finite particles; and
3. Quantum Mechanics tells us that each particle only contains a finite amount of information.
Conclusion: The Universe only contains a finite amount of information.
Since (as I discuss here) any finite amount of information can in principle be represented as a single integer, I wondered whether the Universe was, in fact, defined by such an integer. In other words, could the precise state of the Universe at a given instant be completely specified by a single enormous integer? And since (as I discuss here) spacetime in such a Universe could not be continuous, I wondered if the history of the Universe could be specified as a sequence of such integers. Each number in the sequence would define the state of the Universe at a given instant of time, and their progression would define the evolution of the Universe from one moment to the next. This progression could in turn be defined by a formula that encoded the laws of physics. Knowing the formula, one could (in principle) program a computer to calculate the entire history of the Universe–past, present and future!
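The claim that any finite amount of information can be represented as a single integer can be sketched directly: any finite chunk of data (here, a byte string) maps reversibly to one Natural number. This assumes a plain base-256 encoding chosen for the example; none of the names here come from the original text.

```python
def to_integer(data: bytes) -> int:
    """Encode any finite chunk of information as a single integer."""
    # Prefix with 0x01 so leading zero bytes survive the round trip.
    return int.from_bytes(b"\x01" + data, "big")

def from_integer(n: int) -> bytes:
    """Recover the original bytes from the integer."""
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[1:]  # drop the 0x01 prefix

state = b"any finite state of any finite system"
n = to_integer(state)
print(from_integer(n) == state)  # True
```

Since the mapping is reversible, nothing is lost: a complete finite description of a system and a single (enormous) integer are interchangeable.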
This line of thought led me to the following speculation:
Could this Universe be a simulation?
But, as I discuss here, I ended up rejecting this idea because it would require a computer larger than the Universe to calculate the progression of numbers from one moment to the next. So, if the Universe was in fact defined by a progression of numbers, it would have to be a progression that existed naturally, without needing a computer to calculate it. Of course, such a progression does exist: the Natural numbers 1, 2, 3 and so on.
I then realized that my previous idea had been too complicated. My numbers did not have to describe the entire Universe; that was asking them to do too much work. All I had to do was to find a formulation that would allow each Natural number to describe the operation of a tiny little bit of the Universe for a tiny instant of time. Somehow, that seemed a much more manageable task for an individual number. And yet it would still allow the series of Natural numbers (1, 2, 3, ...) to define the entire history of the Universe, by letting the smallest numbers describe the earliest moments of time. Of course, since the Universe is billions of years old, the numbers it’s using by now must be absolutely ENORMOUS.
By early 2008, although I had not yet been able to find a formulation, my ideas had crystallized to the point where I was able to discuss them in a more-or-less coherent manner. One such discussion gave birth to the idea of sharing my thoughts with the world by putting them on the Internet. This site first went live in June of 2008, and I have been developing and adding articles to it since then, as time and inclination have permitted. I still haven’t come up with the formulation that would be the holy grail of PQR Theory, but if this site helps me or someone else to find it, then it will have done its job.
THE ORIGINS OF PQR
Nick Mitchell, July 2009