Feb 13, 2021

Machine learning and artificial intelligence are increasingly taking the stage, with huge philosophical implications. We have been following this issue on our RSF science blog, first through the article Between the Holographic Approach and Data Science, where we addressed the potential of trained artificial neural networks to replace our scientific models and discussed the possibility that reality is a numerical simulation. In a way, we anticipated the work of Vitaly Vanchurin, of the University of Minnesota Duluth, who proposes that we live in a neural network and affirms that only through neural networks could we find the theory of everything and a grand unified theory. Our second article, entitled Is the Universe a Neural Network?, addressed this latter possibility.

Today Phys.org published an article entitled New machine learning method raises question on nature of science, which combines the issues raised by both RSF articles but differs from them in one key respect: the author of the study, physicist Hong Qin (from the U.S. Department of Energy's Princeton Plasma Physics Laboratory), has devised a novel algorithm based on discrete field theory that requires only a small amount of sample data. It could replace earlier machine learning methods, in particular neural networks, which require extensive training with huge numbers of data points to reach a satisfactory prediction.

The results are very impressive. As the abstract of his article, entitled *Machine Learning and Serving of Discrete Field Theories* and published in Scientific Reports, puts it: “The effectiveness of the method and algorithms developed is demonstrated using the examples of nonlinear oscillations and the Kepler problem. In particular, the learning algorithm learns a discrete field theory from a set of data of planetary orbits similar to what Kepler inherited from Tycho Brahe in 1601, and the serving algorithm correctly predicts other planetary orbits, including parabolic and hyperbolic escaping orbits, of the solar system without learning or knowing Newton’s laws of motion and universal gravitation. The proposed algorithms are expected to be applicable when the effects of special relativity and general relativity are important.”
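To make the “learn, then serve” idea concrete, here is a minimal toy sketch, not Qin’s actual discrete-field-theory algorithm: we fit a one-step update map from a short trajectory of a harmonic oscillator (one of the examples his paper mentions), then roll that learned map forward to predict future states without ever using the equation of motion. All function names and parameters below are our own illustrative choices.

```python
import numpy as np

def simulate(steps, dt=0.1, x0=1.0, v0=0.0):
    """Generate a harmonic-oscillator trajectory with a leapfrog scheme."""
    states = [np.array([x0, v0])]
    for _ in range(steps):
        x, v = states[-1]
        v_half = v - 0.5 * dt * x        # force F = -x (unit mass and spring)
        x_new = x + dt * v_half
        v_new = v_half - 0.5 * dt * x_new
        states.append(np.array([x_new, v_new]))
    return np.array(states)

# "Learning": least-squares fit of the one-step map M, so that s_{t+1} ≈ s_t M
train = simulate(20)
M, *_ = np.linalg.lstsq(train[:-1], train[1:], rcond=None)

# "Serving": roll the learned map forward far beyond the training window
state = train[-1]
for _ in range(200):
    state = state @ M

# Compare against the true simulation continued for the same number of steps
truth = simulate(220)[-1]
print(np.allclose(state, truth, atol=1e-8))  # the learned map reproduces the dynamics
```

Because the discrete dynamics here happen to be linear, a handful of samples pins the map down exactly; Qin’s algorithm is far more general, but the spirit is the same: prediction without an explicit law.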

This remarkable achievement is extremely useful and could eventually replace many scientific models, opening up a question about the nature of science itself, as addressed in the Phys.org article: “Don't scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren't theories fundamental to physics and necessary to explain and understand phenomena?”

In response to that essential question, Qin says: "I would argue that the ultimate goal of any scientist is prediction. You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit”. Such detachment from science as I understand it really surprised me, and raised a deeper question regarding not the nature of science, but the nature of reality. Science evolved from natural philosophy; the way of inquiring into and questioning not only the observation but also the observer was later quantified through equations, the reproducibility of phenomena, and the development of sophisticated technology and models. At some point in its evolution, science seems to have diverged completely from philosophy, to the point that the models could have no meaning at all. Most of my resistance to complying with our current modern physics is precisely this lack of meaning, together with an overwhelming avalanche of intricate mathematics and programming. Our theories lack rooting and containment, which is a poetic way of saying that the theory of the very small (quantum theory) and that of the very big (relativity and gravity) are essentially disconnected. If, to understand reality in depth, humanity needs to understand these models… well… most of us will probably be doomed. Our scale, the human scale, is orphaned.

As renowned physicist Sabine Hossenfelder says in her article, “Physicists face stagnation if they continue to treat the philosophy of science as a joke” (see also her blog). Of course, one may argue that science is not meant to deal with the nature of reality, just the nature of… nature. If that were the case, I wonder how the topic of consciousness could be addressed from a merely physical point of view. And at this point, I am also wondering what I mean by physical.

I guess Qin’s position on the matter is somehow influenced by his inspiration in Oxford philosopher Nick Bostrom's thought experiment, in which the universe is a computer simulation (known as the simulation conjecture). If this is true, fundamental physical laws should reveal a universe consisting of individual units of space-time, like pixels in a video game, or even like the Planck Spherical Units (PSUs) in the Generalized Holographic Model derived by Haramein to address quantum gravity.

Regarding Qin’s comment "If we live in a simulation, our world has to be discrete," this is only partially correct. If the universe is a numerical simulation, the starting building blocks must be discrete units. But that does not mean the converse holds true, namely that if the starting points are discrete units then the universe is a numerical simulation; this is precisely what we address in detail in both former RSF articles (here and here). There is a beautiful and underappreciated point here: in our reality, we exchange things, so we need integers; otherwise, how could we exchange anything? How can I hand you an unbounded apple? Or you give me an unbounded flower? We need boundary conditions in order to exchange. And yet, if we went inside the apple, we could descend all the way to the Planck scale, which is some 34 orders of magnitude smaller! It is as if the apple were infinite from within!

Discreteness (as opposed to continuity) is necessary if we are to have exchange of information (or energy), and to have exchange we need differentiation between parts, i.e., a certain degree or level of separation. Ironically, if we trace the origin of the discreteness of this energy exchange down to the deepest level of matter, we reach the Planck scale and the famous Planck–Einstein relation *E* = *hf*, where *E* is energy, *h* is the Planck constant, and *f* is the frequency of the oscillating particle. Curiously enough, the energy exchange of such an oscillator can only happen in integer amounts of *hf*; hence the word *quantization*, meaning that we count the integer amounts of *hf* exchanged. It is ironic that it is precisely this quantization (which is proper not to matter but to space, as this work from Qin hints and that of Haramein explains accurately) that could make the scientific community believe that we live in a numerical simulation!
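The quantization described above can be illustrated in a few lines, using the exact SI value of the Planck constant; the function name and the example frequency are our own illustrative choices:

```python
# Planck–Einstein relation: energy is exchanged only in integer multiples of h*f.
h = 6.62607015e-34  # Planck constant in J·s (exact by the 2019 SI definition)

def exchanged_energy(n, f):
    """Energy of n quanta at frequency f (in Hz): E = n * h * f."""
    if n != int(n) or n < 0:
        raise ValueError("quanta come only in non-negative integer counts")
    return n * h * f

# One quantum of green light (~5.6e14 Hz) carries roughly 3.7e-19 joules
print(exchanged_energy(1, 5.6e14))
```

Asking for a fractional number of quanta is rejected outright, which is the whole point: nature keeps the books in integers of *hf*.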

"What I'm doing is replacing this model (the physics equation) with a type of black box that can produce accurate predictions without using a traditional theory or law." – Hong Qin.

As Qin explains, the payoff is that the network learned the laws of planetary motion after seeing very few training examples; his code really 'learns' the laws of physics, and in this sense the technique could also lead to the development of a traditional physical theory.

Qin’s collaborator, Palmerduca, says here, "While in some sense this method precludes the need of such a theory, it can also be viewed as a path toward one. When you're trying to deduce a theory, you'd like to have as much data at your disposal as possible. If you're given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set."

Meanwhile, Vanchurin’s proposal applies the methods of statistical mechanics to study the behavior of neural networks, and he found that in certain limits the learning (or training) dynamics of neural networks are very similar to quantum dynamics. He starts with a precise model of neural networks and studies its behavior in the limit of a large number of neurons, which somehow mimics the passage from a state of quasi-equilibrium (a quantum state) to a state far from equilibrium (a classical state). And this is precisely how the world around us seems to work, and his model too. Additionally, we know that quantum mechanics works at very small scales, while relativity works at very large scales, so his model could address this issue as well, connecting them “fluidly”.

Given the success that quantum physics has had in many regimes, and given the fact that the very big is composed of the very small, most physicists would agree that quantum mechanics is the fundamental theory and everything else emerges from it. Vanchurin considers a different approach: that a microscopic neural network is the fundamental structure and everything else, i.e., quantum mechanics, general relativity, and macroscopic observers, emerges from it. The main reason for this is that neural networks are extremely efficient at giving rise to emergent properties.

It is not clear how emergent properties would arise from Qin’s model. It seems the two proposals could complement each other, and a combination of both could give rise to the so-called grand unification theory, though it may take a considerable amount of time to reach that stage. It is also not clear how either model, Qin’s or Vanchurin’s, could provide a comprehensible, coherent unification theory relying on physics equations that describe mechanisms and processes (as we are used to in current physics) if both are black boxes.

The good news is that with the Generalized Holographic model, we are already there!

**RSF in Perspective**

Regarding the example of an apple as a finite, exchangeable object, here is where the magic comes into play: the space between objects, or the separation between them, can be interpreted as a discontinuity in space. These objects disrupt the continuity of space, as if they emerged from a division of space. Additionally, even in a vacuum, the space between objects is not just dividing; it is also connecting, since it is filled with electromagnetic oscillations (the quantum vacuum fluctuations), which we know have macroscopic effects, as proved many times through the Casimir effect, such as proposed here.

Therefore, instead of trying to see how far the division of matter goes (like colliding particles to reach the smallest or most fundamental constituent), Haramein’s intuition says we should look instead for the fundamental pattern of division of space. This is what he achieves with the fundamental holographic surface-to-volume ratio Φ. This solution to quantum gravity, which allowed the proton charge radius to be computed within experimental accuracy (while the Standard Model is off by about 4%), has further implications for this pattern of division of space, having to do with the fractalization of space and the emergence of mass, forces, and fields. His latest paper, entitled Scale Invariant Unification of Forces, Fields and Particles in a Quantum Vacuum Plasma (to be published soon), will provide us a meaningful unifying theory instead of a black-box theory.
