
The Vacuum Catastrophe

By Dr. Inés Urdaneta, Physicist, Research Scientist at Resonance Science Foundation

One of the largest discrepancies found in modern physics is the ~122 orders of magnitude difference (i.e., 122 zeros!) between the vacuum energy density estimated by observations at the cosmological scale (a density which is represented by the cosmological constant) and the quantum vacuum energy density at the Planck scale as calculated or predicted by quantum physics.

Just to grasp the magnitude of this difference of 122 zeros, we must recall that each position in a number corresponds to an order of magnitude. For instance, 10 is one order of magnitude bigger than 1, and 100 is two orders of magnitude bigger than 1; as we keep adding zeros, the growth is exponential. From this perspective, the size of a proton is of the order of 10⁻¹⁵ m (meaning that, compared to a ruler one meter long, the proton is 15 orders of magnitude smaller, or a quadrillion times smaller), while the diameter of the observable universe is approximately 10²⁷ m, which is 27 orders of magnitude bigger than the ruler (or an octillion times bigger), and hence 27 + 15 = 42 orders of magnitude larger than the proton. Roughly speaking, then, the universe is some 40 orders of magnitude larger than a proton. A difference of 122 orders of magnitude is far larger still!
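As a quick check of this kind of bookkeeping, the short Python sketch below computes the same order-of-magnitude differences as base-10 logarithms of ratios, using round values for the sizes discussed above (the universe figure is the commonly quoted ~93 billion light-years):

```python
import math

# Rough sizes in meters: a proton, a one-meter ruler, and the observable
# universe (~93 billion light-years across, the commonly quoted figure).
proton   = 1e-15
ruler    = 1.0
universe = 8.8e26

# An order-of-magnitude difference is the base-10 logarithm of a ratio.
print(round(math.log10(ruler / proton)))       # 15: ruler vs. proton
print(round(math.log10(universe / ruler)))     # 27: universe vs. ruler
print(round(math.log10(universe / proton)))    # 42: universe vs. proton
```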

Image courtesy of Astrophysicist Dr. Amira Val Baker.

To properly address the Vacuum Catastrophe, we must first understand its origin. The huge discrepancy of 122 orders of magnitude between the vacuum energy density at the quantum scale and the vacuum energy density at the cosmological scale expresses a serious incompatibility between general relativity (Einstein field equations) and quantum theory (Quantum Field theory equations). This means that we have one kind of physics that explains and predicts the very small scale, and another very different theory that explains the very big scale. This is inconsistent because the very big is composed of the very small; therefore, there must be a coherent transition bridging them.

To understand this discrepancy and why the generalized holographic model solves it, in the following we address how General Relativity and Quantum physics estimate the vacuum energy density at the cosmological and quantum scales, respectively. Then, in a second article, we will compare these estimations to the exact prediction for the vacuum energy density given by the generalized holographic approach.

 

Vacuum energy density determined by General Relativity

General relativity is the geometric theory of gravitation published by Albert Einstein in 1915, and it remains the current model of gravitation in modern physics. The core of General Relativity is founded on the famous Einstein field equations (EFE) that describe the curvature of spacetime caused by whatever matter and energy are present.

The common analogy is that of a rubber sheet representing a deformable spacetime; a bowling ball placed on it creates a cup-like depression. Objects curve the sheet more or less depending on their mass, as seen in the image below.

If a marble were placed near the depression created by the bowling ball, it would roll down the slope toward the ball as if pulled by a force. If the marble were instead given a sideways push, it would trace an orbit around the bowling ball, as if a steady pull toward the ball were swinging the marble along a closed path.

In this view, any region distant from massive cosmic objects such as stars has a non-curved spacetime; the rubber sheet there is completely flat. If one were to probe spacetime in that far region by sending out a ray of light or a test body, both the ray and the body would travel in perfectly straight lines, like a marble rolling across a flat rubber sheet.

Newton thought of gravity as a force: any two masses in the Universe instantaneously attract one another via a mutual force known as gravity. He found that the more massive the two bodies are, the greater the attractive force between them. The distance between the masses plays a fundamental role as well: the force decreases with the square of the distance, so the farther apart they are, the weaker the force.

In view of these observations, Newton formulated the law of universal gravitation, which he published in Philosophiae Naturalis Principia Mathematica on July 5, 1687. It is shown in the figure below, where G is an empirically established proportionality constant called the gravitational constant:

Image by Dennis Nilsson, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=3455682
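For readers who cannot see the figure, the relation it depicts is Newton's law of universal gravitation:

F = G · m₁ · m₂ / r²

where F is the attractive force, m₁ and m₂ are the two masses, r is the distance between them, and G ≈ 6.674 × 10⁻¹¹ m³ kg⁻¹ s⁻² is the gravitational constant.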

Centuries later, Einstein found a deeper truth: that this force called gravity arises from the shape of spacetime.

The Einstein field equations (EFE), first published in 1915, provide a description of gravity as a geometric property of space and time. Analogously to the way that Maxwell’s equations relate electromagnetic fields to the distribution of charges and currents, the EFE relate the spacetime geometry to the distribution of mass–energy, momentum (in simple words, velocity), and stress (tension) in spacetime.

As explained here, to compute the spacetime curvature at any point in space using General Relativity you need to know the locations, magnitudes, and distributions of all the masses in the Universe, and you also need information about:

  • how those masses are moving and how they've moved over time,
  • how all other (non-mass) forms of energy are distributed,
  • how the object you're observing/measuring is moving in a changing gravitational field,
  • and how the spatial curvature is changing over time.

Inserting such information (which is mainly obtained through observations) into Einstein’s Field Equations, you can compute the curvature of spacetime, which is necessary to calculate the expansion rate of the universe and the rotational speed of galaxies and compare them to the astronomical observations.

In technical terms, this means that Einstein’s field equations are a set of equations (specifically, non-linear partial differential equations) which can be expressed in compact form as a single tensor equation:
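For reference (reconstructing the relation that the figure below depicts), the field equations with the cosmological constant read:

Rμν − (1/2) R gμν + Λ gμν = (8πG / c⁴) Tμν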

where the indices μ and ν each run over the four spacetime coordinates (one time coordinate and three space coordinates). G is the gravitational constant, c is the speed of light, Rμν is called the Ricci curvature tensor, gμν is called the metric tensor, R is the scalar curvature, and Tμν is called the stress-energy tensor. This equation includes the constant Λ, known as the cosmological constant, to account for an additional source of energy: Λ represents an additional expanding (dark energy) contribution. The figure below depicts the terms in the equation above and their meaning.

The existence of dark energy and dark matter was inferred so that Einstein’s Field Equations could correctly predict the expansion of the universe and the rotation velocity of galaxies. In this view, dark energy is the source of an expanding force in the universe (it is what accounts for the Hubble constant in the leading theories), while dark matter provides an additional gravity source necessary to stabilize galaxies and clusters of galaxies, since there is not enough ordinary mass to keep them together given the accelerated expansion of the Universe. This additional gravity would also explain the rotation velocity of galaxies.

Roughly speaking, the left side of the equation in the figure above expresses the geometric deformation of spacetime produced by the energy-mass contribution on the right side of the same equation. This deformation of space also accounts for the gravitational waves first detected by LIGO in 2015, emanating from the merger of two black holes.

As physicist John Wheeler famously put it, “Space-time tells matter how to move; matter tells space-time how to curve.”

What energy and mass contributions are curving spacetime? In the following, we will address this important question, the answer to which is closely related to the Hubble constant H0.

 

The Hubble Constant H0

Before the expansion of the universe was confirmed, it was believed that the universe was static and composed of the standard matter we are all familiar with, known in technical terms as baryonic matter. This matter is composed of atoms, which are made of protons, neutrons, and electrons. Since most of the mass resides in the nucleus, the mass of an atom is essentially the mass of its protons and neutrons (free neutrons, in an unbound state, decay into protons). In the Standard Model of particle physics, protons and neutrons are classified as baryons; therefore, in this picture all mass should be baryonic in nature, i.e., composed of protons, neutrons, and electrons.

After cosmological observations confirmed there was an accelerated expansion of the universe, baryonic matter alone could not account for the additional gravitational pull needed to keep galaxies from tearing apart during such an expansion. Something seemed to be missing, or were the EFE wrong?

Although widely attributed to Edwin Hubble, the notion of the universe expanding at a determined rate was first derived from the EFE in 1922 by Alexander Friedmann. Friedmann published a set of equations, now known as the Friedmann equations, showing that the universe might expand, and giving the expansion rate in that case. Then, in a 1927 article, Georges Lemaître independently showed through mathematical derivations that the universe might be expanding and, by observing the proportionality between the recessional velocity of distant cosmological objects and their distances from Earth, suggested an estimated value for the proportionality constant.

When Edwin Hubble confirmed the existence of this expansion two years later and determined a more accurate value for the expansion rate, the constant was named after him: the Hubble constant H0. Hubble inferred this expansion by observing the light emitted by galaxies and found that galaxies are moving away from the Earth at speeds proportional to their distance. In other words, the farther galaxies are from Earth, the faster they are moving away from it. The light emitted by galaxies shifts toward the red end of the electromagnetic spectrum (i.e., to longer wavelengths, or lower frequencies) as they move farther away from Earth. This shift of light toward the red end of the spectrum is referred to as redshift.

H0 is therefore the constant of proportionality between the distance D from Earth to a galaxy, measured in megaparsecs (1 megaparsec = 3.09×10¹⁹ km), and the speed vr at which that distance increases (the recession velocity, i.e., the change of D with respect to time), usually given in kilometers per second. This is expressed mathematically as H0 = vr / D, and this equation is known as Hubble’s law, or the Hubble-Lemaître law (see figure below). The Hubble constant is most frequently given in units of (km/s)/Mpc, thus giving the speed in km/s of a galaxy which is 1 megaparsec away, and its value is close to 70 (km/s)/Mpc.

However, in SI units (International System of Units), H0 is expressed simply in s⁻¹ (inverse seconds), and the SI unit of its reciprocal 1/H0 is simply the second; this reciprocal is also known as the Hubble time. The Hubble constant can also be interpreted as the relative rate of expansion.

Image courtesy of Dr. Val Baker.
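As a small illustration of the unit conversion described above (a sketch using the round value of 70 (km/s)/Mpc and the megaparsec conversion quoted in the text), the snippet below expresses H0 in s⁻¹ and takes its reciprocal, the Hubble time:

```python
# Convert H0 from (km/s)/Mpc to SI units (1/s) and compute the Hubble time.
H0_kms_per_Mpc = 70.0        # round value quoted above
km_per_Mpc = 3.09e19         # 1 megaparsec in kilometers

H0_SI = H0_kms_per_Mpc / km_per_Mpc            # ~2.27e-18 1/s
hubble_time_s = 1.0 / H0_SI                    # ~4.4e17 s
hubble_time_yr = hubble_time_s / (3600 * 24 * 365.25)

print(f"H0 = {H0_SI:.2e} 1/s")
print(f"Hubble time = {hubble_time_yr:.2e} years")   # ~1.4e10 years
```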

With Hubble’s law, which gives the Hubble constant H0, we can calculate the cosmological vacuum energy density, also known as the critical density of the Universe ρcrit, using the expression ρcrit = 3 H0² / (8 π G), where G is the gravitational constant and H0 = 67.4 (km/s)/Mpc is the currently observed value (with an uncertainty of 0.5 (km/s)/Mpc), giving ρcrit ≈ ρvac = 5.83 × 10⁻³⁰ g/cm³.
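A quick numerical sketch of this formula, using standard constants: the bare expression 3 H0² / (8 π G) gives the total critical density, about 8.5 × 10⁻³⁰ g/cm³; multiplying by a Planck-mission-like dark-energy fraction of ΩΛ ≈ 0.69 (an assumption introduced here purely for illustration) recovers a value close to the 5.83 × 10⁻³⁰ g/cm³ quoted above.

```python
import math

# Critical density of the Universe: rho_crit = 3 H0^2 / (8 pi G)
G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
H0 = 67.4e3 / 3.086e22           # 67.4 (km/s)/Mpc converted to 1/s

rho_crit = 3 * H0**2 / (8 * math.pi * G)     # kg/m^3
rho_crit_cgs = rho_crit * 1e-3               # g/cm^3 (1 kg/m^3 = 1e-3 g/cm^3)

Omega_Lambda = 0.69              # assumed dark-energy fraction (illustrative)
rho_vac_cgs = Omega_Lambda * rho_crit_cgs

print(f"rho_crit ~ {rho_crit_cgs:.2e} g/cm^3")   # ~8.5e-30 g/cm^3
print(f"rho_vac  ~ {rho_vac_cgs:.2e} g/cm^3")    # ~5.9e-30 g/cm^3
```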

To clarify, the critical density ρcrit is the amount of radiant energy (expanding, or negative, often called dark energy) necessary to expand the Universe at the rate we currently observe, and it is also taken as the vacuum energy density at the very large, cosmological scale, ρvac.

Once it was confirmed that the universe was expanding at an accelerated rate and that the rate of expansion was given by Hubble’s law, the EFE alone were no longer able to explain these cosmological observations. It was thus inferred that there was a missing mass-energy contribution, named dark matter and dark energy, which must be inserted into the EFE for these equations to correctly predict the expansion of the Universe and the rotation velocity of galaxies. Dark matter is inserted as a mass-energy density contribution to the stress-energy tensor T, and it accounts for the additional source of gravity that could explain the rotational velocity of galaxies, as well as prevent galaxies from tearing apart because of the accelerated expansion of the Universe. Dark energy, on the other hand, accounts for this accelerated expansion and is inserted into the EFE as the cosmological constant Λ (for more clarity, please go back to the first figure in this section, where the terms T and Λ are explained).

 

Vacuum energy density determined by Quantum Field Theory

At the quantum scale, quantum field theory (QFT) describes a vacuum composed of an infinite number of electromagnetic field modes which are randomly fluctuating at all frequencies (these are known as vacuum fluctuations). These random vibrations, as described by QFT, have a quantized energy spectrum: each mode behaves as a quantum harmonic oscillator. The energies that a quantum harmonic oscillator can have are given by the equation below:
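Reconstructing the standard relation the text refers to, the allowed energies are:

En = ħω (n + 1/2),  with n = 0, 1, 2, 3, …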

where n is an integer that expresses the quantization of the harmonic oscillator (n = 0,1,2,3….) at each angular frequency ω. In the expression above, the Einstein-Planck relation for energy is written as E = ħω (in terms of the reduced Planck constant ħ and the angular frequency ω) instead of the expression we’ve been using, E = hf , in terms of the Planck constant h multiplied by the oscillation frequency f.  Since ħ = h/(2π) and  ω = 2 π f , we can easily verify that ħ ω = h f, so these expressions for E  are equivalent.

The figure below shows these energies E0  , E1 , E2 and so on as n increases, for a fixed frequency.

In the figure above we can appreciate that the minimum energy a quantum harmonic oscillator can have (corresponding to the fundamental state of the oscillator, at n = 0) is not E0 = 0 but E0 = (ħω)/2, for any angular frequency ω. The value E0 is known as the zero-point energy (ZPE): the energy of the vacuum fluctuations at the quantum scale at each frequency. Image by Allen McC. at German Wikipedia (own work), Public Domain, https://commons.wikimedia.org/w/index.php?curid=11542014

This nonzero value of the lowest energy E0 is one of the main contributions of quantum mechanics, and it is the crucial difference between the classical and the quantum harmonic oscillator: a classical harmonic oscillator at rest has no displacement and zero (kinetic) energy, while the quantum harmonic oscillator is never at rest; it always retains a residual vibration of energy E0 = (ħω)/2 for each value of ω, taken as the intrinsic energy of the quantum vacuum. This is closely related to the uncertainty principle of quantum mechanics.

Through the Casimir effect we have experimental proof of the existence of these zero-point oscillations. QFT thus calculates the quantum vacuum energy density at each point in space by summing the energies over all vibration frequencies f, or equivalently over all angular frequencies ω. Since in principle there is no limit to these frequencies, there are an infinite number of them to sum. Summing the contributions of all possible frequencies yields an infinite energy density at each point in space, unless the sum is cut off (renormalized) at the Planck value, as seen in the figure below.

Image courtesy of Dr. Val Baker
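As a toy numerical illustration of this divergence (a sketch only, not the full QFT mode-counting calculation): each mode of angular frequency ω contributes a zero-point energy ħω/2, so the summed zero-point energy keeps growing as the cutoff frequency is raised and never settles without a cutoff.

```python
# Toy illustration: the summed zero-point energy (hbar*omega/2 per mode)
# grows without bound as the cutoff frequency is raised.
hbar = 1.054571817e-34   # reduced Planck constant, J*s

def zero_point_sum(omega_max, n_modes=100_000):
    """Sum hbar*omega/2 over equally spaced frequencies up to omega_max.
    A crude discretization, just to show the trend with the cutoff."""
    d_omega = omega_max / n_modes
    return sum(hbar * k * d_omega / 2 for k in range(1, n_modes + 1))

for omega_max in (1e15, 1e18, 1e21):   # raise the cutoff by factors of 1000
    print(f"cutoff {omega_max:.0e} rad/s -> {zero_point_sum(omega_max):.2e} J")
```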

Therefore, instead of using the sum over all frequencies, QFT determines a vacuum density at the Planck scale by dividing the Planck mass, ml = 2.18 × 10⁻⁵ g, by the Planck volume, taken as a cube with V = l³ (l being the Planck length, l = 1.616 × 10⁻³³ cm).

This gives a value for the quantum vacuum energy density of roughly 5 × 10⁹³ g/cm³ (i.e., on the order of 10⁹³ g/cm³), a value which is supported by both theory and experimental results.
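A one-line check of that division, using the Planck values quoted above:

```python
# Planck-scale vacuum density: Planck mass divided by Planck volume (l^3).
planck_mass_g    = 2.18e-5      # grams
planck_length_cm = 1.616e-33    # centimeters

rho_planck = planck_mass_g / planck_length_cm**3
print(f"{rho_planck:.2e} g/cm^3")   # ~5.2e+93 g/cm^3
```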

Comparing the values of both energy densities, we observe the humongous difference in orders of magnitude between the quantum (≈10⁹³ g/cm³) and cosmological (≈10⁻³⁰ g/cm³) energy densities of the vacuum. This discrepancy of roughly 122 (or, counting the exponents, 93 + 30 = 123) orders of magnitude between the cosmological vacuum and the quantum vacuum is known as the ‘vacuum catastrophe’. (Dr. Amira Val Baker)
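The size of the mismatch can be checked the same way as before, via the base-10 logarithm of the ratio of the two densities (using the values computed and quoted above):

```python
import math

rho_quantum = 5.2e93     # g/cm^3, Planck-scale vacuum density
rho_cosmo   = 5.83e-30   # g/cm^3, cosmological vacuum density

print(math.log10(rho_quantum / rho_cosmo))   # ~122.9, i.e. ~123 orders of magnitude
```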

 

RSF in perspective:

This discrepancy is due to a misunderstanding about vacuum energy, its density, and its role in the cosmos, from the microcosm to the macrocosm. The transition from the quantum scale up to the universal scale expresses a gradient in vacuum density between scales, which we will address in more detail in an upcoming article explaining the solution to this problem. The complete calculation appears in the paper “Resolving the Vacuum Catastrophe: A Generalized Holographic Approach,” by Nassim Haramein and Dr. Amira Val Baker, published in the Journal of High Energy Physics, Gravitation and Cosmology in 2019.

 

... a side note on entropy and matter creation in an open system

At this point it is critical to mention the work "Thermodynamics of cosmological matter creation" (1988) by I. Prigogine et al., which proposes a type of cosmological history that includes large-scale entropy production. It is based on a reinterpretation of the matter-energy stress tensor in Einstein's equations which modifies the usual energy conservation laws, thereby including irreversible matter creation. This creation process corresponds to an irreversible energy flow from the gravitational field to the created matter constituents when the thermodynamics of open systems is considered in the framework of cosmology. This work shows that under such conditions, the second law of thermodynamics requires that space-time transforms into matter, while the inverse transformation is forbidden. From this work it appears that the usual initial singularity associated with the big bang is structurally unstable with respect to irreversible matter creation.

The corresponding cosmological history therefore starts from an instability of the vacuum rather than from a singularity. The instability at the origin of the universe is the result of fluctuations of the vacuum, in which black holes act as membranes that stabilize these fluctuations. In short, black holes would be produced by an "inverse" Hawking radiation process and, once formed, would decompose into "real" matter through the usual Hawking radiation. In this way, the irreversible transformation of space-time into matter can be described as a phase separation between matter and gravitation in which black holes play the role of "critical nuclei". This phase separation essentially takes place at the event horizon of a black hole, as the forthcoming paper by Nassim Haramein, entitled Scale invariant unification of forces, fields and particles in a Planck vacuum plasma, will explain in detail.

Note: This RSF article is inspired by material written by Dr. Val Baker, and it is part of section 7.2 of our Unified Science Course.
