"I regard consciousness as fundamental, matter is derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness. There is no matter as such; it exists only by virtue of a force bringing the particle to vibration and holding it together in a minute solar system; we must assume behind this force the existence of a conscious and intelligent mind.
The mind is the matrix of all matter" - Max Planck
Consciousness has been a controversial subject within science, as it is not just about explaining a particularly complex phenomenological state of the brain – it pierces right to the heart of our conception of the material world. An investigation of the nature of consciousness, as it turns out, is inextricably linked with the exploration of the nature of reality. This is epitomized in the centuries-old adage "if a tree falls in the forest, and no one is around to hear it, does it make a sound?" To what extent is objective reality dependent on the observer? Most of us would agree that of course it makes a sound, as sound is just mechanical vibrations propagating through air molecules.
Yet this question has resurfaced in the form of Schrödinger's cat, a thought experiment posed in part to demonstrate the non-physical nature of the Heisenberg-Bohr model of quantum theory, also known as the Copenhagen interpretation, which remains the predominant interpretation of quantum mechanics. Such interpretations arose from attempts to explain the physical mechanism of the famous double slit experiment, which some physicists considered to have no classical explanation.
The double slit experiment
The Copenhagen interpretation has led to the inference that, at the quantum scale, the measurement determines the outcome of the observed event, and that the observer and the observed can be isolated from the system they are embedded in (i.e., from all other frames). In quantum mechanics, the state of a particle (i.e., its position, energy, momentum, and evolution in time) is described by its wave function 𝜓(x,t), where x stands for the coordinates (in this case, in one dimension, along the x axis) and t stands for time. During an experiment to determine the state of a quantum particle, the measurement itself supposedly produces the reduction of the probability amplitude (known as the collapse of the wave function) into a definite event. Before the collapse or measurement, the original wave function 𝜓(x,t) is a superposition of "probable outcomes", each one weighted by a coefficient called the probability amplitude (whose squared magnitude gives the probability of finding the particle in that particular state); by performing an experiment, the setup has "projected the measurement" onto one of those outcomes, whether there are conscious beings observing it or not. This is then interpreted as a particle having no real physical existence until it is somehow observed or measured, assigning the observer the role of a "projector of the reality around him." This is a misinterpretation of quantum mechanics, since the collapse of the wave function does not depend on an agent of awareness (a conscious being) observing it; any interaction with the environment will result in decoherence. Additionally, recent experimental studies suggest a different interpretation of the double slit experiment, based on pilot waves observed in the fluid dynamics of classical systems.
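To make the probability-amplitude language above concrete, here is a minimal numerical sketch (an illustration with made-up amplitudes, not a model of any cited experiment) of a two-outcome superposition: the amplitudes are complex coefficients, their squared magnitudes give the outcome probabilities, and a single "measurement" samples one definite outcome, while many repetitions recover the statistics.

```python
import random

random.seed(42)

# A toy two-outcome superposition (illustrative numbers): psi = c1|A> + c2|B>.
# c1 and c2 are probability amplitudes; their *squared magnitudes*
# are the outcome probabilities (the Born rule).
c1 = complex(0.6, 0.0)
c2 = complex(0.0, 0.8)

p1 = abs(c1) ** 2   # 0.36
p2 = abs(c2) ** 2   # 0.64
assert abs(p1 + p2 - 1.0) < 1e-12   # the state is normalized

def measure():
    """One 'collapse': sample a single definite outcome with Born-rule weights."""
    return "A" if random.random() < p1 else "B"

# Each run yields one definite outcome; repeating the identical preparation
# many times recovers the statistics encoded in the amplitudes.
counts = {"A": 0, "B": 0}
for _ in range(100_000):
    counts[measure()] += 1
print(abs(counts["A"] / 100_000 - p1) < 0.01)  # frequencies match |c1|^2
```

Note that nothing in the sampling step refers to an aware agent; "measurement" here is just a projection onto one outcome with the prescribed weights.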
The concept that an observer generates the reality in which an event occurs, such as the sound emitted by the falling tree, assumes an isolation of the frame of reference relative to the event. That is, all interacting parts of the system – the air molecules, the birds in the neighboring tree, the microbial life all around and in the tree, and so on – can be considered subsystem frames, "observers", experiencing the event from different perspectives, a point recently expounded by others as well.
Is there a mechanism by which the relationship between reference frames generates a collective behavior that eventually evolves into a state of self-awareness?
There is increasing interest in, and concern about, the nature of reality, and many scientists and technological authorities, such as Elon Musk, have claimed that it is quite probable we live in a numerical simulation, or that the universe is a neural network.
Additionally, many scientists and neuroscientists work under the assumption that consciousness is an epiphenomenon of the brain. How can we tell whether consciousness is an emergent property of the brain's complexity (hence mainly subjective; one could even call it an illusion), or whether it is part of a larger process and perspective?
Even from an evolutionary perspective, if consciousness is simply an epiphenomenal accident of brain neurology, the question remains: why did consciousness evolve at all? Many researchers have demonstrated that purely algorithmic computational functions are sufficient to determine everything from the complex behavior of flocking birds and shoaling fish to simple behaviors such as avoiding predators, finding food, and mating. As such, wouldn't the evolutionary development of consciousness, and certainly of self-awareness, be superfluous? Following the consensus model of evolution and development, consciousness would not arise, and indeed this is what many theorists have posited. This essentially asserts that consciousness is just an illusion: the illusory experience of what we call consciousness occurs after all mechanically and computationally determined processes have taken place, suggesting that we are no more than genetically preprogrammed automatons, as are all other organisms.
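The claim that purely algorithmic rules suffice for flocking can be illustrated with a minimal boids-style sketch (the rule weights and the alignment-order measure below are illustrative choices, not taken from any specific study): each agent applies three mechanical local rules (cohesion, alignment, separation) with no awareness of the flock as a whole, yet global alignment increases.

```python
import math
import random

random.seed(1)

# Minimal boids-style flocking sketch: each agent follows three purely
# mechanical local rules (cohesion, alignment, separation).
N, STEPS = 30, 200
COHESION, ALIGNMENT, SEPARATION, DT = 0.005, 0.1, 0.05, 0.1

pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(N)]
# Velocities get a slight common drift so the flock has a mean heading.
vel = [[random.uniform(-0.5, 1.5), random.uniform(-1, 1)] for _ in range(N)]

def step():
    cx = sum(p[0] for p in pos) / N        # flock centroid
    cy = sum(p[1] for p in pos) / N
    mx = sum(v[0] for v in vel) / N        # mean velocity
    my = sum(v[1] for v in vel) / N
    new_vel = []
    for i in range(N):
        sx = sy = 0.0                      # separation: push away from close neighbors
        for j in range(N):
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            if i != j and dx * dx + dy * dy < 0.25:
                sx, sy = sx + dx, sy + dy
        new_vel.append([
            vel[i][0] + COHESION * (cx - pos[i][0])
                      + ALIGNMENT * (mx - vel[i][0]) + SEPARATION * sx,
            vel[i][1] + COHESION * (cy - pos[i][1])
                      + ALIGNMENT * (my - vel[i][1]) + SEPARATION * sy,
        ])
    for i in range(N):
        vel[i] = new_vel[i]
        pos[i][0] += DT * vel[i][0]
        pos[i][1] += DT * vel[i][1]

def alignment_order():
    """|mean velocity| / mean speed: approaches 1.0 for a fully aligned flock."""
    mx = sum(v[0] for v in vel) / N
    my = sum(v[1] for v in vel) / N
    mean_speed = sum(math.hypot(v[0], v[1]) for v in vel) / N
    return math.hypot(mx, my) / mean_speed

before = alignment_order()
for _ in range(STEPS):
    step()
after = alignment_order()
print(before < after)  # purely local rules increase global alignment
```

No rule here refers to the flock, a goal, or any awareness; the coherent collective motion is entirely a byproduct of the local updates, which is exactly the point the evolutionary argument turns on.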
Such inferences result from reducing systems to their component subunits, and thus largely fail to consider the emergent properties of systems with complex interactions at different scales. From the organismal to the molecular level, living systems may have synergistic, non-linear interactions beyond what a linear summation of the parts would predict. Concerning non-trivial quantum states in biological systems: when nonlocal quantum phenomena are extended to the macromolecular environment, unpredictable system behavior may emerge, and this may be a critical aspect of the information, awareness, and sentience processes of living matter. In this context, and for the purpose of defining local and nonlocal interactions as a collective behavior, as well as generalizations of entanglement, the holographic principle, and emergent properties, we should describe the living system as a whole greater than the summation of its parts, an approach referred to as synergetics, which results in non-predictable (non-mechanistic) behaviors and properties.
At this point, it is important to note that quantum computing and artificial intelligence (mainly artificial neural networks) are, through deep machine learning, reaching capabilities such that they can naturally achieve something standard physical theories have a rough time with: expressing emergent properties. Emergent properties, as mentioned before, are new properties that appear in a collective of individuals but are not necessarily present in any member of the collective. Even consciousness could be an example of an emergent property.
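A standard, uncontroversial illustration of an emergent property is Conway's Game of Life: every cell obeys one trivial counting rule, yet a "glider" (a coherent object that travels diagonally across the grid) appears at the level of the collective, even though no individual cell's rule mentions motion or shape.

```python
from collections import Counter

def step(live):
    """One generation of Life: a cell is alive next step if it has exactly
    3 live neighbors, or 2 live neighbors while currently alive."""
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider, as (row, col) coordinates:
#   .O.
#   ..O
#   OOO
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

cells = glider
for _ in range(4):
    cells = step(cells)

# After 4 generations the same shape reappears, shifted one cell
# diagonally (down-right for this orientation): a moving "object"
# that exists only at the collective level.
print(cells == {(r + 1, c + 1) for (r, c) in glider})  # prints True
```

The glider is a property of the pattern, not of any cell, which is what "not present in any member of the collective" means in practice.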
AI and artificial neural networks are exceptionally good at correlating variables, but at a very high price: we gain information on the what at the expense of the how and why. The model becomes a black box because it reveals and weighs correlations between variables, but the process explaining how and why those variables are correlated is highly nonlinear and gets imprinted in the neural network; it becomes the network itself. Software and hardware mix; they are no longer separate entities. They too become correlated and form a complex system. The reverse also holds true: the science of complex systems is increasingly approached through machine learning and neural networks. And given the difficulty and almost unsolvable complexity of our current physical models – for instance, the standard model of particle physics – it would not come as a surprise if we started using AI to fill in the gaps and repair the inconsistencies in mainstream theories, or even to pursue a theory of everything. In this quest for more precise results, we would lose richness in theoretical modeling and mechanisms.
One of the most important implications of the surface-to-volume holographic ratio found by Nassim Haramein in his generalized holographic model (which offers a solution to quantum gravity) is the possibility that consciousness is not localized in the brain; instead, the brain, and the rest of the body, are receivers/transmitters wired to a universal information network, and complexity would emerge from the feedback-feedforward mechanism between them. If so, then consciousness would not be limited to a localized region of matter, nor would it be subjective to a single individual; it would permeate all things, including the space in between, and the feedback-feedforward mechanism would be a source of organization and structure. In this sense, consciousness could be considered real, rather than an "illusion" or maya, because it is not only inside the brain but also outside it, acting as an active agent in nature and reality.
Therefore, the physical meaning provided by the Generalized Holographic Model is fundamentally important. Its entirely analytical and geometrical solutions allow us to keep track of the mechanisms and physical phenomena involved in the nature of reality. It may be that awareness is a non-local mechanism arising from, and intrinsic to, the interactivity of Planck-scale spacetime geometry, one that enables nonlocal and nonlinear signal integration (we refer to this multiply connected spacetime geometry as the micro-wormhole network) and forms an integral function within physical and biological processes. The information dynamics underlying physical processes would therefore require a self-organizing framework emerging from characteristics such as inter-communication, memory, hysteresis, iterative feedback mechanisms, retro-causal influences, and local and nonlocal interactions, which we refer to as spacememory.
Since the stochastic model of modern cosmology assumes that order in the universe arises from random processes, the order we witness is rendered highly improbable, as this model neglects feedback-feedforward processes from first principles. It is equivalent to the analogy given by Hoyle of a blind person solving a scrambled Rubik's cube by chance. Nevertheless, randomness remains the prevailing framework for describing quantum and cosmological order.
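Hoyle's analogy can be put in rough numbers. Assuming (purely for illustration) that the blind solver makes one uniformly random attempt per second and succeeds only by landing on the single solved state by chance, and using the known count of Rubik's cube configurations (about 4.3 × 10¹⁹), the expected waiting time dwarfs the age of the universe:

```python
# Quantifying Hoyle's blind-solver analogy. Illustrative assumptions:
# one uniformly random attempt per second, success only by hitting the
# single solved state by pure chance.
CONFIGURATIONS = 43_252_003_274_489_856_000   # known number of Rubik's cube states
SECONDS_PER_YEAR = 365.25 * 24 * 3600
AGE_OF_UNIVERSE_YEARS = 1.38e10               # approx. 13.8 billion years

# A uniform random guess succeeds with probability 1/CONFIGURATIONS,
# so the expected number of attempts is CONFIGURATIONS itself.
expected_years = CONFIGURATIONS / SECONDS_PER_YEAR

print(f"{expected_years:.2e} years")                  # 1.37e+12 years
print(round(expected_years / AGE_OF_UNIVERSE_YEARS))  # about 99 ages of the universe
```

(Hoyle's actual blind solver twists the cube rather than sampling states uniformly, so this is a simplified lower-style bound on the idea, not a model of the analogy's mechanics.)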
The only scenario where this feedback-feedforward mechanism is allowed in conventional modeling is in biology, where the extremely high complexity could afford such an event. It is firmly believed that information exchange can only happen between biological entities, because they are the only ones with the necessary hardware. As reasonable as this argument seems, there is a prior question: how could this extraordinary complexity arise from randomness in the first place? Does this view neglect the scale-invariant complexity inherent to the holofractal order of the universe?
Note to the reader: this article is part of section 7.3, Module 7, of Resonance Academy Unified Science Course, available for free by registering in resonancescience.org.
Article by Dr. Ines Urdaneta, Physicist, and William Brown, Biophysicist, Research Scientists at Resonance Science Foundation
DENNETT, D. (1991). Consciousness Explained. Little, Brown and Co.
DAWKINS, R. (1989). The Selfish Gene. Oxford: Oxford University Press.
MALDACENA, J., & SUSSKIND, L. (2013). Cool Horizons for Entangled Black Holes. arXiv:1306.0533.
HARAMEIN, N. (2013). Quantum Gravity and the Holographic Mass. Physical Review & Research International, 270-292.
FULLER, B., & APPLEWHITE, E. (1982). Synergetics: Explorations in the Geometry of Thinking. Macmillan Publishing Co., Inc.