A Theory of Everything That Explains Away The Paradoxes of Quantum Mechanics
Quantum mechanics is full of strange, paradoxical behavior. Now a small group of physicists think a more fundamental theory can make these paradoxes vanish.
Feb 15, 2022 5:05 AM
Credit: (agsandrew/Shutterstock)
One of the great triumphs of modern science is the theory of quantum mechanics, one of the most successful ideas in history. Every experiment ever done is compatible with its predictions and despite numerous attempts, physicists have never been able to create conditions in which it doesn’t work.
But quantum theory’s success forces physicists to accept a number of uncomfortable truths. For example, it allows “spooky action at a distance” between entangled particles. This occurs when two particles become so deeply linked that a measurement on one instantly determines the state of the other, regardless of the distance between them.
Physicists have studied this spooky action at a distance in detail; it is straightforward to observe in a quantum optics laboratory. It is now even exploited in technologies such as quantum cryptography.
Another uncomfortable conclusion is that the quantum universe is governed by probabilistic behavior. At any instant, lots of different things could happen, but the thing that actually happens is determined by probability, essentially on the roll of a die.
This thinking forces physicists to the conclusion that our deterministic experience of the universe is an illusion. Indeed, there is little debate among physicists that the foundation of reality is fundamentally and weirdly probabilistic.
Except among a small group of theoretical physicists led by the Nobel Prize winner, Gerard ‘t Hooft. For them, the idea of determinism – that one thing leads to another – is sacrosanct. They say the probabilistic properties of quantum mechanics can all be explained by a set of hidden laws that sit beneath the surface of quantum mechanics.
Nobody has observed these laws at work but that hasn’t stopped ‘t Hooft from trying to formulate what they must look like. And the stakes are high. He says that accessing these laws should lead to a theory of everything that solves many of the shortcomings that quantum physics currently cannot explain. Now, he outlines this approach, called superdeterminism, in a paper dedicated to Chen Ning Yang, another Nobel Prize winner, for his 100th birthday later this year.
Standard Model
First some background. The quantum theory that currently describes the fundamental particles and forces of nature is called the Standard Model of particle physics. And it has been hugely successful.
It describes the universe in terms of four fundamental forces and shows that three of these forces are different manifestations of the same thing. It has predicted the existence of numerous particles – seventeen in total – that experimentalists have gone on to discover using giant particle accelerators built for this specific purpose. This has been science at its most spectacular, resulting in numerous Nobel Prizes, not least for ‘t Hooft and Yang.
But in all this success, physicists have conveniently overlooked some of the Standard Model’s shortcomings. For example, the model predicts seventeen particles and depends on at least twenty different parameters, seemingly arbitrary numbers. “Any attempt to clean up this theory, in general results in producing more such parameters rather than less,” laments ‘t Hooft.
Neither is there any way to predict the strengths of various interactions between particles. Instead, the only way to find them is by careful, detailed measurement. That seems unsatisfactory to theorists.
At the heart of the problem, says ‘t Hooft, is the nature of quantum mechanics, that it dispenses with determinism to allow particles to become entangled, to exist in more than one place, to behave like waves and particles at the same time and so on.
By contrast, the theory of relativity is fundamentally deterministic. This is the other pillar of modern physics and its deterministic character appears to be fundamentally at odds with quantum theory. Nevertheless, any theory of everything must embrace them both.
Enter ‘t Hooft. His solution is to propose that beneath the surface, nature is fundamentally deterministic. This “superdeterminism” has profound implications. “Assuming an underlying model to be completely deterministic removes most of the ‘quantum paradoxes’ that were thought to be special to quantum mechanics alone,” he says.
For example, the ability of one entangled particle to influence another instantly must be an illusion. Superdeterminism suggests that the outcome is predetermined by another, deeper set of laws that are deterministic. But because we aren’t aware of these laws, the influence appears instantaneous.
Of course, this is a controversial idea. Physicists have long considered the possibility that quantum mechanics is incomplete, that it is missing a set of hidden variables that determine the outcome in experiments like this.
In 1981, the physicist John Bell described a thought experiment to illustrate this idea. He imagined a Professor Bertlmann who always wears socks of two different colors, say red and blue, and always puts them on at random. So on any given day, there is no way to know whether he wears the blue sock on his left foot and the red on his right, or vice versa.
Bertlmann's Socks
However, his students have worked out a clever way to predict the color of one sock without ever seeing it. As Bertlmann walks into the lecture theatre, they watch the foot that appears first to see what color sock it has.
Now imagine that moment. Before they see the sock there is no way to say which color each sock is. But as soon as the students see that the first sock is, say, blue, they instantly know that the other sock must be red. It’s almost as if the observation of one sock has determined the color of the other sock by spooky action at a distance.
Indeed, that’s how a naive observer might interpret the students’ uncanny ability to determine the color of the second sock. That is, until the naive observer discovers the hidden law of Bertlmann’s socks – that he always wears two different colors. Then it becomes clear there is no magic at work but instead a hidden variable that makes this experiment entirely deterministic.
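The socks story is easy to make concrete. Here is a minimal sketch in Python (the function name and color choices are illustrative, not from Bell's paper): the morning's random sock assignment is the hidden variable, the "always two different colors" rule is the hidden law, and the students' instant inference needs no action at a distance.

```python
import random

def bertlmann_day():
    """One day's hidden variable: which foot got which sock this morning."""
    left = random.choice(["red", "blue"])       # assigned at random
    right = "blue" if left == "red" else "red"  # hidden law: always different
    return left, right

# The students' "measurement": see one sock, infer the other instantly.
for _ in range(5):
    left, right = bertlmann_day()
    inferred_right = "blue" if left == "red" else "red"
    assert inferred_right == right  # perfect anticorrelation, no magic needed
```

The correlation looks spooky only until you know the hidden law; once you do, every observation is fixed in advance.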
Bell went on to show that if quantum mechanics were governed in this way by local hidden variables, there would be measurable consequences. Since then, physicists have hunted high and low for these consequences but their experiments have shown no evidence of local hidden variables.
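Those measurable consequences can be illustrated with a toy simulation. The sketch below (my own illustrative model, not one from Bell or 't Hooft) gives each particle pair a shared hidden "polarization angle" and lets each detector compute its ±1 outcome locally from that angle. For any such local hidden-variable model, the CHSH combination of correlations S can never exceed 2, whereas quantum mechanics predicts values up to 2√2 ≈ 2.83 – and experiments see them.

```python
import math
import random

def outcome(angle, lam):
    """Detector result (+1 or -1), fixed locally by the hidden variable lam."""
    return 1 if math.cos(lam - angle) >= 0 else -1

def correlation(a, b, trials=100_000):
    """Average product of the two detectors' outcomes at settings a and b."""
    total = 0
    for _ in range(trials):
        lam = random.uniform(0, 2 * math.pi)      # shared hidden variable
        total += outcome(a, lam) * outcome(b, lam)
    return total / trials

# CHSH combination of four detector settings: any local hidden-variable
# model stays at |S| <= 2; quantum mechanics reaches 2*sqrt(2).
a, a2, b, b2 = 0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = correlation(a, b) - correlation(a, b2) + correlation(a2, b) + correlation(a2, b2)
assert abs(S) <= 2.05  # small margin for statistical noise
```

This particular model in fact lands right at the classical bound of 2; no choice of local outcome functions can push it past that, which is exactly the constraint the experiments test.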
Most physicists interpret these experiments as proof that quantum mechanics cannot be governed by local hidden variables and at first glance, this spells disaster for ‘t Hooft’s approach.
But he says there is a way through this quagmire. By his thinking, superdeterminism is so fundamental that it influences not just the particles being measured but the entire experimental setup, including the observers themselves.
That’s because all the particles and forces involved share the same history of the universe. This shared history forces these experiments to appear paradoxical, as if there were spooky action at a distance, when they are actually deterministic. In other words, there is a loophole in Bell’s tests that allows the universe to trick us into thinking quantum mechanics is probabilistic.
‘t Hooft’s ideas are controversial but they promise much that the Standard Model cannot deliver – among them a theory of everything that reconciles relativity and quantum mechanics. He believes this can come about much in the same way as his colleague Yang laid the foundations for the Standard Model – through the study of symmetries, leading to the famous Yang-Mills field theory.
‘t Hooft’s ideas operate on an even smaller scale – the Planck length. This is so small that no current experiments can access it, which is why evidence is hard to get. But he believes that it is still possible to formulate a successful theory using a similar approach.
Basic Computing
By his own admission, ‘t Hooft is far from this point but he has begun to map out some of the features that his new theory must have. He says the universe on this level must work like a cellular automaton – a kind of computer that works out the value of all variables in the universe at a specific instant in time based on their values at the previous instant.
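The cellular-automaton idea can be made concrete with the simplest possible example – an elementary one-dimensional automaton (a toy illustration, not 't Hooft's actual construction). Every cell's value at one instant is a deterministic function of its own value and its two neighbours' values at the previous instant; the rule number (here Wolfram's rule 110) encodes that function in its bits.

```python
def step(cells, rule=110):
    """One deterministic update: each cell's next value depends only on
    itself and its two neighbours at the previous instant (wrapping edges)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and evolve; every row follows
# deterministically from the one before it.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Despite having no randomness anywhere, rules like this generate patterns complex enough that they look irregular to an observer who doesn't know the update rule – the flavour of the claim being made here for quantum mechanics.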
He says it is possible to derive models of this kind that behave probabilistically, like quantum mechanics, but are actually entirely deterministic beneath the surface. These models are not yet sophisticated enough to be thought of as theories of everything but they are proof of the principle ‘t Hooft relies on.
If any of this seems familiar, it is because ‘t Hooft is not the first to suggest that a cellular automaton can explain all the phenomena in the universe. The physicist Stephen Wolfram has long championed this approach to physics, working independently of mainstream science. Among Wolfram’s successes is to show how simple deterministic cellular automata can produce huge complexity. Recently, this blog covered the latest incarnation of Wolfram’s theory of everything based on this kind of approach. It is a beautiful idea.
Wolfram’s independence from mainstream science, and from other scientists, has left him ploughing a lonely furrow in physics. ‘t Hooft’s position is not quite the same but his ideas are also controversial.
So here’s an idea that might be more inflammatory still: perhaps it’s time for ‘t Hooft and Wolfram to collaborate. They have been pursuing similar ideas for some time and may find some useful synergy. And with a theory of everything at stake, what is there to lose?