When your understanding of physics establishes that empirical infinity is a very large number, and that its inverse is a very small one setting the scaling system of the universe, it soon becomes impossible to securely observe what must be the foundational details.
It has been possible to image a neutron, sort of. Yet that neutron could be constructed from some 600 dark matter elements, and each dark matter element is likely constructed from around 1,200 further components, all while the implied scale drops by a factor of a thousand and then a million.
So far there is no plausible way to use our clumsy hardware to see any of this, and we may never be able to.
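To make the implied scale ladder concrete, here is a rough sketch. The only measured anchor is the neutron's size, roughly $10^{-15}$ m; the two reduction factors are the conjectures above, not established values:

\[
r_{\text{neutron}} \sim 10^{-15}\,\text{m}, \qquad
r_{\text{dark element}} \sim \frac{10^{-15}}{10^{3}}\,\text{m} = 10^{-18}\,\text{m}, \qquad
r_{\text{component}} \sim \frac{10^{-18}}{10^{6}}\,\text{m} = 10^{-24}\,\text{m}.
\]

Both lower rungs sit many orders of magnitude below anything current instruments can resolve, which is the point.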
Good science starts with learning to collect observations, such as they are, to establish a phenomenon. It continues through learning to study those observations and to find many more of them. From that it becomes possible to refine a potential conjecture into something you can trust enough to test.
If it becomes impossible to collect data, then you must contrive blank-sheet conjectures that you then learn to bound and test. This is what we really have with quantum theory, and it has been a fruitful approach to the problem of seeing the physical at the scales involved. My Cloud cosmology is orthogonal to the quantum approach, and thus allows me to start with creation itself and self-assemble the universe to the point at which our observations become resolved.
Both are good science, as they attack the observations from two separate directions of inquiry. Bad science takes the form of manipulating data to produce desired conclusions, or of outright ignoring the phenomena and bad-mouthing them.
What is good science?
Demanding that a theory is falsifiable or observable, without any subtlety, will hold science back. We need madcap ideas
The Viennese physicist Wolfgang Pauli suffered from a guilty conscience. He’d solved one of the knottiest puzzles in nuclear physics, but at a cost. ‘I have done a terrible thing,’ he admitted to a friend in the winter of 1930. ‘I have postulated a particle that cannot be detected.’
Despite his pantomime of despair, Pauli’s letters reveal that he didn’t really think his new sub-atomic particle would stay unseen. He trusted that experimental equipment would eventually be up to the task of proving him right or wrong, one way or another. Still, he worried he’d strayed too close to transgression. Things that were genuinely unobservable, Pauli believed, were anathema to physics and to science as a whole.
Pauli’s views persist among many scientists today. It’s a basic principle of scientific practice that a new theory shouldn’t invoke the undetectable. Rather, a good explanation should be falsifiable – which means it ought to rely on some hypothetical data that could, in principle, prove the theory wrong. These interlocking standards of falsifiability and observability have proud pedigrees: falsifiability goes back to the mid-20th-century philosopher of science Karl Popper, and observability goes further back than that. Today they’re patrolled by self-appointed guardians, who relish dismissing some of the more fanciful notions in physics, cosmology and quantum mechanics as just so many castles in the sky. The cost of allowing such ideas into science, say the gatekeepers, would be to clear the path for all manner of manifestly unscientific nonsense.
But for a theoretical physicist, designing sky-castles is just part of the job. Spinning new ideas about how the world could be – or in some cases, how the world definitely isn’t – is central to their work. Some structures might be built up with great care over many years, and end up with peculiar names such as inflationary multiverse or superstring theory. Others are fabricated and dismissed casually over the course of a single afternoon, found and lost again by a lone adventurer in the troposphere of thought.
That doesn’t mean it’s just freestyle sky-castle architecture out there at the frontier. The goal of scientific theory-building is to understand the nature of the world with increasing accuracy over time. All that creative energy has to hook back onto reality at some point. But turning ingenuity into fact is much more nuanced than simply announcing that all ideas must meet the inflexible standards of falsifiability and observability. These are not measures of the quality of a scientific theory. They might be neat guidelines or heuristics, but as is usually the case with simple answers, they’re also wrong, or at least only half-right.
Falsifiability doesn’t work as a blanket restriction in science for the simple reason that there are no genuinely falsifiable scientific theories. I can come up with a theory that makes a prediction that looks falsifiable, but when the data tell me it’s wrong, I can conjure some fresh ideas to plug the hole and save the theory.
The history of science is full of examples of this ex post facto intellectual engineering. In 1781, William and Caroline Herschel discovered the planet Uranus. Physicists of the time promptly set about predicting its orbit using Sir Isaac Newton’s law of universal gravitation. But in the following decades, as astronomers followed Uranus’s motion in its slow 84-year orbit around the Sun, they noticed that something was wrong. Uranus didn’t quite move as it should. Puzzled, they refined their measurements, took more and more careful observations, but the anomaly didn’t go away. Newton’s physics simply didn’t predict the location of Uranus over time.
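For reference, the law those physicists were applying gives the attractive force between any two masses $m_1$ and $m_2$ a distance $r$ apart as

\[
F = \frac{G\, m_1 m_2}{r^2},
\]

where $G$ is the gravitational constant. Apply it to the Sun and the known planets and it yields a predicted orbit for Uranus; it was the persistent mismatch between that prediction and the observations that caused the trouble.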
But astronomers of the day didn’t claim that the unexpected data falsified Newtonian gravity. Instead, they proposed another explanation for the strange motion of Uranus: something large and unseen was tugging on the planet. Calculations showed that it would have to be another planet, as large as Uranus and even farther from the Sun. In 1846, the French astrophysicist Urbain Le Verrier predicted the location of this hypothetical planet. Unable to get any French observatories interested in the hunt, he sent the details of his prediction to colleagues in Germany. That night, they pointed their telescopes where Le Verrier had told them to look, and within half an hour they spotted the planet Neptune. Newtonian physics, rather than being falsified, had been fabulously vindicated – it had successfully predicted the exact location of an entire unseen planet.
Flush with success, Le Verrier went after another planetary puzzle. Several years after his discovery of Neptune, it became clear to him and other astronomers that Mercury wasn’t moving as it was supposed to, either. The point in its orbit where it made its closest approach to the Sun, known as the perihelion, shifted a little more than Newton’s gravity said it should each Mercurial year, adding up to 43 extra arcseconds (a unit of angular measurement) over the course of a century. This is a tiny amount – less than one-30,000th of a full orbit around the Sun – but just as with Uranus before, the anomaly didn’t go away with persistent observation. It stubbornly remained, defying the ghost of Newton.
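That fraction is easy to verify: a full circle spans $360 \times 3600 = 1{,}296{,}000$ arcseconds, so

\[
\frac{43''}{1{,}296{,}000''} \approx \frac{1}{30{,}140},
\]

slightly less than one-30,000th of a full orbit per century.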
Once again, Newtonian gravity was not thrown out as falsified – at least, not immediately. Instead, Le Verrier tried the same trick again: pinning the anomaly on an unseen planet, a tiny rock so close to the Sun that it had been missed by all other astronomers throughout human history. He called the planet Vulcan, after the Roman god of the forge. Le Verrier and others sought Vulcan for years, lugging powerful telescopes to solar eclipses in an attempt to catch a glimpse of the unseen planet in the brief minutes of totality while the Sun was blocked by the Earth’s moon.
Le Verrier never found Vulcan. After his death in 1877, the astronomy community gave up the search, concluding that Vulcan simply wasn’t there. But even so, Newton’s gravity wasn’t discarded. Instead, astronomers of the time collectively shrugged and moved on. For years, the mystery of Mercury’s perihelion was unsolved, without any serious suggestion that Newton was wrong. Falsification was simply not on the menu.
Finally, in 1915, Albert Einstein used his brand-new theory of general relativity to show that he could succeed where Le Verrier had failed. General relativity was a new account of how gravity worked, superseding Newtonian physics – and it perfectly predicted the shift in the perihelion of Mercury. Einstein said he was ‘beside himself with joy’ when he realised that his theory could correctly solve this longstanding puzzle. Four years later, the British astronomer Arthur Eddington and his team took their powerful telescopes to an eclipse, not to hunt for Vulcan, but to confirm that starlight bent around the Sun as Einstein’s theory had predicted. They found that general relativity was right (though later investigations suggested that their results were marred by errors, despite reaching the correct conclusion); Einstein was instantly rocketed to fame as the man who had shown Newton wrong.
So Newtonian gravity was ultimately thrown out, but not merely in the face of data that threatened it. That wasn’t enough. It wasn’t until a viable alternative theory arrived, in the form of Einstein’s general relativity, that the scientific community entertained the notion that Newton might have missed a trick. But what if Einstein had never shown up, or had been incorrect? Could astronomers have found another way to account for the anomaly in Mercury’s motion? Certainly – they could have said that Vulcan was there after all, and was merely invisible to telescopes in some way.
This might sound somewhat far-fetched, but again, the history of science demonstrates that this kind of thing actually happens, and it sometimes works – as Pauli found out in 1930. At the time, new experiments threatened one of the core principles of physics, known as the conservation of energy. The data showed that in a certain kind of radioactive decay, electrons could fly out of an atomic nucleus with a range of speeds (and attendant energies) – even though the total amount of energy in the reaction should have been the same each time. That meant energy sometimes went missing from these reactions, and it wasn’t clear what was happening to it.
The Danish physicist Niels Bohr was willing to give up energy conservation. But Pauli wasn’t ready to concede the idea was dead. Instead, he came up with his outlandish particle. ‘I have hit upon a desperate remedy to save … the energy theorem,’ he wrote. The new particle could account for the loss of energy, despite having almost no mass and no electric charge. But particle detectors at the time had no way of seeing a chargeless particle, so Pauli’s proposed solution was invisible.
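In modern terms, Pauli’s bookkeeping is simple. Each decay releases a fixed energy $Q$; if the electron were the only light particle emitted, it would carry essentially all of $Q$ every time. Splitting the budget with an unseen neutral particle lets the electron’s share vary continuously (a schematic sketch, neglecting the tiny recoil energy of the nucleus):

\[
Q = E_{e} + E_{\nu}, \qquad 0 \le E_{e} \le Q,
\]

with the invisible particle carrying away the balance $E_{\nu} = Q - E_{e}$, so that energy is conserved in every event even as the measured electron energies spread across a whole spectrum.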
Nonetheless, rather than agreeing with Bohr that energy conservation had been falsified, the physics community embraced Pauli’s hypothetical particle: what came to be known as a ‘neutrino’ (the little neutral one), once the Italian physicist Enrico Fermi refined the theory a few years later. The happy epilogue was that neutrinos were finally observed in 1956, with technology that had been totally unforeseen a quarter-century earlier: a new kind of particle detector deployed in conjunction with a nuclear reactor. Pauli’s ghostly particles were real; in fact, later work revealed that trillions of neutrinos from the Sun pass through our bodies every second, totally unnoticed and unobserved.
So invoking the invisible to save a theory from falsification is sometimes the right scientific move. Yet Pauli certainly didn’t believe that his particle could never be observed. He hoped that it could be seen eventually, and he was right. Similarly, Einstein’s general relativity was vindicated through observation. Falsification just can’t be the answer, or at least not the whole answer, to the question of what makes a good theory. What about observability?
It’s certainly true that observation plays a crucial role in science. But this doesn’t mean that scientific theories have to deal exclusively in observable things. For one, the line between the observable and unobservable is blurry – what was once ‘unobservable’ can become ‘observable’, as the neutrino shows. Sometimes, a theory that postulates the imperceptible has proven to be the right theory, and is accepted as correct long before anyone devises a way to see those things.
Take the debate within physics in the second half of the 1800s about atoms. Some scientists believed that they existed, but others were deeply sceptical. Physicists such as Ludwig Boltzmann in Austria, James Clerk Maxwell in the United Kingdom and Rudolf Clausius in Germany were convinced by the chemical and physical evidence that atomic theory was correct. Others, such as the Austrian physicist Ernst Mach, were unimpressed.
To Mach, atoms were a wholly unnecessary hypothesis. After all, anything that wasn’t observable couldn’t be considered a part of a good scientific theory – in fact, such things couldn’t even be considered real. To him, the archetype for a perfect scientific theory was thermodynamics, the study of heat. This was a set of empirical laws relating directly observable quantities such as the temperature, pressure and volume of a gas. The theory was complete and perfect as it was, and made no reference to anything unobservable at all.
But Boltzmann, Maxwell and Clausius had worked hard to show that Mach’s beloved thermodynamics was far from complete. Over the course of the rest of the 19th century, they and others, such as the American scientist Josiah Willard Gibbs, proved that the entirety of thermodynamics – and then some – could be re-derived from the simple assumption that atoms were real, and that all objects in everyday life were composed of a phenomenal number of them. While it was impossible in practice to predict the behaviour of every individual atom, in aggregate their behaviour obeyed regular patterns – and because there are so many atoms in everyday objects (way more than 100 billion billion of them in a thimbleful of air), those patterns were never visibly broken, even though they were the result only of statistical tendencies, not ironclad laws.
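The ‘thimbleful’ figure is a one-line ideal-gas estimate (assuming a thimble holds roughly $4\,\text{cm}^3$ of air at ordinary room conditions):

\[
N = \frac{pV}{k_B T} = \frac{(1.0\times10^{5}\,\text{Pa})(4\times10^{-6}\,\text{m}^{3})}{(1.38\times10^{-23}\,\text{J/K})(293\,\text{K})} \approx 1\times10^{20},
\]

that is, about 100 billion billion molecules.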
The idea of demoting the laws of thermodynamics to mere patterns was repugnant to Mach; invoking things too small to be seen was even worse. ‘I don’t believe that atoms exist!’ he blurted out during a talk by Boltzmann in Vienna. Atoms were too small to see even with the most powerful microscope that could possibly be built at the time. Indeed, according to calculations carried out by Maxwell and the Austrian scientist Josef Loschmidt, atoms were hundreds of times smaller than the wavelength of visible light – and would thus be forever hidden from view of any microscope relying on light waves. Atoms were unobservable. Thus Mach condemned them as unreal and unnecessary, extraneous to the practice of science.
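The pessimism was well-founded arithmetic. Taking the roughly one-nanometre molecular diameter of Loschmidt-era estimates (modern atomic diameters are a few times smaller still) against the 400–700 nm wavelengths of visible light gives

\[
\frac{\lambda_{\text{visible}}}{d_{\text{atom}}} \approx \frac{400\text{–}700\,\text{nm}}{\sim 1\,\text{nm}} \approx \text{several hundred},
\]

and a light microscope cannot resolve features much smaller than the wavelength it uses.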
Mach’s views were enormously influential in his native Austria and elsewhere in central Europe. His ideas led his compatriot Boltzmann to despair of convincing the rest of the physics community that atoms were real; this might have contributed to Boltzmann’s suicide in 1906. Yet physicists who did subscribe to Mach’s ideas often found themselves stymied in their work. Walter Kaufmann, a talented German experimental physicist, found in 1897 that cathode rays (the kind of rays used inside old TVs and computer monitors) had a constant ratio of charge to mass. But rather than accepting that cathode rays might consist of small particles with a fixed charge and mass, he heeded Mach’s warning not to postulate anything unobservable, and remained silent on the subject. Months later, the English physicist JJ Thomson found the same curious fact about cathode rays. But Mach’s views were less popular in England, and Thomson was comfortable suggesting the existence of a tiny particle that comprised cathode rays. He called it the electron, and won the Nobel Prize for its discovery in 1906 (as well as an eternal place in all introductory physics and chemistry textbooks).
Mach’s ideas certainly weren’t all bad; his writing inspired the young Einstein in his early work on relativity. Mach’s influence also extended to his godson, Pauli, the child of two fellow intellectuals in Vienna. Mach’s ideas played a major role in Pauli’s early intellectual development, and the words of his godfather were probably ringing in Pauli’s ears when he first suggested the idea of the neutrino.
Unlike Pauli, Einstein was not afraid of suggesting unobservable things. In 1905, the same year he published his theory of special relativity, he proposed the existence of the photon, the particle of light, to an unbelieving world. (He was not proven right about photons for nearly 20 years.) Mach’s ideas also inspired a vital movement in philosophy a generation later, known as logical positivism – broadly speaking, the idea that the only meaningful statements about the world were ones that could be directly verified through observation. Positivism originated in Vienna and elsewhere in the 1920s, and the brilliant ideas of the positivists played a major role in shaping philosophy from that time to the present day.
But what makes something ‘observable’? Are things that can be seen only with specialised implements observable? Some of the positivists said the answer was no, only the unvarnished data of our senses would suffice – so things seen in microscopes were therefore not truly real. But in that case, ‘we cannot observe physical things through opera glasses, or even through ordinary spectacles, and one begins to wonder about the status of what we see through an ordinary windowpane,’ the philosopher Grover Maxwell wrote in 1962.
Furthermore, Maxwell pointed out that the definition of what was ‘unobservable in principle’ depends on our best scientific theories and full understanding of the world, and so moves over time. Before the invention of the telescope, for example, the idea of an instrument that could make distant objects appear closer seemed impossible; consequently, a planet too faint to be seen with the naked eye, such as Neptune, would have been deemed ‘unobservable in principle’. Yet Neptune is undoubtedly there – and we’ve not only seen it, we sent Voyager 2 there in 1989. Similarly, what we consider unobservable in principle today might become observable in the future with the advent of new physical theories and observational technologies. ‘It is theory, and thus science itself, which tells us what is or is not … observable,’ Maxwell wrote. ‘There are no a priori or philosophical criteria for separating the observable from the unobservable.’
Even where theories propose identical observable outcomes, some are provisionally accepted while others are flatly rejected. Say I publish a theory stating that there are invisible microscopic unicorns with flowing hair, spiralled horns and a taste for partial differential equations; these unicorns are responsible for the randomness of the quantum world, pushing and pulling subatomic particles to ensure that they obey the Schrödinger equation, simply because they like that equation more than any other. This theory is, by its nature, totally observationally identical with quantum mechanics. But it is a profoundly silly theory, and would (I hope) be rejected by all physicists were someone to publish it.
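For the record, the equation the unicorns are said to favour governs how a quantum state $\psi$ evolves in time:

\[
i\hbar\,\frac{\partial\psi}{\partial t} = \hat{H}\,\psi,
\]

where $\hat{H}$ is the Hamiltonian (energy) operator and $\hbar$ the reduced Planck constant. Any theory whose hidden machinery reproduces exactly this evolution makes exactly the same predictions as quantum mechanics – which is the point of the example.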
Putting aside this glib example, the choices we make between observationally identical theories have a big impact upon the practice of science. The American physicist Richard Feynman pointed out that two wildly different theories that have identical observational consequences can still give you different perspectives on problems, and lead you to different answers and different experiments to conduct in order to discover the next theory. So it’s not just the observable content of our scientific theories that matters. We use all of it, the observable and the unobservable, when we do science. Certainly, we are more wary about our belief in the existence of invisible entities, but we don’t deny that the unobservable things exist, or at least that their existence is plausible.
Some of the most interesting scientific work gets done when scientists develop bizarre theories in the face of something new or unexplained. Madcap ideas must find a way of relating to the world – but demanding falsifiability or observability, without any sort of subtlety, will hold science back. It’s impossible to develop successful new theories under such rigid restrictions. As Pauli said when he first came up with the neutrino, despite his own misgivings: ‘Only those who wager can win.’
2 comments:
I'm thinking that at least half this article could be mooted by clarifying the distinction made in science between a theory and a hypothesis. In English, the term theory has a vaguer meaning and can promote exactly this confusion. Yes, we need wild ideas in science, but as hypotheses, not as theories. Theories are what hypotheses become once they have sufficient evidentiary support to be considered proven.
This article is a good illustration of the fact that new science is not created within the structure of science, but instead must involve ideas "outside the box" that science represents.
My father was Boris Podolsky, who predicted the discovery of "Quantum Entanglement" ("spooky action at a distance") in 1935 in a landmark paper with Einstein and Rosen. It was some 30 years before experimental technology caught up with theory and made the phenomenon "observable".
My father's explanation was that new science has to be "sufficiently crazy", by the standards of existing science, in order to have any chance of being a valuable addition to current scientific lore.
Bob Podolsky
borisheir@yahoo.com