The Crisis of Modern Science
Abram N. Shulsky
The larger political and philosophic phenomenon that may be called “modernity” was the result of an effort, by and large successful, to bring about a fundamental change in human life—a change as significant as that brought about by the victory of Christianity in the fourth-century Roman Empire. This change was proposed by a group of thinkers in the 16th and 17th centuries, among the most prominent of whom were Niccolò Machiavelli, Francis Bacon, René Descartes, Thomas Hobbes, Benedict Spinoza, and John Locke.
This claim is admittedly astounding, and may well seem incredible to some. Indeed, many scholars have rejected it. Various scholars have interpreted the thinkers noted above as being much less innovative in their thought than is implied by the notion of a “modern project.” Machiavelli, far from being a founder and promoter of “new modes and orders,” has been understood as striving for the revival of classical republicanism. The English editor of Bacon’s works, Thomas Fowler, saw him as “mid-way, as it were, between Scholasticism, on one side, and Modern Philosophy and Science, on the other.” John Locke has been seen, not as a bold innovator in the theory of natural law, but as a follower of “the judicious Hooker,” an Anglican theologian in the Thomistic tradition. This article won’t enter into this debate. However, regardless of what their intentions and self-understandings may have been, the changes in human life that have come into being since their time reflect much of what they wrote.
What was the modern project? In brief, it involved a new political approach aimed somewhat single-mindedly at security and prosperity, and a reformulation of human life on the basis of a new philosophic/scientific method aimed at increasing man’s power over nature. The former became the triumph, however beleaguered and uncertain, of liberal democracy as a mode of governance. The latter, which is most important for my purposes here, became modern science, with all the advances in technology and medicine it made possible.
The new science of nature can be regarded as new in at least two respects: a new approach, and a new goal. The new approach may be described as dogmatism based on skepticism: in other words, as Descartes explained in his Discourse on Method, the proper procedure for science is to discard every idea and notion that can possibly be doubted and then build up a solid structure of knowledge on the basis of what remains—that is, what is indubitably true. Any conclusions that could be reached by means of this method, Descartes claimed, would necessarily be known as confidently as the proofs of geometry.
The comparison to geometry is not accidental: Whereas the philosophic schools of antiquity continued arguing with each other for centuries without reaching any sort of agreement, the truths of geometry were unchallenged. Thus, the geometric method—producing by strictly logical means air-tight conclusions based on seemingly indubitable first principles or axioms—recommended itself as the way out of the “scandal” of the schools, the constant debate among them that seemed to go nowhere.
The radicalism of this approach may be seen in Francis Bacon’s discussion, in The New Organon (henceforth, TNO), of the “idols” which he claims “are now in possession of the human understanding.” Our ordinary ability to understand nature is so deficient that basically nothing we believe can be trusted. We can’t begin with our ordinary notions and then seek to refine them: “the human understanding is like a false mirror, which, receiving rays irregularly, distorts and discolors the nature of things by mingling its own nature with it.”
To understand nature, we must do more than observe it and reflect on what we see. We must question it by means of carefully designed experiments and precisely record the answers it gives us: “all the truer kind of interpretation of nature is effected by instances and experiments fit and apposite; wherein the sense decides touching the experiment only, and the experiment touching the point in nature and the thing itself.”
Instead of relying on our ordinary observations of nature, we must discover a solid basis for founding a reliable structure of knowledge. That basis can’t come from (unreliably perceived) nature; it must be found in ourselves. We need an “Archimedean” point (the justification for using that phrase will become clear below) from which to begin our construction of the scientific edifice. As Hobbes notes, we can know with certainty only what we ourselves make.
As important as this change of approach is—science would no longer be the refinement and correction of common opinion but a humanly constructed structure of logically consistent propositions—the even more important innovation of modern science is its goal. As Bacon emphasizes in The Advancement of Learning (henceforth, AL), the biggest change he is advocating has to do with the purpose of science: Knowledge should be not “a couch whereupon to rest a searching and restless spirit; or a terrace for a wandering and variable mind to walk up and down with a fair prospect,” but rather “a rich storehouse . . . for the relief of man’s estate.” The goal of knowledge is no longer to enhance the good of the individual knower (by, for example, freeing him from superstitious terrors or satisfying his innate desire to know), but to “establish and extend the power and dominion of the human race itself over the universe” (TNO). Ancient philosophy’s failure to adopt this as the goal of its activity represents, according to Bacon, “the greatest error of all the rest” (AL) with which it may be charged.
For Bacon, the goal of human mastery of nature comes down to this: “On a given body, to generate and superinduce a new nature or new natures is the work and aim of human power” (TNO). In other words, humans would be able to change any substance into any other, or, for that matter, into new, hitherto unknown substances that will have whatever qualities we want.
Although Descartes does not call attention to this point to the extent that Bacon does, he is in agreement with him: In Discours de la méthode he wrote that he felt compelled to publish his ideas once he saw that, as opposed to the “speculative philosophy which is taught in the schools,” they could enable us to become “as masters and possessors of nature.”
The key to developing this kind of science is to focus on efficient causes: “Human knowledge and human power meet in one; for where the cause is not known the effect cannot be produced . . . that which in contemplation is as the cause is in operation as the rule” (TNO). By knowing the efficient causes of various effects, humans may be able to produce them—“artificially,” as we would say. This, of course, cannot be guaranteed, but, without knowing the efficient cause of something, it would be sheer luck if humans stumbled across a method of producing it.
Underlying this project is the assertion that “in nature nothing really exists besides individual bodies, performing pure individual acts according to a fixed law . . . the investigation, discovery, and explanation [of this law] is the foundation as well of knowledge as of operation” (TNO). These laws have nothing in common with the notion of “natures” in the Aristotelean sense. Bacon certainly understands the source of the Aristotelean understanding: “when man contemplates nature working freely, he meets with different species of things, of animals, of plants, of minerals; whence he readily passes into the opinion that there are in nature certain primary forms which nature intends to educe” (TNO). However, as Bacon’s goal cited above makes clear, he regards Aristotelean “natures” as superficial; if his scientific project is successful, we will achieve, among other things, the alchemists’ dream of transmuting lead into gold—there is nothing about the “natures” of lead and gold which makes this impossible. (Indeed, from the point of view of modern physics, it is a matter of removing three protons and seven neutrons from each lead atom.)
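The nucleon arithmetic behind that parenthetical can be checked directly. The sketch below is a minimal illustration; the choice of isotopes (lead-207 and gold-197, the pairing consistent with the counts in the text) is my own assumption:

```python
# Nucleon bookkeeping for the transmutation of lead into gold.
# Isotope choice (Pb-207 -> Au-197) is assumed for illustration;
# other lead isotopes would give a different neutron count.

def nucleons(atomic_number, mass_number):
    """Return (protons, neutrons) for a nuclide."""
    return atomic_number, mass_number - atomic_number

pb_protons, pb_neutrons = nucleons(82, 207)  # lead-207: 82 p, 125 n
au_protons, au_neutrons = nucleons(79, 197)  # gold-197: 79 p, 118 n

protons_removed = pb_protons - au_protons     # 82 - 79 = 3
neutrons_removed = pb_neutrons - au_neutrons  # 125 - 118 = 7

print(protons_removed, neutrons_removed)  # 3 7
```

That the operation is stated so casually, of course, only underscores Bacon’s point: nothing in the “nature” of lead forbids it.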
However, the real causes of the phenomena we see are not visible to us unless we approach the problem methodically. Until we understand these real causes, we won’t be able to effect the changes we want: “For seeing that every natural action depends on things infinitely small, or at least too small to strike the sense, no one can hope to govern or change nature until he has duly comprehended and observed them” (TNO). In most cases, such “observation” can only be done by means of experiments, including the use of instruments which can detect these sub-microscopic events and reveal the results to us via counters, dials, and so forth.
Evolution of Modern Science
Despite Bacon’s importance for the development of modern science, it took several centuries before his vision began to take shape in reality. The first major development in science following the publication of Bacon’s works—Newton’s laws of motion and gravitation, which effectively did away with the notion that terrestrial and celestial objects were essentially different—didn’t require any investigation or understanding of the “infinitely small” bits of matter whose behavior, according to Bacon, underlie the observable phenomena. Similarly, it took a while before any technological innovations arose which depended on Baconian science—for example, the steam engines developed in the 18th and 19th centuries, which played such a big role in the industrial revolution, could be understood on the basis of pre-scientific common sense.
In the 19th century, however, with developments in areas such as electro-magnetism and chemistry, we began to enter a Baconian world in which the “secret springs” of nature were being understood and then harnessed by man for useful purposes, to produce effects of which common sense and naive observation would never have given us the smallest inkling. Bacon’s prediction, made centuries earlier, had been vindicated: After discussing the fortuitous discoveries of gunpowder, silk, and the magnet (he claimed no amount of speculation based on naive observation would have led men to suspect the existence of these items), he concluded: “There is therefore much ground for hoping that there are still laid up in the womb of nature many secrets of excellent use, having no affinity or parallelism with anything that is now known, but lying entirely out of the beat of the imagination, which have not yet been found out” (TNO).
Since those discoveries of the 19th century, the pace of scientific and technological progress has only accelerated. There is no need to catalogue all the ways in which scientific progress made possible the technologies that have changed our lives so much in the 20th and 21st centuries. While we can expect that this scientific progress will continue, in ways that will make real all sorts of technological possibilities, including some that we might regard today as redolent of science fiction, I believe we are at a point where we can fruitfully take stock of the modern scientific project and probe the challenges and paradoxes into which it is running—difficulties which, in hindsight at least, we can see were inherent in its initial structure and intention.
Divorce of Science from Philosophy
It is a commonplace to say that science became divorced (or, perhaps, emancipated) from philosophy in the modern period. From the scientific perspective, it is fair to say that philosophy is seen as a “handmaiden,” whose job it is to clear away any linguistic misunderstandings or puzzles, so that scientific progress can continue. At most, it can explain and justify the procedures scientists actually use, such as Karl Popper’s theory of falsification, which sought to “solve” the “problem of induction” as discussed by writers such as David Hume.
From the philosophic perspective, however, science may be characterized by its constrained ambition: It explicitly renounces any attempt to understand why we, or other beings, exist. As Bacon explained in his discussion of the “idols of the tribe,” the “human understanding” restlessly seeks for ultimate answers. We aren’t satisfied with “general principles in nature” (such as the laws of nature as science discovers them) but wish to attribute them to “something prior in the order of nature.” However, according to Bacon, we should treat the “general principles in nature” as “merely positive” (TNO); they cannot be referred to another or higher cause. In other words, according to Bacon, modern science begins with a “self-denying ordinance”—it cannot ask “ultimate” questions, such as why we (or anything else) are here. That must be left to religion or philosophy, although Bacon himself admonishes that only an “unskilled and shallow philosopher [would] seek causes of that which is most general” (TNO).
This self-restraint of science need not, in itself, be the cause of a crisis. Most scientists probably accept the view that since science doesn’t deal with “ultimate” questions, it cannot opine on questions of religious belief—at least core religious beliefs such as the existence of God and an afterlife, the notion of God as the ultimate creator of all that is, and so forth.
It is true that some scientists—the “new atheists”—now claim that a refutation of religion is scientifically possible. To some extent, they produce arguments that religion is highly improbable and seek to conclude from this that it is impossible. But what religious believer ever thought that revelation was anything but miraculous—and thus improbable? In addition, they seek to give a “naturalistic”—that is, evolutionary—account of the development of religious belief to counter the view that religious belief originated in revelation. We need not consider how compelling their accounts really are; of much greater theoretical importance is the argument that, if religious belief is the product of an evolutionary development process, it is hard to see why the same process does not explain philosophic beliefs as well, and ultimately the development of modern science itself. Thus science (despite what the new atheists assume) would have to allow that beliefs which evolve under evolutionary pressures can nevertheless be true.
Aside from the “new atheists,” many thoughtful scientists have looked to the very orderliness of nature—the fact that it obeys laws that can be expressed compactly and elegantly in mathematical formulae—for evidence that science has reached a level of fundamental truth. One of the 20th century’s leading physicists, Richard Feynman, said in a lecture on the law of gravitation that he was “interested not so much in the human mind as in the marvel of a nature which can obey such an elegant and simple law as this law of gravitation.” This (mathematically) elegant and simple law allows the scientist to predict how hitherto unobserved phenomena will play themselves out. Feynman asks: “What is it about nature that lets this happen, that it is possible to guess from one part what the rest is going to do? That is an unscientific question: I do not know how to answer it, and therefore I am going to give an unscientific answer. I think it is because nature has a simplicity and therefore a great beauty.” Somehow, Feynman is able to understand the results of modern science as pointing to a truth, albeit an “unscientific” one.
Divorce from the World of Experience
In his famous Gifford lectures of 1927, the astrophysicist Arthur Eddington began by distinguishing between the “familiar” table, which is reassuringly solid and substantial, and the “scientific” table, which is “mostly emptiness. Sparsely scattered in that emptiness are numerous electric charges rushing about with great speed; but their combined bulk amounts to less than a billionth of the bulk of the table itself.”
According to Eddington, this divorce between the humanly perceived world and the world according to science was a new development, due to scientific progress in delving into the composition of the atom:
Until recently there was a much closer linkage; the physicist used to borrow the raw material of his world from the familiar world, but he does so no longer. His raw materials are aether, electrons, quanta, potentials, Hamiltonian functions, etc., . . . There is a familiar table parallel to the scientific table, but there is no familiar electron, quantum, or potential parallel to the scientific electron, quantum, or potential. . . . Science aims at constructing a world which shall be symbolic of the world of commonplace experience. It is not at all necessary that every individual symbol that is used should represent something in common experience or even something explicable in terms of common experience.
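Eddington’s “less than a billionth” is, if anything, an understatement. A rough order-of-magnitude check (the radii below are standard textbook approximations, assumed here for illustration, not figures from Eddington’s lecture):

```python
# Rough estimate of the fraction of an atom's volume occupied by its nucleus.
# Both radii are order-of-magnitude textbook values, assumed for illustration.

ATOM_RADIUS_M = 1e-10     # typical atomic radius (~1 angstrom)
NUCLEUS_RADIUS_M = 1e-15  # typical nuclear radius (~1 femtometer)

# Volume scales as the cube of the radius, so cubing the ratio of radii
# gives the fraction of the atom occupied by "solid" nuclear matter.
volume_fraction = (NUCLEUS_RADIUS_M / ATOM_RADIUS_M) ** 3

print(volume_fraction)  # ~1e-15: a millionth of a billionth
```

On these assumptions the “scientific” table is emptier than Eddington’s figure by a factor of a million.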
As we have seen, this development was “baked into the cake” from the Baconian beginnings of the modern scientific effort. But contemporary science does much more than question our ordinary understanding of the character of the objects that we encounter in daily life. Due to developments such as relativity and quantum mechanics, science also questions our most basic understanding of space and time, not to say logic.
This is much more disconcerting. It is one thing to say that the “real” characteristics of external objects (that is, their characteristics as science describes them) may be distorted in the process of our perceiving them, so that what we perceive is not necessarily what is “really” there. After all, we are familiar with examples of this in daily life: We know that, for example, what appears to us as water far off in the desert is in fact a mirage. However, our concepts of space and time don’t come from the external world but rather are those we use to understand it. When science tells us that these concepts (Euclidean geometry, the three-dimensional character of space, the “absolute” character of time) are wrong, we are at a loss. How can we imagine—as special relativity tells us—that space and time are part of a single space-time continuum, such that two simultaneous events separated by a given distance (as it appears to us) can be equally validly described by another observer as sequential events separated by a different distance? Or that space itself can be distorted, compressed or expanded, as general relativity tells us? Or, perhaps even more strangely, that a measurement made on a particle at one point can “instantaneously” affect the characteristics of another particle indefinitely far away—that is, that “non-locality,” which Einstein derided as “spooky action at a distance,” not only exists but has been experimentally verified?
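The simultaneity claim can be made concrete with the Lorentz transformation. In the sketch below (the numbers are chosen purely for illustration), two events that are simultaneous and one light-second apart in one frame are, for an observer moving at 60 percent of the speed of light, sequential events separated by a greater distance:

```python
import math

# Lorentz transformation of an interval between two events, in units
# with c = 1 (distances in light-seconds, times in seconds).
# All numbers are assumptions chosen for illustration.

def lorentz_interval(dt, dx, v):
    """Transform the interval (dt, dx) into a frame moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * (dt - v * dx), gamma * (dx - v * dt)

# Two events simultaneous (dt = 0) and one light-second apart (dx = 1)
# in the original frame, viewed from a frame moving at 0.6c:
dt_prime, dx_prime = lorentz_interval(0.0, 1.0, 0.6)

print(dt_prime, dx_prime)  # -0.75 1.25: sequential, and farther apart
```

The arithmetic is trivial; what resists imagination is that both descriptions are equally valid.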
As a result, the details of what, according to science, is “really” going on at any time bear no relation to the world with which we are familiar. Ultimately, just as Bacon suggested, the point of contact between the “real” world of science and the world with which we are acquainted is that the visible results of experiments agree with the predictions made on the basis of the scientific theories, regardless of the fact that the scientific theories themselves appear to describe physical situations that we cannot even imagine. Indeed, as Richard Feynman quipped: “I think I can safely say that nobody understands quantum mechanics.” As the flippant version of the reigning (“Copenhagen”) interpretation of quantum mechanics puts it: “Shut up and calculate.” The “real” world has indeed become, as Friedrich Nietzsche said, a fable.
None of this, however, need constitute a “crisis” of modern science, since none of this affects the intellectual and practical core of the enterprise. As long as the science progresses, throwing off new technological benefits as it does, it can maintain its intellectual credibility and the necessary material support. Bacon had already predicted that the science he was proposing would be comprehensible only to a scientific elite: “It cannot be brought down to common apprehension save by effects and works only” (TNO). Now, as the Feynman quip indicates, it may be only partially comprehensible even to that elite.
So if the increasing divorce of science from “truth” and “experience” is not an impediment to its further progress, what then is the issue? The assertion of this article is that there are two looming theoretical, if not practical, crises: one having to do with the goals of science, the other with its means. They both stem from an issue that was present at the birth of modern science but that is only now on the cusp of making itself felt: the fact that modern science regards man as having two roles. As a scientist, man is the analyzer (and hence potential or actual manipulator) of nature, whereas, as a member of the animal kingdom, he is as much a part of nature as any animate or, for that matter, inanimate object, to be understood (and even manipulated) according to the same principles and processes.
Although man’s dual role as the manipulator and the manipulated was evident at the beginning of modern science, it was originally a purely speculative matter. For all practical purposes, “human nature” could be taken as a given. Man was only theoretically, not practically, able to manipulate his own nature.
This began to change in the late 19th and 20th centuries, when movements such as eugenics proposed to “improve” human beings as a species, using supposed scientific knowledge to identify those who should be encouraged to reproduce, and those who should be discouraged or even prevented. The “scientific” basis of this movement rested on some very simplistic notions about genetics: For example, Charles Davenport, the biologist who founded the American eugenics movement in 1898, “believed that complex human traits were controlled by single genes and therefore inherited in a predictable pattern.”
In the mid-20th century, American psychologist B. F. Skinner proposed the science of behaviorism, or operant conditioning, as a means of improving human nature. In his novel Walden Two, Skinner described a utopian society in which operant conditioning has successfully molded citizens’ behavior. In the Soviet Union, the even more ambitious task of creating a “new Soviet man” was proposed. In an extreme statement, Leon Trotsky wrote that, under socialism, man
will try to master first the semiconscious and then the subconscious processes in his own organism, such as breathing, the circulation of the blood, digestion, reproduction, and, within necessary limits, he will try to subordinate them to the control of reason and will. Even purely physiologic life will become subject to collective experiments. The human species, . . . in his own hands, will become an object of the most complicated methods of artificial selection and psychophysical training.
This ambition was subsequently bolstered by the Soviet adoption of the pseudo-science of Lysenkoism, according to which acquired characteristics (which do not affect an individual’s genetic make-up) could nevertheless be transmitted to progeny. This, according to one scholar, was “the essential magic key that would open up the possibility of reshaping man and creating the New [Soviet] Man.”
These 20th-century attempts to manipulate human nature rested on scientific bases that can now easily be seen as laughably inadequate. The technologies underlying the society of Aldous Huxley’s Brave New World—the “fine-tuning” of in vitro fertilization so as to produce castes of human beings with predictably different mental and physical abilities, as well as the existence of a drug that produced temporary euphoria with no “hangover” or other negative consequences—remained safely in the realm of science fiction.
However, one has to wonder whether, given the tremendous recent progress in genetics and neuroscience, we can be confident that this will remain the case in the present century. If not, then the ability to manipulate “human nature”—sought in vain by the visionaries of the past—may become thinkable.
This is not the place for a review of the status of genetics and neuroscience, and what their prospects are for the remainder of this century and beyond. Progress in practical aspects of genetics has been very rapid, and it is becoming possible to “personalize” medical procedures and cures according to a patient’s genetic make-up. Using a new genetic engineering technique (CRISPR-Cas9), “researchers have already reversed mutations that cause blindness, stopped cancer cells from multiplying, and made cells impervious to the virus that causes AIDS.” At the same time, it has become clear that almost all relevant human characteristics, as well as susceptibility to most diseases, depend on the complex interaction of many different genes and environmental factors; in other words, we are a long way from knowing which genes should be altered, and how, in order to produce “designer” babies with increased intelligence, athletic or artistic virtuosity, or whatever characteristics their rich parents may desire.
Progress in neuroscience has been equally rapid. New imaging techniques have increased our knowledge of how the brain functions and which of its parts are responsible for specific mental activities. The discovery of neurotransmitters such as serotonin, and the increased understanding of how they function in the brain, has enabled the development of such psychotherapeutic drugs as Zoloft, Paxil, and Prozac. Nevertheless, as one British researcher has concluded, “modern neuroscience research has, as yet, had minimal impact on mental health practice” although he goes on to predict that “we are on the brink of an exciting period.”
In short, in both these crucial areas, it appears that we have, in the past decades, been accumulating basic knowledge and improving techniques at a rapid pace, but that the major pay-offs are still, by and large, in the future. Of course, we cannot be certain that even these recent advances will be enough to support the ambitious objectives that have been posited. Perhaps, to scientists of the 22nd century genetics and neuroscience will appear as inadequate to the task of manipulating human nature as eugenics and Lysenkoism do to us now.
But what if genetics and neuroscience really do have the potential that their advocates believe? Under this assumption, the consequences may show up in at least two ways: with respect to the goals of the scientific enterprise, and with respect to how it understands its own functioning.
Science and the Human Good
As for the goals of science, we have noted the statements of Bacon and Descartes to the effect that the goal of science is to increase man’s power over nature. But these famous formulae are less precise about man himself and what constitutes his good. In particular, what are the good things for man that science will enable us to procure?
To some extent, of course, this question can be dismissed as unimportant. The abolition of hunger, the improvement of health, the invention or discovery of new and improved products for our convenience, comfort and amusement—all these things can be easily accepted as good for man without any need to philosophize about them. Underlying this easy acceptance is, however, our belief that we know what man is, and that we can accept as given our notion of what is good for him. Thus, regardless of the adoption of the “fact-value” distinction by contemporary social science, one might think that the modern natural scientific enterprise—in its role as the ultimate source of technological advances—would be justifiable only if it knew something about the human good.
However, it explicitly denies that it possesses any such knowledge. It contents itself with producing new technological possibilities and then is silent about whether these possibilities will be used for good, to say nothing of how to increase the chances that they might be. Thus when science gives rise to inventions, the potential of which for evil is manifest (nuclear weapons being the standard example), it has nothing, as science, to say. Given the division of humanity into competing nations, weapons can be developed even if everyone believes that their development is bad for mankind as a whole.
However, the advances on the horizon, which will increase human power over human “nature” itself, raise a much more fundamental question. How can science be for the good of man if it can change man and his “nature”? Would not a change in human “nature” change what is good for him? More fundamentally, is there any clear standard by which one could judge which changes in human “nature” are beneficial for human beings?
For example, liberal democracies hold that the opportunity freely to express one’s opinions and espouse one’s religious beliefs is something most men want and is, in fact, good for them. Thus political (and to some extent, technological) arrangements that facilitate this are to be favored. But could one not imagine the development of human beings, by genetic or other means, who would not feel such wants? Indeed Aldous Huxley’s Brave New World explicitly imagines this possibility. Of course, we citizens of liberal democracies are horrified at the thought of such things. But, in other regimes, the powers-that-be might find it an extremely attractive prospect, and they could argue that human beings who did not strongly care about their opinions and beliefs would be less likely to fight over them, making for a more harmonious society. They would argue that the liberal democratic position was just based on an unreasoning conservatism, a mindless preference for the way things have been rather than the way they could be.
Is this a problem for science itself? One result of this is that the goal of science could no longer be said to be the achievement of dominion over nature for the good of man, but instead the achievement of dominion over nature simply. To the extent that this power is used to manipulate man himself, the question would be, to whom are the scientists and technologists responsible? This has been a vexed question in any case. But certain aspects of human nature have tended to favor the victory of liberal democracy over the course of the past several centuries. Briefly, despite its weaknesses, liberal democracy has given most of the people most of what they want most of the time, which accounts for its relative strength and stability. But this also rests on certain other characteristics of human nature that on occasion motivate people to run great risks in the fight for freedom. If those characteristics can be manipulated, who is to say that other forms of government cannot be made stronger and more stable? We would not regard a science that serves to strengthen tyrannical forms of government—no matter how benevolent (along the lines of Brave New World) they could claim to be—as operating for the “relief of man’s estate.” But if human nature were suitably altered—that is, tamed—it is not clear why anyone would object to such a tyranny.
Furthermore, the “good of man” would in the past always have been understood to be the good of the human species, including those members yet to be born. Indeed, one could argue that science saw itself as more in the interest of future generations than of the present one, since the notion of scientific progress (the accumulation of new knowledge and hence new power) implies that future generations will have more technology at their disposal than we do. Genetic engineering, however, carried to an unlikely but not unimaginable extreme, would imply that the current generation is in a position to determine the characteristics of future generations.
We could perhaps, for example, make them like Nietzsche’s “last men”—essentially contented beings with no ambition or longing. Presumably, this would be done on the basis that this would make future generations happier than we are—or at least more contented. It could also be argued that the existence of weapons that could destroy all human life implies a need to make human beings massively less bellicose: as Bertrand Russell wrote, even before the development of nuclear weapons,
Science increases our power to do both good and harm, and therefore enhances the need for restraining destructive impulses. If a scientific world is to survive, it is therefore necessary that men should become tamer than they have been.
Alternatively, if science should clear the way to the fulfillment of perhaps our fondest wish—indefinite continuance in life—we might decide to dispense with future generations altogether, or to create them with characteristics that we prefer but that might hinder their ability to live autonomous lives (for example, we might engineer them to be content to cater to our needs and desires ahead of their own).
Science and Reason
Notwithstanding all of the above discussion, it seems clear that the scientific enterprise can continue to function even if it is no longer able to show that it functions for the sake of the human good, or even if we no longer understand what that might mean. What it cannot dispense with is reasoning or, more to the point, its reliance on the human ability to reason. As long as science knew basically nothing about the human brain and its functioning, this dependency was unproblematic. One could simply assume, as the early modern writers did, that human beings somehow had the ability in general to reason correctly (and to recognize and correct any errors in reasoning that they might make).
The more we know about how the brain functions, the more we are able to correlate the subjective experience of reasoning with chemical and electrical activity in specific parts of the brain. At this point, however, we come across certain conundrums that are difficult to understand, let alone resolve.
Unlike a computer, which is designed and programmed from the start to accomplish a certain set of tasks, the human brain (as understood by modern science) presumably developed gradually in response to evolutionary pressures over the long pre-agricultural period during which current-day Homo sapiens came into being; it had to enable its possessor to acquire sustenance via hunting and gathering and to navigate the inter-personal relationships of the troop to which he or she belonged.
Evolutionary theory recognizes that certain characteristics may develop, not because they contribute to the survival and reproduction of the organism in question, but rather as chance byproducts of characteristics that do. Presumably, the human ability to engage in abstract mathematical reasoning (for example, about prime numbers) would have to fit into this category; it is difficult to see how our ability to discover and understand a proof of the theorem that there are an infinite number of prime numbers could have enhanced our fitness to survive and reproduce during the period in which we evolved into our present state. (Indeed, it is hard to see why evolution would select for such a characteristic even now.)
This, in itself, may not be a difficulty. We could simply accept our ability to engage in mathematics and modern science as a whole as an inexplicable “gift” of nature. However, there is a deeper problem lurking here. If we knew that some object superficially resembling an adding machine had been developed for a different purpose, but that, as a byproduct of it serving that purpose, it turned out to be possible to enter a group of numbers into the machine and get another number back as an output, why would we ever trust that the output number represented the actual sum of the numbers we entered? Indeed, the situation is even worse than that. While our brains allow us to add up numbers, we sometimes make mistakes.3 When that happens, we are able—if we make the effort—to check our work and correct our mistake. Despite our vulnerability to making mistakes in arithmetic, we are somehow able to correct them and arrive at an answer that we can know with certainty to be correct.
Science currently possesses no clear explanation for how this is possible, and a strong case can be made that, on a materialist/Darwinian basis, it will never be able to. In Mind and Cosmos, Thomas Nagel points to three human phenomena which he claims cannot be explained within the modern scientific framework: consciousness, reasoning, and morality. But whereas modern science can view consciousness as epiphenomenal and morality as a cultural artifact with no scientific validity (for example, the fact-value distinction), it cannot dispense with reasoning; our ability to reason correctly for the most part, and, more importantly, to recognize with certainty correct reasoning when it is pointed out to us, is essential for the scientific enterprise.
As long as this human ability could be taken as a given, this didn’t pose any problems. As we begin to understand the brain and its functioning in greater and greater detail, and hence, presumably, begin to acquire the capability of affecting its functioning in ways that we choose, the paradox becomes more evident. Can we alter the human brain so as to give it new ways of “reasoning” of which it is currently incapable? And if we could, should we trust those new methods?
The modern project, with respect to both politics and science, is prospering as never before, but its philosophical underpinnings appear weak. Liberal democracy, the Enlightenment’s most successful (but not only) political child, has proven able to satisfy sufficient human wants to give it the strength to combat successfully (so far, at least) the challenges that constantly arise against it. Modern science goes from triumph to triumph. To the extent that it can no longer claim to be unambiguously good for humanity (the development of nuclear weapons made that point clear to all), its practical position has been bolstered by the fact that no society can afford to fall far behind the scientific frontier if it wishes to safeguard its independence. Thus, as the recent detection of gravitational waves reminds us, science is still able to command vast resources necessary for its work, regardless of the absence of any prospect of near-term benefit to society as a whole.
Nevertheless, the rapid progress in areas such as genetics and neuroscience, which promise an increase in the scientific understanding of human beings and, among other things, their cognitive functions, means that the perplexities resulting from the initial dualism between man as the subject of study and man as the studier are likely to become more prominent.
1 The notion of nature “working freely” is an important one for Bacon; it refers to the phenomena we meet with in the course of our lives, as opposed to what we can observe by a carefully constructed and instrumented experiment. It is only the latter, according to Bacon (and to modern science) that can reveal to us the “secret springs.” Bacon would have understood the notion of “phenomenology” as the observation of nature “working freely”: he rejected it avant la lettre, as it were. ↝
2 See, for example, the long discussion of the “struggle for recognition” in Francis Fukuyama, The End of History and the Last Man (The Free Press, 1992), pp. 143ff. Or consider the closing words of the American Declaration of Independence in which the signers “mutually pledge to each other [their] lives, [their] fortunes and [their] sacred honor.” Thus, to secure a government based on the protection of individual rights, they were willing to risk their lives and fortunes; but they intended to preserve their “sacred honor.” ↝
3 Strictly speaking, our brains, assuming, as science does, that they are natural objects working according to the laws of nature, cannot be said to make mistakes, any more than a watch which does not keep time accurately makes mistakes—such a watch operates according to the laws of nature, just as a “good” watch does. But “we” make mistakes all the time. ↝
Corbett • 02/23/2019
In recent years, the public has gradually discovered that there is a crisis in science. But what is the problem? And how bad is it, really? Today on The Corbett Report we shine a spotlight on the series of interrelated crises that are exposing the way institutional science is practiced today, and what it means for an increasingly science-dependent society.
In 2015 a study from the Institute of Diet and Health with some surprising results launched a slew of clickbait articles with explosive headlines:
“Chocolate accelerates weight loss” insisted one such headline.
“Scientists say eating chocolate can help you lose weight” declared another.
“Lose 10% More Weight By Eating A Chocolate Bar Every Day…No Joke!” promised yet another.
There was just one problem: This was a joke.
The head researcher of the study, “Johannes Bohannon,” took to io9 in May of that year to reveal that his name was actually John Bohannon, the “Institute of Diet and Health” was in fact nothing more than a website, and the study showing the magical weight loss effects of chocolate consumption was bogus. The hoax was the brainchild of a German television reporter who wanted to “demonstrate just how easy it is to turn bad science into the big headlines behind diet fads.”
Given how widely the study’s surprising conclusion was publicized—from the pages of Bild, Europe’s largest daily newspaper, to the TV sets of viewers in Texas and Australia—that demonstration was remarkably successful. But although it’s tempting to write this story off as a demonstration about gullible journalists and the scientific illiteracy of the press, the hoax serves as a window into a much larger, much more troubling story.
That story is The Crisis of Science.
This is The Corbett Report.
What makes the chocolate weight loss study so revealing isn’t that it was completely fake; it’s that in an important sense it wasn’t fake. Bohannon really did conduct a weight loss study, and the data really does support the conclusion that subjects who ate chocolate on a low-carb diet lost weight faster than those on a non-chocolate diet. In fact, the chocolate dieters even had better cholesterol readings. The trick was all in how the data was interpreted and reported.
As Bohannon explained in his post-hoax confession:
“Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a ‘statistically significant’ result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.”
You see, finding a “statistically significant result” sounds impressive and helps scientists to get their paper published in high-impact journals, but “statistical significance” is in fact easy to fake. If, like Bohannon, you use a small sample size and measure 18 different variables, it’s almost impossible not to find some “statistically significant” result. Scientists know this, and the process of sifting through data to find “statistically significant” (but ultimately meaningless) results is so common that it has its own name: “p-hacking” or “data dredging.”
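A back-of-the-envelope calculation shows why measuring 18 outcomes is a recipe for false positives. If each test has a 5% chance of producing a spurious “significant” result, the chance that at least one of 18 tests comes up significant is roughly 60%. The sketch below treats the tests as independent (in a real study the measurements are correlated, so the exact figure differs, but the order of magnitude holds); it is an illustration, not a reanalysis of Bohannon's data.

```python
import random

ALPHA = 0.05      # conventional significance threshold
N_MEASURES = 18   # number of outcomes measured in the hoax study

# Analytic: P(at least one false positive among 18 independent tests)
p_any = 1 - (1 - ALPHA) ** N_MEASURES
print(f"analytic chance of >=1 'significant' result: {p_any:.1%}")  # ~60.3%

# Monte Carlo check: under the null hypothesis, a p-value is uniform on [0, 1)
random.seed(0)
trials = 100_000
hits = sum(
    any(random.random() < ALPHA for _ in range(N_MEASURES))
    for _ in range(trials)
)
print(f"simulated chance: {hits / trials:.1%}")
```

This is why a study with one pre-registered outcome is so much harder to p-hack than one that quietly measures everything and reports whatever crosses the threshold.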
But p-hacking only scrapes the surface of the problem. From confounding factors to normalcy bias to publication pressures to outright fraud, the once-pristine image of science and scientists as an impartial font of knowledge about the world has been seriously undermined over the past decade.
Although these types of problems are by no means new, they came to prominence when John Ioannidis, a physician, researcher and writer at the Stanford Prevention Research Center, rocked the scientific community with his landmark paper “Why Most Published Research Findings Are False.” The 2005 paper addresses head on the concern that “most current published research findings are false,” asserting that “for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.” The paper has achieved iconic status, becoming the most downloaded paper in the Public Library of Science and launching a conversation about false results, fake data, bias, manipulation and fraud in science that continues to this day.
JOHN IOANNIDIS: This is a paper that is practically presenting a mathematical modeling of what are the chances that a research finding that is published in the literature would be true. And it uses different parameters, different aspects, in terms of: What we know before; how likely it is for something to be true in a field; how much bias are maybe in the field; what kind of results we get; and what are the statistics that are presented for the specific result.
I have been humbled that this work has drawn so much attention and people from very different scientific fields—ranging not just bio-medicine, but also psychological science, social science, even astrophysics and the other more remote disciplines—have been attracted to what that paper was trying to do.
SOURCE: John Ioannidis on Moving Toward Truth in Scientific Research
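The modeling Ioannidis describes boils down to a simple Bayesian formula: the probability that a nominally “significant” finding is actually true (the positive predictive value, or PPV) depends on the pre-study odds R that hypotheses in the field are true, the statistical power 1 − β, and the significance threshold α. The sketch below implements the core formula from the 2005 paper while omitting the bias term (u) that the paper also models; the example values of R are illustrative, not taken from any particular field.

```python
def ppv(R, alpha=0.05, beta=0.2):
    """Positive predictive value of a nominally 'significant' finding
    (Ioannidis 2005, no-bias case):
        PPV = (1 - beta) * R / ((1 - beta) * R + alpha)
    where R is the pre-study odds that the hypothesis is true."""
    return (1 - beta) * R / ((1 - beta) * R + alpha)

# Well-motivated field: 1-in-4 pre-study odds, 80% power
print(f"R = 0.25 -> PPV = {ppv(0.25):.2f}")  # 0.80

# Exploratory field: 1-in-20 pre-study odds
print(f"R = 0.05 -> PPV = {ppv(0.05):.2f}")  # 0.44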
Since Ioannidis’ paper took off, the “crisis of science” has become a mainstream concern, generating headlines in the mainstream press like The Washington Post, The Economist and The Times Higher Education Supplement. It has even been picked up by mainstream science publications like Scientific American, Nature and phys.org.
So what is the problem? And how bad is it, really? And what does it mean for an increasingly tech-dependent society that something is rotten in the state of science?
To get a handle on the scope of this dilemma, we have to realize that the “crisis” of science isn’t a crisis at all, but a series of interrelated crises that get to the heart of the way institutional science is practiced today.
First, there is the Replication Crisis.
This is the canary in the coalmine of the scientific crisis in general because it tells us that a surprising percentage of scientific studies, even ones published in top-tier academic journals that are often thought of as the gold standard for experimental research, cannot be reliably reproduced. This is a symptom of a larger crisis because reproducibility is considered to be a bedrock of the scientific process.
In a nutshell, an experiment is reproducible if independent researchers can run the same experiment and get the same results at a later date. It doesn’t take a rocket scientist to understand why this is important. If an experiment is truly revealing some fundamental truth about the world then that experiment should yield the same results under the same conditions anywhere and at any time (all other things being equal).
Well, not all things are equal.
In the opening years of this decade, the Center for Open Science led a team of 240 volunteer researchers in a quest to reproduce the results of 100 psychological experiments. These experiments had all been published in three of the most prestigious psychology journals. The results of this attempt to replicate these experiments, published in 2015 in a paper on “Estimating the Reproducibility of Psychological Science,” were abysmal. Only 39 of the 100 experimental results could be reproduced.
Worse yet for those who would defend institutional science from its critics, these results are not confined to the realm of psychology. In 2011, Nature published a paper showing that researchers were only able to reproduce between 20 and 25 per cent of 67 published preclinical drug studies. They published another paper the next year with an even worse result: researchers could only reproduce six of a total of 53 “landmark” cancer studies. That’s a reproducibility rate of 11%.
These studies alone are persuasive, but the cherry on top came in May 2016, when Nature published the results of a survey of over 1,500 scientists finding that fully 70% of them had tried and failed to reproduce published experimental results at some point. The poll covered researchers from a range of disciplines, from physicists and chemists to earth and environmental scientists to medical researchers and assorted others.
So why is there such a widespread inability to reproduce experimental results? There are a number of reasons, each of which give us another window into the greater crisis of science.
The simplest answer is the one that most fundamentally shakes the widespread belief that scientists are disinterested truthseekers who would never dream of publishing a false result or deliberately mislead others.
JAMES EVAN PILATO: Survey sheds light on the ‘crisis’ rocking research.
More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature’s survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research.
The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.
Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology1 and cancer biology2, found rates of around 40% and 10%, respectively.
So the headline of this article, James, that we grabbed from our buddy Doug at BlackListed News: “40 percent of scientists admit that fraud is always or often a factor that contributes to irreproducible research.”
SOURCE: Scientists Say Fraud Causing Crisis of Science – #NewWorldNextWeek
In fact, the data shows that the Crisis of Fraud in scientific circles is even worse than scientists will admit. A study published in 2012 found that fraud or suspected fraud was responsible for 43% of scientific paper retractions, by far the single leading cause of retraction. The study demonstrated a 1000% increase in (reported) scientific fraud since 1975. Together with “duplicate publication” and “plagiarism,” misconduct of one form or another accounted for two-thirds of all retractions.
So much for scientists as disinterested truth-tellers.
Indeed, instances of scientific fraud are cropping up more and more in the headlines these days.
Last year, Kohei Yamamizu of the Center for iPS Cell Research and Application was found to have completely fabricated the data for his 2017 paper in the journal Stem Cell Reports, and earlier this year it was found that Yamamizu’s data fabrication was more extensive than previously thought, with a paper from 2012 also being retracted due to doubtful data.
Another Japanese researcher, Haruko Obokata, was found to have manipulated images to get her landmark study on stem cell creation published in Nature. The study was retracted and one of Obokata’s co-authors committed suicide when the fraud was discovered.
Similar stories of fraud behind retracted stem cell papers, molecular-scale transistor breakthroughs, psychological studies and a host of other research call into question the very foundations of the modern system of peer-reviewed, reproducible science, which is supposed to mitigate fraudulent activity by carefully checking and, where appropriate, repeating important research.
There are a number of reasons why fraud and misconduct are on the rise, and these relate to more structural problems that unveil yet more crises in science.
Like the Crisis of Publication.
We’ve all heard of “publish or perish” by now. It means that only researchers who have a steady flow of published papers to their name are considered for the plush positions in modern-day academia.
This pressure isn’t some abstract or unstated force; it is direct and explicit. Until recently the medical department at London’s Imperial College told researchers that their target was to “publish three papers per annum including one in a prestigious journal with an impact factor of at least five.” Similar guidelines and quotas are enacted in departments throughout academia.
And so, like any quota-based system, people will find a way to cheat their way to the goal. Some attach their names to work they have little to do with. Others publish in pay-to-play journals that will publish anything for a small fee. And others simply fudge their data until they get a result that will grab headlines and earn a spot in a high-profile journal.
It’s easy to see how fraudulent or irreproducible data results from this pressure. The pressure to publish in turn puts pressure on researchers to produce data that will be “new” and “unexpected.” A study finding that drinking 5 cups of coffee a day increases your chance of urinary tract cancer (or decreases your chance of stroke) is infinitely more interesting (and thus publishable) than a study finding mixed results, or no discernible effect. So studies finding a surprising result (or ones that can be manipulated into showing surprising results) will be published and those with negative results will not. This makes it much harder for future scientists to get an accurate assessment of the state of research in any given field, since untold numbers of experiments with negative results never get published, and thus never see the light of day.
But the pressure to publish in high-impact, peer-reviewed journals itself raises the specter of another crisis: The Crisis of Peer Review.
The peer review process is designed as a check against fraud, sloppy research and other problems that arise when journal editors are determining whether to publish a paper. In theory, the editor of the journal passes the paper to another researcher in the same field who can then check that the research is factual, relevant, novel and sufficient for publication.
In practice, the process is never quite so straightforward.
The peer review system is in fact rife with abuse, but few cases are as flagrant as that of Hyung-In Moon. Moon was a medicinal-plant researcher at Dongguk University in Gyeongju, South Korea, who aroused suspicions by the ease with which his papers were reviewed. Most researchers are too busy to review other papers at all, but the editor of The Journal of Enzyme Inhibition and Medicinal Chemistry noticed that the reviewers for Moon’s papers were not only always available, but that they usually submitted their review notes within 24 hours. When confronted by the editor about this suspiciously quick work, Moon admitted that he had written most of the reviews himself. He had simply gamed the system, where most journals ask researchers to submit names of potential reviewers for their papers, by creating fake names and email addresses and then submitting “reviews” of his own work.
Beyond the incentivization of fraud and opportunities for gaming the system, however, the peer review process has other, more structural problems. In certain specialized fields there are only a handful of scientists qualified to review new research in the discipline, meaning that this clique effectively forms a team of gatekeepers over an entire branch of science. They often know each other personally, meaning any new research they conduct is certain to be reviewed by one of their close associates (or their direct rivals). This “pal review” system also helps to solidify dogma in echo chambers where the same few people who go to the same conferences and pursue research along the same lines can prevent outsiders with novel approaches from entering the field of study.
In the most egregious cases, as with researchers in the orbit of the Climate Research Unit at the University of East Anglia, groups of scientists have been caught conspiring to oust an editor from a journal that published papers that challenged their own research and even conspiring to “redefine what the peer-review literature is” in order to stop rival researchers from being published at all.
So, in short: Yes, there is a Replication Crisis in science. And yes, it is caused by a Crisis of Fraud. And yes, the fraud is motivated by a Crisis of Publication. And yes, those crises are further compounded by a Crisis of Peer Review.
But what creates this environment in the first place? What is the driving factor that keeps this whole system going in the face of all these crises? The answer isn’t difficult to understand. It’s the same thing that puts pressure on every other aspect of the economy: funding.
Modern laboratories investigating cutting edge questions involve expensive technology and large teams of researchers. The types of labs producing truly breakthrough results in today’s environment are the ones that are well funded. And there are only two ways for scientists to get big grants in our current system: big business or big government. So it should be no surprise that “scientific” results, so susceptible to the biases, frauds and manipulations that constitute the crises of science, are up for sale by scientists who are willing to provide dodgy data for dirty dollars to large corporations and politically-motivated government agencies.
RFK JR.: “Simpsonwood” was the transcripts of a secret meeting that was held between CDC and 75 representatives of the vaccine industry in which they reviewed a report that CDC had ordered—the Verstraeten study—of a hundred thousand children in the United States vaccine safety database. And when they looked at it themselves, they said, quote: “It is impossible to massage this data to make the signal go away. There is no denying that there is a connection between autism and thimerosal in the vaccines.” And this is what they said. I didn’t say this. This is what their own scientists [said] and their own conclusion of the best doctors, the top people at CDC, the top people at the pharmaceutical injury industry.
And you know, when they had this meeting they had it not in Atlanta—which was the headquarters of the CDC—but in Simpsonwood at a private conference center, because they believed that that would make them able to insulate themselves from a court request under the Freedom of Information Law and they would not have to disclose the transcripts of these meetings to the public. Somebody transcribed the meetings and we were able to get a hold of it. You have them talking about the Verstraeten study and saying there’s a clear link, not just with autism but with the whole range of neurological disorders—speech delay, language delay, all kinds of learning disorders, ADD, hyperactivity disorder—and the injection of these vaccines.
[. . .] and at the end of that meeting they make a few decisions. One is Verstraeten, the man who designed and conducted the study, is hired the next day by GlaxoSmithKline and shipped off to Switzerland, and six months later he sends in a redesigned study that includes cohorts who are too young to have been diagnosed as autistic. So he loads the study down, the data down, and they tell the public that they’ve lost all the original data. This is what CDC says till this day: That it does not know what happened to the original data in the Verstraeten study. And they published this other study that is a corrupt and crooked—what we call tobacco science done by a bunch of bio-stitutes, crooked scientists who are trying to fool the American public.
Then Kathleen Stratton from CDC and IOM says “What we need is we need some studies that will disprove the link.” So they work with the vaccine industry to gin up these four phony European studies that are done by vaccine industry employees, funded by the vaccine industry and published in the American Academy of Pediatrics magazine, which receives 80% of its revenue from the vaccine industry. And none of these scientists disclose any of their myriad conflicts which conventional ethics rules require them to do. It’s not disclosed.
SOURCE: RFK JR. Vaccine Cover Up SIMPSONWOOD MEMO
TOM CLARKE: 64,000 people dead. Tens of thousands hospitalized. A country crippled by a virus.
The predictions of the impact of swine flu on Britain were grim. The government’s response: spending hundreds of millions of pounds on antiviral drugs, vaccines, adverts and leaflets. But ten months into the pandemic, only 355 Britons have died and globally the virus hasn’t lived up to our fears.
Were governments misled into preparing for the worst? Politicians in Brussels are now asking for an investigation into the role pharmaceutical companies played in influencing political decisions that led to a swine flu spending spree.
WOLFGANG WODARG: There must be a process to get more transparency [about] how the decisions in the WHO function and who is influencing the decisions of the WHO and what is the role of the pharmaceutical industry there. I’m very suspicious about the processes which are behind this pandemic.
TOM CLARKE: The Council of Europe Committee want the investigation to focus on the World Health Organization’s decision to lower the threshold required for a pandemic to be formally declared.
MARGARET CHAN: The world is now at the start of the 2009 influenza pandemic.
REPORTER: When this happened in June last year, governments had to activate huge, pre-prepared contracts for drugs and vaccines with manufacturers. They also want to probe ties between key WHO advisors and drug companies.
PAUL FLYNN: Who is deciding what the risk is? Is it the pharmaceutical companies who want to sell drugs, or is it someone making a decision based on the perceived danger? In this case it appears that the danger was vastly exaggerated. And was it exaggerated by the pharmaceutical companies in order to make money?
SOURCE: Channel 4 News Exposes Swine Flu Scandal
JAMES CORBETT: And a perfect example of that came out just in the past month where it was discovered, revealed—”Oh my God! Who would have thought it?”—people who consume artificial sweeteners like aspartame are three times more likely to suffer from a common form of stroke than others. Who would have thought it (except everyone who’s been warning about aspartame for decades and decades)?
And if you want to know more about aspartame and how it got approved in the first place you can go back and listen to my earlier podcast on “Meet Donald Rumsfeld” where we talked about his role in getting aspartame approved for human consumption in the first place. But yes, now decades later they come out with a study that shows “Well guys, we had no idea, but guess what? It does apparently cause strokes!”
And this is particularly galling, I suppose, because if you go back even a couple of years ago the paper of record, the “Old Gray Lady,” the New York Times (and every other publication, to be fair) that ever tried to address this would always talk about sweeteners as being better for you than sugar. And they would point to a handful of studies. The same studies every time, including—I mean, just as one example—this 2007 study, which was a review [that went] through various different studies that had been published, and this was done by a “panel of experts,” as it was said at the time. And it was cited in all of these different reports by the New York Times and others as showing that aspartame was even safer than sugar and blah blah blah. And when you actually looked at the study itself you found that—lo and behold!—the “panel of experts” was put together by something called “the Burdock Group,” which was a consulting firm that worked for the food industry amongst others and was in that particular instance hired by Ajinomoto, whom people might know as a producer of aspartame.
So, yes, you have the aspartame manufacturers hiring consultants to put together panels of scientific experts that then come out with the conclusion that, "Yes! Aspartame is sweet as honey and good for you like breathing oxygen. It's just so wonderful! Oh, it's like manna from heaven!" And lo and behold they were lying. Who would have thought it? Who would have imagined that the scientific process could be so thoroughly corrupted?
SOURCE: The Weaponization of “Science”
Sadly, there is no lack of examples of how commercial interests have skewed research in a range of disciplines.
In some cases, inconvenient data is simply hidden from the public. This is what happened with "Project 259," a feeding experiment in which lab rats were separated into two groups: one was given a high-sugar diet and the other a so-called "basic PRM diet" of cereal meals, soybean meals, whitefish meal, and dried yeast. The results were astounding. Not only did the study provide the first experimental evidence that sugar and starch are actually metabolized differently, but it also found that "sucrose [. . .] may have a role in the pathogenesis of bladder cancer." But Project 259 was being funded by something called the "Sugar Research Foundation," which had organizational ties to the trade association of the US sugar industry. As a result, the study was shelved, the results were kept from the public, and it took 51 years for the experiment to be dug up by researchers and published. But this was too late for the generation of victims that The Sugar Conspiracy created, raised on a low-fat, high-sugar diet that is now known to be toxic.
In other cases, industry secretly sponsors and even covertly promotes questionable research that bolsters claims of its products' safety. This is the case of Johnson & Johnson, which was facing a potential scandal over revelations that its baby powder contained asbestos. The company hired an Italian physician to conduct a study on the health of talc miners in the Italian Alps, and even told him what the study should find: data that "would show that the incidence of cancer in these subjects is no different from that of the Italian population or the rural control group." When the physician came back with the data as instructed, J&J was unhappy with the form and style of the study's write-up, so it handed the paper to a scientific ghostwriter to prepare it for publication. The ghostwritten paper was then published in the Journal of Occupational and Environmental Medicine, and the research was cited by a review article in the British Journal of Industrial Medicine later that year, which concluded that there is no evidence suggesting that the "normal use" of cosmetic talc poses a health hazard. That review article was written by Gavin Hildick-Smith, the Johnson & Johnson physician executive who had commissioned the Italian study, dictated its findings and sent it out for ghostwriting. Dr. Hildick-Smith failed to disclose this conflict of interest in his review article, however.
The list of such egregious abuses of “scientific” institutions and processes is seemingly endless, with more stories surfacing on a weekly basis. Websites like Retraction Watch attempt to document fraud and misconduct in science as it is revealed, but stories about the corporate hand behind key research studies or conspiracies to cover up inconvenient research are reported in a haphazard fashion and generally receive little traction with the public.
But these are not new issues. There have been those warning us about the dangerous confluence of money, government power and science since the birth of the modern era.
DWIGHT D. EISENHOWER: Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.
The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.
Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.
SOURCE: Eisenhower Farewell Address
In his prescient warning, Eisenhower not only gave a name to the "military-industrial complex" that has been working to steer American foreign policy since the end of the Second World War, but he also warned how the government can shape the course of scientific research with its funding. Is it any wonder, then, that military contractors like Raytheon, Lockheed Martin and Northrop Grumman are among the leading funders of cutting-edge research in nanotechnology, quantum computing, "human systems optimization" and other important scientific endeavors? Or that the Pentagon's own Defense Advanced Research Projects Agency provides billions of dollars per year to help find military applications for breakthroughs in computer science, molecular biology, robotics and other high-cost scientific research?
And what does this mean for researchers who are looking to innovate in areas that do not have military or commercial use?
So, yes, there is not just one crisis of science but multiple crises. And, like many other crises, they find a common root in the pressures that come from funding large-scale, capital-intensive, industrial research.
But this is not simply a problem of money, and it will not be solved by money. There are deeper social, political and structural roots of this crisis that will need to be addressed before we understand how to truly mitigate these problems and harness the transformative power of scientific research to improve our lives. In the next edition of The Corbett Report, we will examine and dissect the various proposals for solving the crisis of science.
Solving this crisis—these crises—is important. The scientific method is valuable. We should not throw out the baby of scientific knowledge with the bathwater of scientific corruption. But we need to stop treating science as a magic 8-ball that can solve all of our societal and political problems. And we need to stop venerating scientists as a quasi-priest class whose dictates are beyond question by the unwashed masses.
After all, when an Ipsos MORI poll found that nine out of ten British people would trust scientists to “follow the rules,” even Nature‘s editorial board was compelled to ask: “How many scientists would say the same?”