Thursday, January 2, 2020

The Crisis of Science with The Corbett Report





After you have read through this, you will know that the vaccine industry is the driver behind childhood autism in particular, as well as a host of related conditions, and that this fact was deliberately and consciously hidden by the industry itself. There is no plausible deniability at all.

At the same time, governments and the entire population are virtually extorted into vaccine compliance, in a system comparable to a military draft. How have we ever come to this?

There was always a better path forward based on careful science. There still is a path forward, but now it must extract the whole enterprise from the hands of its current masters. In the meantime, the voice of the people is rising in outrage as this knowledge slowly leaks out.

The trouble is that the corporate world very much continues to game the science in order to suppress problems around promising new products, not least to minimize actual study costs.

Recall that a sample of twenty can give you an indication. It takes a sample of thousands to generate high-confidence confirmation, and many of those early indications turn out to be mere false positives.

Now suppose the odds of a vaccine triggering autism are one in a million. That is no problem in practice in an emergency involving thousands, but a certain problem if the whole population is treated with that vaccine.
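
To see why sample size matters so much here, consider a back-of-the-envelope sketch. The one-in-a-million rate is purely hypothetical, and the numbers below are illustrative only:

```python
# Sketch: how sample size limits detection of a rare adverse event.
# The 1-in-1,000,000 event rate is a hypothetical, illustrative number.
rate = 1e-6

for n in (20, 5_000, 100_000, 300_000_000):
    expected = rate * n                   # expected number of events
    p_at_least_one = 1 - (1 - rate) ** n  # P(observing at least one event)
    print(f"n={n:>11,}: expected events = {expected:10.4f}, "
          f"P(at least one) = {p_at_least_one:.4f}")

# A trial of a few thousand subjects will almost never observe a
# one-in-a-million event, yet across a whole population the expected
# number of events runs into the hundreds.
```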


The Crisis of Science

 
02/23/2019
https://www.corbettreport.com/sciencecrisis/
In recent years, the public has gradually discovered that there is a crisis in science. But what is the problem? And how bad is it, really? Today on The Corbett Report we shine a spotlight on the series of interrelated crises that are exposing the way institutional science is practiced today, and what it means for an increasingly science-dependent society.


TRANSCRIPT

In 2015 a study from the Institute of Diet and Health with some surprising results launched a slew of clickbait articles with explosive headlines:

“Chocolate accelerates weight loss” insisted one such headline.

“Scientists say eating chocolate can help you lose weight” declared another.

“Lose 10% More Weight By Eating A Chocolate Bar Every Day…No Joke!” promised yet another.

There was just one problem: This was a joke.

The head researcher of the study, “Johannes Bohannon,” took to io9 in May of that year to reveal that his name was actually John Bohannon, the “Institute of Diet and Health” was in fact nothing more than a website, and the study showing the magical weight loss effects of chocolate consumption was bogus. The hoax was the brainchild of a German television reporter who wanted to “demonstrate just how easy it is to turn bad science into the big headlines behind diet fads.”

Given how widely the study’s surprising conclusion was publicized—from the pages of Bild, Europe’s largest daily newspaper, to the TV sets of viewers in Texas and Australia—that demonstration was remarkably successful. But although it’s tempting to write this story off as a demonstration of gullible journalists and the scientific illiteracy of the press, the hoax serves as a window into a much larger, much more troubling story.

That story is The Crisis of Science.


What makes the chocolate weight loss study so revealing isn’t that it was completely fake; it’s that in an important sense it wasn’t fake. Bohannon really did conduct a weight loss study, and the data really does support the conclusion that subjects who ate chocolate on a low-carb diet lost weight faster than those on a non-chocolate diet. In fact, the chocolate dieters even had better cholesterol readings.

The trick was all in how the data was interpreted and reported.

As Bohannon explained in his post-hoax confession:
“Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a ‘statistically significant’ result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.”
You see, finding a “statistically significant result” sounds impressive and helps scientists to get their paper published in high-impact journals, but “statistical significance” is in fact easy to fake. If, like Bohannon, you use a small sample size and measure 18 different variables, it’s almost impossible not to find some “statistically significant” result. Scientists know this, and the process of sifting through data to find “statistically significant” (but ultimately meaningless) results is so common that it has its own name: “p-hacking” or “data dredging.”
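
The arithmetic behind that “recipe for false positives” is easy to check for yourself. Here is a minimal simulation sketch, assuming 18 independent outcome measures and no real effect anywhere (outcomes measured on the same subjects would in reality be correlated, but the point stands):

```python
# Sketch: the false-positive arithmetic behind the chocolate hoax.
# Assumes 18 independent outcome measures and NO true effect anywhere.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_outcomes, n_per_group, n_studies = 0.05, 18, 8, 2000

# Analytic expectation: chance of at least one lucky "significant" result.
print(f"analytic: 1 - 0.95**18 = {1 - (1 - alpha) ** n_outcomes:.3f}")  # ~0.60

# Monte Carlo check: simulate null studies and count lucky "findings".
lucky = 0
for _ in range(n_studies):
    pvals = [
        stats.ttest_ind(rng.normal(size=n_per_group),
                        rng.normal(size=n_per_group)).pvalue
        for _ in range(n_outcomes)
    ]
    lucky += min(pvals) < alpha
print(f"simulated: {lucky / n_studies:.3f} of null studies 'find' something")
```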

But p-hacking only scrapes the surface of the problem. From confounding factors to normalcy bias to publication pressures to outright fraud, the once-pristine image of science and scientists as an impartial font of knowledge about the world has been seriously undermined over the past decade.

Although these types of problems are by no means new, they came to widespread attention when John Ioannidis, a physician, researcher and writer at the Stanford Prevention Research Center, rocked the scientific community with his landmark paper “Why Most Published Research Findings Are False.” The 2005 paper addresses head-on the concern that “most current published research findings are false,” asserting that “for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.” The paper has achieved iconic status, becoming the most downloaded paper in the Public Library of Science and launching a conversation about false results, fake data, bias, manipulation and fraud in science that continues to this day.

JOHN IOANNIDIS: This is a paper that is practically presenting a mathematical modeling of what are the chances that a research finding that is published in the literature would be true. And it uses different parameters, different aspects, in terms of: what we know before; how likely it is for something to be true in a field; how much bias there may be in the field; what kind of results we get; and what are the statistics that are presented for the specific result.

I have been humbled that this work has drawn so much attention, and people from very different scientific fields—ranging across not just biomedicine, but also psychological science, social science, even astrophysics and other more remote disciplines—have been attracted to what that paper was trying to do.

SOURCE: John Ioannidis on Moving Toward Truth in Scientific Research
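
For readers who want the gist of the model Ioannidis describes above, the core of the paper is a short formula for the positive predictive value (PPV) of a claimed finding. A minimal sketch, leaving out the paper's bias term for simplicity:

```python
# Core formula from Ioannidis (2005), simplified by omitting the bias term:
#   PPV = (1 - beta) * R / ((1 - beta) * R + alpha)
# where R is the pre-study odds that a probed relationship is true,
# (1 - beta) is the study's statistical power, and alpha is the
# significance threshold.

def ppv(R, power, alpha=0.05):
    """Probability that a 'statistically significant' finding is true."""
    return power * R / (power * R + alpha)

# A well-powered study in a field where roughly 1 in 11 probed
# hypotheses is true (pre-study odds R = 0.1):
print(f"power=0.80, R=0.1 -> PPV = {ppv(0.1, 0.80):.2f}")  # ~0.62
# An underpowered, exploratory study in the same field:
print(f"power=0.20, R=0.1 -> PPV = {ppv(0.1, 0.20):.2f}")  # ~0.29
# Bias (data dredging, selective reporting) pushes these numbers lower.
```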
Since Ioannidis’ paper took off, the “crisis of science” has become a mainstream concern, generating headlines in the mainstream press like The Washington Post, The Economist and The Times Higher Education Supplement. It has even been picked up by mainstream science publications like Scientific American, Nature and phys.org.

So what is the problem? And how bad is it, really? And what does it mean for an increasingly tech-dependent society that something is rotten in the state of science?

To get a handle on the scope of this dilemma, we have to realize that the “crisis” of science isn’t a crisis at all, but a series of interrelated crises that get to the heart of the way institutional science is practiced today.

First, there is the Replication Crisis.

This is the canary in the coalmine of the scientific crisis in general because it tells us that a surprising percentage of scientific studies, even ones published in top-tier academic journals that are often thought of as the gold standard for experimental research, cannot be reliably reproduced. This is a symptom of a larger crisis because reproducibility is considered to be a bedrock of the scientific process.

In a nutshell, an experiment is reproducible if independent researchers can run the same experiment and get the same results at a later date. It doesn’t take a rocket scientist to understand why this is important. If an experiment is truly revealing some fundamental truth about the world then that experiment should yield the same results under the same conditions anywhere and at any time (all other things being equal).

Well, not all things are equal.

In the opening years of this decade, the Center for Open Science led a team of 240 volunteer researchers in a quest to reproduce the results of 100 psychological experiments. These experiments had all been published in three of the most prestigious psychology journals. The results of this attempt to replicate these experiments, published in 2015 in a paper on “Estimating the Reproducibility of Psychological Science,” were abysmal. Only 39 of the experimental results could be reproduced.

Worse yet for those who would defend institutional science from its critics, these results are not confined to the realm of psychology. In 2011, Nature published a paper showing that researchers were only able to reproduce between 20 and 25 per cent of 67 published preclinical drug studies. They published another paper the next year with an even worse result: researchers could only reproduce six of a total of 53 “landmark” cancer studies. That’s a reproducibility rate of 11%.
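
These dismal replication rates are roughly what simple statistics would predict once underpowered studies meet a publish-if-significant filter. A minimal simulation sketch (the parameters are illustrative assumptions, not calibrated to any of the studies cited here):

```python
# Sketch: low power plus publish-if-significant yields poor replication.
# All parameters below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, effect, true_frac = 15, 0.5, 0.2  # small samples, modest effects,
                                     # few probed hypotheses actually true

def significant(is_true):
    """Run one small two-group experiment; return True if p < 0.05."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(effect if is_true else 0.0, 1.0, n)
    return stats.ttest_ind(b, a).pvalue < 0.05

published, replicated = 0, 0
for _ in range(20_000):
    is_true = rng.random() < true_frac
    if significant(is_true):                # "significant" -> published
        published += 1
        replicated += significant(is_true)  # independent replication try
print(f"replication rate of published findings: {replicated / published:.2f}")
```

Under these assumptions only a fraction of published findings replicate, even before any fraud enters the picture.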

These studies alone are persuasive, but the cherry on top came in May 2016, when Nature published the results of a survey of over 1,500 scientists finding that fully 70% of them had tried and failed to reproduce published experimental results at some point. The poll covered researchers from a range of disciplines, from physicists and chemists to earth and environmental scientists to medical researchers and assorted others.

So why is there such a widespread inability to reproduce experimental results? There are a number of reasons, each of which gives us another window into the greater crisis of science.

The simplest answer is the one that most fundamentally shakes the widespread belief that scientists are disinterested truth-seekers who would never dream of publishing a false result or deliberately misleading others.
JAMES EVAN PILATO: Survey sheds light on the ‘crisis’ rocking research.

More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature’s survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research.

The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.

Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology and cancer biology, found rates of around 40% and 10%, respectively.

So the headline of this article, James, that we grabbed from our buddy Doug at BlackListed News: “40 percent of scientists admit that fraud is always or often a factor that contributes to irreproducible research.”

SOURCE: Scientists Say Fraud Causing Crisis of Science – #NewWorldNextWeek
In fact, the data shows that the Crisis of Fraud in scientific circles is even worse than scientists will admit. A study published in 2012 found that fraud or suspected fraud was responsible for 43% of scientific paper retractions, by far the single leading cause of retraction. The study demonstrated a 1000% increase in (reported) scientific fraud since 1975. Together with “duplicate publication” and “plagiarism,” misconduct of one form or another accounted for two-thirds of all retractions.

So much for scientists as disinterested truth-tellers.

Indeed, instances of scientific fraud are cropping up more and more in the headlines these days.
Last year, Kohei Yamamizu of the Center for iPS Cell Research and Application was found to have completely fabricated the data for his 2017 paper in the journal Stem Cell Reports, and earlier this year it was found that Yamamizu’s data fabrication was more extensive than previously thought, with a paper from 2012 also being retracted due to doubtful data.

Another Japanese researcher, Haruko Obokata, was found to have manipulated images to get her landmark study on stem cell creation published in Nature. The study was retracted and one of Obokata’s co-authors committed suicide when the fraud was discovered.

Similar stories of fraud behind retracted stem cell papers, molecular-scale transistor breakthroughs, psychological studies and a host of other research call into question the very foundations of the modern system of peer-reviewed, reproducible science, which is supposed to mitigate fraudulent activity by carefully checking and, where appropriate, repeating important research.

There are a number of reasons why fraud and misconduct are on the rise, and these relate to more structural problems that unveil yet more crises in science.

Like the Crisis of Publication.

We’ve all heard of “publish or perish” by now. It means that only researchers who have a steady flow of published papers to their name are considered for the plush positions in modern-day academia.

This pressure isn’t some abstract or unstated force; it is direct and explicit. Until recently the medical department at London’s Imperial College told researchers that their target was to “publish three papers per annum including one in a prestigious journal with an impact factor of at least five.” Similar guidelines and quotas are enacted in departments throughout academia.

And so, like any quota-based system, people will find a way to cheat their way to the goal. Some attach their names to work they have little to do with. Others publish in pay-to-play journals that will publish anything for a small fee. And others simply fudge their data until they get a result that will grab headlines and earn a spot in a high-profile journal.

It’s easy to see how fraudulent or irreproducible data results from this pressure. The pressure to publish in turn puts pressure on researchers to produce data that will be “new” and “unexpected.” A study finding that drinking 5 cups of coffee a day increases your chance of urinary tract cancer (or decreases your chance of stroke) is infinitely more interesting (and thus publishable) than a study finding mixed results, or no discernible effect. So studies finding a surprising result (or ones that can be manipulated into showing surprising results) will be published and those with negative results will not. This makes it much harder for future scientists to get an accurate assessment of the state of research in any given field, since untold numbers of experiments with negative results never get published, and thus never see the light of day.
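
This selective-publication filter does not just hide negative results; it systematically inflates the effects that do get reported. A toy sketch of the mechanism, assuming a true effect of exactly zero:

```python
# Sketch: publication bias manufacturing an "effect" out of pure noise.
# Assumes a true effect of exactly zero and a publish-if-significant filter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
published_effects = []
for _ in range(10_000):
    a = rng.normal(0.0, 1.0, 20)  # control group
    b = rng.normal(0.0, 1.0, 20)  # "treatment" group, no real effect
    if stats.ttest_ind(b, a).pvalue < 0.05:
        published_effects.append(b.mean() - a.mean())

# Roughly 5% of these null studies clear the filter, and the surviving
# "literature" reports a sizeable spurious effect in one direction or
# the other, while the unpublished null results never see daylight.
print(f"published: {len(published_effects)} of 10,000 studies")
print(f"mean |effect| among published: {np.mean(np.abs(published_effects)):.2f}")
```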

But the pressure to publish in high-impact, peer-reviewed journals itself raises the specter of another crisis: The Crisis of Peer Review.

The peer review process is designed as a check against fraud, sloppy research and other problems that arise when journal editors are determining whether to publish a paper. In theory, the editor of the journal passes the paper to another researcher in the same field who can then check that the research is factual, relevant, novel and sufficient for publication.

In practice, the process is never quite so straightforward.

The peer review system is in fact rife with abuse, but few cases are as flagrant as that of Hyung-In Moon. Moon was a medicinal-plant researcher at Dongguk University in Gyeongju, South Korea, who aroused suspicions by the ease with which his papers were reviewed. Most researchers are too busy to review other papers at all, but the editor of The Journal of Enzyme Inhibition and Medicinal Chemistry noticed that the reviewers for Moon’s papers were not only always available, but that they usually submitted their review notes within 24 hours. When confronted by the editor about this suspiciously quick work, Moon admitted that he had written most of the reviews himself. He had simply gamed the system, where most journals ask researchers to submit names of potential reviewers for their papers, by creating fake names and email addresses and then submitting “reviews” of his own work.

Beyond the incentivization of fraud and opportunities for gaming the system, however, the peer review process has other, more structural problems. In certain specialized fields there are only a handful of scientists qualified to review new research in the discipline, meaning that this clique effectively forms a team of gatekeepers over an entire branch of science. They often know each other personally, meaning any new research they conduct is certain to be reviewed by one of their close associates (or their direct rivals). This “pal review” system also helps to solidify dogma in echo chambers where the same few people who go to the same conferences and pursue research along the same lines can prevent outsiders with novel approaches from entering the field of study.

In the most egregious cases, as with researchers in the orbit of the Climate Research Unit at the University of East Anglia, groups of scientists have been caught conspiring to oust an editor from a journal that published papers that challenged their own research and even conspiring to “redefine what the peer-review literature is” in order to stop rival researchers from being published at all.

So, in short: Yes, there is a Replication Crisis in science. And yes, it is caused by a Crisis of Fraud. And yes, the fraud is motivated by a Crisis of Publication. And yes, those crises are further compounded by a Crisis of Peer Review.

But what creates this environment in the first place? What is the driving factor that keeps this whole system going in the face of all these crises? The answer isn’t difficult to understand. It’s the same thing that puts pressure on every other aspect of the economy: funding.

Modern laboratories investigating cutting-edge questions involve expensive technology and large teams of researchers. The types of labs producing truly breakthrough results in today’s environment are the ones that are well funded. And there are only two ways for scientists to get big grants in our current system: big business or big government. So it should be no surprise that “scientific” results, so susceptible to the biases, frauds and manipulations that constitute the crises of science, are up for sale by scientists who are willing to provide dodgy data for dirty dollars to large corporations and politically motivated government agencies.
RFK JR.: “Simpsonwood” was the transcripts of a secret meeting that was held between CDC and 75 representatives of the vaccine industry in which they reviewed a report that CDC had ordered—the Verstraeten study—of a hundred thousand children in the United States vaccine safety database. And when they looked at it themselves, they said, quote: “It is impossible to massage this data to make the signal go away. There is no denying that there is a connection between autism and thimerosal in the vaccines.” And this is what they said. I didn’t say this. This is what their own scientists [said] and their own conclusion of the best doctors, the top people at CDC, the top people at the pharmaceutical injury industry.

And you know, when they had this meeting they had it not in Atlanta—which was the headquarters of the CDC—but in Simpsonwood at a private conference center, because they believed that that would make them able to insulate themselves from a court request under the Freedom of Information Law and they would not have to disclose the transcripts of these meetings to the public. Somebody transcribed the meetings and we were able to get a hold of it. You have them talking about the Verstraeten study and saying there’s a clear link, not just with autism but with the whole range of neurological disorders—speech delay, language delay, all kinds of learning disorders, ADD, hyperactivity disorder—and the injection of these vaccines.

[. . .] And at the end of that meeting they make a few decisions. One is Verstraeten, the man who designed and conducted the study, is hired the next day by GlaxoSmithKline and shipped off to Switzerland, and six months later he sends in a redesigned study that includes cohorts who are too young to have been diagnosed as autistic. So he loads the study down, the data down, and they tell the public that they’ve lost all the original data. This is what CDC says to this day: that it does not know what happened to the original data in the Verstraeten study. And they published this other study that is corrupt and crooked—what we call tobacco science, done by a bunch of bio-stitutes, crooked scientists who are trying to fool the American public.

Then Kathleen Stratton from CDC and IOM says “What we need is we need some studies that will disprove the link.” So they work with the vaccine industry to gin up these four phony European studies that are done by vaccine industry employees, funded by the vaccine industry and published in the American Academy of Pediatrics magazine, which receives 80% of its revenue from the vaccine industry. And none of these scientists disclose any of their myriad conflicts which conventional ethics rules require them to do. It’s not disclosed.

TOM CLARKE: 64,000 people dead. Tens of thousands hospitalized. A country crippled by a virus.

The predictions of the impact of swine flu on Britain were grim. The government’s response: spending hundreds of millions of pounds on antiviral drugs and vaccines, adverts and leaflets. But ten months into the pandemic, only 355 Britons have died and globally the virus hasn’t lived up to our fears.

Were governments misled into preparing for the worst? Politicians in Brussels are now asking for an investigation into the role pharmaceutical companies played in influencing political decisions that led to a swine flu spending spree.

WOLFGANG WODARG: There must be a process to get more transparency [about] how the decisions in the WHO function, and who is influencing the decisions of the WHO, and what is the role of the pharmaceutical industry there. I’m very suspicious about the processes which are behind this pandemic.

TOM CLARKE: The Council of Europe Committee want the investigation to focus on the World Health Organization’s decision to lower the threshold required for a pandemic to be formally declared.

MARGARET CHAN: The world is now at the start of the 2009 influenza pandemic.

REPORTER: When this happened in June last year, governments had to activate huge, pre-prepared contracts for drugs and vaccines with manufacturers. They also want to probe ties between key WHO advisors and drug companies.

PAUL FLYNN: Who is deciding what the risk is? Is it the pharmaceutical companies who want to sell drugs, or is it someone making a decision based on the perceived danger? In this case it appears that the danger was vastly exaggerated. And was it exaggerated by the pharmaceutical companies in order to make money?

JAMES CORBETT: And a perfect example of that came out just in the past month, where it was discovered, revealed—“Oh my God! Who would have thought it?”—people who consume artificial sweeteners like aspartame are three times more likely to suffer from a common form of stroke than others. Who would have thought it (except everyone who’s been warning about aspartame for decades and decades)?

And if you want to know more about aspartame and how it got approved in the first place you can go back and listen to my earlier podcast on “Meet Donald Rumsfeld” where we talked about his role in getting aspartame approved for human consumption in the first place. But yes, now decades later they come out with a study that shows “Well guys, we had no idea, but guess what? It does apparently cause strokes!”

And this is particularly galling, I suppose, because if you go back even a couple of years ago the paper of record, the “Old Gray Lady,” the New York Times (and every other publication, to be fair) that ever tried to address this would always talk about sweeteners as being better for you than sugar. And they would point to a handful of studies. The same studies every time, including, just as one example, this 2007 study, which was a peer-reviewed study [that went] through various different studies that had been published, and this was done by a “panel of experts,” as it was said at the time. And it was cited in all of these different reports by the New York Times and others as showing that aspartame was even safer than sugar and blah blah blah. And when you actually looked at the study itself you found that—lo and behold!—the “panel of experts” was put together by something called “the Burdock Group,” which was a consulting firm that worked for the food industry amongst others and was in that particular instance hired by Ajinomoto, who people might know as a producer of aspartame.

So, yes, you have the aspartame manufacturers hiring consultants to put together panels of scientific experts that then come out with the conclusion that, “Yes! Aspartame is sweet as honey and good for you like breathing oxygen. It’s just so wonderful! Oh, it’s like manna from heaven!” And lo and behold, they were lying. Who would have thought it? Who would have imagined that the scientific process could be so thoroughly corrupted?

Sadly, there is no lack of examples of how commercial interests have skewed research in a range of disciplines.

In some cases, inconvenient data is simply hidden from the public. This was what happened with “Project 259,” a feeding experiment in which lab rats were separated into two groups: One was given a high-sugar diet and the other was given a so-called “basic PRM diet” of cereal meals, soybean meals, whitefish meal, and dried yeast. The results were astounding. Not only did the study provide the first experimental evidence that sugar and starch are actually metabolized differently, but it also found that “sucrose [. . .] may have a role in the pathogenesis of bladder cancer.” But Project 259 was being funded by something called the “Sugar Research Foundation,” which has organizational ties to the trade association of the US sugar industry. As a result, the study was shelved, the results were kept from the public and it took 51 years for the experiment to be dug up by researchers and published.  But this was too late for the generation of victims that The Sugar Conspiracy created, raised on a low-fat, high sugar diet that is now known to be toxic.

In other cases, industry secretly sponsors and even covertly promotes questionable research that bolsters claims to their product’s safety. This is the case of Johnson & Johnson, which was facing a potential scandal over revelations that its baby powder contained asbestos.  They hired an Italian physician to conduct a study on the health of talc miners in the Italian Alps, and even told him what the study should find: data that “would show that the incidence of cancer in these subjects is no different from that of the Italian population or the rural control group.” When the physician came back with the data as instructed, J&J were unhappy with the form and style of the study’s write up, so they handed it to a scientific ghostwriter to prepare it for publication.  The ghostwritten paper was then published in the Journal of Occupational and Environmental Medicine, and the research was cited by a review article in the British Journal of Industrial Medicine later that year, which concluded that there is no evidence suggesting that the “normal use” of cosmetic talc poses a health hazard. That review article was written by Gavin Hildick-Smith, the Johnson & Johnson physician executive who had commissioned the Italian study, dictated its findings and sent it out for ghostwriting. Dr. Hildick-Smith failed to disclose this conflict in his review article, however.


The list of such egregious abuses of “scientific” institutions and processes is seemingly endless, with more stories surfacing on a weekly basis. Websites like Retraction Watch attempt to document fraud and misconduct in science as it is revealed, but stories about the corporate hand behind key research studies or conspiracies to cover up inconvenient research are reported in a haphazard fashion and generally receive little traction with the public.


But these are not new issues. There have been those warning us about the dangerous confluence of money, government power and science since the birth of the modern era.

DWIGHT D. EISENHOWER: Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.
The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present—and is gravely to be regarded.
Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.

In his prescient warning, Eisenhower not only gave a name to the “military-industrial complex” that has been working to steer American foreign policy since the end of the Second World War, but he also warned how the government can shape the course of scientific research with its funding. Is it any wonder, then, that military contractors like Raytheon, Lockheed Martin and Northrop Grumman are among the leading funders of cutting-edge research in nanotechnology, quantum computing, “human systems optimization” and other important scientific endeavors? Or that the Pentagon’s own Defense Advanced Research Projects Agency provides billions of dollars per year to help find military applications for breakthroughs in computer science, molecular biology, robotics and other high-cost scientific research?

And what does this mean for researchers who are looking to innovate in areas that do not have military or commercial use?

Yes, there is not just one crisis of science, but multiple crises. And, like many other crises, they find a common root in the pressures that come from funding large-scale, capital-intensive, industrial research.

But this is not simply a problem of money, and it will not be solved by money. There are deeper social, political and structural roots of this crisis that will need to be addressed before we understand how to truly mitigate these problems and harness the transformative power of scientific research to improve our lives. In the next edition of The Corbett Report, we will examine and dissect the various proposals for solving the crisis of science.

Solving this crisis—these crises—is important. The scientific method is valuable. We should not throw out the baby of scientific knowledge with the bathwater of scientific corruption. But we need to stop treating science as a magic 8-ball that can solve all of our societal and political problems. And we need to stop venerating scientists as a quasi-priest class whose dictates are beyond question by the unwashed masses.

After all, when an Ipsos MORI poll found that nine out of ten British people would trust scientists to “follow the rules,” even Nature‘s editorial board was compelled to ask: “How many scientists would say the same?”
