The Deplorable State of Current Government-Guild ‘Science’

Episode 353 – The Crisis of Science

 • 02/23/2019

In recent years, the public has gradually discovered that there is a crisis in science. But what is the problem? And how bad is it, really? Today on The Corbett Report we shine a spotlight on the series of interrelated crises that are exposing the way institutional science is practiced today, and what it means for an increasingly science-dependent society.

For those with limited bandwidth, CLICK HERE to download a smaller, lower file size version of this episode.

For those interested in audio quality, CLICK HERE for the highest-quality version of this episode (WARNING: very large download).

Watch this video on BitChute / DTube / YouTube or Download the mp4

TRANSCRIPT

In 2015, a study from the Institute of Diet and Health with some surprising results launched a slew of clickbait articles with explosive headlines:

“Chocolate accelerates weight loss” insisted one such headline.

“Scientists say eating chocolate can help you lose weight” declared another.

“Lose 10% More Weight By Eating A Chocolate Bar Every Day…No Joke!” promised yet another.

There was just one problem: This was a joke.

The head researcher of the study, “Johannes Bohannon,” took to io9 in May of that year to reveal that his name was actually John Bohannon, the “Institute of Diet and Health” was in fact nothing more than a website, and the study showing the magical weight loss effects of chocolate consumption was bogus. The hoax was the brainchild of a German television reporter who wanted to “demonstrate just how easy it is to turn bad science into the big headlines behind diet fads.”

Given how widely the study’s surprising conclusion was publicized, from the pages of Bild, Europe’s largest daily newspaper, to the TV sets of viewers in Texas and Australia, that demonstration was remarkably successful. But although it’s tempting to write this story off as a tale of gullible journalists and the scientific illiteracy of the press, the hoax serves as a window into a much larger, much more troubling story.

That story is The Crisis of Science.

This is The Corbett Report.

What makes the chocolate weight loss study so revealing isn’t that it was completely fake; it’s that in an important sense it wasn’t fake. Bohannon really did conduct a weight loss study, and the data really does support the conclusion that subjects who ate chocolate on a low-carb diet lost weight faster than those on a non-chocolate diet. In fact, the chocolate dieters even had better cholesterol readings. The trick was all in how the data was interpreted and reported.

As Bohannon explained in his post-hoax confession:

“Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a ‘statistically significant’ result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.”

You see, finding a “statistically significant result” sounds impressive and helps scientists get their papers published in high-impact journals, but “statistical significance” is in fact easy to fake. If, like Bohannon, you use a small sample size and measure 18 different variables, it’s almost impossible not to find some “statistically significant” result. Scientists know this, and the process of sifting through data to find “statistically significant” (but ultimately meaningless) results is so common that it has its own name: “p-hacking” or “data dredging.”
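The arithmetic behind this trap is simple: run 18 independent tests at the conventional p < 0.05 threshold and the chance of at least one false positive is 1 - 0.95^18, or roughly 60%, even when nothing real is going on. Here is a minimal simulation of that effect (a sketch in Python; it assumes independent, noise-only measurements on two small groups, borrowing the rough scale of Bohannon’s study of 15 subjects and 18 variables; real measurements are often correlated, which changes the exact rate but not the lesson):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_measures, n_trials = 15, 18, 10_000

false_positive_runs = 0
for _ in range(n_trials):
    # Two groups measured on 18 outcomes, with NO real effect:
    # every value is pure noise drawn from the same distribution.
    diet_group = rng.normal(size=(n_subjects, n_measures))
    control_group = rng.normal(size=(n_subjects, n_measures))
    # Test each outcome separately, exactly as a p-hacker would.
    p_values = stats.ttest_ind(diet_group, control_group, axis=0).pvalue
    if (p_values < 0.05).any():
        false_positive_runs += 1

# Expect roughly 60%, matching the analytic value 1 - 0.95**18.
print(f"Studies with at least one 'significant' finding: "
      f"{false_positive_runs / n_trials:.0%}")
```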

But p-hacking only scrapes the surface of the problem. From confounding factors to normalcy bias to publication pressures to outright fraud, the once-pristine image of science and scientists as an impartial font of knowledge about the world has been seriously undermined over the past decade.

Although these types of problems are by no means new, they came to the fore in 2005, when John Ioannidis, a physician, researcher and writer at the Stanford Prevention Research Center, rocked the scientific community with his landmark paper “Why Most Published Research Findings Are False.” The paper addresses head-on the concern that “most current published research findings are false,” asserting that “for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.” The paper has achieved iconic status, becoming the most downloaded paper in the Public Library of Science and launching a conversation about false results, fake data, bias, manipulation and fraud in science that continues to this day.

JOHN IOANNIDIS: This is a paper that is practically presenting a mathematical modeling of what are the chances that a research finding that is published in the literature would be true. And it uses different parameters, different aspects, in terms of: What we know before; how likely it is for something to be true in a field; how much bias are maybe in the field; what kind of results we get; and what are the statistics that are presented for the specific result.

I have been humbled that this work has drawn so much attention and people from very different scientific fields—ranging not just bio-medicine, but also psychological science, social science, even astrophysics and the other more remote disciplines—have been attracted to what that paper was trying to do.

SOURCE: John Ioannidis on Moving Toward Truth in Scientific Research
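The core of that model is an expression for the positive predictive value (PPV) of a claimed finding: PPV = (1 - beta)R / ((1 - beta)R + alpha), where R is the pre-study odds that a tested relationship is true, alpha is the false-positive threshold and beta is the false-negative rate. Here is a minimal sketch of this calculation (in Python; the example odds are illustrative assumptions, and Ioannidis’ full model also adds a bias term and multiple competing teams, which only make the picture worse):

```python
def ppv(R, alpha=0.05, beta=0.20):
    """Positive predictive value of a claimed research finding:
    the share of 'significant' results that reflect true effects.

    R     -- pre-study odds that a tested relationship is real
    alpha -- false-positive rate (significance threshold)
    beta  -- false-negative rate (1 - statistical power)
    """
    true_positives = (1 - beta) * R
    false_positives = alpha
    return true_positives / (true_positives + false_positives)

# Exploratory field: 1 in 20 tested hypotheses is actually true.
print(f"R = 1/20 -> PPV = {ppv(R=1 / 20):.0%}")  # ~44%: most findings false
# Well-grounded field: 1 in 2 tested hypotheses is true.
print(f"R = 1    -> PPV = {ppv(R=1.0):.0%}")     # ~94%
```

On these assumptions, even well-powered studies in an exploratory field where only one in twenty tested hypotheses is true produce a literature in which most positive findings are false, which is exactly the paper’s titular claim.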

Since Ioannidis’ paper took off, the “crisis of science” has become a mainstream concern, generating headlines in outlets like The Washington Post, The Economist and The Times Higher Education Supplement. It has even been picked up by mainstream science publications like Scientific American, Nature and phys.org.

So what is the problem? And how bad is it, really? And what does it mean for an increasingly tech-dependent society that something is rotten in the state of science?

To get a handle on the scope of this dilemma, we have to realize that the “crisis” of science isn’t a crisis at all, but a series of interrelated crises that get to the heart of the way institutional science is practiced today.

First, there is the Replication Crisis.

This is the canary in the coalmine of the scientific crisis in general because it tells us that a surprising percentage of scientific studies, even ones published in top-tier academic journals that are often thought of as the gold standard for experimental research, cannot be reliably reproduced. This is a symptom of a larger crisis because reproducibility is considered to be a bedrock of the scientific process.

In a nutshell, an experiment is reproducible if independent researchers can run the same experiment and get the same results at a later date. It doesn’t take a rocket scientist to understand why this is important. If an experiment is truly revealing some fundamental truth about the world then that experiment should yield the same results under the same conditions anywhere and at any time (all other things being equal).

Well, not all things are equal.

In the opening years of this decade, the Center for Open Science led a team of 240 volunteer researchers in a quest to reproduce the results of 100 psychological experiments. These experiments had all been published in three of the most prestigious psychology journals. The results of this attempt, published in 2015 as “Estimating the Reproducibility of Psychological Science,” were abysmal: only 39 of the 100 experimental results could be reproduced.

Worse yet for those who would defend institutional science from its critics, these results are not confined to the realm of psychology. In 2011, Nature published a paper showing that researchers were only able to reproduce between 20 and 25 per cent of 67 published preclinical drug studies. They published another paper the next year with an even worse result: researchers could only reproduce six of a total of 53 “landmark” cancer studies. That’s a reproducibility rate of 11%.

These studies alone are persuasive, but the cherry on top came in May 2016, when Nature published the results of a survey of over 1,500 scientists finding that fully 70% of them had tried and failed to reproduce published experimental results at some point. The poll covered researchers from a range of disciplines, from physicists and chemists to earth and environmental scientists to medical researchers and assorted others.

So why is there such a widespread inability to reproduce experimental results? There are a number of reasons, each of which gives us another window into the greater crisis of science.

The simplest answer is the one that most fundamentally shakes the widespread belief that scientists are disinterested truth-seekers who would never dream of publishing a false result or deliberately misleading others.

JAMES EVAN PILATO: Survey sheds light on the ‘crisis’ rocking research.

More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature’s survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research.

The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.

Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology and cancer biology, found rates of around 40% and 10%, respectively.

So the headline of this article, James, that we grabbed from our buddy Doug at BlackListed News: “40 percent of scientists admit that fraud is always or often a factor that contributes to irreproducible research.”

SOURCE: Scientists Say Fraud Causing Crisis of Science – #NewWorldNextWeek

In fact, the data shows that the Crisis of Fraud in scientific circles is even worse than scientists will admit. A study published in 2012 found that fraud or suspected fraud was responsible for 43% of scientific paper retractions, by far the single leading cause of retraction. The study demonstrated a roughly tenfold increase in (reported) scientific fraud since 1975. Together with “duplicate publication” and “plagiarism,” misconduct of one form or another accounted for two-thirds of all retractions.

So much for scientists as disinterested truth-tellers.

Indeed, instances of scientific fraud are cropping up more and more in the headlines these days.

Last year, Kohei Yamamizu of the Center for iPS Cell Research and Application was found to have completely fabricated the data for his 2017 paper in the journal Stem Cell Reports, and earlier this year the fabrication turned out to be more extensive than previously thought, with a paper from 2012 also retracted over doubtful data.

Another Japanese researcher, Haruko Obokata, was found to have manipulated images to get her landmark study on stem cell creation published in Nature. The study was retracted and one of Obokata’s co-authors committed suicide when the fraud was discovered.

Similar stories of fraud behind retracted stem cell papers, molecular-scale transistor breakthroughs, psychological studies and a host of other research call into question the very foundations of the modern system of peer-reviewed, reproducible science, which is supposed to mitigate fraudulent activity by carefully checking and, where appropriate, repeating important research.

There are a number of reasons why fraud and misconduct are on the rise, and these relate to more structural problems that unveil yet more crises in science.

Like the Crisis of Publication.

We’ve all heard of “publish or perish” by now. It means that only researchers who have a steady flow of published papers to their name are considered for the plush positions in modern-day academia.

This pressure isn’t some abstract or unstated force; it is direct and explicit. Until recently, the medical department at London’s Imperial College told researchers that their target was to “publish three papers per annum including one in a prestigious journal with an impact factor of at least five.” Similar guidelines and quotas are enacted in departments throughout academia.

And so, as with any quota-based system, people find ways to cheat their way to the goal. Some attach their names to work they have little to do with. Others publish in pay-to-play journals that will publish anything for a small fee. And others simply fudge their data until they get a result that will grab headlines and earn a spot in a high-profile journal.

It’s easy to see how fraudulent or irreproducible data results from this pressure. The pressure to publish in turn puts pressure on researchers to produce data that will be “new” and “unexpected.” A study finding that drinking 5 cups of coffee a day increases your chance of urinary tract cancer (or decreases your chance of stroke) is infinitely more interesting (and thus publishable) than a study finding mixed results, or no discernible effect. So studies finding a surprising result (or ones that can be manipulated into showing surprising results) will be published and those with negative results will not. This makes it much harder for future scientists to get an accurate assessment of the state of research in any given field, since untold numbers of experiments with negative results never get published, and thus never see the light of day.
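A toy simulation makes this distortion concrete (a sketch in Python; the true effect size, the sample sizes and the publish-only-if-significant rule are illustrative assumptions, not figures from any study cited here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2        # small real effect, in standard-deviation units
n_per_group, n_studies = 20, 5_000

published_effects = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    # Journals accept only "significant" results; the rest vanish.
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        published_effects.append(treated.mean() - control.mean())

print(f"True effect:                    {true_effect:.2f}")
print(f"Share of studies published:     {len(published_effects) / n_studies:.0%}")
print(f"Mean published effect estimate: {np.mean(published_effects):.2f}")
# Underpowered studies clear the significance bar only when they happen
# to overestimate the effect, so the published average comes out several
# times larger than the true value.
```

Every individual published result here is “significant,” yet a reader averaging the literature comes away with an effect estimate several times larger than the truth, because the studies that would have corrected the picture were never published.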

But the pressure to publish in high-impact, peer-reviewed journals itself raises the specter of another crisis: The Crisis of Peer Review.

The peer review process is designed as a check against fraud, sloppy research and other problems that could otherwise slip through when journal editors decide whether to publish a paper. In theory, the editor of the journal passes the paper to another researcher in the same field who can then check that the research is factual, relevant, novel and sufficient for publication.

In practice, the process is never quite so straightforward.

The peer review system is in fact rife with abuse, but few cases are as flagrant as that of Hyung-In Moon. Moon was a medicinal-plant researcher at Dongguk University in Gyeongju, South Korea, who aroused suspicions by the ease with which his papers were reviewed. Most researchers are too busy to review other papers at all, but the editor of The Journal of Enzyme Inhibition and Medicinal Chemistry noticed that the reviewers for Moon’s papers were not only always available, but that they usually submitted their review notes within 24 hours. When confronted by the editor about this suspiciously quick work, Moon admitted that he had written most of the reviews himself. He had simply gamed a system in which most journals ask researchers to suggest potential reviewers for their papers: he created fake names and email addresses and then submitted “reviews” of his own work.

Beyond the incentivization of fraud and opportunities for gaming the system, however, the peer review process has other, more structural problems. In certain specialized fields there are only a handful of scientists qualified to review new research in the discipline, meaning that this clique effectively forms a team of gatekeepers over an entire branch of science. They often know each other personally, meaning any new research they conduct is certain to be reviewed by one of their close associates (or their direct rivals). This “pal review” system also helps to solidify dogma in echo chambers where the same few people who go to the same conferences and pursue research along the same lines can prevent outsiders with novel approaches from entering the field of study.

In the most egregious cases, as with researchers in the orbit of the Climate Research Unit at the University of East Anglia, groups of scientists have been caught conspiring to oust an editor from a journal that published papers that challenged their own research and even conspiring to “redefine what the peer-review literature is” in order to stop rival researchers from being published at all.

So, in short: Yes, there is a Replication Crisis in science. And yes, it is caused by a Crisis of Fraud. And yes, the fraud is motivated by a Crisis of Publication. And yes, those crises are further compounded by a Crisis of Peer Review.

But what creates this environment in the first place? What is the driving factor that keeps this whole system going in the face of all these crises? The answer isn’t difficult to understand. It’s the same thing that puts pressure on every other aspect of the economy: funding.

Modern laboratories investigating cutting edge questions involve expensive technology and large teams of researchers. The types of labs producing truly breakthrough results in today’s environment are the ones that are well funded. And there are only two ways for scientists to get big grants in our current system: big business or big government. So it should be no surprise that “scientific” results, so susceptible to the biases, frauds and manipulations that constitute the crises of science, are up for sale by scientists who are willing to provide dodgy data for dirty dollars to large corporations and politically-motivated government agencies.

Continue reading…

From The Corbett Report, here.