California Forum

Academic pressure can cook up dubious science research

A patient receives cancer treatments with Avastin, or bevacizumab, in 2005. At first touted for excellent results, the cancer drug ultimately was found to have no effect on overall survival or the patients’ quality of life. AP file

The public’s waning faith in science is only partly a reflection of our social and political times. Yes, there are the willfully ignorant people who refuse to admit that climate change is real and caused in large part by human activity, because that would mean giving up gas guzzlers and other polluting comforts of life.

Others get their woeful information about vaccines from social media instead of finding out the facts about the billions of lives that have been saved by immunization – half a billion from smallpox alone, compared with the death toll of the last century before its eradication.

Want to inflict harm on drug companies? Get everyone vaccinated and wipe out as many vaccine-preventable diseases as possible, as we did with smallpox. Pharmaceutical companies can’t sell vaccines for a disease that doesn’t exist.

But not all science doubt is illegitimate.

People have a certain amount of justification when they complain about health studies involving food and diet. One year’s nutrition darling is the next month’s dietary pariah. We weren’t supposed to eat more than small amounts of eggs and shellfish, until it was discovered that for most people, dietary cholesterol has little if any impact on blood cholesterol. Low-fat diets were in, and then out. Trans-fat-laden margarine used to be recommended over butter. Vitamin E supplements were touted as preventing heart disease, but later studies found they weren’t a benefit – and could be dangerous. And who even knows at this point what the story is on coconut oil and red wine?

Studies in the social sciences are another wobbly area. In 2015, a team of scientists launched a project to do something far too rare in the world of research: Attempt to replicate 100 psychology studies. Those studies were seemingly among the soundest in their field; they all were published in top journals, where the vetting is generally stronger. Some were groundbreaking studies that were cited in later research. Yet the replication team found that they couldn’t reproduce most of the results in the 100 studies.

John Ioannidis, a Stanford University epidemiologist who is renowned for his work on the trustworthiness of scientific studies, found that this problem wasn’t limited to food and the “squishier” social sciences. He led a 2012 statistical analysis of almost 230,000 trials in various disciplines, including drug studies. The analysis found that studies that claimed to have found dramatic effects from, say, a medication were seldom backed up by subsequent studies. In fact, in 90 percent of the cases, a much smaller effect or none at all was found. The cancer drug Avastin, at first touted for its supposedly excellent results, was ultimately found to have no effect on overall survival or the patients’ quality of life.

Ioannidis is particularly well known for a 2005 study with a dramatic title: “Why Most Published Research Findings Are False.” Actually, the article itself has a different message: He simply argued that researchers, journals and universities should not be claiming conclusions until a study has been backed up with a body of high-quality research that reaches the same findings.

When I interviewed Ioannidis in 2015, he said there are many reasons why his recommendation hasn’t been followed. Researchers are under pressure to bring in grants and have their work published. Scientific journals look for dramatic new findings, so that’s where researchers go, too. Few scientists want to bother with replication studies, because those deal with old news: no glory, less chance of publication. Instead, researchers chase the most provocative results, and if a study isn’t reaching the conclusions they had hoped for, they’ll often abandon it – even though “negative results” – finding that something doesn’t work – can be just as important as finding that it does.

Some other researchers massage and cherry-pick their data to make results look more dramatic than they really are. A New York Times science reporter recently wrote an excellent article on this – and on a once-respected Cornell University researcher who took it to extreme levels.

University press offices are under pressure to bring attention to their institutions, so they often hype research findings, Ioannidis said. And journalists, in search of click-worthy copy for ever-hungry websites, often don’t bother asking the necessary questions. Some of them simply post university press releases nearly verbatim. That’s how we end up with inane pronouncements that champagne can prevent Alzheimer’s in older adults. (There’s no real evidence that it does.)

The academic research world is overdue for self-examination. It can’t blame all of the anti-science rhetoric these days on narrow-minded deniers. A big part of the responsibility lies squarely with the researchers, their universities and the ever-bigger push for academic fame and funding.

Karin Klein is a freelance journalist in Orange County who has covered education, science and food policy. She can be contacted at karinkleinmedia@gmail.com. Follow her on Twitter @kklein100.
