The Uncertainty of Science

By Chris Crockford (Editor)

Science is a tricky thing to describe. Definitions can be reductionist; for example, the Merriam-Webster Dictionary states that science is “knowledge about, or study of, the natural world based on facts learned through experiments and observation” (1). In my own field of psychology, we tend to argue that science is defined by the way in which it is carried out: the scientific method, a prescribed way of conducting research. And yet, despite this, science is flawed, because people are flawed and because it is near impossible, with the methods available to us, to know anything with absolute certainty. Even statistics, which you might assume are black and white, can make for bad science.

To illustrate how science can be wrong, take the 95% rule. Much of science is underpinned by statistics, and much of that statistics rests on the 95% rule. In essence, when a statistical result clears this conventional threshold, we can only be 95% sure that the result is real and did not occur by chance (2). This might seem pretty good, but it isn't: it means that as many as 1 in 20 published studies may be reporting results that are simply wrong.
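To see what that 1-in-20 figure looks like in practice, here is a minimal sketch in Python (my own illustration, not part of the original article): it simulates thousands of hypothetical studies in which there is genuinely no effect, tests each at the 95% level, and counts how many come out "significant" anyway. The number of studies and the sample sizes are arbitrary assumptions.

```python
# A minimal sketch: many studies where the null hypothesis is actually true,
# each tested at the conventional 95% level (alpha = 0.05).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies = 10_000      # hypothetical number of studies
n_per_group = 30        # hypothetical sample size per group
alpha = 0.05            # the "95% rule"

false_positives = 0
for _ in range(n_studies):
    # Both groups are drawn from the SAME distribution, so any "effect" is pure chance.
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"'Significant' results with no real effect: {false_positives / n_studies:.1%}")
# Expect roughly 5%, i.e. about 1 in 20 studies reporting an effect that isn't there.
```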

Bjork, Roos and Lauri (2009) estimated that around 1,350,000 scientific articles were published in 2006 (3). Even if only half of those articles relied on statistics using the 95% rule, that is 675,000 articles, and 1 in 20 of them, some 33,750 articles, may well be reporting findings that are simply wrong. This is a monumental amount of research and, unfortunately, only one of the ways in which science can go wrong.
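For the curious, the back-of-envelope arithmetic behind that figure is just this (the "half of all articles" share is, of course, an assumption rather than data):

```python
# Back-of-envelope arithmetic behind the 33,750 figure (assumptions, not data).
articles_2006 = 1_350_000      # Bjork, Roos and Lauri's (2009) estimate
share_using_95_rule = 0.5      # assume only half of all articles rely on the 95% rule
false_positive_rate = 1 / 20   # the 1-in-20 chance described above

suspect_articles = articles_2006 * share_using_95_rule * false_positive_rate
print(int(suspect_articles))   # 33750
```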

So this is how we are mistaken: we place far too much weight on the findings of individual studies. But scientists are guilty of this too. Both clinicians and researchers rely too heavily on individual studies. I have witnessed Clinical Psychologists, Social Workers, Occupational Therapists, Nurses, and Psychiatrists using individual studies to inform their clinical practice. Similarly, I have read innumerable articles citing a single study as the basis for their own. Unfortunately, not only is this bad for science, it can also be bad for society.

Arguably one of the most damaging results of this over-reliance on individual studies is the vaccination controversy. In 1998, Andrew Wakefield was the lead author on a paper suggesting a link between autism, bowel disease and the MMR vaccine. Despite the fact that Wakefield and his work have since been discredited (the British General Medical Council found him guilty of ‘dishonesty’ and abuse of his position [4]), his research spurred a tidal wave of new anti-vaccination proponents and campaigns, leading to outbreaks of some diseases. All of this is based on a misunderstanding of how statistics work. For example, a quick Google search on the subject sent me to Dr Jeffrey Warber’s website (5). Dr Warber, who claims to hold two Bachelor’s degrees and two Doctoral degrees, cites the following graph in one of his articles (original source unknown):

[Graph: autism diagnosis rates rising after the introduction of the MMR vaccine (source unknown)]

This graph shows that, as MMR vaccination was introduced, rates of autism diagnosis increased. In other words, there is a correlation between MMR vaccination and autism. What is far more likely than a causal relationship is that the methods of diagnosing autism have improved, and thus so has the diagnosis rate. Anti-vaccination proponents, however, often cite correlation graphs like this as evidence. Let’s be clear: correlation is not evidence of causation. If it were, then we could surmise that organic food also causes autism (6):

[Graph: organic food sales plotted against autism diagnoses (6)]

and ice cream makes people murder other people (7):

[Graph: ice cream sales plotted against murder rates (7)]
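If you want to convince yourself how easily two unrelated, rising quantities produce an impressive-looking correlation, a few lines of Python will do it. The numbers below are entirely made up for illustration; only the shape of the argument matters.

```python
# Two completely unrelated quantities that both happen to rise over time will
# show a strong correlation, which is why trend-against-trend graphs prove nothing.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1990, 2015)

# Hypothetical made-up series: both simply trend upwards, for unrelated reasons.
organic_food_sales = 10 + 2.0 * (years - 1990) + rng.normal(0, 2, years.size)
autism_diagnoses = 5 + 1.5 * (years - 1990) + rng.normal(0, 2, years.size)

r = np.corrcoef(organic_food_sales, autism_diagnoses)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # close to 1, yet there is no causal link at all
```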

The vaccination controversy demonstrates that one piece of bad research (‘bad’ being a mild way of putting it) can have dire consequences. It is because of this fallibility that individual research studies, taken alone, are not very useful.

We so often hear the media report that new scientific research has demonstrated something marvellous and fascinating. Pop-science appears in newspapers, magazines and even politics. We read what is written, it integrates into our understanding of the world, and we often don’t question it any further (it is, after all, a published piece of research). We are all terribly mistaken.

Our understanding should be informed by consensus: different research studies finding the same result, and studies directly replicating other studies. It should be informed by reviews that draw together many sources and weigh up all of the available research. It should not, however, be informed by one piece of research.
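This weighing-up is what a meta-analysis does. As a rough sketch (with made-up effect sizes and standard errors, purely for illustration), an inverse-variance pooled estimate looks something like this:

```python
# A minimal sketch of a fixed-effect, inverse-variance pooled estimate:
# precise studies count for more, and no single study dominates the conclusion.
import numpy as np

effects = np.array([0.40, 0.10, 0.25, -0.05, 0.15])  # hypothetical study effect sizes
ses = np.array([0.20, 0.10, 0.15, 0.12, 0.08])       # hypothetical standard errors

weights = 1 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"Pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% interval half-width)")
# Any one of the studies above could mislead on its own; the pooled estimate is far more stable.
```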

And yet, perhaps surprisingly, researchers who attempt to perform replication studies are often criticised for a lack of ingenuity. Students who conduct replication studies as their dissertations run the risk of losing marks because there is no ‘novel’ aspect to their research. But replication is vitally important: unless a finding can be confirmed with a higher degree of certainty than a single study affords, it is of little use. In 2013, a report published in Nature Reviews Neuroscience boldly suggested that most neuroscience research findings may be false, let down by poor methods and statistics, above all by small, underpowered samples (8). In turn, this suggests that much of neuroscience may simply be wrong.
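The “power failure” problem is easy to demonstrate for yourself. The sketch below uses my own numbers, not Button and colleagues’: it assumes a real but modest effect and shows how often studies of different sizes actually detect it.

```python
# A minimal sketch of low statistical power: small studies miss a real effect
# most of the time, so a single small study is unreliable in both directions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.3  # hypothetical real effect, in standard-deviation units

def detection_rate(n_per_group, n_sims=5_000):
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(true_effect, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            hits += 1
    return hits / n_sims

for n in (20, 80, 320):
    print(f"n = {n:>3} per group -> power ~ {detection_rate(n):.0%}")
# Small samples detect the genuine effect only a minority of the time; replication
# and larger samples are what let the true picture emerge.
```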


Change needs to occur quickly. Some attempts to fix these issues have already begun. For example, PLOS (the Public Library of Science) recently revised its publication guidelines so that authors must now make all of their data publicly available, rather than just summaries of their results (9). This will allow readers to be more critical of authors’ findings and interpretations. But a wholesale change in focus also needs to occur: scientists should be more interested in understanding what is already there than in trying to find something new. After all, scientists base new research on old research.

Some of the blame may lie with the competitive pressure to publish that exists across science, or with the biases that exist in favour of publishing significant results (10). But that’s another story. Perhaps it’s not fair to paint all of science with the same brush; my experience is of the biologically based sciences rather than chemistry or physics. But speaking of what I know, we have a long way to go.

Science can be flawed, yet clinicians and researchers still have a tendency to place too much emphasis on individual studies. People too often see black and white where grey exists. Good methodology, replication and verification are undervalued and, perhaps most importantly, individual research studies are so open to error that, on their own, they are all but meaningless. This is the uncertainty of science.


(1): http://www.merriam-webster.com/dictionary/science

(2): http://www.stat.yale.edu/Courses/1997-98/101/confint.htm

(3): Bjork, B. C., Roos, A., & Lauri, M. (2009). Scientific journal publishing: Yearly volume and open access availability. Retrieved from http://www.informationr.net/ir/14-1/paper391.html.

(4): http://www.gmc-uk.org/Wakefield_SPM_and_SANCTION.pdf_32595267.pdf

(5): http://www.drjeffhealthcenter.com/ihpages/pages/autism.html#

(6): Dorigo, T. (2012). Correlation, causation, independence. Retrieved from http://www.science20.com/quantum_diaries_survivor/correlation_causation_independence-98944.

(7): Perez, J. Correlation does NOT equal causation. Retrieved from http://badpsychologyblog.org/post/36256730341/correlation-does-not-equal-causation.

(8): Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., et al. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376. Retrieved from http://www.montefiore.ulg.ac.be/~lwh/Stats/low-power-neuroscience.pdf.

(9): Silva, L. (2014). PLOS’ New Data Policy: Public Access to Data. Retrieved from http://blogs.plos.org/everyone/2014/02/24/plos-new-data-policy-public-access-data-2/.

(10): For an overview, see http://en.wikipedia.org/wiki/Publication