Why is so much bad science published?

It wasn’t until after my retirement that I had the time to read scientific papers in medical journals with anything like close attention. Until then, I had, like most doctors, read the authors’ conclusions and assumed that they bore some necessary relation to what had gone before. I had also naively assumed that the editors had done their job and checked the intellectual coherence and probity of the contents of their journals.

It was only after I started to write a weekly column about the medical journals, and began to read scientific papers from beginning to end, that I realised just how bad — inaccurate, misleading, sloppy, illogical — much of the medical literature, even in the best journals, frequently was. My discovery pleased and reassured me in a way: for it showed me that, even in advancing age, I was still capable of being surprised.

I came to recognise various signs of a bad paper: the kind of paper that purports to show that people who eat more than one kilo of broccoli a week are 1.17 times more likely than those who eat less to suffer late in life from pernicious anaemia. There is a great deal of this kind of nonsense in the medical journals, and, when taken up by broadcasters and the lay press, it generates both health scares and short-lived dietary enthusiasms.

Why is so much bad science published?

A recent paper, titled ‘The Natural Selection of Bad Science’ and published in the Royal Society’s open-access journal Royal Society Open Science, attempts to answer this intriguing and important question.

According to the authors, the problem is not merely that people do bad science, as they have always done, but that our current system of career advancement positively encourages it. They quote an anonymous researcher who said pithily: ‘Poor methods get results.’ What is important is not truth, let alone importance, but publication, which has become almost an end in itself. There has been a kind of inflationary process at work: nowadays anyone applying for a research post has to have published twice the number of papers that would have been required for the same post only 10 years ago. Never mind the quality, then, count the number. It is at least an objective measure.

In addition to the pressure to publish, there is a preference in journals for positive rather than negative results. To prove that factor a has no effect whatever on outcome b may be important in the sense that it refutes a hypothesis, but it is not half so captivating as a finding that factor a has some marginally positive statistical association with outcome b. It may be an elementary principle of statistics that association is not causation, but in practice everyone forgets it.

The easiest way to generate positive associations is to do bad science, for example by trawling through a whole lot of data without a prior hypothesis. If you took 100 dietary factors and tried to associate them with flat feet, you would find that some of them were associated with the condition: at the conventional 5 per cent significance threshold, about five would be expected to clear it purely by chance, and at first sight those associations would appear too strong to have arisen by chance.
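
A minimal simulation, not from the article, illustrates the point. It generates a purely random ‘flat feet’ outcome and 100 purely random ‘dietary factors’, tests each factor against the outcome, and counts how many clear the conventional 5 per cent threshold; every association it finds is spurious by construction, and all names and numbers are illustrative assumptions.

```python
# Sketch: trawling many factors with no prior hypothesis produces
# "significant" associations from pure noise. All data are random,
# so every association found below is spurious by construction.
import math
import random

random.seed(1)

N_PEOPLE = 1000
N_FACTORS = 100   # e.g. 100 dietary factors
ALPHA = 0.05      # conventional significance threshold

# Outcome: flat feet, assigned completely at random (10% prevalence).
flat_feet = [random.random() < 0.10 for _ in range(N_PEOPLE)]

def two_proportion_p_value(exposed, outcome):
    """Approximate two-sided p-value from a crude two-proportion z-test."""
    n1 = sum(exposed)
    n0 = len(exposed) - n1
    x1 = sum(1 for e, o in zip(exposed, outcome) if e and o)
    x0 = sum(outcome) - x1
    p1, p0 = x1 / n1, x0 / n0
    p_pooled = (x1 + x0) / (n1 + n0)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n0))
    z = (p1 - p0) / se
    return math.erfc(abs(z) / math.sqrt(2))  # normal approximation

false_positives = 0
for _ in range(N_FACTORS):
    # Each "dietary factor" is also assigned at random (50% exposure).
    exposure = [random.random() < 0.5 for _ in range(N_PEOPLE)]
    if two_proportion_p_value(exposure, flat_feet) < ALPHA:
        false_positives += 1

print(f"Spurious 'significant' associations: {false_positives} of {N_FACTORS}")
# Roughly ALPHA * N_FACTORS, i.e. about five, is expected by chance alone.
```

Run as it stands, the script turns up a handful of ‘significant’ factors despite there being nothing to find, which is precisely the kind of result that, dressed up as a finding, makes its way into print.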

Once it has been shown that the consumption of, shall we say, red cabbage is associated with flat feet, one of two things can happen: someone will try to reproduce the result, or no one will, in which case it will enter scientific mythology. The penalties for having published results which are not reproducible, and prove before long to be misleading, usually do not cancel out the prestige of having published them in the first place: and therefore it is better, from the career point of view, to publish junk than to publish nothing at all. A long list of publications, all of them valueless, is always impressive.

Attempts have been made to control this inflation, for example by trying, when it comes to career advancement, to incorporate some measure of quality as well as quantity into the assessment of an applicant’s published papers. This is the famed citation index, that is to say the number of times a paper has been quoted elsewhere in the scientific literature, the assumption being that an important paper will be cited more often than one of small account. This would be reasonable enough if it were not for the fact that scientists can easily arrange to cite themselves in their future publications, or get associates to do so for them in return for similar favours.

There is an important law of which government bureaucracies would take cognisance if good government were their aim: that once a method of measurement is used to set a target, it becomes so corrupted that what it measures bears no relation to what it is supposed to measure. The authors of the paper quote Donald T Campbell:

The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.

A further force for corruption is the accelerating overproduction of people with higher degrees compared with the number of opportunities there are to employ them in their field. This increases yet further the pressure to publish, the majority of what is published consequently being of doubtful quality.

The authors of the Royal Society paper are not optimistic about the prospect of improving the quality of research:

Boiling down an individual’s output to simple, objective metrics, such as number of publications or journal impacts, entails considerable savings in time, energy and ambiguity. Unfortunately, the long-term costs of using simple quantitative metrics to assess researcher merit are likely to be quite great.

If we are serious about ensuring that our science is both meaningful and reproducible, we must ensure that our institutions incentivise that kind of science.

In other words, what we need is more emphasis on personal contact and even nepotism in the way careers are advanced: but tell it not in Gath, publish it not in the streets of Askelon; lest the daughters of the Philistines rejoice…


  • shaft120

    Raw KPIs (Key Performance Indicators) are well known in business to be a double-edged sword. Whilst they are handy for focusing attention, they can be hideously manipulated if used as the only regulating oversight. Take for example the NHS shambles under Blair’s Labour government, where waiting-time KPIs were used and easily enough achieved, yet the majority of those patients ended up, at best, rushed through a system of reduced service and, at worst, misdiagnosed with fatal results, because staff knew what they were being measured on. In the end, you need a time-consuming review and appraisal of quality.

  • Type

    Excellent article though I find the last paragraph puzzling. What am I missing?

    • Hugo Hernandez

      2 Samuel 1:20

  • dkural

    Physics has the same pressures, but physicists can’t publish papers without firm evidence. The Standard Model still reigns supreme, with negative result after negative result. The medical field has lousier scientific ethics, frankly.

  • Debi Carmi

    Delightful to read this, after reading the drivel that E. Ernst calls science... my faith in this publication is almost restored.

  • AutismDadd

    A good read. Junk science noted.