Bad science

14 Bad Stats

Now that you appreciate the value of statistics—the benefits and risks of intuition—we can look at how these numbers and calculations are repeatedly misused and misunderstood. Our first examples will come from the world of journalism, but the true horror is that journalists are not the only ones to make basic errors of reasoning.
Numbers, as we will see, can ruin lives.
The biggest statistic
Newspapers like big numbers and eye-catching headlines. They need miracle cures and hidden scares, and small percentage shifts in risk will never be enough for them to sell readers to advertisers (because that is the business model). To this end they pick the single most melodramatic and misleading way of describing any statistical increase in risk, which is called the ‘relative risk increase’.
Let’s say the risk of having a heart attack in your fifties is 50 per cent higher if you have high cholesterol. That sounds pretty bad. Let’s say the extra risk of having a heart attack if you have high cholesterol is only 2 per cent. That sounds OK to me. But they’re the same (hypothetical) figures. Let’s try this. Out of a hundred men in their fifties with normal cholesterol, four will be expected to have a heart attack; whereas out of a hundred men with high cholesterol, six will be expected to have a heart attack. That’s two extra heart attacks per hundred. Those are called ‘natural frequencies’.
Natural frequencies are readily understandable, because instead of using probabilities, or percentages, or anything even slightly technical or difficult, they use concrete numbers, just like the ones you use every day to check if you’ve lost a kid on a coach trip, or got the right change in a shop. Lots of people have argued that we evolved to reason and do maths with concrete numbers like these, and not with probabilities, so we find them more intuitive. Simple numbers are simple.
The other methods of describing the increase have names too. From our example above, with high cholesterol, you could have a 50 per cent increase in risk (the ‘relative risk increase’); or a 2 per cent increase in risk (the ‘absolute risk increase’); or, let me ram it home, the easy one, the informative one, an extra two heart attacks for every hundred men, the natural frequency.
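If you want to see how little arithmetic separates these three descriptions, here is a short sketch in Python, using the hypothetical cholesterol figures from above:

```python
# Hypothetical figures from the text: out of 100 men in their fifties,
# 4 with normal cholesterol and 6 with high cholesterol will have a heart attack.
baseline_risk = 4 / 100   # normal cholesterol
exposed_risk = 6 / 100    # high cholesterol

relative_increase = (exposed_risk - baseline_risk) / baseline_risk  # the headline number
absolute_increase = exposed_risk - baseline_risk                    # the modest number
extra_per_hundred = (exposed_risk - baseline_risk) * 100            # the natural frequency

print(f"Relative risk increase: {relative_increase:.0%}")           # 50%
print(f"Absolute risk increase: {absolute_increase:.0%}")           # 2%
print(f"Extra heart attacks per 100 men: {extra_per_hundred:.0f}")  # 2
```

Same two numbers, three very different headlines.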
As well as being the most comprehensible option, natural frequencies also contain more information than the journalists’ ‘relative risk increase’. Recently, for example, we were told that red meat causes bowel cancer, and ibuprofen increases the risk of heart attacks: but if you followed the news reports, you would be no wiser. Try this, on bowel cancer, from the Today programme on Radio 4:
‘A bigger risk meaning what, Professor Bingham?’
‘A third higher risk.’
‘That sounds an awful lot, a third higher risk; what are we talking about in terms of numbers here?’
‘A difference…of around about twenty people per year.’
‘So it’s still a small number?’
‘Umm…per 10,000…’
These things are hard to communicate if you step outside of the simplest format. Professor Sheila Bingham is Director of the MRC Centre for Nutrition in Cancer Epidemiology Prevention and Survival at the University of Cambridge, and deals with these numbers for a living, but in this (entirely forgivable) fumbling on a live radio show she is not alone: there are studies of doctors, and commissioning committees for local health authorities, and members of the legal profession, which show that people who interpret and manage risk for a living often have huge difficulties expressing what they mean on the spot. They are also much more likely to make the right decision when information about risk is presented as natural frequencies, rather than as probabilities or percentages.
For painkillers and heart attacks, another front-page story, the desperate urge to choose the biggest possible number led to the figures being completely inaccurate, in many newspapers. The reports were based on a study that had observed participants over four years, and the results suggested, using natural frequencies, that you would expect one extra heart attack for every 1,005 people taking ibuprofen. Or as the Daily Mail, in an article titled ‘How Pills for Your Headache Could Kill’, reported: ‘British research revealed that patients taking ibuprofen to treat arthritis face a 24 per cent increased risk of suffering a heart attack.’ Feel the fear.
Almost everyone reported the relative risk increases: diclofenac increases the risk of heart attack by 55 per cent, ibuprofen by 24 per cent. Only the Daily Telegraph and the Evening Standard reported the natural frequencies: one extra heart attack in 1,005 people on ibuprofen. The Mirror, meanwhile, tried and failed, reporting that one in 1,005 people on ibuprofen ‘will suffer heart failure over the following year’. No. It’s heart attack, not heart failure, and it’s one extra person in 1,005, on top of the heart attacks you’d get anyway. Several other papers repeated the same mistake.
Often it’s the fault of the press releases, and academics can themselves be as guilty as the rest when it comes to overdramatising their research (there are excellent best-practice guidelines from the Royal Society on communicating research, if you are interested). But if anyone in a position of power is reading this, here is the information I would like from a newspaper, to help me make decisions about my health, when reporting on a risk: I want to know who you’re talking about (e.g. men in their fifties); I want to know what the baseline risk is (e.g. four men out of a hundred will have a heart attack over ten years); and I want to know what the increase in risk is, as a natural frequency (two extra men out of that hundred will have a heart attack over ten years). I also want to know exactly what’s causing that increase in risk—an occasional headache pill or a daily tub full of pain-relieving medication for arthritis. Then I will consider reading your newspapers again, instead of blogs which are written by people who understand research, and which link reliably back to the original academic paper, so that I can double-check their precis when I wish.
Over a hundred years ago, H.G. Wells said that statistical thinking would one day be as important as the ability to read and write in a modern technological society. I disagree; probabilistic reasoning is difficult for everyone, but everyone understands normal numbers. This is why ‘natural frequencies’ are the only sensible way to communicate risk.
Choosing your figures
Sometimes the misrepresentation of figures goes so far beyond reality that you can only assume mendacity. Often these situations seem to involve morality: drugs, abortion and the rest. With very careful selection of numbers, in what some might consider to be a cynical and immoral manipulation of the facts for personal gain, you can sometimes make figures say anything you want.
The Independent was in favour of legalising cannabis for many years, but in March 2007 it decided to change its stance. One option would have been simply to explain this as a change of heart, or a reconsideration of the moral issues. Instead it was decorated with science—as cowardly zealots have done from eugenics through to prohibition—and justified with a fictitious change in the facts. ‘Cannabis—An Apology’ was the headline for their front-page splash.
In 1997, this newspaper launched a campaign to decriminalise the drug. If only we had known then what we can reveal today…Record numbers of teenagers are requiring drug treatment as a result of smoking skunk, the highly potent cannabis strain that is 25 times stronger than resin sold a decade ago.
Twice in this story we are told that cannabis is twenty-five times stronger than it was a decade ago. For the paper’s former editor Rosie Boycott, in her melodramatic recantation, skunk was ‘thirty times stronger’. In one inside feature the strength issue was briefly downgraded to a ‘can be’. The paper even referenced its figures: ‘The Forensic Science Service says that in the early Nineties cannabis would contain around 1 per cent tetrahydro-cannabidinol (THC), the mind-altering compound, but can now have up to 25 per cent.’
This is all sheer fantasy.
I’ve got the Forensic Science Service data right here in front of me, and the earlier data from the Laboratory of the Government Chemist, the United Nations Drug Control Program, and the European Union’s Monitoring Centre for Drugs and Drug Addiction. I’m going to share it with you, because I happen to think that people are very well able to make their own minds up about important social and moral issues when given the facts.
The data from the Laboratory of the Government Chemist goes from 1975 to 1989. Cannabis resin pootles around between 6 per cent and 10 per cent THC, herbal between 4 per cent and 6 per cent. There is no clear trend.

Mean potency (% THC) of cannabis products examined in the UK (Laboratory of the Government Chemist, 1975-89)

The Forensic Science Service data then takes over to produce the more modern figures, showing not much change in resin, and domestically produced indoor herbal cannabis doubling in potency from 6 per cent to around 12 or 14 per cent (2003-05 data in table under references).
The rising trend of cannabis potency is gradual, fairly unspectacular, and driven largely by the increased availability of domestic, intensively grown indoor herbal cannabis.

Mean potency (% THC) of cannabis products examined in the UK (Forensic Science Service, 1995-2002)


Mean THC content of cannabis products seized in the UK (Forensic Science Service, 1995-2002)

‘Twenty-five times stronger’, remember. Repeatedly, and on the front page.
If you were in the mood to quibble with the Independent’s moral and political reasoning, as well as its evident and shameless venality, you could argue that intensive indoor cultivation of a plant which grows perfectly well outdoors is the cannabis industry’s reaction to the product’s illegality itself. It is dangerous to import cannabis in large amounts. It is dangerous to be caught growing a field of it. So it makes more sense to grow it intensively indoors, using expensive real estate, but producing a more concentrated drug. More concentrated drug products are, after all, a natural consequence of illegality. You can’t buy coca leaves in Peckham, although you can buy crack.
There is, of course, exceptionally strong cannabis to be found in some parts of the British market today, but then there always has been. To get its scare figure, the Independent can only have compared the worst cannabis from the past with the best cannabis of today. It’s an absurd thing to do, and moreover you could have cooked the books in exactly the same way thirty years ago if you’d wanted: the figures for individual samples are available, and in 1975 the weakest herbal cannabis analysed was 0.2 per cent THC, while in 1978 the strongest herbal cannabis was 12 per cent. By these figures, in just three years herbal cannabis became ‘sixty times stronger’.
And this scare isn’t even new. In the mid-1980s, during Ronald Reagan’s ‘war on drugs’ and Zammo’s ‘Just say no’ campaign on Grange Hill, American campaigners were claiming that cannabis was fourteen times stronger than in 1970. Which sets you thinking. If it was fourteen times stronger in 1986 than in 1970, and it’s twenty-five times stronger today than at the beginning of the 1990s, does that mean it’s now 350 times stronger than in 1970?
That’s not even a crystal in a plant pot. It’s impossible. It would require more THC to be present in the plant than the total volume of space taken up by the plant itself. It would require matter to be condensed into super-dense quark-gluon-plasma cannabis. For God’s sake don’t tell the Independent such a thing is possible.
Cocaine floods the playground
We are now ready to move on to some more interesting statistical issues, with another story from an emotive area, an article in The Times in March 2006 headed: ‘Cocaine Floods the Playground’. ‘Use of the addictive drug by children doubles in a year,’ said the subheading. Was this true?
If you read the press release for the government survey on which the story is based, it reports ‘almost no change in patterns of drug use, drinking or smoking since 2000’. But this was a government press release, and journalists are paid to investigate: perhaps the press release was hiding something, to cover up for government failures. The Telegraph also ran the ‘cocaine use doubles’ story, and so did the Mirror. Did the journalists find the news themselves, buried in the report?
You can download the full document online. It’s a survey of 9,000 children, aged eleven to fifteen, in 305 schools. The three-page summary said, again, that there was no change in prevalence of drug use. If you look at the full report you will find the raw data tables: when asked whether they had used cocaine in the past year, 1 per cent said yes in 2004, and 2 per cent said yes in 2005.
So the newspapers were right: it doubled? No. Almost all the figures given were 1 per cent or 2 per cent. They’d all been rounded off. Civil servants are very helpful when you ring them up. The actual figures were 1.4 per cent for 2004, and 1.9 per cent for 2005, not 1 per cent and 2 per cent. So cocaine use hadn’t doubled at all. But people were still eager to defend this story: cocaine use, after all, had increased, yes?
No. What we now have is a relative risk increase of 35.7 per cent, or an absolute risk increase of 0.5 per cent. Using the real numbers, out of 9,000 kids we have about forty-five more saying ‘Yes’ to the question ‘Did you take cocaine in the past year?’
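You can check this arithmetic yourself; a quick sketch with the unrounded figures:

```python
n_children = 9000  # children surveyed each year
p_2004 = 0.014     # 1.4 per cent said yes in 2004
p_2005 = 0.019     # 1.9 per cent said yes in 2005

relative_increase = (p_2005 - p_2004) / p_2004   # what the headlines could have said
absolute_increase = p_2005 - p_2004              # the much less exciting version
extra_children = absolute_increase * n_children  # extra kids answering 'yes'

print(f"Relative risk increase: {relative_increase:.1%}")  # 35.7%
print(f"Absolute risk increase: {absolute_increase:.1%}")  # 0.5%
print(f"Extra children: {extra_children:.0f}")             # 45
```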
Presented with a small increase like this, you have to think: is it statistically significant? I did the maths, and the answer is yes, it is, in that you get a p-value of less than 0.05. What does ‘statistically significant’ mean? It’s just a way of expressing the likelihood that the result you got was attributable merely to chance. Sometimes you might throw ‘heads’ five times in a row, with a completely normal coin, especially if you kept tossing it for long enough. Imagine a jar of 980 blue marbles, and twenty red ones, all mixed up: every now and then—albeit rarely—picking blindfolded, you might pull out three red ones in a row, just by chance. The standard cut-off point for statistical significance is a p-value of 0.05, which is just another way of saying, ‘If I did this experiment a hundred times, I’d expect a spurious positive result on five occasions, just by chance.’
To go back to our concrete example of the kids in the playground, let’s imagine that there was definitely no difference in cocaine use, but you conducted the same survey a hundred times: you might get a difference like the one we have seen here, just by chance, just because you randomly happened to pick up more of the kids who had taken cocaine this time around. But you would expect this to happen less than five times out of your hundred surveys.
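For the curious, one standard way to do the maths here is a two-proportion z-test. The sketch below makes the naive assumption that all 9,000 responses in each year are independent; as the next few paragraphs explain, that assumption is exactly what fails:

```python
import math

def two_proportion_p_value(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    standard_error = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p2 - p1) / standard_error
    return math.erfc(z / math.sqrt(2))  # two-sided p-value from the normal tail

p_value = two_proportion_p_value(0.014, 9000, 0.019, 9000)
print(f"p = {p_value:.3f}")  # about 0.008: below the 0.05 cut-off
```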
So we have a risk increase of 35.7 per cent, which seems at face value to be statistically significant; but it is an isolated figure. To ‘data mine’, taking it out of its real-world context, and saying it is significant, is misleading. The statistical test for significance assumes that every data point is independent, but here the data is ‘clustered’, as statisticians say. They are not data points, they are real children, in 305 schools. They hang out together, they copy each other, they buy drugs from each other, there are crazes, epidemics, group interactions.
The increase of forty-five kids taking cocaine could have been a massive epidemic of cocaine use in one school, or a few groups of a dozen kids in a few different schools, or mini-epidemics in a handful of schools. Or forty-five kids independently sourcing and consuming cocaine alone without their friends, which seems pretty unlikely to me.
This immediately makes our increase less statistically significant. The small increase of 0.5 per cent was only significant because it came from a large sample of 9,000 data points—like 9,000 tosses of a coin—and the one thing almost everyone knows about studies like this is that a bigger sample size means the results are probably more significant. But if they’re not independent data points, then you have to treat it, in some respects, like a smaller sample, so the results become less significant. As statisticians would say, you must ‘correct for clustering’. This is done with clever maths which makes everyone’s head hurt. All you need to know is that the reasons why you must ‘correct for clustering’ are transparent, obvious and easy, as we have just seen (in fact, as with many implements, knowing when to use a statistical tool is a different and equally important skill to understanding how it is built). When you correct for clustering, you greatly reduce the significance of the results. Will our increase in cocaine use, already down from ‘doubled’ to ‘35.7 per cent’, even survive?
No. Because there is a final problem with this data: there is so much of it to choose from. There are dozens of data points in the report: on solvents, cigarettes, ketamine, cannabis, and so on. It is standard practice in research that we only accept a finding as significant if it has a p-value of 0.05 or less. But as we said, a p-value of 0.05 means that for every hundred comparisons you do, five will be positive by chance alone. From this report you could have done dozens of comparisons, and some of them would indeed have shown increases in usage—but by chance alone, and the cocaine figure could be one of those. If you roll a pair of dice often enough, you will get a double six three times in a row on many occasions. This is why statisticians do a ‘correction for multiple comparisons’, a correction for ‘rolling the dice’ lots of times. This, like correcting for clustering, is particularly brutal on the data, and often reduces the significance of findings dramatically.
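Bonferroni’s correction itself is the one genuinely easy piece of this: with m comparisons, you demand a p-value below 0.05/m instead of 0.05. A sketch, assuming for illustration twenty comparisons and a p-value of about 0.008 for the cocaine figures:

```python
alpha = 0.05        # the conventional cut-off for a single comparison
m_comparisons = 20  # illustrative guess at how many comparisons the report allows
p_cocaine = 0.008   # approximate p-value for the cocaine increase

corrected_threshold = alpha / m_comparisons  # 0.0025
survives = p_cocaine < corrected_threshold
print(survives)  # False: the finding no longer looks significant
```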
Data dredging is a dangerous profession. You could—at face value, knowing nothing about how stats works—have said that this government report showed a significant increase of 35.7 per cent in cocaine use. But the stats nerds who compiled it knew about clustering, and Bonferroni’s correction for multiple comparisons. They are not stupid; they do stats for a living.
That, presumably, is why they said quite clearly in their summary, in their press release and in the full report that there was no change from 2004 to 2005. But the journalists did not want to believe this: they tried to re-interpret the data for themselves, they looked under the bonnet, and they thought they’d found the news. The increase went from 0.5 per cent—a figure that might be a gradual trend, but it could equally well be an entirely chance finding—to a front-page story in The Times about cocaine use doubling. You might not trust the press release, but if you don’t know about numbers, then you take a big chance when you delve under the bonnet of a study to find a story.
OK, back to an easy one
There are also some perfectly simple ways to generate ridiculous statistics, and two common favourites are to select an unusual sample group, and to ask them a stupid question. Let’s say 70 per cent of all women want Prince Charles to be told to stop interfering in public life. Oh, hang on—70 per cent of all women who visit my website want Prince Charles to be told to stop interfering in public life. You can see where we’re going. And of course, in surveys, if they are voluntary, there is something called selection bias: only the people who can be bothered to fill out the survey form will actually have a vote registered.
There was an excellent example of this in the Telegraph in the last days of 2007. ‘Doctors Say No to Abortions in their Surgeries’ was the headline. ‘Family doctors are threatening a revolt against government plans to allow them to perform abortions in their surgeries, the Daily Telegraph can disclose.’ A revolt? ‘Four out of five GPs do not want to carry out terminations even though the idea is being tested in NHS pilot schemes, a survey has revealed.’
Where did these figures come from? A systematic survey of all GPs, with lots of chasing to catch the non-responders? Telephoning them at work? A postal survey, at least? No. It was an online vote on a doctors’ chat site that produced this major news story. Here is the question, and the options given:
‘GPs should carry out abortions in their surgeries’
Strongly agree, agree, don’t know, disagree, strongly disagree.
We should be clear: I myself do not fully understand this question. Is that ‘should’ as in ‘ought to’? And in what circumstances? With extra training, time and money? With extra systems in place for adverse outcomes? And remember, this is a website where doctors—bless them—go to moan. Are they just saying no because they’re grumbling about more work and low morale?
More than that, what exactly does ‘abortion’ mean here? Looking at the comments in the chat forum, I can tell you that plenty of the doctors seemed to think it was about surgical abortions, not the relatively safe oral pill for termination of pregnancy. Doctors aren’t that bright, you see. Here are some quotes:
This is a preposterous idea. How can GPs ever carry out abortions in their own surgeries. What if there was a major complication like uterine and bowel perforation?
GP surgeries are the places par excellence where infective disorders present. The idea of undertaking there any sort of sterile procedure involving an abdominal organ is anathema.
The only way it would or rather should happen is if GP practices have a surgical day care facility as part of their premises which is staffed by appropriately trained staff, i.e. theatre staff, anaesthetist and gynaecologist…any surgical operation is not without its risks, and presumably [we] will undergo gynaecological surgical training in order to perform.
What are we all going on about? Let’s all carry out abortions in our surgeries, living rooms, kitchens, garages, corner shops, you know, just like in the old days.
And here’s my favourite:
I think that the question is poorly worded and I hope that [the doctors’ website] do not release the results of this poll to the Daily Telegraph.
Beating you up
It would be wrong to assume that the kinds of oversights we’ve covered so far are limited to the lower echelons of society, like doctors and journalists. Some of the most sobering examples come from the very top.
In 2006, after a major government report, the media reported that one murder a week is committed by someone with psychiatric problems. Psychiatrists should do better, the newspapers told us, and prevent more of these murders. All of us would agree, I’m sure, with any sensible measure to improve risk management and violence, and it’s always timely to have a public debate about the ethics of detaining psychiatric patients (although in the name of fairness I’d like to see preventive detention discussed for all other potentially risky groups too—like alcoholics, the repeatedly violent, people who have abused staff in the job centre, and so on).
But to engage in this discussion, you need to understand the maths of predicting very rare events. Let’s take a very concrete example, and look at the HIV test. What features of any diagnostic procedure do we measure in order to judge how useful it might be? Statisticians would say the blood test for HIV has a very high ‘sensitivity’, at 0.999. That means that if you do have the virus, there is a 99.9 per cent chance that the blood test will be positive. They would also say the test has a high ‘specificity’ of 0.9999—so, if you are not infected, there is a 99.99 per cent chance that the test will be negative. What a smashing blood test.*
* The figures here are ballpark, from Gerd Gigerenzer’s excellent book Reckoning with Risk.

But if you look at it from the perspective of the person being tested, the maths gets slightly counterintuitive. Because weirdly, the meaning, the predictive value, of an individual’s positive or negative test is changed in different situations, depending on the background rarity of the event that the test is trying to detect. The rarer the event in your population, the worse your test becomes, even though it is the same test.
This is easier to understand with concrete figures. Let’s say the HIV infection rate among high-risk men in a particular area is 1.5 per cent. We use our excellent blood test on 10,000 of these men, and we can expect 151 positive blood results overall: 150 will be our truly HIV-positive men, who will get true positive blood tests; and one will be the single false positive we could expect from the remaining 9,850 HIV-negative men being given a test that is wrong one time in 10,000. So, if you get a positive HIV blood test result, in these circumstances your chances of being truly HIV positive are 150 out of 151. It’s a highly predictive test.
Let’s now use the same test where the background HIV infection rate in the population is about one in 10,000. If we test 10,000 people, we can expect two positive blood results overall. One from the person who really is HIV positive; and the one false positive that we could expect, again, from having 10,000 HIV-negative men being tested with a test that is wrong one time in 10,000.
Suddenly, when the background rate of an event is rare, even our previously brilliant blood test becomes a bit rubbish. For the two men with a positive HIV blood test result, in this population where only one in 10,000 has HIV, it’s only 50:50 odds on whether they really are HIV positive.
Let’s think about violence. The best predictive tool for psychiatric violence has a ‘sensitivity’ of 0.75, and a ‘specificity’ of 0.75. It’s tougher to be accurate when predicting an event in humans, with human minds and changing human lives. Let’s say 5 per cent of patients seen by a community mental health team will be involved in a violent event in a year. Using the same maths as we did for the HIV tests, your ‘0.75’ predictive tool would be wrong eighty-six times out of a hundred. For serious violence, occurring at 1 per cent a year, with our best ‘0.75’ tool, you inaccurately finger your potential perpetrator ninety-seven times out of a hundred. Will you preventively detain ninety-seven people to prevent three violent events? And will you apply that rule to alcoholics and assorted nasty antisocial types as well?
For murder, the extremely rare crime in question in this report, for which more action was demanded, occurring at one in 10,000 a year among patients with psychosis, the false positive rate is so high that the best predictive test is entirely useless.
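All of these worked examples are the same calculation: the positive predictive value of a test, given its sensitivity, its specificity and the background rate of the thing it is looking for. A sketch in Python, using the ballpark figures from the text:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Of those who test positive, what fraction truly have the condition?"""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# The HIV blood test: excellent in a high-risk group, a coin toss in a low-risk one.
print(positive_predictive_value(0.015, 0.999, 0.9999))   # ~0.993, i.e. 150 out of 151
print(positive_predictive_value(0.0001, 0.999, 0.9999))  # ~0.50, 50:50 odds

# The best '0.75' tool for predicting violence.
print(positive_predictive_value(0.05, 0.75, 0.75))    # ~0.14: wrong 86 times in 100
print(positive_predictive_value(0.01, 0.75, 0.75))    # ~0.03: wrong 97 times in 100
print(positive_predictive_value(0.0001, 0.75, 0.75))  # ~0.0003: useless for murder
```

The test never changes; only the rarity of what it is looking for does.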
This is not a counsel of despair. There are things that can be done, and you can always try to reduce the number of actual stark cock-ups, although it’s difficult to know what proportion of the ‘one murder a week’ represents a clear failure of a system, since when you look back in history, through the retrospecto-scope, anything that happens will look as if it was inexorably leading up to your one bad event. I’m just giving you the maths on rare events. What you do with it is a matter for you.
Locking you up
In 1999 solicitor Sally Clark was put on trial for murdering her two babies. Most people are aware that there was a statistical error in the prosecution case, but few know the true story, or the phenomenal extent of the statistical ignorance that went on in the case.
At her trial, Professor Sir Roy Meadow, an expert in parents who harm their children, was called to give expert evidence. Meadow famously quoted ‘one in seventy-three million’ as the chance of two children in the same family dying of Sudden Infant Death Syndrome (SIDS).
This was a very problematic piece of evidence for two very distinct reasons: one is easy to understand, the other is an absolute mindbender. Because you have the concentration span to follow the next two pages, you will come out smarter than Professor Sir Roy, the judge in the Sally Clark case, her defence teams, the appeal court judges, and almost all the journalists and legal commentators reporting on the case. We’ll do the easy reason first.
The ecological fallacy
The figure of ‘one in seventy-three million’ itself is iffy, as everyone now accepts. It was calculated as one in 8,543 x 8,543: the one-in-8,543 chance of a single SIDS death in a family like this, squared, as if the chances of two SIDS episodes in this one family were independent of each other. This feels wrong from the outset, and anyone can see why: there might be environmental or genetic factors at play, both of which would be shared by the two babies. But forget how pleased you are with yourself for understanding that fact. Even if we accept that two SIDS in one family is much more likely than one in seventy-three million—say, one in 10,000—any such figure is still of dubious relevance, as we will now see.
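The arithmetic behind the headline figure is a single multiplication, with the independence assumption hiding inside it:

```python
p_one_sids = 1 / 8543  # chance of one SIDS death in a family like this
# Multiplying the two chances together assumes the deaths are independent,
# which is exactly the flawed step.
p_double_sids_naive = p_one_sids * p_one_sids
print(f"one in {1 / p_double_sids_naive:,.0f}")  # one in 72,982,849
```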
The prosecutor’s fallacy
The real question in this case is: what do we do with this spurious number? Many press reports at the time stated that one in seventy-three million was the likelihood that the deaths of Sally Clark’s two children were accidental: that is, the likelihood that she was innocent. Many in the court process seemed to share this view, and the factoid certainly sticks in the mind. But this is an example of a well-known and well-documented piece of flawed reasoning known as ‘the prosecutor’s fallacy’.
Two babies in one family have died. This in itself is very rare. Once this rare event has occurred, the jury needs to weigh up two competing explanations for the babies’ deaths: double SIDS or double murder. Under normal circumstances—before any babies have died—double SIDS is very unlikely, and so is double murder. But now that the rare event of two babies dying in one family has occurred, the two explanations—double murder or double SIDS—are suddenly both very likely. If we really wanted to play statistics, we would need to know which is relatively more rare, double SIDS or double murder. People have tried to calculate the relative risks of these two events, and one paper says it comes out at around 2:1 in favour of double SIDS.
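The same point as a toy calculation. The individual rates below are placeholders, because only their ratio matters, and the 2:1 ratio is the one from the paper mentioned above:

```python
# Relative frequency of the two competing explanations, once two babies
# in one family have died. The paper cited in the text puts this at about 2:1.
rate_double_sids = 2
rate_double_murder = 1

p_sids_given_two_deaths = rate_double_sids / (rate_double_sids + rate_double_murder)
print(f"{p_sids_given_two_deaths:.2f}")  # 0.67: double SIDS is the likelier explanation
```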
Not only was this crucial nuance of the prosecutor’s fallacy missed at the time—by everyone in the court—it was also clearly missed in the appeal, at which the judges suggested that instead of ‘one in seventy-three million’, Meadow should have said ‘very rare’. They recognised the flaws in its calculation, the ecological fallacy, the easy problem above, but they still accepted his number as establishing ‘a very broad point, namely the rarity of double SIDS’.
That, as you now understand, was entirely wrongheaded: the rarity of double SIDS is irrelevant, because double murder is rare too. An entire court process failed to spot the nuance of how the figure should be used. Twice.
Meadow was foolish, and has been vilified (some might say this process was exacerbated by the witch-hunt against paediatricians who work on child abuse), but if it is true that he should have spotted and anticipated the problems in the interpretation of his number, then so should the rest of the people involved in the case: a paediatrician has no more unique responsibility to be numerate than a lawyer, a judge, journalist, jury member or clerk. The prosecutor’s fallacy is also highly relevant in DNA evidence, for example, where interpretation frequently turns on complex mathematical and contextual issues. Anyone who is going to trade in numbers, and use them, and think with them, and persuade with them, let alone lock people up with them, also has a responsibility to understand them. All you’ve done is read a popular science book on them, and already you can see it’s hardly rocket science.
Losing the lottery
You know, the most amazing thing happened to me tonight. I was coming here, on the way to the lecture, and I came in through the parking lot. And you won’t believe what happened. I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing…
Richard Feynman
It is possible to be very unlucky indeed. A nurse called Lucia de Berk has been in prison for six years in Holland, convicted of seven counts of murder and three of attempted murder. An unusually large number of people died when she was on shift, and that, essentially, along with some very weak circumstantial evidence, is the substance of the case against her. She has never confessed, has continued to protest her innocence, and her trial has generated a small collection of theoretical papers in the statistics literature.
The judgement was largely based on a figure of ‘one in 342 million against’. Even if we found errors in this figure—and believe me, we will—as in our previous story, the figure itself would still be largely irrelevant. Because, as we have already seen repeatedly, the interesting thing about statistics is not the tricky maths, but what the numbers mean.
There is also an important lesson here from which we could all benefit: unlikely things do happen. Somebody wins the lottery every week; children are struck by lightning. It’s only weird and startling when something very, very specific and unlikely happens if you have specifically predicted it beforehand.*
* The magician and pseudoscience debunker James Randi used to wake up every morning and write on a card in his pocket: ‘I, James Randi, will die today’, followed by the date and his signature. Just in case, he has recently explained, he really did, by some completely unpredictable accident.

Here is an analogy.
Imagine I am standing near a large wooden barn with an enormous machine gun. I place a blindfold over my eyes and—laughing maniacally—I fire off many thousands and thousands of bullets into the side of the barn. I then drop the gun, walk over to the wall, examine it closely for some time, all over, pacing up and down. I find one spot where there are three bullet holes close to each other, then draw a target around them, announcing proudly that I am an excellent marksman.
You would, I think, disagree with both my methods and my conclusions. But this is exactly what has happened in Lucia’s case: the prosecutors found seven deaths, on one nurse’s shifts, in one hospital, in one city, in one country, in the world, and then drew a target around them.
This breaks a cardinal rule of any research involving statistics: you cannot find your hypothesis in your results. Before you go to your data with your statistical tool, you have to have a specific hypothesis to test. If your hypothesis comes from analysing the data, then there is no sense in analysing the same data again to confirm it.
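A small simulation makes the sharpshooter problem concrete. Everything in it is invented for illustration (the number of nurses, the number of shifts, the death rate): fifty entirely innocent, statistically identical nurses, from whom we then single out the unluckiest after looking at the data.

```python
import math
import random

random.seed(0)  # deterministic, purely for illustration

# Fifty identical, innocent nurses; each works 300 shifts, and on every
# shift a death occurs with the same 5% probability. All numbers invented.
n_nurses, n_shifts, p_death = 50, 300, 0.05
deaths = [sum(random.random() < p_death for _ in range(n_shifts))
          for _ in range(n_nurses)]

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more deaths."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# The sharpshooter move: scan the data, pick the worst-looking nurse,
# THEN test that same nurse as if she had been singled out in advance.
worst = max(deaths)
p_value = binom_tail(worst, n_shifts, p_death)
print(f"worst nurse: {worst} deaths, naive p = {p_value:.4f}")

# The honest question: how likely is it that SOME nurse out of fifty
# looks this bad by luck alone? Far less impressive.
p_any = 1 - (1 - p_value) ** n_nurses
print(f"chance any of {n_nurses} innocent nurses looks this bad: {p_any:.2f}")
```

The naive p-value looks damning precisely because the target was drawn after the bullets were fired; once you account for having scanned fifty nurses to find her, the result is unremarkable.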
This is a rather complex, philosophical, mathematical form of circularity: but there were also very concrete forms of circular reasoning in the case. To collect more data, the investigators went back to the wards to see if they could find more suspicious deaths. But all the people who were asked to remember ‘suspicious incidents’ knew that they were being asked because Lucia might be a serial killer. There was a high risk that ‘an incident was suspicious’ became synonymous with ‘Lucia was present’. Some sudden deaths when Lucia was not present would not be listed in the calculations, by definition: they are in no way suspicious, because Lucia was not present.
It gets worse. ‘We were asked to make a list of incidents that happened during or shortly after Lucia’s shifts,’ said one hospital employee. In this manner more patterns were unearthed, and so it became even more likely that investigators would find more suspicious deaths on Lucia’s shifts. Meanwhile, Lucia waited in prison for her trial.
This is the stuff of nightmares.
At the same time, a huge amount of corollary statistical information was almost completely ignored. In the three years before Lucia worked on the ward in question, there were seven deaths. In the three years that she did work on the ward, there were six deaths. Here’s a thought: it seems odd that the death rate should go down on a ward at the precise moment that a serial killer—on a killing spree—arrives. If Lucia killed them all, then there must have been no natural deaths on that ward at all in the whole of the three years that she worked there.
Ah, but on the other hand, as the prosecution revealed at her trial, Lucia did like tarot. And she does sound a bit weird in her private diary, excerpts from which were read out. So she might have done it anyway.
But the strangest thing of all is this. In generating his obligatory, spurious, Meadowesque figure—which this time was ‘one in 342 million’—the prosecution’s statistician made a simple, rudimentary mathematical error. He combined individual statistical tests by multiplying p-values, the mathematical description of chance, or statistical significance. This bit’s for the hardcore science nerds, and will be edited out by the publisher, but I intend to write it anyway: you do not just multiply p-values together, you weave them with a clever tool, like maybe ‘Fisher’s method for combination of independent p-values’.
If you multiply p-values together, then harmless and probable incidents rapidly appear vanishingly unlikely. Let’s say you worked in twenty hospitals, each with a harmless incident pattern: say p=0.5. If you multiply those harmless p-values, of entirely chance findings, you end up with a final p-value of 0.5 to the power of twenty, which is p < 0.000001, which is extremely, very, highly statistically significant. With this mathematical error, by his reasoning, if you change hospitals a lot, you automatically become a suspect. Have you worked in twenty hospitals? For God’s sake don’t tell the Dutch police if you have.
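For the hardcore nerds, here is a sketch of the difference, using the twenty hypothetical hospitals above, each at p=0.5. Fisher’s method is the tool named in the text; the closed-form tail formula used below is the standard one for a chi-squared distribution with an even number of degrees of freedom:

```python
import math

p_values = [0.5] * 20  # twenty harmless, entirely unremarkable hospitals

# The error described in the text: multiplying p-values together makes
# pure chance look vanishingly unlikely.
naive = math.prod(p_values)  # 0.5 ** 20, roughly one in a million

# Fisher's method: X = -2 * sum(ln(p_i)) follows a chi-squared
# distribution with 2k degrees of freedom under the null hypothesis.
x = -2 * sum(math.log(p) for p in p_values)
k = len(p_values)  # degrees of freedom = 2k = 40

# For even degrees of freedom 2k, the chi-squared survival function has
# a closed form: exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
half = x / 2
combined = math.exp(-half) * sum(half**i / math.factorial(i)
                                 for i in range(k))

print(f"multiplied: {naive:.1e}")          # spuriously 'significant'
print(f"Fisher combined: {combined:.2f}")  # roughly 0.93: nothing odd at all
```

Twenty unremarkable results, properly combined, give an unremarkable combined result; multiplied together, they would have you locked up.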


