11 Is Mainstream Medicine Evil?

So that was the alternative therapy industry. Its practitioners’ claims are made directly to the public, so they have greater cultural currency; and while they use the same tricks of the trade as the pharmaceutical industry—as we have seen along the way—their strategies and errors are more transparent, so they make for a neat teaching tool. Now, once again, we should raise our game.
For this chapter you will also have to rise above your own narcissism. We will not be talking about the fact that your GP is sometimes rushed, or that your consultant was rude to you. We will not be talking about the fact that nobody could work out what was wrong with your knee, and we will not even be discussing the time that someone misdiagnosed your grandfather’s cancer, and he suffered unnecessarily for months before a painful, bloody, undeserved and undignified death at the end of a productive and loving life.
Terrible things happen in medicine, when it goes right as well as when it goes wrong. Everybody agrees that we should work to minimise the errors, everybody agrees that doctors are sometimes terrible; if the subject fascinates you, then I encourage you to buy one of the libraries’ worth of books on clinical governance. Doctors can be awful, and mistakes can be murderous, but the philosophy driving evidence-based medicine is not. How well does it work?
One thing you could measure is how much medical practice is evidence-based. This is not easy. From the state of current knowledge, around 13 per cent of all treatments have good evidence, and a further 21 per cent are likely to be beneficial. This sounds low, but it seems the more common treatments tend to have a better evidence base. Another way of measuring is to look at how much medical activity is evidence-based, taking consecutive patients, in a hospital outpatients clinic for example, looking at their diagnosis, what treatment they were given, and then looking at whether that treatment decision was based on evidence. These real-world studies give a more meaningful figure: lots were done in the 1990s, and it turns out, depending on speciality, that between 50 and 80 per cent of all medical activity is ‘evidence-based’. It’s still not great, and if you have any ideas on how to improve that, do please write about it.*
* I have argued on various occasions that, wherever possible, all treatment where there is uncertainty should be randomised, and in the NHS we are theoretically in a unique administrative position to be able to facilitate this, as a gift to the world. For all that you may worry about some of its decisions, the National Institute for Health and Clinical Excellence (NICE) has also had the clever idea of recommending that some treatments—where there is uncertainty about benefit—should only be funded by the NHS when given in the context of a trial (an ‘Only in Research’ approval). NICE is frequently criticised—it’s a political body after all—for not recommending that the NHS funds apparently promising treatments. But acquiescing and funding a treatment when it is uncertain whether it does more good than harm is dangerous, as has been dramatically illustrated by various cases where promising treatments turned out ultimately to do more harm than good. We failed for decades to address uncertainties about the benefits of steroids for patients with brain injury: the CRASH trial showed that tens of thousands of people have died unnecessarily, because in fact they do more harm than good. In medicine, information saves lives.

Another good measure is what happens when things go wrong. The British Medical Journal is probably the most important medical journal in the UK. It recently announced the three most popular papers from its archive for 2005, according to an audit that assessed their use by readers, the number of times they were referenced by other academic papers, and so on. Each of these papers had a criticism of either a drug, a drug company or a medical activity as its central theme.
We can go through them briefly, so you can see for yourself how relevant the biggest papers from the most important medical journal are to your needs. The top-scoring paper was a case-control study which showed that patients had a higher risk of heart attack if they were taking the drugs rofecoxib (Vioxx), diclofenac or ibuprofen. At number two was a large meta-analysis of drug company data, which showed no evidence that SSRI antidepressants increase the risk of suicide, but found weak evidence for an increased risk of deliberate self-harm. In third place was a systematic review which showed an association between suicide attempts and the use of SSRIs, and critically highlighted some of the inadequacies around the reporting of suicides in clinical trials.
This is critical self-appraisal, and it is very healthy, but you will notice something else: all of those studies revolve around situations where drug companies withheld or distorted evidence. How does this happen?
The pharmaceutical industry

The tricks of the trade which we’ll discuss in this chapter are probably more complicated than most of the other stuff in the book, because we will be making technical critiques of an industry’s professional literature. Drug companies thankfully don’t advertise direct to the public in the UK—in America you can find them advertising anxiety pills for your dog—so we are pulling apart the tricks they play on doctors, an audience which is in a slightly better position to call their bluff. This means that we’ll first have to explain some background about how a drug comes to market. This is stuff that you will be taught at school when I become president of the one world government.
Understanding this process is important for one very clear reason. It seems to me that a lot of the stranger ideas people have about medicine derive from an emotional struggle with the very notion of a pharmaceutical industry. Whatever our political leanings, everyone is basically a socialist when it comes to healthcare: we all feel nervous about profit taking any role in the caring professions, but that feeling has nowhere to go. Big pharma is evil: I would agree with that premise. But because people don’t understand exactly how big pharma is evil, their anger and indignation get diverted away from valid criticisms—its role in distorting data, for example, or withholding life-saving AIDS drugs from the developing world—and channelled into infantile fantasies. ‘Big pharma is evil,’ goes the line of reasoning, ‘therefore homeopathy works and the MMR vaccine causes autism.’ This is probably not helpful.
In the UK, the pharmaceutical industry has become the third most profitable activity after finance and—a surprise if you live here—tourism. We spend £7 billion a year on pharmaceutical drugs, and 80 per cent of that goes on patented drugs, medicines which were released in the last ten years. Globally, the industry is worth around £150 billion.
People come in many flavours, but all corporations have a duty to maximise their profits, and this often sits uncomfortably with the notion of caring for people. An extreme example comes with AIDS: as I mentioned in passing, drug companies explain that they cannot give AIDS drugs off licence to developing-world countries, because they need the money from sales for research and development. And yet the biggest US companies spend only 14 per cent of their $200 billion of sales on R&D, compared with 31 per cent on marketing and administration.
The companies also set their prices in ways you might judge to be exploitative. Once your drug comes out, you have around ten years ‘on patent’, as the only company allowed to make it. Loratadine, produced by Schering-Plough, is an effective antihistamine drug that does not cause the unpleasant antihistamine side-effect of drowsiness. It was a unique treatment for a while, and in high demand. Before the patent ran out, the price of the drug was raised thirteen times in just five years, increasing by over 50 per cent. Some might regard this as profiteering.
But the pharmaceutical industry is also currently in trouble. The golden age of medicine has creaked to a halt, as we have said, and the number of new drugs, or ‘new molecular entities’, being registered has dwindled from fifty a year in the 1990s to about twenty now. At the same time, the number of ‘me-too’ drugs has risen, to the point where they make up around half of all new drugs.
Me-too drugs are an inevitable function of the market: they are rough copies of drugs that already exist, made by another company, but are different enough for a manufacturer to be able to claim their own patent. They take huge effort to produce, and need to be tested (on human participants, with all the attendant risks) and trialled and refined and marketed just like a new drug. Sometimes they offer modest benefits (a more convenient dosing regime, for example), but for all the hard work they involve, they don’t generally represent a significant breakthrough in human health. They are merely a breakthrough in making money. Where do all these drugs come from?
The journey of a drug
First of all, you need an idea for a drug. This can come from any number of places: a molecule in a plant; a receptor in the body that you think you can build a molecule to interface with; an old drug that you’ve tinkered with; and so on. This part of the story is extremely interesting, and I recommend doing a degree in it. When you think you have a molecule that might be a runner, you test it in animals, to see if it works for whatever you think it should do (and to see if it kills them, of course).
Then you do Phase I, or ‘first in man’, studies on a small number of brave, healthy young men who need money, firstly to see if it kills them, and also to measure basic things like how fast the drug is excreted from the body (this is the phase that went horribly wrong in the TGN1412 tests in 2006, where several young men were seriously injured). If this works, you move to a Phase II trial, in a couple of hundred people with the relevant illness, as a ‘proof of concept’, to work out the dose, and to get an idea if it is effective or not. A lot of drugs fail at this point, which is a shame, since this is no GCSE science project: bringing a drug to market costs around $500 million in total.
Then you do a Phase III trial, in hundreds or thousands of patients, randomised, blinded, comparing your drug against placebo or a comparable treatment, and collect much more data on efficacy and safety. You might need to do a few of these, and then you can apply for a licence to sell your drug. After it goes to market, you should be doing more trials, and other people will probably do trials and other studies on your drug too; and hopefully everyone will keep their eyes open for any previously unnoticed side-effects, ideally reporting them using the Yellow Card system (patients can use this too; in fact, please do. It’s at http://yellowcard.mhra.gov.uk).
Doctors make their rational decision on whether they want to prescribe a drug based on how good it has been shown to be in trials, how bad the side-effects are, and sometimes cost. Ideally they will get their information on efficacy from studies published in peer-reviewed academic journals, or from other material like textbooks and review articles which are themselves based on primary research like trials. At worst, they will rely on the lies of drug reps and word of mouth.
But drug trials are expensive, so an astonishing 90 per cent of clinical drug trials, and 70 per cent of trials reported in major medical journals, are conducted or commissioned by the pharmaceutical industry. A key feature of science is that findings should be replicated, but if only one organisation is doing the funding, then this feature is lost.
It is tempting to blame the drug companies—although it seems to me that nations and civic organisations are equally at fault here for not coughing up—but wherever you draw your own moral line, the upshot is that drug companies have a huge influence over what gets researched, how it is researched, how the results are reported, how they are analysed, and how they are interpreted.
Sometimes whole areas can be orphaned because of a lack of money, and corporate interest. Homeopaths and vitamin pill quacks would tell you that their pills are good examples of this phenomenon. That is a moral affront to the better examples. There are conditions which affect a small number of people, like Creutzfeldt-Jakob disease and Wilson disease, but more chilling are the diseases which are neglected because they are only found in the developing world, like Chagas disease (which threatens a quarter of Latin America) and trypanosomiasis (300,000 cases a year, but in Africa). The Global Forum for Health Research estimates that only 10 per cent of the world’s health burden receives 90 per cent of total biomedical research funding.
Often it is simply information that is missing, rather than some amazing new molecule. Eclampsia, say, is estimated to cause 50,000 deaths in pregnancy around the world each year, and the best treatment, by a huge margin, is cheap, unpatented, magnesium sulphate (high doses intravenously, that is, not some alternative medicine supplement, but also not the expensive anti-convulsants that were used for many decades). Although magnesium had been used to treat eclampsia since 1906, its position as the best treatment was only established a century later in 2002, with the help of the World Health Organisation, because there was no commercial interest in the research question: nobody has a patent on magnesium, and the majority of deaths from eclampsia are in the developing world. Millions of women have died of the condition since 1906, and many of those deaths were avoidable.
To an extent these are political and development issues, which we should leave for another day; and I have a promise to pay out on: you want to be able to take the skills you’ve learnt about levels of evidence and distortions of research, and understand how the pharmaceutical industry distorts data, and pulls the wool over our eyes. How would we go about proving this? Overall, it’s true, drug company trials are much more likely to produce a positive outcome for their own drug. But to leave it there would be weak-minded.
What I’m about to tell you is what I teach medical students and doctors—here and there—in a lecture I rather childishly call ‘drug company bullshit’. It is, in turn, what I was taught at medical school,* and I think the easiest way to understand the issue is to put yourself in the shoes of a big pharma researcher.
* In this subject, like many medics of my generation, I am indebted to the classic textbook How to Read a Paper by Professor Greenhalgh at UCL. It should be a best-seller. Testing Treatments by Imogen Evans, Hazel Thornton and Iain Chalmers is also a work of great genius, appropriate for a lay audience, and amazingly also free to download online from www.jameslindlibrary.org. For committed readers I recommend Methodological Errors in Medical Research by Bjorn Andersen. It’s extremely long. The subtitle is ‘An Incomplete Catalogue’.

You have a pill. It’s OK, maybe not that brilliant, but a lot of money is riding on it. You need a positive result, but your audience aren’t homeopaths, journalists or the public: they are doctors and academics, so they have been trained in spotting the obvious tricks, like ‘no blinding’, or ‘inadequate randomisation’. Your sleights of hand will have to be much more elegant, much more subtle, but every bit as powerful.
What can you do?
Well, firstly, you could study it in winners. Different people respond differently to drugs: old people on lots of medications are often no-hopers, whereas younger people with just one problem are more likely to show an improvement. So only study your drug in the latter group. This will make your research much less applicable to the actual people that doctors are prescribing for, but hopefully they won’t notice. This is so commonplace it is hardly worth giving an example.
Next up, you could compare your drug against a useless control. Many people would argue, for example, that you should never compare your drug against placebo, because it proves nothing of clinical value: in the real world, nobody cares if your drug is better than a sugar pill; they only care if it is better than the best currently available treatment. But you’ve already spent hundreds of millions of dollars bringing your drug to market, so stuff that: do lots of placebo-controlled trials and make a big fuss about them, because they practically guarantee some positive data. Again, this is universal, because almost all drugs will be compared against placebo at some stage in their lives, and ‘drug reps’—the people employed by big pharma to bamboozle doctors (many simply refuse to see them)—love the unambiguous positivity of the graphs these studies can produce.
Then things get more interesting. If you do have to compare your drug with one produced by a competitor—to save face, or because a regulator demands it—you could try a sneaky underhand trick: use an inadequate dose of the competing drug, so that patients on it don’t do very well; or give a very high dose of the competing drug, so that patients experience lots of side-effects; or give the competing drug in the wrong way (perhaps orally when it should be intravenous, and hope most readers don’t notice); or you could increase the dose of the competing drug much too quickly, so that the patients taking it get worse side-effects. Your drug will shine by comparison.
You might think no such thing could ever happen. If you follow the references in the back, you will find studies where patients were given really rather high doses of old-fashioned antipsychotic medication (which made the new-generation drugs look as if they were better in terms of side-effects), and studies with doses of SSRI antidepressants which some might consider unusual, to name just a couple of examples. I know. It’s slightly incredible.
Of course, another trick you could pull with side-effects is simply not to ask about them; or rather—since you have to be sneaky in this field—you could be careful about how you ask. Here is an example. SSRI antidepressant drugs cause sexual side-effects fairly commonly, including anorgasmia. We should be clear (and I’m trying to phrase this as neutrally as possible): I really enjoy the sensation of orgasm. It’s important to me, and everything I experience in the world tells me that this sensation is important to other people too. Wars have been fought, essentially, for the sensation of orgasm. There are evolutionary psychologists who would try to persuade you that the entirety of human culture and language is driven, in large part, by the pursuit of the sensation of orgasm. Losing it seems like an important side-effect to ask about.
And yet, various studies have shown that the reported prevalence of anorgasmia in patients taking SSRI drugs varies between 2 per cent and 73 per cent, depending primarily on how you ask: a casual, open-ended question about side-effects, for example, or a careful and detailed enquiry. One 3,000-subject review on SSRIs simply did not list any sexual side-effects on its twenty-three-item side-effect table. Twenty-three other things were more important, according to the researchers, than losing the sensation of orgasm. I have read them. They are not.
But back to the main outcomes. And here is a good trick: instead of a real-world outcome, like death or pain, you could always use a ‘surrogate outcome’, which is easier to attain. If your drug is supposed to reduce cholesterol and so prevent cardiac deaths, for example, don’t measure cardiac deaths, measure reduced cholesterol instead. That’s much easier to achieve than a reduction in cardiac deaths, and the trial will be cheaper and quicker to do, so your result will be cheaper and more positive. Result!
Now you’ve done your trial, and despite your best efforts things have come out negative. What can you do? Well, if your trial has been good overall, but has thrown out a few negative results, you could try an old trick: don’t draw attention to the disappointing data by putting it on a graph. Mention it briefly in the text, and ignore it when drawing your conclusions. (I’m so good at this I scare myself. Comes from reading too many rubbish trials.)
If your results are completely negative, don’t publish them at all, or publish them only after a long delay. This is exactly what the drug companies did with the data on SSRI antidepressants: they hid the data suggesting they might be dangerous, and they buried the data showing them to perform no better than placebo. If you’re really clever, and have money to burn, then after you get disappointing data, you could do some more trials with the same protocol, in the hope that they will be positive: then try to bundle all the data up together, so that your negative data is swallowed up by some mediocre positive results.
Or you could get really serious, and start to manipulate the statistics. For two pages only, this book will now get quite nerdy.
I understand if you want to skip it, but know that it is here for the doctors who bought the book to laugh at homeopaths. Here are the classic tricks to play in your statistical analysis to make sure your trial has a positive result.
Ignore the protocol entirely
Always assume that any correlation proves causation. Throw all your data into a spreadsheet program and report—as significant—any relationship between anything and everything if it helps your case. If you measure enough, some things are bound to be positive just by sheer luck.
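If you want to see how dependable this trick is, here is a minimal sketch in Python (the numbers are invented, and the simulated drug genuinely does nothing): give a hundred patients an ineffective pill, measure forty different outcomes, and count how many come out ‘statistically significant’ anyway.

```python
# Sketch: measure enough outcomes and some will be 'significant'.
# Hypothetical data: the drug does nothing for any outcome.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_patients, n_outcomes = 100, 40

treatment = rng.normal(size=(n_patients, n_outcomes))
placebo = rng.normal(size=(n_patients, n_outcomes))

p_values = [stats.ttest_ind(treatment[:, i], placebo[:, i]).pvalue
            for i in range(n_outcomes)]
hits = sum(p < 0.05 for p in p_values)
print(f"{hits} of {n_outcomes} outcomes 'significant' at p < 0.05, by luck alone")
```

At a 5 per cent threshold you expect roughly two of the forty outcomes to come up positive by pure chance. Report those prominently, and say nothing about the rest.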
Play with the baseline
Sometimes, when you start a trial, quite by chance the treatment group is already doing better than the placebo group. If so, then leave it like that. If, on the other hand, the placebo group is already doing better than the treatment group at the start, then adjust for the baseline in your analysis.
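The general form of this move is to run more than one analysis and report whichever one flatters the drug. Here is a sketch of it (invented data again, and a drug with no effect at all): compare raw final scores if that looks good, or ‘adjust for baseline’ by comparing change scores if that looks better.

```python
# Sketch: report whichever analysis flatters the drug.
# Hypothetical null trials: the drug has no effect whatsoever.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 2000, 50
flattered = 0

for _ in range(n_sims):
    base_t, base_p = rng.normal(size=n), rng.normal(size=n)
    final_t = base_t + rng.normal(size=n)   # no drug effect added
    final_p = base_p + rng.normal(size=n)
    # Analysis A: compare final scores, ignoring any baseline imbalance.
    raw_good = (final_t.mean() > final_p.mean()
                and stats.ttest_ind(final_t, final_p).pvalue < 0.05)
    # Analysis B: 'adjust for baseline' by comparing change scores.
    chg_t, chg_p = final_t - base_t, final_p - base_p
    adj_good = (chg_t.mean() > chg_p.mean()
                and stats.ttest_ind(chg_t, chg_p).pvalue < 0.05)
    if raw_good or adj_good:
        flattered += 1

print(f"Drug 'significantly better' in {flattered / n_sims:.1%} of null trials")
print("One honest, pre-specified analysis would manage about 2.5%")
```

Each analysis on its own would flatter a useless drug about 2.5 per cent of the time; give yourself the choice after peeking at the data, and the rate creeps upwards.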
Ignore dropouts
People who drop out of trials are statistically much more likely to have done badly, and much more likely to have had side-effects. They will only make your drug look bad. So ignore them, make no attempt to chase them up, do not include them in your final analysis.
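You can watch this one distort a trial in a few lines of simulation (hypothetical numbers, nothing to do with any real drug): the pill is useless, but patients who do badly on it tend to vanish before the final measurement, so an analysis of ‘completers only’ makes it shine.

```python
# Sketch: ignoring the dropouts. Hypothetical data: the drug is useless,
# but patients doing badly on it tend to drop out before the end.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
drug = rng.normal(size=n)       # outcome scores; higher is better
placebo = rng.normal(size=n)

# Patients with poor outcomes on the drug are far more likely to vanish.
stayed = rng.random(n) < np.where(drug < -0.5, 0.4, 0.95)

print(f"Per-protocol (completers only): drug mean {drug[stayed].mean():+.2f}")
print(f"Intention-to-treat (everyone):  drug mean {drug.mean():+.2f}")
print(f"Placebo (nobody drops out):     {placebo.mean():+.2f}")
```

This is why careful trials analyse by ‘intention to treat’: everyone who was randomised gets counted in the final analysis, whether they finished the trial or not.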
Clean up the data
Look at your graphs. There will be some anomalous ‘outliers’, or points which lie a long way from the others. If they are making your drug look bad, just delete them. But if they are helping your drug look good, even if they seem to be spurious results, leave them in.
‘The best of five…no…seven…no…nine!’
If the difference between your drug and placebo becomes significant four and a half months into a six-month trial, stop the trial immediately and start writing up the results: things might get less impressive if you carry on. Alternatively, if at six months the results are ‘nearly significant’, extend the trial by another three months.
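Statisticians call this ‘optional stopping’, and it is quietly devastating, as a quick simulation shows (made-up data, with the drug identical to placebo): peek at the results after every twenty patients per arm, and stop the moment p dips below 0.05.

```python
# Sketch: stop the trial the moment the result looks 'significant'.
# Hypothetical null trials: drug and placebo are identical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, max_n, step = 1000, 300, 20
wins = 0

for _ in range(n_sims):
    drug, placebo = rng.normal(size=max_n), rng.normal(size=max_n)
    # Peek after every 20 patients per arm; stop at the first p < 0.05.
    for n in range(step, max_n + 1, step):
        if stats.ttest_ind(drug[:n], placebo[:n]).pvalue < 0.05:
            wins += 1
            break

print(f"'Significant' result in {wins / n_sims:.0%} of null trials")
print("A single test at the planned endpoint would give about 5%")
```

A dozen or so peeks can roughly triple your false-positive rate, which is why legitimate trials that allow early stopping use special, stricter thresholds for each interim look.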
Torture the data
If your results are bad, ask the computer to go back and see if any particular subgroups behaved differently. You might find that your drug works very well in Chinese women aged fifty-two to sixty-one. ‘Torture the data and it will confess to anything,’ as they say at Guantanamo Bay.
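Here is the torture chamber in miniature (Python again, invented patients, and a drug with no effect in anyone): slice a single null trial into a dozen subgroups and admire the most flattering one.

```python
# Sketch: torture the data with subgroups. One hypothetical null trial:
# the drug does nothing overall, and nothing in any subgroup either.
import numpy as np
from itertools import product
from scipy import stats

rng = np.random.default_rng(3)
n = 400
outcome = rng.normal(size=n)
treated = rng.random(n) < 0.5
sex = rng.choice(["male", "female"], size=n)
age = rng.choice(["<40", "40-60", ">60"], size=n)
smoker = rng.choice(["yes", "no"], size=n)

results = []
for s, a, sm in product(["male", "female"], ["<40", "40-60", ">60"], ["yes", "no"]):
    mask = (sex == s) & (age == a) & (smoker == sm)
    drug, plac = outcome[mask & treated], outcome[mask & ~treated]
    if len(drug) > 5 and len(plac) > 5:
        results.append((stats.ttest_ind(drug, plac).pvalue, f"{s}, {a}, smoker {sm}"))

p, label = min(results)
print(f"Most flattering subgroup: {label} (p = {p:.3f}), purely by chance")
```

Run it a few times with different seeds and some subgroup will usually dip below 0.05, despite the drug doing nothing for anybody.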
Try every button on the computer
If you’re really desperate, and analysing your data the way you planned does not give you the result you wanted, just run the figures through a wide selection of other statistical tests, even if they are entirely inappropriate, at random.
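A sketch of this last resort (one set of made-up numbers, no real difference between the groups, and five different significance tests run on the very same data):

```python
# Sketch: shop around for a test. Made-up data, no real group difference,
# five different significance tests on the same numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
drug, placebo = rng.normal(size=40), rng.normal(size=40)

tests = {
    "Student's t-test": stats.ttest_ind(drug, placebo).pvalue,
    "Welch's t-test": stats.ttest_ind(drug, placebo, equal_var=False).pvalue,
    "Mann-Whitney U": stats.mannwhitneyu(drug, placebo).pvalue,
    "Kolmogorov-Smirnov": stats.ks_2samp(drug, placebo).pvalue,
    "Mood's median test": stats.median_test(drug, placebo)[1],
}
for name, p in sorted(tests.items(), key=lambda kv: kv[1]):
    print(f"{name:20s} p = {p:.3f}")
# Report only the top line, and never mention that you ran the others.
```

None of these tests is wrong in itself; the cheat is choosing among them after seeing which one gives the answer you wanted.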
And when you’re finished, the most important thing, of course, is to publish wisely. If you have a good trial, publish it in the biggest journal you can possibly manage. If you have a positive trial, but it was a completely unfair test, which will be obvious to everyone, then put it in an obscure journal (published, written and edited entirely by the industry): remember, the tricks we have just described hide nothing, and will be obvious to anyone who reads your paper, but only if they read it very attentively, so it’s in your interest to make sure it isn’t read beyond the abstract. Finally, if your finding is really embarrassing, hide it away somewhere and cite ‘data on file’. Nobody will know the methods, and it will only be noticed if someone comes pestering you for the data to do a systematic review. Hopefully, that won’t be for ages.
How can this be possible?
When I explain this abuse of research to friends from outside medicine and academia, they are rightly amazed. ‘How can this be possible?’ they say. Well, firstly, much bad research comes down to incompetence. Many of the methodological errors described above can come about by wishful thinking, as much as mendacity. But is it possible to prove foul play?
On an individual level, it is sometimes quite hard to show that a trial has been deliberately rigged to give the right answer for its sponsors. Overall, however, the picture emerges very clearly. The issue has been studied so frequently that in 2003 a systematic review found thirty separate studies looking at whether funding in various groups of trials affected the findings. Overall, studies funded by a pharmaceutical company were found to be four times more likely to give results that were favourable to the company than independent studies.
One review of bias tells a particularly Alice in Wonderland story. Fifty-six different trials comparing painkillers like ibuprofen, diclofenac and so on were found. People often invent new versions of these drugs in the hope that they might have fewer side-effects, or be stronger (or stay in patent and make money). In every single trial the sponsoring manufacturer’s drug came out as better than, or equal to, the others in the trial. On not one occasion did the manufacturer’s drug come out worse. Philosophers and mathematicians talk about ‘transitivity’: if A is better than B, and B is better than C, then C cannot be better than A. To put it bluntly, this review of fifty-six trials exposed a singular absurdity: all of these drugs were better than each other.
But there is a surprise waiting around the corner. Astonishingly, when the methodological flaws in studies are examined, it seems that industry-funded trials actually turn out to have better research methods, on average, than independent trials.
The most that could be pinned on the drug companies were some fairly trivial howlers: things like using inadequate doses of the competitor’s drug (as we said above), or making claims in the conclusions section of the paper that exaggerated a positive finding. But these, at least, were transparent flaws: you only had to read the trial to see that the researchers had given a miserly dose of a painkiller; and you should always read the methods and results section of a trial to decide what its findings are, because the discussion and conclusion pages at the end are like the comment pages in a newspaper. They’re not where you get your news from.
How can we explain, then, the apparent fact that industry-funded trials are so often so glowing? How can all the drugs simultaneously be better than all of the others? The crucial kludge may happen after the trial is finished.
Publication bias and suppressing negative results
‘Publication bias’ is a very interesting and very human phenomenon. For a number of reasons, positive trials are more likely to get published than negative ones. It’s easy enough to understand, if you put yourself in the shoes of the researcher. Firstly, when you get a negative result, it feels as if it’s all been a bit of a waste of time. It’s easy to convince yourself that you found nothing, when in fact you discovered a very useful piece of information: that the thing you were testing doesn’t work.
Rightly or wrongly, finding out that something doesn’t work probably isn’t going to win you a Nobel Prize—there’s no justice in the world—so you might feel demotivated about the project, or prioritise other projects ahead of writing up and submitting your negative finding to an academic journal, and so the data just sits, rotting, in your bottom drawer. Months pass. You get a new grant. The guilt niggles occasionally, but Monday’s your day in clinic, so Tuesday’s the beginning of the week really, and there’s the departmental meeting on Wednesday, so Thursday’s the only day you can get any proper work done, because Friday’s your teaching day, and before you know it, a year has passed, your supervisor retires, the new guy doesn’t even know the experiment ever happened, and the negative trial data is forgotten forever, unpublished. If you are smiling in recognition at this paragraph, then you are a very bad person.
Even if you do get around to writing up your negative finding, it’s hardly news. You’re probably not going to get it into a big-name journal, unless it was a massive trial on something everybody thought was really whizzbang until your negative trial came along and blew it out of the water, so as well as this being a good reason for you not bothering, it also means the whole process will be heinously delayed: it can take a year for some of the slacker journals to reject a paper. Every time you submit to a different journal you might have to re-format the references (hours of tedium). If you aim too high and get a few rejections, it could be years until your paper comes out, even if you are being diligent: that’s years of people not knowing about your study.
Publication bias is common, and in some fields it is more rife than in others. In 1995, only 1 per cent of all articles published in alternative medicine journals gave a negative result. The most recent figure is 5 per cent negative. This is very, very low, although to be fair, it could be worse. A review in 1998 looked at the entire canon of Chinese medical research, and found that not one single negative trial had ever been published. Not one. You can see why I use CAM (complementary and alternative medicine) as a simple teaching tool for evidence-based medicine.
Generally the influence of publication bias is more subtle, and you can get a hint that publication bias exists in a field by doing something very clever called a funnel plot. This requires, only briefly, that you pay attention.
If there are lots of trials on a subject, then quite by chance they will all give slightly different answers, but you would expect them all to cluster fairly equally around the true answer. You would also expect that the bigger studies, with more participants in them, and with better methods, would be more closely clustered around the correct answer than the smaller studies: the smaller studies, meanwhile, will be all over the shop, unusually positive and negative at random, because in a study with, say, twenty patients, you only need three freak results to send the overall conclusions right off.
A funnel plot is a clever way of graphing this. You put the effect (i.e., how effective the treatment is) on the x-axis, from left to right. Then, on the y-axis (top-to-bottom, maths-skivers) you put how big the trial was, or some other measure of how accurate it was. If there is no publication bias, you should see a nice inverted funnel: the big, accurate trials all cluster around each other at the top of the funnel, and then as you go down the funnel, the little, inaccurate trials gradually spread out to the left and right, as they become more and more wildly inaccurate—both positively and negatively.
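This is easy to simulate, if you’d like to see one (a hypothetical sketch using numpy and matplotlib; the ‘trials’ are invented and the true effect is zero). The left panel shows the honest inverted funnel; the right panel jumps ahead a paragraph, and shows what happens when the small negative trials go missing.

```python
# Sketch: simulated funnel plots. Hypothetical trials of a treatment
# whose true effect is zero; estimates get noisier as trials get smaller.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
n_trials = 200
sizes = rng.integers(20, 2000, size=n_trials)   # patients per trial
std_err = 1 / np.sqrt(sizes)                    # bigger trial, tighter estimate
effects = rng.normal(loc=0.0, scale=std_err)    # each trial's observed effect

fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(9, 4))
ax1.scatter(effects, sizes, s=10)
ax1.set(title="No publication bias", xlabel="Observed effect", ylabel="Trial size")

# Publication bias: small negative trials stay in the bottom drawer.
published = (effects > 0) | (sizes > 500) | (rng.random(n_trials) < 0.2)
ax2.scatter(effects[published], sizes[published], s=10)
ax2.set(title="Small negative trials missing", xlabel="Observed effect")
plt.tight_layout()
plt.show()
```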

If there is publication bias, however, the results will be skewed. The smaller, more rubbish negative trials seem to be missing, because they were ignored—nobody had anything to lose by letting these tiny, unimpressive trials sit in their bottom drawer—and so only the positive ones were published. Not only has publication bias been demonstrated in many fields of medicine, but a paper has even found evidence of publication bias in studies of publication bias. Here is the funnel plot for that paper. This is what passes for humour in the world of evidence-based medicine.

The most heinous recent case of publication bias has been in the area of SSRI antidepressant drugs, as has been shown in various papers. A group of academics published a paper in the New England Journal of Medicine at the beginning of 2008 which listed all the trials on SSRIs which had ever been formally registered with the FDA, and examined the same trials in the academic literature. Thirty-seven studies were assessed by the FDA as positive: with one exception, every single one of those positive trials was properly written up and published. Meanwhile, twenty-two studies that had negative or iffy results were simply not published at all, and eleven were written up and published in a way that described them as having a positive outcome.
This is more than cheeky. Doctors need reliable information if they are to make helpful and safe decisions about prescribing drugs to their patients. Depriving them of this information, and deceiving them, is a major moral crime. If I wasn’t writing a light and humorous book about science right now, I would descend into gales of rage.
Duplicate publication
Drug companies can go one better than neglecting negative studies. Sometimes, when they get positive results, instead of just publishing them once, they publish them several times, in different places, in different forms, so that it looks as if there are lots of different positive trials. This is particularly easy if you’ve performed a large ‘multicentre’ trial, because you can publish overlapping bits and pieces from each centre separately, or in different permutations. It’s also a very clever way of kludging the evidence, because it’s almost impossible for the reader to spot.
A classic piece of detective work was performed in this area by a vigilant anaesthetist from Oxford called Martin Tramer, who was looking at the efficacy of a nausea drug called ondansetron. He noticed that lots of the data in a meta-analysis he was doing seemed to be replicated: the results for many individual patients had been written up several times, in slightly different forms, in apparently different studies, in different journals. Crucially, data which showed the drug in a better light were more likely to be duplicated than the data which showed it to be less impressive, and overall this led to a 23 per cent overestimate of the drug’s efficacy.
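The arithmetic of the trick is simple enough to show in a toy example (the effect sizes below are invented, and nothing to do with the actual ondansetron data): publish your most flattering results more than once, and a naive pooled average drifts obligingly upwards.

```python
# Sketch: duplicate publication inflating a naive pooled estimate.
# Five invented trial results for one drug (not the real ondansetron data).
import numpy as np

trials = np.array([0.1, 0.2, 0.3, 0.5, 0.6])   # one honest result per trial
honest = trials.mean()

# The two most flattering trials each appear twice more, in different
# journals, in slightly different forms.
padded = np.concatenate([trials, [0.5, 0.5, 0.6, 0.6]])
inflated = padded.mean()

print(f"Honest pooled effect:    {honest:.2f}")
print(f"Pooled with duplicates:  {inflated:.2f} ({inflated / honest - 1:+.0%})")
```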
Hiding harm
That’s how drug companies dress up the positive results. What about the darker, more headline-grabbing side, where they hide the serious harms?
Side-effects are a fact of life: they need to be accepted, managed in the context of benefits, and carefully monitored, because the unintended consequences of interventions can be extremely serious. The stories that grab the headlines are ones where there is foul play, or a cover-up, but in fact important findings can also be missed for much more innocent reasons, like the normal human processes of accidental neglect in publication bias, or because the worrying findings are buried from view in the noise of the data.
Anti-arrhythmic drugs are an interesting example. People who have heart attacks get irregular heart rhythms fairly commonly (because bits of the timekeeping apparatus in the heart have been damaged), and they also commonly die from them. Anti-arrhythmic drugs are used to treat and prevent irregular rhythms in people who have them. Why not, thought doctors, just give them to everyone who has had a heart attack? It made sense on paper, they seemed safe, and nobody knew at the time that they would actually increase the risk of death in this group—because that didn’t make sense from the theory (like with antioxidants). But they do, and at the peak of their use in the 1980s, anti-arrhythmic drugs were causing comparable numbers of deaths to the total number of Americans who died in the Vietnam war. Information that could have helped to avert this disaster was sitting, tragically, in a bottom drawer, as a researcher later explained:
When we carried out our study in 1980 we thought that the increased death rate…was an effect of chance…The development of [the drug] was abandoned for commercial reasons, and so this study was therefore never published; it is now a good example of ‘publication bias’. The results described here…might have provided an early warning of trouble ahead.
That was neglect, and wishful thinking. But sometimes it seems that dangerous effects from drugs can be either deliberately downplayed or, worse than that, simply not published.
There has been a string of major scandals from the pharmaceutical industry recently, in which it seems that evidence of harm for drugs including Vioxx and the SSRI antidepressants has gone missing in action. It didn’t take long for the truth to out, and anybody who claims that these issues have been brushed under the medical carpet is simply ignorant. They were dealt with, you’ll remember, in the three highest-ranking papers in the BMJ’s archive. They are worth looking at again, in more detail.
Vioxx
Vioxx was a painkiller developed by the company Merck and approved by the American FDA in 1999. Many painkillers can cause gut problems—ulcers and more—and the hope was that this new drug might not have such side-effects. This was examined in a trial called VIGOR, comparing Vioxx with an older drug, naproxen: and a lot of money was riding on the outcome. The trial had mixed results. Vioxx was no better at relieving the symptoms of rheumatoid arthritis, but it did halve the risk of gastrointestinal events, which was excellent news. But an increased risk of heart attacks was also found.
When the VIGOR trial was published, however, this cardiovascular risk was hard to see. There was an ‘interim analysis’ for heart attacks and ulcers, where ulcers were counted for longer than heart attacks. It wasn’t described in the publication, and it overstated the advantage of Vioxx regarding ulcers, while understating the increased risk of heart attacks. ‘This untenable feature of trial design,’ said a swingeing and unusually critical editorial in the New England Journal of Medicine, ‘which inevitably skewed the results, was not disclosed to the editors or the academic authors of the study.’ Was it a problem? Yes. For one thing, three additional myocardial infarctions occurred in the Vioxx group in the month after they stopped counting, while none occurred in the naproxen control group.
An internal memo from Edward Scolnick, the company’s chief scientist, shows that the company knew about this cardiovascular risk (‘It is a shame but it is a low incidence and it is mechanism based as we worried it was’). The New England Journal of Medicine was not impressed, publishing a pair of spectacularly critical editorials.
The worrying excess of heart attacks was only really picked up by people examining the FDA data, something that doctors tend—of course—not to do, as they read academic journal articles at best. In an attempt to explain the moderate extra risk of heart attacks that could be seen in the final paper, the authors proposed something called ‘the naproxen hypothesis’: Vioxx wasn’t causing heart attacks, they suggested, but naproxen was preventing them. There is no accepted evidence that naproxen has a strong protective effect against heart attacks.
The internal memo, discussed at length in the coverage of the case, suggested that the company was concerned at the time. Eventually more evidence of harm emerged. Vioxx was taken off the market in 2004; but analysts from the FDA estimated that it caused between 88,000 and 139,000 heart attacks, 30 to 40 per cent of which were probably fatal, in its five years on the market. It’s hard to be sure if that figure is reliable, but looking at the pattern of how the information came out, it’s certainly felt, very widely, that both Merck and the FDA could have done much more to mitigate the damage done over the many years of this drug’s lifespan, after the concerns were apparent to them. Data in medicine is important: it means lives. Merck has not admitted liability, and has proposed a $4.85 billion settlement in the US.
Authors forbidden to publish data
This all seems pretty bad. Which researchers are doing it, and why can’t we stop them? Some, of course, are mendacious. But many have been bullied or pressured not to reveal information about the trials they have performed, funded by the pharmaceutical industry.
Here are two extreme examples of what is, tragically, a fairly common phenomenon. In 2000, a US company filed a claim against both the lead investigators and their universities in an attempt to block publication of a study on an HIV vaccine that found the product was no better than placebo. The investigators felt they had to put patients before the product. The company felt otherwise. The results were published in JAMA that year.
In the second example, Nancy Olivieri, director of the Toronto Haemoglobinopathies Programme, was conducting a clinical trial on deferiprone, a drug which removes excess iron from the bodies of patients who become iron-overloaded after many blood transfusions. She was concerned when she saw that iron concentrations in the liver seemed to be poorly controlled in some of the patients, exceeding the safety threshold for increased risk of cardiac disease and early death. More extended studies suggested that deferiprone might accelerate the development of hepatic fibrosis.
The drug company, Apotex, threatened Olivieri, repeatedly and in writing, that if she published her findings and concerns they would take legal action against her. With great courage—and, shamefully, without the support of her university—Olivieri presented her findings at several scientific meetings and in academic journals. She believed she had a duty to disclose her concerns, regardless of the personal consequences. It should never have been necessary for her to make that decision.
The single cheap solution that will solve all of the problems in the entire world
What’s truly extraordinary is that almost all of these problems—the suppression of negative results, data dredging, hiding unhelpful data, and more—could largely be solved with one very simple intervention that would cost almost nothing: a clinical trials register, public, open, and properly enforced. This is how it would work. You’re a drug company. Before you even start your study, you publish the ‘protocol’ for it, the methods section of the paper, somewhere public. This means that everyone can see what you’re going to do in your trial, what you’re going to measure, how, in how many people, and so on, before you start.
The problems of publication bias, duplicate publication and hidden data on side-effects—which all cause unnecessary death and suffering—would be eradicated overnight, in one fell swoop. If you registered a trial, and conducted it, but it didn’t appear in the literature, it would stick out like a sore thumb. Everyone, basically, would assume you had something to hide, because you probably would. There are trials registers at present, but they are a mess.
How much of a mess is illustrated by this last drug company ruse: ‘moving the goalposts’. In 2002 Merck and Schering-Plough began a trial to look at ezetimibe, a drug to reduce cholesterol. They started out saying they were going to measure one thing as their test of whether the drug worked, but then announced, after the results were in, that they were going to count something else as the real test instead. This was spotted, and they were publicly rapped. Why? Because if you measure lots of things (as they did), some might be positive simply by chance. You cannot find your starting hypothesis in your final results. It makes the stats go all wonky.
Adverts
‘Clomicalm tablets are the only medication approved for the treatment of separation anxiety in dogs.’
There are currently no direct-to-consumer drug adverts in Britain, which is a shame, because the ones in America are properly bizarre, especially the TV ones. Your life is in disarray, your restless legs/migraine/cholesterol have taken over, all is panic, there is no sense anywhere. Then, when you take the right pill, suddenly the screen brightens up into a warm yellow, granny’s laughing, the kids are laughing, the dog’s tail is wagging, some nauseating child is playing with the hose on the lawn, spraying a rainbow of water into the sunshine whilst absolutely laughing his head off as all your relationships suddenly become successful again. Life is good.
Patients are so much more easily led than doctors by drug company advertising that the budget for direct-to-consumer advertising in America has risen twice as fast as the budget for addressing doctors directly. These adverts have been closely studied by medical academic researchers, and have been repeatedly shown to increase patients’ requests for the advertised drugs, as well as doctors’ prescriptions for them. Even adverts ‘raising awareness of a condition’ under tighter Canadian regulations have been shown to double demand for a specific drug to treat that condition.
This is why drug companies are keen to sponsor patient groups, or to exploit the media for their campaigns, as has been seen recently in the news stories singing the praises of the breast cancer drug Herceptin, or Alzheimer’s drugs of borderline efficacy.
These advocacy groups demand vociferously in the media that the companies’ drugs be funded by the NHS. I know people associated with these patient advocacy groups—academics—who have spoken out and tried to change their stance, without success: because in the case of the British Alzheimer’s campaign in particular, it struck many people that the demands were rather one-sided. NICE concluded that it couldn’t justify paying for Alzheimer’s drugs, partly because the evidence for their efficacy was weak, and often looked only at soft, surrogate outcomes. The evidence is indeed weak, because the drug companies have failed to subject their medications to sufficiently rigorous testing on real-world outcomes, rigorous testing that would be much less guaranteed to produce a positive result. Does the Alzheimer’s Society challenge the manufacturers to do better research? Do its members walk around with large placards campaigning against ‘surrogate outcomes in drugs research’, demanding ‘More Fair Tests’? No.
Oh God. Everybody’s bad. How did things get so awful?


