Lies, damn lies and medical statistics
By Margaret McCartney
Published: November 24 2006 17:04 | Last updated: November 24 2006 17:04
I love statistics so much I recently took a course on the subject. Disappointingly, one of the conclusions I reached was that I am never going to be as good at statistics as I would like (I am 99.9 per cent sure of this).
Let me share my enthusiasm, though. Medical statistics have helped us to work out the best treatments for HIV and tuberculosis. They have revealed the link between cigarettes and lung cancer and shown us that childhood immunisations are safe. What’s more, they can reveal what is nonsense, hype or exaggeration. A proper understanding of statistics offers protection in the face of unscientific anecdotes being used to make a case for prescribing treatment.
I particularly love relevant and useful statistics. Take this one from the West of Scotland Coronary Prevention Study, a large, randomised, controlled trial investigating the ability of cholesterol-lowering drugs, statins, to reduce cardiovascular death. It shows that you have to treat 107 men at high risk of heart disease for 4.6 years with statin drugs in order to prevent one death from any cause. The statistic, called “the number needed to treat”, gives a clear perspective on the chances of potential benefit. By contrast, stating that the tablets will save hundreds of lives a year does not help, because we need to know what the chance is of that saved life being ours.
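The number needed to treat is simply the reciprocal of the absolute risk reduction. A minimal sketch of that arithmetic, assuming made-up death rates chosen only so that the result lands near the article's figure of 107 (they are not the actual trial rates):

```python
def number_needed_to_treat(risk_untreated, risk_treated):
    """NNT = 1 / absolute risk reduction, rounded to a whole person."""
    absolute_risk_reduction = risk_untreated - risk_treated
    return round(1 / absolute_risk_reduction)

# Hypothetical death rates over the trial period (assumed, for illustration):
# 4.1% of untreated men die versus 3.165% of treated men.
print(number_needed_to_treat(0.041, 0.03165))  # roughly 107
```

The function name and the two rates are my own; the point is only that a small absolute difference between two risks translates into a large number of people who must be treated to help one of them.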
I am not so keen on unhelpful statistics. For example, it could be said that the risk of a blood clot is multiplied by three when a woman is taking one type of combined oral contraceptive pill. This “relative risk” sounds quite alarming. However, the risk of a blood clot when not taking oral contraceptives is 5 in 100,000 women a year. The “absolute risk” therefore is three times this figure, 15 per 100,000 women a year, which is still quite small.
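The same point in two lines of arithmetic, using the figures from the paragraph above (the function name is my own):

```python
def absolute_risk(baseline_per_100k, relative_risk):
    """Scale a baseline risk (here, per 100,000 women per year) by a relative risk."""
    return baseline_per_100k * relative_risk

baseline = 5                          # blood clots per 100,000 women a year, off the pill
print(absolute_risk(baseline, 3))     # 15 per 100,000 women a year: still quite small
```

A tripled relative risk sounds alarming; multiplied through a small baseline, the absolute risk stays small.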
In 1995, the publication of a paper on the risk of blood clots when taking a particular type of oral contraceptive caused a “pill scare”. Many women stopped taking their pills, with predictable consequences.
It is clear to me that statistics, when used carefully, can make our certainties crumble and our supposedly fabulous treatments fail. Doctors and patients should not ignore this but use it to their advantage. Yet it appears that statistics are still largely unloved. Presumably this is because they are, in essence, hard sums. A study in the British Medical Journal earlier this year highlighted this. The researchers took a group of obstetricians, midwives and patients and gave them information on a hypothetical but close-to-real-life scenario. It concerned the probability of a positive screening test, carried out to assess the foetal risk of Down’s syndrome, being correct or not.
The groups were asked: “A blood test screens pregnant women for babies with Down’s syndrome. The test is a very good one but not perfect. Roughly 1 per cent of babies have Down’s syndrome. If the baby has Down’s syndrome, there is a 90 per cent chance that the result will be positive. If the baby is unaffected, there is still a 1 per cent chance that the result will be positive. A pregnant woman has been tested and the result is positive. What is the chance that her baby actually has Down’s syndrome?”
The answer is 47.6 per cent [see end of article for solution]. If you got it wrong, you are just like most people involved in this study: only 34 per cent of obstetricians, no midwives, and 9 per cent of patients got it right.
Given our track record of supposed breakthroughs and the overselling of medical treatments, it would make far more sense to get the interpretation of statistics right from the start. We need the help of statisticians to make things clearer. Those who are involved in medical research should be flying the flag and promoting sensible approaches. How else can we make informed choices about what interventions to accept?
As a minimum, medical journals reporting clinical trials should be obliged to provide a “community” as well as a “scientific” abstract. Instead of complex numbers that are difficult to interpret, we should be able to find the answers to questions such as what this means to me/my mother/my child. These statistics would contain meaningful data for doctors, patients and journalists, and this way, perhaps, it wouldn’t only be me who loves them.
Further reading
Bandolier study on number needed to treat with statins
British Medical Journal on cost-effectiveness of statins on low risk people
BMJ study on interpreting screening results
Margaret McCartney is a GP in Glasgow
More columns and further reading at www.ft.com/mccartney
Copyright The Financial Times Limited 2006
==
From: http://www.bmj.com/cgi/content/full/333/7562/284?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=midwives+obstetricians&andorexactfulltext=and&searchid=1&FIRSTINDEX=20&sortspec=relevance&resourcetype=HWCIT
Box 2: An explanation of how to derive the correct answer
- If 10 000 pregnant women were tested, we would expect 100 (1% of 10 000) to have babies with Down's syndrome
- Of these 100 babies with Down's syndrome, the test result would be positive for 90 (90% of 100) and negative for 10
- Of the 9900 unaffected babies, 99 (1% of 9900) will also test positive, and 9801 will have a negative test result
- So, out of the 10 000 pregnant women tested, we would expect to see 189 (90+99) positive test results. Only 90 of these actually have babies with Down's syndrome, which is 47.6%
- Therefore, 47.6% of pregnant women who have a positive result to the test would actually have a baby with Down's syndrome
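The frequency argument in the box above can be written out as a quick check. All the numbers come from the question in the article: 1 per cent prevalence, a 90 per cent detection rate and a 1 per cent false positive rate, in a notional cohort of 10,000 pregnant women.

```python
cohort = 10_000
affected = cohort * 0.01             # 100 babies with Down's syndrome
true_positives = affected * 0.90     # 90 of them correctly flagged
unaffected = cohort - affected       # 9,900 unaffected babies
false_positives = unaffected * 0.01  # 99 flagged in error

# Of all positive results, the share that are genuine:
chance = true_positives / (true_positives + false_positives)
print(f"{chance:.1%}")               # 47.6%
```

This is Bayes' theorem in disguise: restating the probabilities as counts of women makes the answer almost read itself off, which is exactly why the BMJ study tested frequency formats.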
What is already known on this topic
Most people, including health professionals, do not draw mathematically correct inferences from probabilistic screening information
Some studies suggest that presentation as frequencies aids interpretation
What this study adds
Presentation as frequencies does not help everyone: a simple change from percentages to frequencies increased correct responses in obstetricians but not in midwives or service users
The change in presentation did change the type of errors that people made
Many respondents were very confident about their incorrect answers