Explaining risk: Know your Aristotle
I was shocked to discover recently that a professor colleague had not grasped the difference between absolute and relative risk. He was excited by an intervention that reduced the risk of a primary outcome by 50%. He didn’t know (and hadn’t thought to ask) what the absolute risk of that outcome was. And he seemed unimpressed when I explained that a 50% relative risk reduction might mean going from 80% to 40% or from 0.02% to 0.01%.
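The distinction is just arithmetic. A minimal sketch in Python (the function names are mine, not drawn from any of the sources discussed here) shows how the same 50% relative reduction corresponds to wildly different absolute reductions:

```python
def relative_risk_reduction(baseline, treated):
    """Fractional reduction relative to the baseline risk."""
    return (baseline - treated) / baseline

def absolute_risk_reduction(baseline, treated):
    """Simple difference between the two risks."""
    return baseline - treated

# Two scenarios, both a "50% relative risk reduction":
for baseline, treated in [(0.80, 0.40), (0.0002, 0.0001)]:
    rrr = relative_risk_reduction(baseline, treated)
    arr = absolute_risk_reduction(baseline, treated)
    print(f"{baseline:.4%} -> {treated:.4%}: RRR {rrr:.0%}, ARR {arr:.4%}")
```

Run as written, both lines report a 50% relative reduction, but the absolute reductions are 40 percentage points in the first case and 0.01 of a percentage point in the second.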
I asked my Twitter friends to suggest links to help me convey the point. Several offered links to the Oxford-based EBM resource Bandolier or the Canadian Medical Association Journal series. Others suggested sources intended for lay people such as the Patient UK website or books like Smart Health Choices or Testing Treatments.
I found all these sources useful as reminders of the formulae, but of limited value as a teaching resource. If you’re not very numerate, the instruction to posit a numerator and a denominator and then divide one by the other simply won’t inspire – even when fictitious characters are introduced (“Pat’s 10-year risk of stroke is…”).
Psychologists have produced evidence from cognitive experiments on how to communicate risk, which David Spiegelhalter has summarised. He points out, for example, that many people are confused by terms like “chance” or the use of percentages, so the use of natural frequencies (such as “one in ten”) is better. He also reminds us that different framings of these frequencies – positive (“glass half full”) versus negative (“glass half empty”) – will lead to different perceptions of benefits and harms.
Whilst knowing about cognitive biases was helpful, I wanted a teaching byte that connected with the affective component of learning (“why should I care about this?”) as well as with the cognitive component (“what do I need to learn?”). Interactive websites where changes in absolute risk are expressed as coloured blobs or smiley/sad faces help convey dull formulae through drill and practice. But first you’ve got to get the learner to the point where they reach out and grasp the mouse.
This example in five tweets from David Eyles did it for me:

1. Huge trial with tiny effect size: 3 patients die out of 100 with condition. New drug reduced this to 2. RR = 33% better.
2. AR gives 1% better. RR used by Pharma to sell drug using “33% improvement” statistic. Drug gets given to 100 patients to help one.
3. Patient says “Why take drug for rest of life if chances of help are only 1%? What are side effects?”
4. Side effects are weight gain and high BP with AR of 20%. Risk of death from these now higher than benefit.
5. RR useful to Pharma. AR useful and understood by patient.
David’s example was based on James Penston’s book ‘Fiction and fantasy in medical research: The large scale randomised controlled trial.’ The story-fragment has appeal because David has turned the teaching byte into a narrative with two key characters – a villain (‘Pharma’) who seeks to maximise profit at the expense of a victim (the patient), and who is deliberately seeking to present the facts in a distorted way (that is, the villain has a dastardly motive).
As Aristotle observed, all stories involve ‘trouble’ – in this case, the possibility that the innocent patient (a classic underdog, in literary terms) may be harmed. The question “Why take drug for rest of life…?” is a clever rhetorical device which encourages the learner to climb into the skin of the potential victim and contemplate the impending trouble from his or her perspective. Suddenly, much is at stake here: it matters what the numerator, denominator and missing values are in the equations.
In sum, the use of the story form – with evil villains, powerless victims, trouble and things at stake – is a powerful tool for engaging learners. Ben Goldacre presents the same villain-victim dyad in the comic melodrama genre. Under the heading “You are 80% less likely to die from a meteor landing on your head if you wear a bicycle helmet all day”, he depicts Pharma (‘corporate whoredom’) as bent on persuading the rest of us to spend money unnecessarily on medicines whose potential to reduce our absolute risk of serious trouble is vanishingly tiny. But as Goldacre reminds us in another skilful trope, “We’re all suckers for a big number.”