It’s not just me. Really, it’s not. An article in the October 2010 issue of Australasian Psychiatry by Christopher Ryan, Olav Nielssen, Michael Paton, and Matthew Large puts the case against risk assessment very nicely. The authors use an analogy with the insurance industry (where the practice of risk assessment originated) to illustrate the problems with risk assessment in psychiatry, and I think it’s an excellent article.
Really it boils down, for me, to individuals versus groups. I discussed this in my very first Midweek Medicine post, drawing an analogy with road traffic fatalities: we can identify very clearly people who fit into higher-risk groups for crashing on the road, but we simply cannot identify which individuals from those groups will be the ones who crash and die.
Their insurance analogy is related but somewhat more comprehensive:
“Insurance policies share risk among policy holders. Each policy holder pays a small defined cost, in the form of a premium, in return for a larger compensatory payout in the event of an adverse event. Insurance companies manage the sharing of risk by setting different premiums for different categories of policy holder. These categories are determined according to the calculated probability of the adverse events occurring and the magnitude of potential loss.
“In conducting their business however, insurers make simple, reliable estimations of the probable total number of crashes or thefts within each group sharing the same risk. They do not attempt to predict and identify which individual policy-holder will make a claim, and insurance is not a way of reducing car accidents or thefts.”
The second point there is the same as mine: we can make predictions about populations and groups much, much better than we can about individuals. When a clinician “does a risk assessment”, we are not predicting the likelihood that that patient will cause or come to harm. In fact we are simply assigning them to categories (low, medium, or high risk). No matter how much we “might feel, intuitively,” that we are predicting something, we are most assuredly not.
Still, what’s the harm, right? Better than nothing, right? Well … no. The bit that this insurance analogy adds to mine of road traffic crashes/fatalities is that of individuals paying a premium, which, in psychiatry, is:
“…the personal and financial cost of additional treatment experienced by the patient, including coercive treatment imposed on ‘high-risk’ patients, whether or not they will actually cause or experience harm.”
(again, emphasis is mine)
To illustrate their points the authors chose to look at the MacArthur Violence Risk Assessment Study, and the risk assessment tool derived therefrom. They noted that there are numerous actuarial tools – which have been shown to be better than clinical assessment – for both violence and suicide. They chose this particular one as it is “supported by a substantial body of research and is perhaps the most proven risk assessment instrument devised for use in general psychiatric settings.” They examined the data supporting this tool:
157 patients – 55 categorised as high risk and 102 as low risk.
Observed for 20 weeks, during which 28 committed an act of violence.
71% classified correctly as being high or low risk.
Area under the receiver operating characteristic (ROC) curve was 63%, which they say translates to “a 63% chance that a randomly picked patient who went on to commit an act of violence would have a higher score than a randomly picked patient who did not…”
The instrument had a sensitivity of 67.8% and a specificity of 72.1%.
Of the 157 patients there were 19 true-positive categorisations, 36 false positives, 9 false negatives, and 93 true negatives.
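Those figures hang together: the quoted sensitivity, specificity, and 71% correct-classification rate all fall straight out of the 2×2 table. A minimal Python sketch (the variable names are mine):

```python
# Confusion-matrix counts quoted above for the MacArthur tool
tp, fp, fn, tn = 19, 36, 9, 93

sensitivity = tp / (tp + fn)                 # flagged high-risk, among the eventually violent
specificity = tn / (tn + fp)                 # flagged low-risk, among the non-violent
accuracy = (tp + tn) / (tp + fp + fn + tn)   # correctly classified overall

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, accuracy {accuracy:.1%}")
```

(Note the tiny rounding wrinkle: 19/28 is 67.86%, which the paper quotes as 67.8%.)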
The authors of this paper go on to discuss in turn the effects of true positives, false positives, false negatives, and true negatives:
The first group (true positives; those who were identified as being high-risk, and did go on to commit a violent act) pay a higher “premium” but do potentially gain in terms of more assertive treatment and additional input, which could improve the course of their illness and reduce (though not eliminate) the likelihood of adverse events.
The second group (false positives) are those assessed as at high risk of violence who do not go on to commit any violent act. They pay the same high premium as the former, but with no benefit in terms of avoiding harm. Possibly it could be argued that they benefit from more assertive treatment, but (1) this has to be weighed against more restrictive and coercive care, and possibly higher doses of medication than they need, and (2) it can also be argued that directing extra resources to this group deprives clinically similar patients who are not categorised as “high-risk” of those same resources.
The third group (false negatives) are those who were categorised as low risk but did go on to commit some violence. They pay a low premium, but thereby missed out on treatments which might have improved their conditions and possibly prevented an adverse event.
The fourth group is the true-negatives: those who were categorised as low-risk and did nothing violent. They pay a low premium and get no benefit.
Already it appears to me pretty clear that there’s definite potential harm from risk assessment, for little to no gain. To ram the point home further, the authors plug some numbers (the 1-in-10,000 annual incidence of homicide by patients with treated schizophrenia) into the sensitivity and specificity values for the MacArthur tool. Basically the result is:
“In other words, 4117 patients would have to be detained or otherwise managed in a homicide-proof fashion for a year to try to prevent just one of those patients committing a homicide, and yet one in every 22,421 patients assessed to be ‘low-risk’ would commit a homicide in that period.”
An NNT of 4117 is pretty piss-poor, frankly. Especially when the treatment at issue is restrictive and coercive, and carries its own risks.
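Their base-rate arithmetic is easy to reproduce. A sketch, assuming a hypothetical cohort of a million patients (the cohort size is arbitrary, and the unit-or-two differences from the quoted 4117 and 22,421 come down to rounding of the sensitivity and specificity):

```python
sensitivity = 19 / 28    # 67.8%, from the MacArthur figures above
specificity = 93 / 129   # 72.1%
base_rate = 1 / 10_000   # annual homicide incidence, treated schizophrenia

cohort = 1_000_000                      # hypothetical; any large number works
homicidal = cohort * base_rate          # 100 patients who would commit homicide
non_homicidal = cohort - homicidal

tp = sensitivity * homicidal            # high-risk, and would commit homicide
fp = (1 - specificity) * non_homicidal  # high-risk, but would not
fn = (1 - sensitivity) * homicidal      # low-risk, but would commit homicide
tn = specificity * non_homicidal        # low-risk, and would not

detained_per_homicide = (tp + fp) / tp  # patients managed as high-risk per homicide
low_risk_per_homicide = (tn + fn) / fn  # low-risk patients per homicide among them

print(round(detained_per_homicide))     # ≈ 4113 (the paper quotes 4117)
print(round(low_risk_per_homicide))     # ≈ 22428 (the paper quotes 22,421)
```

The staggering numbers come almost entirely from the base rate: with only 1 homicide per 10,000 patients per year, even a tool with decent-looking sensitivity and specificity flags thousands of false positives for every true one.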
We have to stop doing this. Politicians and health departments have to stop mandating it, but we as a profession have to state clearly that it doesn’t work, it’s wasting scarce resources and is completely unjust, and it has the potential to cause harm by overtreating some and missing danger in others. As a profession we all have to realise this, and then help politicians and wider society to realise it.