To take a break from high theory, I thought I’d formulate a few thoughts on the current hot topic in Anglophone Blogtopia, the NHS and socialised healthcare.
It rather seems that there are two sorts of things going on. One is a nuanced, complex, open-ended discussion on the best ways of organising healthcare, and one is a shouting match. The sort of approaches that make sense in one are inappropriate in the other. And the fact that one is happening can get in the way of the other.
I certainly wouldn’t say one is better or more noble – it might be nice if politics was entirely a matter of calm and reflective discussions but in the sort of world we inhabit (in particular, a world of class conflict) it isn’t, and when one side is shouting there’s no point in the other side not shouting, or shouting less loudly than they can.
And when one side is loudly proclaiming that Stephen Hawking would not have been kept alive under a socialised healthcare system – the very sort of system that has, in fact, kept him alive – it’s clear that shouting is in order.
And I have no doubts about which side of that shouting match I’m on. Yay socialised medicine, boo private health insurance. To put it another way, given that healthcare does need to be rationed (being scarce), how rich people are is not a rational factor by which to ration it. So yay and boo.
So having said that, I had some random thoughts about institutional design which shouldn’t be taken as implying anything about the yays and boos.
The observation, which is based partly on personal experience, and partly on an argument put to me by a prominent right-libertarian, concerns the approval of new treatments by safety regulators, and their subsequent use while still risky and untested.
There are two sorts of mistake that you can make – over-caution and over-confidence. You might hold back or not use a treatment which would be safe and save lives (error A), or you might go ahead and use a treatment which turns out to cause serious problems and cost lives (error B).
If we want to optimise our decisions here, we would want decision-makers to have negative incentives for both cases in proportion to the harm done. But there’s a big asymmetry. Error B can be very easily detected – people can pay big costs if they permit the use of a dangerous treatment and it goes wrong. But error A is often undetectable – nobody knows what would have happened if a withheld treatment had been used, and even if someone else uses it and it turns out to be ok, the personal costs to you will be much less than for error B.
Partly this is just an information problem that can’t easily be overcome. But it could become a bigger problem than it needs to if decisions like this are made by people who get no personal benefit from avoiding errors of type A, but do get a benefit from avoiding errors of type B.
A civil servant whose job is simply to make safety licensing decisions is liable to be precisely such a person – if they let out something dangerous, they get a lot of flak, but if they hold back something safe, nothing happens.
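The asymmetry can be sketched with a back-of-envelope calculation. All the numbers below are invented purely for illustration – the point is only the structure: even when the two errors do equal social harm, the low chance of error A ever being detected tilts the decision-maker’s personal incentives towards withholding.

```python
# Illustrative sketch of the incentive asymmetry between error A
# (withholding a safe treatment) and error B (approving a harmful one).
# All numbers are invented for illustration only.

def expected_penalty(p_harmful, penalty_error_b, penalty_error_a, detection_a):
    """Expected *personal* penalty the decision-maker faces for each choice.

    p_harmful       -- probability the treatment turns out harmful
    penalty_error_b -- penalty if an approved treatment causes harm (highly visible)
    penalty_error_a -- penalty if a withheld treatment would in fact have been safe
    detection_a     -- probability anyone ever notices error A
    """
    approve = p_harmful * penalty_error_b
    withhold = (1 - p_harmful) * penalty_error_a * detection_a
    return approve, withhold

# A treatment that is 90% likely safe; the penalties for the two errors
# are set equal (100), but error A is only noticed 5% of the time.
approve, withhold = expected_penalty(
    p_harmful=0.1, penalty_error_b=100, penalty_error_a=100, detection_a=0.05)

print(round(approve, 2))   # expected penalty for approving
print(round(withhold, 2))  # expected penalty for withholding: much smaller,
                           # so the regulator's safest personal bet is caution
```

Here approving carries an expected penalty of about 10 against roughly 4.5 for withholding, so the personally rational choice is over-caution, even though approving is the better social bet.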
A private company motivated by profit, however, is less exposed to this asymmetry. If it holds back something safe, it at least forfeits the profits it would make from that treatment. So there’s the potential here for profit-motivated companies to make better decisions than the standard sort of regulator.
In fact, it may therefore be that lobbying of regulators by private companies could have a positive impact, if it reduced an irrational, structural over-caution.
But that’s because the market has a certain bizarre and distorted ‘democracy’ about it – it represents all people, and all people’s desires, in the form of the money they’re willing to offer. The advantage of the private company comes from their being ‘closer’ to the patients themselves, who obviously also have an incentive to let through even quite risky products – they might save their lives.
But of course, the market performs its own alchemy on those desires – most obviously, it weights them according to their pre-existing wealth, and so diverges from democracy in proportion to inequality. Also, y’know, externalities, asymmetric information, etc.
So really we would want a system that was more properly and reliably democratic than the market. To a certain extent we move towards this when we shift from civil servants in an office somewhere to doctors in a hospital having to watch people die – if we allow ourselves to assume that one person’s interests can be transmitted to another by empathy, and more forcefully when the contact is closer. Which seems a reasonable assumption.
But hypothetically a society organised along directly democratic lines (like, say…communism) could improve on both – although the challenge, of course, would be to reconcile and balance the democracy that guarantees the right incentives, the expert knowledge that’s required(?) for such decisions, and the efficiency of cutting down the numbers involved in decisions.
Short of that, we may have to choose between different flawed set-ups, and in areas like this their relative merits might need empirical data to evaluate.
Take-home lesson, which is true despite coming from a right-libertarian: the state structures meant to ‘regulate’ markets can suffer from perverse incentives and collective action failures just as much as markets can.