Saturday, June 17, 2006

Evidence-based stupidity

Back in the 1990s, before foreign fighters and eye-catching initiatives, one of the catchcries of the Labour Party as it arrived in government was "evidence-based policy". This meant, as far as anyone knew, that the government's activity would be subject to review on the basis of results, and that choices would be made according to (usually) statistical principles. It sounds an excellent idea, so what happened to it?

Certainly it didn't catch on. Perhaps the classic exhibit is drugs policy—according to the Senlis Council, the world's governments spend every year rather more than the final value of world illegal drug sales on trying to stop them being sold, with no discernible reduction in the availability or popularity of the drugs. There is clearly a case for a review of the evidence, and such things have been done by third parties like the Council, which concluded that it would be better if the enforcement budget were used simply to buy the complete opium crop, supply the pharmaceutical demand from this stock, and burn the rest.

Not that any government, especially not ours, has listened at all. Another example is this blog's old friend, the Home Office, which in pursuit of one set of numerical targets (to achieve a net increase in monthly deportations) succeeded in missing a whole selection of other aims (to keep various criminals confined, for one). What's going on, then?

Chris Dillow likes to talk in terms of managerialism versus technocracy, and I think the failure of evidence-based policy is closely connected with this. Specifically, the huge expansion in government of what looks like evidence-based policy (centrally defined numerical targeting, to be precise) has in fact been an exercise not in policy-making but in management. Most of the targets public servants are expected to hit are not ones that define a goal, but instead some sort of intermediate process. It's not about, for example, reducing the rate of heart attacks, but about achieving X prescriptions for low-dose aspirin.

This is a key point, because it defines both the information that the statistics provide and the use to which it is put. Rather than measuring the problem and using that information to decide on a course of action, this is measuring the action. And the main purpose of this sort of information is to check the obedience of subordinates. The question of what to do, whether the activity is useful, is external to the model.

That is to say, it is assumed that somebody, the somebody whose desk the stats land on, knows what the best course of action is and has prescribed it. This also fits in with the British civil service's deep play; there is a traditional, ingrained divide between "policy", which is high-status and concerned with cabinet papers, and "administration", which is Siberian-status and concerned with processing business. Guess which is going to have stats collected on it? Anyway. The problem is therefore to ensure compliance, which fits rather well with the narrative of "modernisers" and "reform" and confronting a cosy blah blah blah. The problem is not treating sick people, educating children, catching lawbreakers; the problem is the public servant, who must be treated like a servant.

Technocrats, much though there is wrong with 'em, are better on this score because they at least believe themselves to be technical, which suggests that the policy is determined by realities and can be altered in response to the results of experiments. (This is of course also a myth. Not only do scientists develop a tribal attachment to their discipline, it's even quite possible for engineers to become nationalistic about different radio encoding schemes.)

A canonical example of these problems occurred in the mid-1980s in British monetary policy. The Thatcher government started off by declaring its adherence to monetarism, and putting this into practice by setting a target for the growth of M4, the broad money supply. Unfortunately, over the next few years it turned out that controlling M4 was not so much difficult as futile. Once the value of M4 was persuaded to decline, the values of its sisters M2 and M3 shot up, as the suddenly booming financial sector, itself something the Tories were much in favour of, discovered means of getting around the policy. So the scope of the policy was reduced, using more restrictive definitions of money like M2. M4, predictably, went up. Eventually they targeted the monetary base, that is to say cash. You'd think they could control that, but it made up such a small fraction of the total money supply that the effort to control it was pointless.

Eventually, Alan Walters persuaded Thatcher of this, over Patrick Minford's protests, and the policy was replaced with an exchange rate target. The whole affair is an example of Goodhart's Law, coined in 1975 by Charles Goodhart of LSE and the Bank of England, which states that to control is to distort. Broad money targeting failed, he argued, because it was like driving a car by reference to the speedometer alone. The policy itself created the feedback that was meant to guide the policy-maker. Goodhart went on to argue that, as a principle, targets should be "final goals" like inflation, unemployment, or the exchange rate: the best measure of problem-solving efforts being whether the problem was, er, solved.
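The mechanism is simple enough to caricature in code. What follows is a minimal toy sketch of my own (nothing to do with Goodhart's actual paper, and every number in it is made up): an indicator is an honest if noisy measurement of the real problem for the first hundred periods, then becomes the target itself; from that point it is managed towards the target number and its correlation with the problem collapses, which is roughly all the Law says.

# Toy illustration of Goodhart's Law: a proxy indicator tracks the real
# goal until policy starts steering by the proxy, at which point the
# people being measured adapt and the indicator stops carrying information.
import random

random.seed(1975)

def simulate(periods=200, target_from=100):
    """Return (goal, proxy) series. Before `target_from` the proxy is an
    honest, noisy reading of the goal; afterwards it is managed towards
    the target value regardless of the underlying state."""
    goal, proxy = [], []
    state = 5.0                                  # the thing we actually care about
    for t in range(periods):
        state += random.gauss(0, 0.3)            # the real problem drifts
        if t < target_from:
            measured = state + random.gauss(0, 0.5)   # honest measurement
        else:
            measured = 5.0 + random.gauss(0, 0.2)     # the number is gamed
        goal.append(state)
        proxy.append(measured)
    return goal, proxy

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

goal, proxy = simulate()
print(f"proxy-goal correlation before targeting: {correlation(goal[:100], proxy[:100]):+.2f}")
print(f"proxy-goal correlation after targeting:  {correlation(goal[100:], proxy[100:]):+.2f}")

Run it and the first number comes out strongly positive, the second close to zero: the speedometer was fine right up until you started driving by it.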

In 1997, he eventually won, with the UK's monetary policy being redesigned on the principle of rules. Inflation was the measure of counter-inflationary policy, with a symmetrical target of 2.5% RPIX, and interest rates were the means. And control of those rates was in the hands of a committee including, surprise surprise, one Charles Goodhart. (This probably makes him the UK's most influential academic of the last 30 years. Being an economist remains a good way to avoid fame.)

Now, let us consider what happens if the goals themselves are not static. If we are working from rules, using the evidence to decide the policy, it's difficult to go careering off after the latest headline in the Sun - unless One Punch Monstrosity Wade's current mood is part of the data set, naturally. But if the goals are assumed, being set by the presiding genius - well, then the whole "evidence-based" apparatus is rather like the great clock installed by the "Protector of Aborigines" of Western Australia, A.O. Neville, to enforce discipline on the souls dragooned into his camp, which to them was a weird torture engine operating on principles they couldn't guess.

Bruce Sterling agrees:
Society, having abandoned the scientific method, loses its empirical referent, and truth becomes relative. This is a serious affliction known as Lysenkoism...

Politics without objective, honest measurement of results is a deadly short circuit. It means living a life of sterile claptrap, lacquering over failure after intellectual failure with thickening layers of partisan abuse.

1 comment:

Anonymous said...

The problem [as Kevin Carson of the mutualist would point out] is that as far as the worker is concerned the contrast is between managerialism operating as a form of cult of personality and technocratic Stakhanovism; things like 'Quality' can operate in both spheres.

Usually you get both, and in all cases the aim is to mulct. Both are a manifestation of greed and distrust.
