Monday 24 September 2012

Evaluation and self-reported additionality.

As I have written before, I like the idea of making information about policy interventions more freely available, partly because I think it's an important principle of open government but also because it allows for better policy evaluation.

I have been reminded of this in the past couple of months because, together with colleagues, I've been looking through quite a lot of government evaluation reports. One thing that has struck me as I have read through those reports is the willingness of people to rely on self-reported 'guesstimates' of additionality (i.e. what 'extra' happened as a result of the policy intervention). These are usually provided either by recipients of the money or by people directly involved in handing out the money.

Like many economists, I tend to take these with a pinch of salt. Partly this is because I have the economist's natural distrust of asking people to evaluate the counterfactual - i.e. what would have happened in the absence of the grant. This is a very difficult thought experiment at the best of times, and one that is surely made more difficult in policy evaluation because the people being asked are often receiving money (or some kind of benefit in kind) from the operation of the policy.

I get even more worried, however, when these self-reported additionality figures are used as the basis for comparisons across different policy areas or different types of recipients. Why should we expect a young unemployed worker assessing the additionality of a training scheme to give us numbers that can meaningfully be compared to those from a scheme supporting R&D? More subtly, even within schemes, why should we expect the answers to such questions to be the same across, say, small and large firms? Of course, one reason why the answers might differ is that the policy genuinely differs in its additionality across those different types of recipients or interventions. But more worrying is the possibility that the answers differ depending on characteristics of the policy that have nothing to do with whether the policy has any impact on behaviour.

There doesn't seem to be a big literature unpacking this problem. I have a vague recollection of one paper (possibly by Heckman) which suggested that self-reported additionality tended to be positively correlated with things that were, somewhat worryingly, negatively correlated with econometrically estimated additionality (based on what people do, not on what they say). But I don't know of many other references (and would be happy to receive some pointers).
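To make the saying-versus-doing distinction concrete, here is a minimal, entirely hypothetical simulation. It assumes a grant with a known true effect, randomised receipt (so a simple difference in means identifies that effect), and self-reports inflated by an optimistic bias unrelated to the true effect. All the numbers are made up for illustration; the point is only that the two estimates can diverge even when recipients answer in good faith.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: half the firms receive a grant, assigned at
# random, so comparing treated and untreated outcomes identifies
# the true effect of the grant.
treated = rng.random(n) < 0.5

# True additionality: the grant raises output by 2 units on average.
true_effect = 2.0
baseline = rng.normal(10.0, 3.0, n)
outcome = baseline + true_effect * treated

# Self-reported additionality: recipients asked "how much extra
# output did the grant cause?" answer with an optimistic bias that
# has nothing to do with the true effect (an assumption made here
# purely for illustration).
optimism_bias = 1.5
self_report = true_effect + optimism_bias + rng.normal(0.0, 1.0, int(treated.sum()))

# Estimate based on what firms DO: difference in mean outcomes.
econometric_estimate = outcome[treated].mean() - outcome[~treated].mean()

# Estimate based on what firms SAY: average self-report.
reported_estimate = self_report.mean()

print(f"true effect:          {true_effect:.2f}")
print(f"econometric estimate: {econometric_estimate:.2f}")
print(f"self-reported figure: {reported_estimate:.2f}")
```

In this toy world the behaviour-based estimate recovers the true effect while the self-reported figure overstates it, and nothing about the self-reports would flag the problem to an evaluator who only collected survey answers.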

In light of these concerns, it is a little depressing that self-reported additionality appears to remain remarkably popular with many in the policy-making community.