Monday, 24 October 2011

Millennium Villages and the Analysis of Place-Based Policies

Interesting to see the arguments about whether or not the Millennium Villages Project could have been better evaluated (either through random placement, or through careful attempts to identify suitable control villages). Part of the argument, according to those defending the project's approach, is that more rigorous evaluation methods are simply not possible for place-based policies. I agree with others that this argument is wrong. Evaluating place-based policies using these methods might be harder, but it is still feasible.

As a reminder - the major problem in evaluating the causal impact of these kinds of schemes is knowing what would have happened in the absence of the intervention. Random placement helps get round this because villages that are not chosen then provide a suitable comparison group (this is the idea underlying many medical trials). Governments find random selection difficult because many policy makers assume that their interventions will be effective. Starting from that assumption, randomly selecting individuals to receive treatment is difficult because you have to deny treatment to others. If, in contrast, you start with the assumption that the policy will be ineffective, then you are much more sanguine about allocating it randomly. On top of this standard problem, policy makers appear to have even more trouble with randomisation when the policy is place-based.
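
To see why random placement solves the comparison problem, here is a minimal sketch in Python with entirely made-up numbers (the number of villages, the effect size and the noise are all hypothetical). Randomly assigned villages give an unbiased difference in means; targeting the neediest villages, as place-based schemes often do, does not.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200                            # hypothetical number of candidate villages
baseline = rng.normal(50, 10, n)   # underlying village outcomes (e.g. an income index)
true_effect = 5.0                  # assumed treatment effect, purely for illustration

# Random placement: each village has an equal chance of getting the programme,
# so treated and untreated villages are comparable on average.
treated = rng.random(n) < 0.5
outcome = baseline + true_effect * treated + rng.normal(0, 5, n)

random_estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"difference in means under randomisation: {random_estimate:.2f} "
      f"(true effect {true_effect})")

# Contrast with non-random placement: if the programme targets villages that
# were already doing badly, the simple comparison is biased downwards.
targeted = baseline < np.median(baseline)
outcome2 = baseline + true_effect * targeted + rng.normal(0, 5, n)
biased_estimate = outcome2[targeted].mean() - outcome2[~targeted].mean()
print(f"difference in means under targeting: {biased_estimate:.2f}")
```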

But even in the absence of random allocation, place-based policies can still be evaluated by looking for suitable comparison groups (so that treatment is as good as random). For example, the UK government recently ran a competition to see which locations should get enterprise zones. In the first round of the competition 29 sites competed to host the final 10 enterprise zones. For those of us who like to think about the causal impact of urban policies this could be good news. As just discussed, when trying to figure out whether a policy has any impact, part of the problem is figuring out what would have happened in the absence of intervention. With these new EZs, the 19 sites that lost the competition may provide a reasonable control group for the 10 that won. Comparing outcomes for the two groups may then tell us whether those that won EZs actually do better. We could also compare those that entered the competition to areas that appear similar but didn't enter (to see whether entrants somehow differ from non-entrants). The timing of EZs gives another avenue to explore. Those given money in the first round should start improving before those given money in the second. If they don't, that raises questions about whether the EZs caused any improvement or whether it was instead driven by some other factor (say, a strengthening economy).
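
To make the winners-versus-losers idea concrete, here is a minimal difference-in-differences sketch in Python. All the numbers are invented - I have assumed a common trend and an effect size purely for illustration. The point is just that comparing the change for winners with the change for losers nets out factors, like a strengthening economy, that affect both groups.

```python
import numpy as np

rng = np.random.default_rng(1)

n_win, n_lose = 10, 19   # the 10 winning and 19 losing EZ bidders
effect = 3.0             # assumed EZ effect on, say, local employment growth

# Pre- and post-designation outcomes for both groups. Both share the same
# common trend (+2 here), which is what makes the losing bidders a plausible
# control group for the winners.
pre_win = rng.normal(100, 4, n_win)
pre_lose = rng.normal(100, 4, n_lose)
post_win = pre_win + 2.0 + effect + rng.normal(0, 1, n_win)
post_lose = pre_lose + 2.0 + rng.normal(0, 1, n_lose)

# Difference-in-differences: the change for winners minus the change for
# losers removes the common trend and leaves the EZ effect.
did = (post_win - pre_win).mean() - (post_lose - pre_lose).mean()
print(f"diff-in-diff estimate: {did:.2f} (assumed effect {effect})")
```

A naive before-after comparison for the winners alone would wrongly attribute the common trend to the EZs; the losing bidders are what let us strip it out.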

EZs are certainly not unique in this regard. The UK's Regional Growth Fund will not fund all projects that are submitted. Depending on how the decisions are made, access to, say, the rankings of projects would allow researchers to compare outcomes for otherwise similar areas that were just above or just below the bar when it came to getting funded. The Local Enterprise Growth Initiative had two rounds of funding (allowing the second round to serve as a control group for the first) as well as a discrete cut-off for eligibility (so we can use areas that were 'just' ineligible as a possible control group). In addition, some LEGI applicants weren't funded. Finally, LEGI applied to discrete areas (local authorities) whose boundaries are somewhat arbitrary in terms of the way the economy works - suggesting that comparisons across LEGI boundaries may provide useful information on the causal impact of LEGI (including whether or not there is displacement or positive spillovers - a worry in the Millennium Villages project). To take another example, the Single Regeneration Budget had multiple stages, successful and unsuccessful bids and some projects that targeted specific areas.
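
The 'just above or just below the bar' idea is a regression discontinuity design. Here is a minimal sketch, again with made-up data - the ranking score, the cut-off and the effect size are all hypothetical. Areas just either side of the cut-off should be comparable, so the jump in outcomes at the cut-off estimates the causal effect of funding.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 300
score = rng.uniform(-1, 1, n)   # hypothetical ranking score, cut-off at 0
funded = score >= 0
effect = 4.0                    # assumed effect of funding, for illustration

# Outcomes vary smoothly with the score; the only jump at the cut-off is the
# programme itself, so areas just either side of it are comparable.
outcome = 10 + 6 * score + effect * funded + rng.normal(0, 2, n)

# Local linear fit on each side of the cut-off, within a narrow bandwidth.
h = 0.3
left = (score < 0) & (score > -h)
right = (score >= 0) & (score < h)
b_left = np.polyfit(score[left], outcome[left], 1)
b_right = np.polyfit(score[right], outcome[right], 1)

# The gap between the two fitted lines at the cut-off is the RD estimate.
rd_estimate = np.polyval(b_right, 0.0) - np.polyval(b_left, 0.0)
print(f"RD estimate at the cut-off: {rd_estimate:.2f} (assumed effect {effect})")
```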

Official evaluations of place-based policies make little, if any, use of these programme features to help identify the causal impact of the policies. I can think of many reasons why governments may not like their policies to be effectively evaluated, but how depressing it is to see economists making it easier for them to avoid being held to account by suggesting that rigorous evaluation of place-based policies is not possible.