Yesterday, I suggested that enterprise zones might be amenable to 'open evaluation': the set-up of the competition may make analysis of the scheme attractive to the growing number of academics interested in more rigorous approaches to policy evaluation.
EZs are certainly not unique in this regard. The Regional Growth Fund will not fund all projects that are submitted: depending on how the decisions are made, access to, say, the rankings of projects would allow researchers to compare outcomes for otherwise similar areas that were just above or below the bar when it came to getting funded. The Local Enterprise Growth Initiative (LEGI) had two rounds of funding (which allows the strategy of using the second round as a control group for the first) as well as a discrete cut-off for eligibility (so we can use areas that are 'just' ineligible as a possible control group). In addition, some LEGI applicants weren't funded. Finally, LEGI applied to discrete areas (local authorities) which are somewhat arbitrary in terms of the way the economy works, suggesting that comparisons across LEGI boundaries may provide useful information on the causal impact of the policy. To take another example, the Single Regeneration Budget (SRB) had multiple stages, successful and unsuccessful bids, and some projects that targeted specific areas. The official evaluations of LEGI and SRB made little, if any, use of these programme features to help identify the causal impact of the policies. In contrast, work at SERC is already using these features to help assess the impact of LEGI and SRB.
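To make the 'just above or below the bar' idea concrete, here is a minimal sketch (in Python, using entirely simulated data; the cutoff, scores, and outcome are all hypothetical, not drawn from any of the programmes above) of why comparing areas near a funding cutoff is more convincing than comparing all funded areas with all unfunded ones:

```python
import random

random.seed(0)

# Hypothetical set-up: each area has a project ranking score, and areas
# scoring at or above CUTOFF were funded. We simulate a known programme
# effect so we can see which comparison recovers it.
CUTOFF = 50.0
TRUE_EFFECT = 2.0   # the effect we would like an evaluation to recover
BANDWIDTH = 5.0     # compare areas within +/- 5 points of the cutoff

areas = []
for _ in range(1000):
    score = random.uniform(0, 100)
    funded = score >= CUTOFF
    # Outcome (say, employment growth) improves smoothly with the score,
    # plus the programme effect for funded areas, plus noise.
    outcome = 0.1 * score + (TRUE_EFFECT if funded else 0.0) + random.gauss(0, 1)
    areas.append((score, funded, outcome))

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison: all funded vs all unfunded areas. Biased, because
# funded areas had systematically better scores to begin with.
naive = (mean([o for s, f, o in areas if f])
         - mean([o for s, f, o in areas if not f]))

# Near-cutoff comparison: only areas just above vs just below the bar,
# which are otherwise similar apart from funding status.
rdd = (mean([o for s, f, o in areas if CUTOFF <= s < CUTOFF + BANDWIDTH])
       - mean([o for s, f, o in areas if CUTOFF - BANDWIDTH <= s < CUTOFF]))

print(f"naive estimate:       {naive:.2f}")  # badly overstates the effect
print(f"near-cutoff estimate: {rdd:.2f}")    # much closer to the true 2.0
```

The naive comparison confounds the programme effect with the fact that higher-scoring areas do better anyway; restricting attention to a narrow band around the cutoff strips most of that bias out, which is exactly why access to the rankings matters for researchers.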
I think there are several reasons for this, most of which stem from the fact that departments fund and supervise evaluations of their own policies. This creates multiple problems. First, convincing evaluation usually requires considerable effort focused on a small number of outcomes; centrally funded evaluations, in contrast, are often very wide ranging, covering multiple outcomes. Second, they often have a strong focus on process (how were budgets set, how did the partnerships operate) rather than outcomes. Third, civil servants and ministers do not like negative evaluations of policies that their departments implement. Fourth, the consultants and academics who undertake these evaluations want repeat business, so they have strong incentives to make sure departments are happy with the results. Fifth, there are time pressures to deliver 'quick' answers. This is not an exhaustive list, and these issues certainly do not arise for all evaluations and all departments equally. Despite that caveat, I do think these issues present major problems for department-funded evaluations of departmental policies. As the resulting evidence base is often poor, I suspect this also partly explains why governments often feel they can ignore the results of their own evaluations when it comes to policy design.
So what is to be done? In-depth analysis of processes (how is money spent, how are decisions made?) will still, I suspect, need to be funded by government departments. It would help to distinguish such process evaluation/audit from outcome evaluation. I also think that decentralisation means that learning much from these exercises may become harder, so we will need to come up with more innovative ways of helping organisations learn from one another about what processes work. For outcome evaluation, I think department-funded evaluations should have much more external oversight; perhaps that could be a role for an independent national evaluation office (or an expanded audit office). Outcome evaluations should also be much more closely targeted on a few key outcomes where it is possible to evaluate the causal impact of policy. Finally, as I wrote yesterday, transparent and much more careful documentation of the policy-making process would allow 'open evaluation' of many policies (local as well as national). Such open evaluation has the potential to be focused, much cheaper, and independent of government, with all the benefits that entails.