The government has announced that 29 sites will compete to host the final 10 enterprise zones. For those of us who like to think about the causal impact of urban policies, this could be good news. When trying to figure out whether a policy has any impact, part of the problem is figuring out what would have happened in the absence of intervention. With these new EZs, the 19 sites that lose the competition may provide a reasonable control group for the 10 that win. Comparing outcomes for the two groups may then tell us whether those that won EZs actually do better. We could also compare those that entered the competition to areas that appear similar but didn't enter (to see whether entrants somehow differ from non-entrants). The timing of EZs gives another avenue to explore. Those given money in the first round should start improving before those given money in the second. If they don't, that raises questions about whether EZs caused any improvement or whether it was instead caused by some other factor (say, a strengthening economy).
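The winners-versus-losers comparison described above is, in essence, a difference-in-differences design: compare the change in outcomes for winning sites with the change for losing sites, so that shocks common to both groups (such as a strengthening economy) net out. A minimal sketch in Python, using entirely made-up employment figures (all numbers hypothetical, for illustration only):

```python
def did_estimate(winners_pre, winners_post, losers_pre, losers_post):
    """Difference-in-differences: the average change for winning
    sites minus the average change for losing sites. Subtracting
    the losers' change nets out trends common to both groups."""
    mean = lambda xs: sum(xs) / len(xs)
    winner_change = mean(winners_post) - mean(winners_pre)
    loser_change = mean(losers_post) - mean(losers_pre)
    return winner_change - loser_change

# Hypothetical employment (thousands of jobs) per site,
# before and after the EZ awards.
winners_pre  = [10.0, 12.0, 11.0]
winners_post = [13.0, 15.0, 14.0]   # winners grew by 3 on average
losers_pre   = [10.0, 11.0, 12.0]
losers_post  = [11.0, 12.0, 13.0]   # losers grew by 1 on average

print(did_estimate(winners_pre, winners_post,
                   losers_pre, losers_post))  # → 2.0
```

A real evaluation would of course use regression with controls and many more areas, but the logic is the same: the losing bidders stand in for the counterfactual path the winners would otherwise have followed.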
Undertaking a policy evaluation of this kind would substantially improve our understanding of whether EZs generate economic activity or mainly displace it. That would help future governments when they decide whether to maintain or re-introduce such a scheme (and remember, EZs aren't exactly a new phenomenon).
Even better, I suspect that the government could get this analysis for free (or very cheaply), because this kind of evaluation has the potential to be published in top academic journals (in fact, the strategies I suggest are taken from a paper evaluating US EZs published in one of the top economics journals). This won't work for all policies (the degree of academic interest will depend on the policy 'design') but it will work for a good proportion of them. When it does work, policy evaluation of this kind doesn't need to be big, expensive, and centralised; it can be outsourced, using open evaluation in the academic (and wider non-governmental) community.
A first step towards this open evaluation model would be to record good information for all bids, whether successful or not. This step would involve a small amount of expenditure, although nearly all of this information will be processed anyway when the bids are appraised before a decision is made. The only additional cost is doing this in a consistent, well-documented manner.
A second step would be for the government to be transparent about the decision-making process: how were the winning bids selected? I am sceptical that this will happen. Fortunately, while a lack of transparency doesn't help evaluation, it certainly doesn't rule it out.
Next, the government needs to make details of the scheme, the decision-making process, and the information on accepted and competing bids 'publicly' available. Of course, some of the information may be confidential (more so when it comes to individuals or firms than to areas), in which case publicly available may mean that people have to apply to use the data in one of the new secure environments (e.g. the ESRC-funded Secure Data Service or the Office for National Statistics VML). Again, there will be some small cost to maintaining this data and providing access to it.
Finally, government needs to be patient. The kind of analysis laid out above requires data on firm performance, employment, unemployment, etc. for many areas across the UK. That data is usually only available with a time lag of several years. But once the data becomes available, researchers will spend many (unpaid) hours figuring out whether the policy in question had any causal impact on outcomes we care about.
So, with a little patience and transparency, open evaluation has the scope to significantly increase our understanding of the causal impact of government urban policy at very little cost. If you like, it's the 'big society' approach to evidence-based policy making.