Wednesday 12 May 2010

Bye-bye RDAs?

Pre-election, neither the Conservatives nor the Lib Dems were particularly pro the RDAs. So I thought I would take the opportunity of a quiet day to see if I could get my head around the PwC evaluation of RDAs (which was published last year) and get a clearer idea of their effectiveness.

The crucial thing that I would like to understand is the net (i.e. additional) impact of RDA expenditure. This differs from the gross (or measured) outcome because, loosely speaking, some of the stuff that RDAs pay for would have happened anyway. Understanding the net impact is crucial for figuring out how the benefits of RDA expenditure compare to the financial costs.

The PwC report provides figures for this. For example, the RDAs have spent money creating or safeguarding 471,869 jobs (gross). The additionality percentage is 45%, so more than half of these jobs would have existed anyhow in the absence of intervention, implying 212,873 jobs (net). Additionality for land remediation is higher, at 71%.
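As an aside, the gross-to-net arithmetic is simple enough to sketch in a few lines of Python (the function is mine, just for illustration), and doing so shows that the quoted 45% must be rounded:

```python
def net_impact(gross, additionality):
    """Net (additional) impact: the gross outcome scaled by the additionality rate."""
    return gross * additionality

gross_jobs = 471869
print(net_impact(gross_jobs, 0.45))  # 212341.05
# PwC's 212,873 net jobs implies a rate of 212873 / 471869, roughly 45.1%,
# so the quoted 45% looks to be rounded.
```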

The PwC evaluation arrived at these figures by reading through many individual evaluations done by consultants for the RDAs. The RDA evaluations in turn follow the framework set out by another consultancy report: "Evaluating the Impact of England's Regional Development Agencies". This report only sets out a framework for getting at additionality, referring the reader requiring more detail to yet another consultancy report, this one for English Partnerships: "The Additionality Guide". This provides some ready reckoners for additionality taken from a number of different sources. For example, on p. 15 you can find figures for deadweight (one component needed to calculate additionality) from the City Challenge report. These figures come from a survey of beneficiaries and project managers.

To be clear, asking the beneficiaries of a service (or the people who are paid by government to provide it) whether it does any good is not a particularly robust evaluation methodology. At the very least, we might worry that they have strong incentives to say that the project did a lot of good.

Moving along, p. 16 of the Additionality Guide provides numbers for additionality from the Neighbourhood Renewal Fund evaluation. I couldn't find the matching table in the NRF report, but I did find a very impressive-looking formula for calculating additionality on p. 62:
AI = GI × (1 − L) × (1 − (Dp + Ds)/2) × (1 − D)
where GI is gross impact and everything else on the right-hand side measures some component of additionality (e.g. deadweight D, leakage L, and the two displacement terms Dp and Ds, which get averaged).
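The formula itself is trivial to compute once you have values for the components. A minimal sketch, assuming I have unscrambled the displacement term correctly (the function name and the illustrative numbers are mine, not the NRF report's):

```python
def net_additional_impact(gross, leakage, disp_product, disp_supply, deadweight):
    """NRF-style additionality formula:
    AI = GI * (1 - L) * (1 - (Dp + Ds) / 2) * (1 - D),
    with every rate expressed as a proportion between 0 and 1."""
    displacement = (disp_product + disp_supply) / 2  # average the two displacement measures
    return gross * (1 - leakage) * (1 - displacement) * (1 - deadweight)

# Illustrative numbers only (not taken from the NRF report):
print(net_additional_impact(gross=1000, leakage=0.1,
                            disp_product=0.3, disp_supply=0.2,
                            deadweight=0.45))
# 1000 * 0.9 * 0.75 * 0.55 = 371.25
```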

How did the NRF evaluation find out the values of these things on the right-hand side? Well, they asked intervention managers and coordinators (in other words, the people who are being paid to deliver the service) whether they thought deadweight had been very low, low, medium, high, or very high, and similarly for the other components.
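Somewhere in the pipeline, each of those qualitative ratings has to be converted into a number to plug into the formula above. I don't know the exact conversion the NRF evaluation used; the mapping below is purely hypothetical, just to show how much weight one circled word on a survey ends up carrying:

```python
# Purely hypothetical mapping from a manager's qualitative rating to a
# deadweight proportion -- the NRF evaluation's actual values may well differ.
RATING_TO_DEADWEIGHT = {
    "very low": 0.1,
    "low": 0.3,
    "medium": 0.5,
    "high": 0.7,
    "very high": 0.9,
}

def deadweight_from_rating(rating):
    """Convert a survey response into the D used in the additionality formula."""
    return RATING_TO_DEADWEIGHT[rating.lower()]

# On this (hypothetical) scale, one notch moves net impact by 20% of gross.
print(deadweight_from_rating("medium"))  # 0.5
```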

Notice that these are very difficult questions to answer. At least in a medical trial, if someone asks whether you are "feeling" better, you can provide an accurate answer. Here, these managers are being asked whether jobs in the area would have been created anyhow (deadweight), whether job creation has come at the expense of jobs at other firms (displacement), etc. The whole exercise does feel rather circular: after all, I was reading the RDA evaluations hoping to find answers to these very questions, because I have no idea of the additionality of the interventions. Reading 1,000 pages to find out that I was possibly getting an idea of someone else's best guess of the effects was, shall I say, slightly disappointing (and may explain the length of this blog post).

But, of course, the story doesn't end there, because the PwC evaluation looks at the RDA evaluations, which use the framework, which incorporates the Additionality Guide, which draws on the NRF report, etc. So it is possible that the RDA evaluations themselves have better evidence, drawing on, say, surveys of both beneficiaries and non-beneficiaries.
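To see what that better evidence might look like, here is a minimal sketch of the simplest comparison-group approach, a difference-in-differences estimate. The function and the numbers are entirely mine, purely for illustration; the point is that the counterfactual comes from comparing beneficiaries with non-beneficiaries rather than from asking beneficiaries to imagine it:

```python
def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Estimate the intervention's effect by comparing the change in outcomes
    for beneficiaries with the change for similar non-beneficiaries."""
    return (treated_after - treated_before) - (control_after - control_before)

# Made-up example: assisted firms grew from 100 to 112 jobs on average while
# similar unassisted firms grew from 100 to 108, suggesting roughly 4
# additional jobs per firm.
print(diff_in_diff(100, 112, 100, 108))  # 4
```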

Fortunately, many of the evaluations are available on the web, so I downloaded all the evaluations from Advantage West Midlands that were used by PwC. There are nine of them, comprising another 1,000 or so pages of analysis. Here is what I found:
- Regeneration Zones (£280m): additionality based on interviews with 40 project managers responsible for 50 supported projects, out of a total of 300 projects
- Land and property (£261m): surveys of 39 project deliverers, interviews with 12 property developers, and surveys of 88 of the 180 occupiers who moved to the new sites
- SRB (£218m): used national SRB figures, which, by the way, also appear in the Additionality Guide and are based on interviews in 10 case study areas
- Clusters programme (£72m): surveyed 751 beneficiaries out of 10,930 businesses
- Skills (£47m): interviewed 80 of 6,029 individual beneficiaries
- Rover (£36m): used national ready reckoners (I think; I was getting tired by this point)
- Technology Corridors: surveyed 210 of 2,052 businesses, 40 of 237 tenants, and 65 of 1,615 individuals (all beneficiaries)
- PARD (£32m): 38 firms out of 592 assisted
- The Market Towns Initiative, Mercia Spinner, and Midlands Excellence projects (all quite small) also interviewed beneficiaries.

In short, all these reports are based on surveys of, or interviews with, beneficiaries, with no comparison to non-beneficiaries. After 2,000 pages, I am no clearer on the additionality provided by RDA expenditure.

This is a little depressing because, to make it very clear, I think that the careful evaluation of policy is important. Increasingly, I am beginning to think the problem is that government-funded evaluations of spatial interventions simply try to answer too many questions (a point that may apply more generally). I think we need to focus effort on getting a proper understanding of the impacts on a smaller range of outcomes for the more major policy interventions. Less important interventions or outcomes could be subjected to something much more light-touch. That approach could be cheaper, and it certainly should be more convincing on what works: both important things in a time of budget cuts.

Talking of cuts, I am still no clearer on RDA effectiveness ...