Wednesday, 19 May 2010

Big Society: Local Planning

From the CLG highlights, I think that for this to work:
"Giving communities a greater say over their local planning system"
we will need this:
"a comprehensive review of local government finance"
or something similar, to provide sufficient incentives to ensure that new commercial and residential building can happen in the popular places where people and firms want to locate.

Devolving decisions, coupled with a system that provides decent incentives, should be a better solution than top-down spatial plans. But the first without the second will spell trouble in the form of restricted supply, leading to higher (and more volatile) house prices and commercial rents.

Monday, 17 May 2010

Cuts, cuts, cuts

RDAs spend around £1.5bn per year. Administration accounts for about 7% of this. The Homes and Communities Agency is budgeted to spend £6bn this financial year. Administration accounts for less than 2%. It should be clear that efficiency savings in delivery (e.g. through abolishing these quangos) will only make a small difference to overall expenditure. The new government is going to have to cut programme expenditure. So the crucial question is: what should it cut?
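To make the arithmetic concrete, here is a back-of-the-envelope sketch using the rounded figures above:

    # Rough admin vs programme split, using the rounded figures quoted above.
    rda_total = 1.5e9       # annual RDA spend, ~ £1.5bn
    rda_admin_share = 0.07  # administration, ~7%
    hca_total = 6.0e9       # HCA budget this financial year, ~£6bn
    hca_admin_share = 0.02  # administration, "less than 2%"

    admin = rda_total * rda_admin_share + hca_total * hca_admin_share
    total = rda_total + hca_total
    print(f"Admin: £{admin / 1e6:.0f}m of £{total / 1e9:.1f}bn ({admin / total:.1%})")
    # -> Admin: £225m of £7.5bn (3.0%): even abolishing every penny of
    #    administration leaves ~97% of expenditure untouched.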

I would significantly decrease government expenditure on the provision of commercial real estate. In a few places (the centres of Manchester, Leeds, Newcastle) such expenditure has probably positively reinforced developments in the private sector. In poorer areas and cities with weaker economies, I believe that shiny new government-funded buildings simply transfer employment from other places in the immediate area and, as a result, don't provide significant employment benefits.

I would continue some of the public realm expenditure in more deprived neighbourhoods. I would focus on the public good (amenity) benefits these deliver to poorer families. For example, public space provision (e.g. parks) likely has large amenity benefits in poorer neighbourhoods where families have little private space. The benefits of signature buildings and public art are, shall we say, less clearly defined.

Improving the quality of private space (e.g. via Decent Homes) clearly benefits poorer families, although it may cause other problems - e.g. decreased incentives to work. That said, crappy social housing is surely not the best way to deal with these problems, so some component of Decent Homes expenditure should stay.

The government should spend less on building houses. According to the HCA, around half of the homes delivered this year will have been (part-)funded by the government. This figure is distorted by the recession, but why does the government need to be so involved in the provision of affordable housing? The simplistic answer is that "house prices are too high". But what drives this are policies which raise costs: e.g. the national brownfield building target (because building on polluted land is expensive) and supply restrictions from the planning system. In effect, government restrictions raise land costs, which raise the price of housing, which makes housing unaffordable, so the government spends large amounts of money to provide affordable housing. Government has it in its power to change these restrictions (e.g. by taxing developers and landowners and giving the money to the local homeowners, LAs, etc. who are negatively affected by new building).

Moving away from the built environment, there is a case to be made for reducing the amount spent on corporate welfare (specifically, giving money to businesses to locate in poorer areas). I would stop funding pretty well everything which simply shuffles around the existing pool of high-skilled workers and focus instead on things that directly improve the employment prospects of poorer residents. Generally, I think that corporate welfare is an effective way of getting firms to locate in places that they otherwise wouldn't. But it is expensive and it doesn't turn areas around (at least on the basis of fifty years' expenditure to date).

If the cuts fall in all these areas, we can protect the expenditure that I think matters most: that on improving the educational outcomes of poorer children and the labour market prospects of poorer adults.

Friday, 14 May 2010

RDAs: It's what you do, not the way that you do it

Before the election, I prepared an election briefing for the CEP on regional and urban policy. It touched on some of the difficulties in evaluating RDAs and concluded: "In short, there is no compelling evidence as to whether the RDAs are a good or bad thing. Labour is committed to them; the Conservatives and Liberal Democrats are (probably) committed to abolishing them. It should be clear that these positions cannot truly reflect evidence-based positions on RDAs’ effectiveness." Revisiting the evaluation evidence over the last few days really hasn't done anything to change my opinion on this.

Even if you look at overall growth performance, it's still impossible to make an evidence-based judgement. Individual regions' growth rates, and the gap between growth rates in the North and South, are essentially unchanged in the periods before and after the introduction of RDAs. If you think the underlying trends (net of the effect of government policy) would have been the same in the two periods, then RDAs were more effective than the previous arrangements if they spent less money (and vice versa). What if you went to the data and found that the RDAs had spent more? This looks bad for the RDAs, unless you think that things would have got progressively worse for the Northern regions in the absence of intervention, so that we had to spend more just to stand still. In short, different assumptions about the counterfactual (what would have happened in the absence of government intervention) allow you to reach different conclusions; but as the counterfactual is unobserved, the aggregate data can't help us much either.
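To see how much the counterfactual assumption matters, here is a toy sketch; every number in it is hypothetical, chosen only to illustrate the logic:

    # Toy illustration: identical data, opposite verdicts, depending on
    # the (unobservable) counterfactual. All numbers are hypothetical.
    gap_before = 1.0   # North-South growth gap (pp/yr) before RDAs
    gap_after = 1.0    # gap after RDAs: essentially unchanged, as in the data
    rda_spend = 1.5    # annual RDA spend (£bn), hypothetical

    # Counterfactual 1: underlying trends identical across the two periods.
    effect_cf1 = gap_before - gap_after        # 0.0pp: no measurable effect
    # Counterfactual 2: without intervention the gap would have widened.
    gap_without_rdas = 1.5                     # hypothetical widened gap
    effect_cf2 = gap_without_rdas - gap_after  # 0.5pp of widening prevented

    print(f"Effect under counterfactual 1: {effect_cf1}pp for £{rda_spend}bn/yr")
    print(f"Effect under counterfactual 2: {effect_cf2}pp for £{rda_spend}bn/yr")

Same observed data in both cases; the verdict is entirely driven by the unobserved counterfactual.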

In the end, what we are left with are the broad arguments around the costs and benefits of different arrangements. My feeling is that, on balance, the somewhat arbitrary regional structure makes less sense than something based around groupings of Local Authorities. The latter have democratic legitimacy. Such groupings are also more likely to end up covering "functional" economic areas (i.e. sub-national areas in which intra-area economic interactions are important), although the evidence on whether this makes much difference is surprisingly limited.

Ultimately, given the evidence we have, I think that we are better placed to answer questions about what policy should do rather than how it should be delivered. More on this next week.

Thursday, 13 May 2010

RDAs and evaluation: A bit more value added

It suddenly occurs to me that I am in danger of appearing contradictory. I complain that I can't learn a lot from the cross-cutting RDA evaluation, but then I suggest that it's an example of an area where systematising evaluation across subnational organisations may have big payoffs.

Let me clarify - I think systematic evaluation is better than no systematic evaluation. My major problem with the cross-cutting RDA evaluation is the reliance on user benefit surveys to calculate additionality. The recent practical guide from BIS notes: "To assess the net impact of an intervention, information is needed on the situation that would exist both with and without the intervention. The standard approach is to use a beneficiary survey – asking questions on the impact [...] and getting beneficiaries to estimate what would have happened otherwise. There are clearly limitations with this methodology. A more robust approach is to compare the change in activity and outputs of beneficiaries before and after the intervention against the achievements of a control group (i.e.: people/businesses that would have been eligible for support but did not receive it). However, results are dependent on identifying an appropriate control group, which is not possible in many cases. [...] In addition, a control group approach is usually more expensive than beneficiary surveys. As a result, using a survey of beneficiaries is generally the preferred approach when balancing costs and benefits of the two methods."

I agree with much of this. But the point is that when you are talking about spatial policy, there are complex interactions that mean the impacts extend beyond the direct beneficiaries. Further, the beneficiaries have absolutely no idea about the nature of these complex interactions. As a result, beneficiary surveys are often not a great way of evaluating spatial policy. To be clear: hopefully better than nothing, but not great.
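To make the contrast concrete, here is a minimal sketch of the control-group comparison that the BIS guide describes (in effect, a difference-in-differences calculation); all of the numbers are hypothetical:

    # Hypothetical outcome: employment per assisted firm, before and after support.
    beneficiary_before, beneficiary_after = 20.0, 26.0
    control_before, control_after = 20.0, 24.0  # eligible firms that got no support

    naive_impact = beneficiary_after - beneficiary_before  # 6 jobs per firm
    counterfactual = control_after - control_before        # 4 jobs: happen anyway
    net_impact = naive_impact - counterfactual             # 2 jobs truly additional

    print(f"Naive before/after impact: {naive_impact:.0f} jobs per firm")
    print(f"Net additional impact:     {net_impact:.0f} jobs per firm")
    # A beneficiary survey asks firms to guess the counterfactual 4;
    # the control group measures it directly.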

Quite simply, when you are spending large amounts of money on spatial policy (collectively, the RDAs have spent around £15bn), by all means use "light touch" beneficiary evaluation for some (even most) of it. But (i) you shouldn't read too much into the results, and (ii) at least some of your evaluation work needs to try to tackle these issues through more robust analysis.

Evaluation and decentralisation

After spending yesterday reading through some of the extensive material on RDA evaluations, I suggested that I wasn't much clearer on the impact of RDAs. But it did get me thinking about the relationship between evaluation and decentralisation.

I confess to knowing very little about the history of the RDA evaluation exercise, but I was interested to read from the National Audit Office that (with the notable exception of EMDA) an assessment in 2006 found "impact evaluation to be one of the weakest elements of their performance". BIS and the RDAs responded to this by drawing up the framework that I discussed yesterday, and RDAs were then asked to use that framework to properly evaluate impacts by 2008. A follow-up review in 2006 found that, out of 400 projects, "only in a very few cases had RDAs used a robust methodology to forecast or measure outputs and outcomes". By December 2007, external "consultants found that only about 40 per cent of [evaluations] were sufficiently compliant with the Framework". With further intervention, 70% of projects had been covered by the time of the overall evaluation I discussed yesterday. Subsequently, BIS has published further guidance to help make evaluations more uniform in future.

The broader question that all of this raises for the new government is what kind of framework for evaluation to put in place when decentralising powers to subnational organisations (LAs, RDAs - or whatever replaces them). The RDA experience would suggest that this is one area where systematising practice across organisations has big payoffs. Despite these obvious benefits, local organisations are often hostile to this process because it appears to trample on local autonomy. Whatever changes to delivery the new government is planning, it will be interesting to see how they try to square this particular circle.

Wednesday, 12 May 2010

Bye-bye RDAs?

Pre-election, neither the Conservatives nor the Lib Dems were particularly pro the RDAs. So I thought I would take the opportunity of a quiet day to see if I could get my head around the PwC evaluation of RDAs (which was published last year) and get a clearer idea of their effectiveness.

The crucial thing that I would like to understand is the net (i.e. additional) impact of RDA expenditure. This differs from the gross (or measured) outcomes because, loosely speaking, some of the stuff that RDAs pay for would have happened anyway. Understanding the net impact is crucial for figuring out how the benefits of RDA expenditure compare to the financial costs.

The PwC report provides figures for this. For example, the RDAs have spent money creating or safeguarding 471,869 jobs (gross). The additionality percentage is 45%, so more than half of these jobs would have existed anyway in the absence of intervention, implying 212,873 jobs (net). Additionality for land remediation is higher, at 71%.
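As a quick sketch of the arithmetic (using the rounded 45% figure, so the result doesn't exactly match the report's 212,873):

    # Net additional jobs from PwC's gross figure and (rounded) additionality share.
    gross_jobs = 471_869   # jobs created or safeguarded (gross)
    additionality = 0.45   # share of gross outcomes attributed to the intervention

    net_jobs = gross_jobs * additionality
    anyway_jobs = gross_jobs - net_jobs
    print(f"Net additional jobs:        {net_jobs:,.0f}")    # ~212,341
    print(f"Would have happened anyway: {anyway_jobs:,.0f}") # ~259,528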

The PwC evaluation arrived at these figures by reading through many individual evaluations done by consultants for the RDAs. The RDA evaluations in turn follow the framework set out by another consultancy report, "Evaluating the Impact of England's Regional Development Agencies". This report only sets out a framework for getting at additionality, referring the reader requiring more detail to another consultancy report for English Partnerships, "The Additionality Guide". This provides some ready reckoners for additionality taken from a number of different sources. For example, on p. 15 you can find figures for deadweight (one component needed to calculate additionality) from the City Challenge report. These figures come from a survey of beneficiaries and project managers.

To be clear, asking beneficiaries of a service (or the people who are paid by government to provide a service) whether it does any good is not a particularly robust evaluation methodology. At the very least, we might worry that they have strong incentives to say that the project did a lot of good.

Moving along, p. 16 of the Additionality Guide provides numbers for additionality from the Neighbourhood Renewal Fund evaluation. I couldn't find the matching table in the NRF report, but I did find a very impressive looking formula for calculating additionality on p. 62:
AI = GI x (1 - L) x (1 - (Dp + Ds)/2) x (1 - D)
where GI is gross impact and everything else on the right hand side measures some component of additionality (e.g. deadweight D, leakage L, and the displacement terms Dp and Ds).
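Mechanically, the formula is trivial to apply once you have values for the components. Here is a sketch with purely illustrative numbers (not values taken from the NRF report):

    def net_additional_impact(gross, leakage, disp_p, disp_s, deadweight):
        """NRF-style additionality: strip leakage, (averaged) displacement
        and deadweight out of the gross impact."""
        return gross * (1 - leakage) * (1 - (disp_p + disp_s) / 2) * (1 - deadweight)

    # Purely illustrative component values; the hard part is knowing these.
    print(net_additional_impact(gross=1000, leakage=0.1,
                                disp_p=0.3, disp_s=0.2, deadweight=0.4))
    # -> 1000 x 0.9 x 0.75 x 0.6 = 405 net additional units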

How did the NRF evaluation find out the values of these things on the right hand side? Well, they asked intervention managers and coordinators (in other words, the people who are being paid to deliver the service) whether they thought deadweight had been very low, low, medium, high, or very high, and similarly for the other components.

Notice that these are very difficult questions to answer. At least in a medical trial, if someone asks whether you are "feeling" better, you can provide an accurate answer. Here, these managers are being asked to answer questions about whether jobs in the area would have been created anyhow (deadweight), whether any job creation has been at the expense of jobs at other firms (displacement), and so on. The whole exercise does feel rather circular - after all, I was reading the RDA evaluations hoping to find answers to these very questions because I have no idea of the additionality of the interventions. 1,000 pages to find out that I was possibly getting an idea of someone else's best guess of the effects was, shall I say, slightly disappointing (and may explain the length of this blog post).

But, of course, the story doesn't end there, because the PwC evaluation looks at the RDA evaluations, which use the framework, which incorporates the Additionality Guide, which draws on the NRF report, etc. So it is possible that the RDA evaluations themselves have better evidence, drawing on, say, surveys of both beneficiaries and non-beneficiaries.

Fortunately, many of the evaluations are available on the web, so I downloaded all the evaluations from Advantage West Midlands that were used by PwC. There are nine of them, comprising another 1,000 or so pages of analysis. Here is what I found:
- Regeneration Zones (£280m): additionality based on interviews with 40 project managers responsible for 50 supported projects, out of a total of 300 projects
- Land and property (£261m): surveys of 39 project deliverers, interviews with 12 property developers, and surveys of 88 of 180 occupiers who moved to the new sites
- SRB (£218m): used national SRB figures, which, btw, also appear in the Additionality Guide and are based on interviews in 10 case study areas
- Clusters programme (£72m): surveyed 751 beneficiaries out of 10,930 businesses
- Skills (£47m): interviewed 80 of 6,029 individual beneficiaries
- Rover (£36m): used national ready reckoners (I think - I was getting tired by this point)
- Technology Corridors: surveyed 210 of 2,052 businesses, 40 of 237 tenants, and 65 of 1,615 individuals (all beneficiaries)
- PARD (£32m): 38 firms out of 592 assisted
- The Market Towns Initiative, Mercia Spinner and Midlands Excellence projects (all quite small) also interviewed beneficiaries.

In short, all these reports are based on surveys of, or interviews with, beneficiaries, with no comparison to non-beneficiaries. After 2,000 pages, I am no clearer on the additionality provided by RDA expenditure.

This is a little depressing because, to make it very clear, I think that the careful evaluation of policy is important. Increasingly, I begin to think the problem is that government-funded evaluations of spatial interventions simply try to answer too many questions (a point which may apply more generally). I think we need to focus effort on getting a proper understanding of the impacts on a smaller range of outcomes for the more major policy interventions. Less important interventions or outcomes could be subjected to something much more light touch. That approach could be cheaper, and it certainly should be more convincing on what works. Both are important in a time of budget cuts.

Talking of cuts, I am still no clearer on RDA effectiveness ...