Wednesday, 2 November 2011

London's (shocking?) growth performance

Writing about the RGF earlier in the week, I noted "if growth is the absolute priority then you begin to wonder whether the government might be better off dropping the 'R' from the Regional Growth Fund. The economics of that are difficult. On the minus side it might be more difficult to find projects in the 'south' where employment generation is genuinely additional. Offsetting this is the fact that a Growth Fund would expect to be generating those jobs at relatively more productive firms. Of course, while the economics might be difficult, the politics of such a change are far trickier."

I was somewhat surprised, therefore, to see Ed Balls making a similar argument in the Evening Standard: "the Government must also act with extra care to safeguard the London economy. For a start, that means making sure London is not excluded from action to support jobs." (He was writing about the National Insurance Holiday, but the same logic could apply to the RGF.)

According to Mr Balls, London needs help because "a report last week found London is no longer the fastest growing part of the country and in the past year it has seen the biggest rise in unemployment of any region." Somehow, this manages to make things sound considerably worse in London than is the case. For some time now, London's relative performance (both compared to other regions and to predictions) has been pretty good. I don't think the position has changed that much. Assuming that Mr Balls was referring to the latest BRES numbers, these show London as the second fastest growing region after the South East. Anyhow, according to my colleague Ian Gordon, "June-June annual comparisons show London as having the fastest growth rate of any region in just 4 of the last 15 years. More than any other, but hardly a shock when it’s not in the top spot."

What about the unemployment numbers? Again, these are not that surprising, because London has large numbers of people (young, lower skilled) who are doing very badly in this recession. In the aggregate, better outcomes for 'higher skilled' workers tend to outweigh the poor performance of the 'lower skilled'. There is nothing much new here - those kinds of polarised outcomes have characterised London for a long time.

So the relative economic performance of London doesn't actually provide that strong a case for further intervention. Instead, the arguments depend on the extent to which policy in London can actually generate additional jobs, and on the ultimate objectives of policy.

Monday, 31 October 2011

Regional Growth Fund (Round II)

The government has announced the next round of projects receiving £950m from the Regional Growth Fund. We are told that this will directly create 37,000 jobs, with a further 164,000 created indirectly ('in the supply chain').

As with round 1, the details provided (severely curtailed by confidentiality requirements) make it impossible to assess whether the fund will achieve this on the basis of the list of schemes agreed. Writing in 2005, SERC affiliate Colin Wren reviewed the available evidence on the impact of Regional Selective Assistance (a competitive scheme for allocating money to firms in depressed areas). The estimated cost per job ranged from £8,000 to £21,000 (in 1995 prices). If the RGF's £950 million delivers 201,000 additional jobs, that suggests a cost per job 'created' by the government of just over £4,700 (the same calculation for round 1 suggested £3,500 per job). In short, if these numbers played out, this would be a pretty effective intervention relative to existing schemes.
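For what it's worth, the cost-per-job arithmetic is easy to check. A quick sketch, using only the headline figures quoted above:

```python
# Back-of-the-envelope cost-per-job for RGF round 2, using the figures quoted above.
fund = 950_000_000            # £950m allocated in this round
direct_jobs = 37_000          # jobs the government says will be created directly
indirect_jobs = 164_000       # further jobs 'in the supply chain'
total_jobs = direct_jobs + indirect_jobs   # 201,000 jobs in total

cost_per_job = fund / total_jobs
print(f"£{cost_per_job:,.0f} per job")     # just over £4,700
```

Note that this counts the indirect jobs at face value; dropping them would put the cost per direct job at roughly £25,700, well above the top of Wren's range for RSA.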

There are a number of reasons to think that these figures may be optimistic. First, with incomplete monitoring it is highly likely that some of the 'leveraged' private sector funds ('£5 for every £1 of public money') would have been spent anyhow. To the extent that monitoring is imperfect, the RGF will only create additional jobs if it is given to organisations that are credit constrained. Research that I have done with colleagues at the CEP suggests that this may only be true for smaller firms. We suspect this is because larger firms are better able to game the system (so monitoring is not as good) and are less likely to be genuinely credit constrained.

All of this suggests monitoring will be important for delivering additionality. Here, if I understood Nick Clegg correctly, the RGF is doing something different from the RSA. Specifically, when defending the amounts of money distributed so far, he suggested that organisations that know they have money coming have already started activities. With RSA, my understanding was that firms usually need to receive the money first, to demonstrate that public money is crucial to the project going ahead. This might suggest that additionality will be lower for the RGF.

A separate issue is whether RGF will be more efficient than the Regional Development Agencies. Of course, it is impossible to tell at this stage. The RGF uses a different (competitive) mechanism for deciding on projects. This may lead to better decision making (or it may not). I would expect RGF to be more efficient per pound spent simply because it is spending less money. Civil servants may not be able to perfectly rank projects, but I don't believe that their selection is completely random, so the fact that the fund is smaller means it should achieve higher returns.
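The logic of that last point can be made concrete with a stylized sketch. All the numbers here are invented, and for simplicity I assume selectors rank projects perfectly; the same ordering holds so long as their ranking is better than random:

```python
# The 'smaller fund, higher average return' point in stylized form:
# 100 candidate projects with returns 100, 99, ..., 1 (made-up units),
# funded in rank order. The marginal project funded by a bigger fund
# has a lower return, so the average return falls as the fund grows.
returns = list(range(100, 0, -1))   # project returns, best first

def average_return(n_funded):
    """Average return across the top n_funded projects."""
    return sum(returns[:n_funded]) / n_funded

small_fund = average_return(20)   # 90.5
large_fund = average_return(60)   # 70.5
print(small_fund, large_fund)
```

Noisy-but-informative selection flattens the gap without eliminating it; only if selection were completely random would fund size not matter for average returns.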

A final note of caution on the employment numbers - if all of government truly believed these numbers you might expect to see a lot more spending on RGF (unless they think that the smaller size of the scheme drives the high returns - as discussed above).

What about growth? Here I think there are further reasons to be cautious. In our work on RSA, we were able to find a causal effect of government money in increasing employment and investment, but not productivity. In addition, assisted firms are on average less productive, so RSA expands employment in less productive firms. This is still a 'growth' effect to the extent that these workers would have been unemployed (and we find some evidence, for RSA, that this might have been the case). But increasing the employment share of less productive firms may not be a good long run strategy for driving growth.

Indeed, if growth is the absolute priority then you begin to wonder whether the government might be better off dropping the 'R' from the Regional Growth Fund. The economics of that are difficult. On the minus side it might be more difficult to find projects in the 'south' where employment generation is genuinely additional. Offsetting this is the fact that a Growth Fund would expect to be generating those jobs at relatively more productive firms. Of course, while the economics might be difficult, the politics of such a change are far trickier.

Friday, 28 October 2011

Crime Maps

The government has added more crimes to its online crime maps.

The discussion on the Today programme centred on the extent to which the maps, together with the new police commissioners, might skew decisions on how to use police resources. Back in July, the worry was about whether this would skew incentives to report crimes.

The magnitude of both these effects is unknown. One thing that is certain, however, is that reported crimes have a big effect on house prices. To the extent that this values the costs of crimes (at least to residents), you would think it should have some bearing on the allocation of resources (independent of the mechanism through which this is achieved). Steve Gibbons' interesting post from July has more details.

Wednesday, 26 October 2011

High Speed Fail

The Adam Smith Institute's 'High Speed Fail' represents the latest effort outlining the case against HS2.

The Campaign for High Speed Rail has already responded. According to the FT, the Campaign portrays the ASI's opposition to HS2 as “purely ideological, as they are fundamentally opposed to large-scale infrastructure investment [... begging] the question as to why such groups failed to also dismantle the case for projects such as Crossrail and the Jubilee Line extension, which were based on far lower financial returns.”

My overall position on HS2 remains unchanged - the costs of the project are large and I think that the money could be better spent. I am not, however, ideologically opposed to large-scale infrastructure investment. Indeed, I am more sympathetic to the case for Crossrail (and previously for the Jubilee Line extension). This is partly because I think that the (narrow) user benefit case for these latter two projects relies on less extreme assumptions about the growth in passenger numbers (and I don't remember them having 'far lower' CBA figures). But I am also more sympathetic because I think that the wider economic benefits (not captured by traditional analysis) are likely to be larger for schemes freeing up bottlenecks within our more successful cities. In contrast, I am not convinced that the wider economic benefits of HS2 will be large (and, consistent with this, I would prefer to see the money spent on within-city transport schemes with better benefit-cost ratios).

In short, while I am sure that the ASI are perfectly capable of defending their own position, it is not contradictory to be supportive of some transport schemes and not others.

Monday, 24 October 2011

Millennium Villages and the Analysis of Place-Based Policies

Interesting to see the arguments about whether or not the Millennium Villages Project could have been better evaluated (either through random placement, or through careful attempts to identify suitable control villages). Part of the problem, according to those defending the project's approach, is that the use of more rigorous evaluation approaches is not possible for place-based policies. I agree with others that this argument is wrong. Evaluation of place-based policies using these approaches might be harder, but it is still feasible.

As a reminder - the major problem in evaluating the causal impact of these kinds of schemes is establishing what would have happened in the absence of the intervention. Random placement helps get round this because villages that are not chosen then provide a suitable comparison (this is the idea underlying many medical trials). Governments find random selection difficult because many policy makers assume that their interventions will be effective. Starting from that assumption, randomly selecting individuals to receive treatment is difficult because you have to deny treatment to others. If, in contrast, you start with the assumption that policy will be ineffective, then you are much more sanguine about allocating it randomly. In addition to this standard problem, it appears that policy makers have even more trouble with randomisation of place-based policy.

But even in the absence of random allocation, place-based policies can still be evaluated by looking for suitable comparison groups (so that treatment is as good as random). For example, the UK government recently ran a competition to see which locations should get enterprise zones. In the first round of the competition, 29 sites competed to host the final 10 enterprise zones. For those of us that like to think about the causal impact of urban policies, this could be good news. As just discussed, when trying to figure out whether a policy has any impact, part of the problem is figuring out what would have happened in the absence of intervention. With these new EZs, the 19 sites that lost the competition may provide a reasonable control group for the 10 that won. Comparing outcomes for the two groups may then tell us whether those that won EZs actually do better. We could also compare those that entered the competition to areas that appear to be similar but didn't enter (to see whether those that entered the competition somehow differ from those that didn't). The timing of EZs gives another avenue to explore. Those given money in the first round should start improving before those given money in the second. If they don't, that raises questions about whether the EZs caused any improvement or whether it was instead caused by some other factor (say, a strengthening economy).
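A minimal sketch of that winners-versus-losers comparison, with all outcome figures invented for illustration:

```python
# 'winners' are the 10 sites awarded EZs; 'losers' the 19 runners-up that
# serve as the control group. Outcomes (e.g. % employment growth over some
# follow-up period) are entirely made up.
winners = [2.1, 1.8, 2.5, 3.0, 1.2, 2.2, 1.9, 2.7, 2.4, 2.0]
losers  = [1.5, 1.1, 1.8, 2.0, 0.9, 1.4, 1.6, 1.2, 1.7, 1.3,
           1.0, 1.9, 1.5, 1.6, 1.1, 1.4, 1.8, 1.2, 1.6]

def mean(xs):
    return sum(xs) / len(xs)

# If losing bidders are a good counterfactual for winners, the difference
# in mean outcomes estimates the causal effect of EZ designation.
effect = mean(winners) - mean(losers)
print(f"estimated EZ effect: {effect:.2f} percentage points")
```

In practice you would also want pre-treatment outcomes for both groups (a difference-in-differences), since winning bids may differ systematically from losing ones.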

EZs are certainly not unique in this regard. The UK's Regional Growth Fund will not fund all projects that are submitted. Depending on how the decisions are made, access to, say, the rankings of projects would allow researchers to compare outcomes for otherwise similar areas that were just above or just below the bar when it came to getting funded. The Local Enterprise Growth Initiative had two rounds of funding (allowing the second round to be used as a control group for the first) as well as a discrete cut-off for eligibility (so we can use areas that are 'just' ineligible as a possible control group). In addition, some LEGI applicants weren't funded. Finally, LEGI applied to discrete areas (local authorities) which are somewhat arbitrary in terms of the way the economy works - suggesting that comparisons across LEGI boundaries may provide useful information on the causal impact of LEGI (including whether or not there is displacement or positive spillovers - a worry in the Millennium Villages project). To take another example, the Single Regeneration Budget had multiple stages, successful and unsuccessful bids, and some projects that targeted specific areas.
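The 'just above / just below the bar' idea can also be sketched in miniature. Everything here is an assumption for illustration: the cutoff, the comparison bandwidth, and the outcomes (a flat baseline plus a built-in jump of 5 for funded projects):

```python
# Compare outcomes for projects ranked just above and just below a funding
# cutoff. All numbers are invented for illustration.
cutoff = 50        # assume projects ranked 1..50 were funded, 51+ were not
bandwidth = 10     # compare projects within 10 ranks of the cutoff

# (rank, later outcome) pairs: a flat baseline of 70 plus a jump of 5 for
# funded projects, so the 'true' effect built into the data is 5
projects = [(rank, 70 + (5 if rank <= cutoff else 0)) for rank in range(1, 101)]

funded   = [y for r, y in projects if cutoff - bandwidth < r <= cutoff]
unfunded = [y for r, y in projects if cutoff < r <= cutoff + bandwidth]

effect = sum(funded) / len(funded) - sum(unfunded) / len(unfunded)
print(effect)   # 5.0 -- recovers the built-in effect
```

With real data the outcome would also trend with rank (better-ranked projects differ), which is why the comparison is restricted to a narrow band around the cutoff.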

Official evaluations of place-based policies make little, if any, use of these programme features to help identify the causal impact of the policies. I can think of many reasons why governments may not like their policies to be effectively evaluated, but it is depressing to see economists make it easier for them to avoid being held to account by suggesting that rigorous evaluation of place-based policies is not possible.

Friday, 21 October 2011

NHS evidence: seriously flawed?

Posted by Steve Gibbons, SERC and LSE

You know your research has hit a nerve when it gets described as 'seriously flawed'. The last time this happened to me was when the Church of England complained about my finding that the apparent performance gap between faith and secular primary schools is due simply to the fact that they enrol higher-ability children. This time, it's some medical/public health researchers and campaigners complaining about my research on the effects of the 2006 policy to expand choice and improve competition between NHS providers in England (Cooper, Gibbons, Jones and McGuire 2011, an earlier version of which was published by SERC here). A letter appeared in the Lancet last week, and there have been previous rounds of lambasting in the media.

This research (and a related body of evidence from other teams) has been cited a lot by politicians to justify the current round of NHS reforms. This use of the evidence is what has motivated the quite vitriolic attacks to which the research has been subjected. These criticisms generally arise from ideological positions, prior beliefs, and dislike of the findings - not from any alternative evidence that the findings are wrong, nor from a serious evaluation of the methods we used or the evidence we have provided. The criticisms amount to assertions and opinions, based on a misreading or misunderstanding of the research. This is a pretty sad state of affairs, and disappointing for those of us who value scientific evidence and the importance of evidence-based policy making.

A more balanced reading of the research, and serious engagement with what we actually did and wrote, would, I hope, lead the reader to a more interesting finding. Allowing patients more choice over where they received elective treatment for hip replacements, cataracts and the like had consequences for quality of care more generally – in our study, evidenced by improvements in survival rates from heart attacks. Our conjecture (drawing on other theoretical and empirical literature in the field) is that these effects occurred through general improvements in hospital management, for which there were sharper incentives in more competitive places.

Of course no empirical study is perfect, or can incontrovertibly establish causality – although we go a lot further than most to try to demonstrate causality. It is also quite right that our evidence should be subject to scrutiny, and we support peer review and open science. However, for those who don't believe our findings, the way forward should be to objectively look to see what is driving those findings, rather than dismissing our results out of hand.

For those interested we have published a detailed response to the criticisms in the Lancet article here.