It suddenly occurs to me that I am in danger of appearing contradictory. I complain that I can't learn a lot from the cross-cutting RDA evaluation, but then I suggest that it's an example of an area where systematising evaluation across subnational organisations may have big payoffs.
Let me clarify: I think systematic evaluation is better than no systematic evaluation. My major problem with the cross-cutting RDA evaluation is its reliance on user benefit surveys to calculate additionality. The recent practical guide from BIS notes: "To assess the net impact of an intervention, information is needed on the situation that would exist both with and without the intervention. The standard approach is to use a beneficiary survey – asking questions on the impact [...] and getting beneficiaries to estimate what would have happened otherwise. There are clearly limitations with this methodology. A more robust approach is to compare the change in activity and outputs of beneficiaries before and after the intervention against the achievements of a control group (i.e.: people/businesses that would have been eligible for support but did not receive it). However, results are dependent on identifying an appropriate control group, which is not possible in many cases. [...] In addition, a control group approach is usually more expensive than beneficiary surveys. As a result, using a survey of beneficiaries is generally the preferred approach when balancing costs and benefits of the two methods."
I agree with much of this. But the point is that when you are talking about spatial policy, there are complex interactions that mean the impacts extend beyond the direct beneficiaries. Further, the beneficiaries themselves have no idea about the nature of these complex interactions. As a result, beneficiary surveys are often not a great way of evaluating spatial policy. To be clear: hopefully better than nothing, but not great.
Quite simply, when you are spending large amounts of money on spatial policy (collectively, the RDAs have spent around £15bn), by all means use "light touch" beneficiary evaluation for some (even most) of it. But (i) you shouldn't read too much into the results, and (ii) at least some of your evaluation work needs to try to tackle these issues through more robust analysis.