Posted by Steve Gibbons, LSE and SERC
The season is here when next year's school leavers start filling in their UCAS forms and applying to university. Yet, as anyone who knows someone in this position will agree, picking a university is not always easy. For most subject areas there are a large number of universities to choose from, and making a choice can involve a lot of research.
To help students with these difficult choices, the National Student Survey (NSS) was introduced in the mid-2000s to provide information about students' satisfaction with their degree courses. The survey has captured the attention of university lecturers and administrators, driven by concerns about the impact of scores on future recruitment. NSS scores are also one of several quality indicators used in the "league tables" published in newspapers and guidebooks. But do students really take any notice of satisfaction scores when making their university choices, or are other factors more important?
The first study to look directly at the effect of NSS scores on university applications was published yesterday as a SERC Discussion Paper. The research, carried out with my colleagues Professor Eric Neumayer and Dr Richard Perkins in the Department of Geography and Environment, links data on the NSS to applications data from UCAS for each subject and university from 2006 to 2011.
The study shows that NSS scores do matter for university applications. However, contrary to what many university managers might think, the effect of changes in NSS scores on demand for places is quite small. Moving from the bottom of the scale (around 65% satisfaction) to the top (about 95% satisfaction) gains a degree course only about seven more applicants for every 100 it already receives. Nor is it the separately published NSS data themselves that matter. The timing of the effects of NSS scores, relative to their publication, reveals that the NSS affects student demand indirectly, by shifting university rankings in published league tables such as The Times Good University Guide.
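To put that headline figure in perspective, here is a minimal back-of-the-envelope sketch (my own illustration, not a calculation from the paper) of what it implies per percentage point of satisfaction:

```python
# Illustrative only: sensitivity of applications to NSS satisfaction,
# using the headline figures quoted above.
low_sat = 65.0             # bottom of the observed NSS scale (% satisfied)
high_sat = 95.0            # top of the observed NSS scale (% satisfied)
extra_apps_per_100 = 7.0   # ~7 extra applicants per 100 already received

pct_rise_in_apps = extra_apps_per_100   # i.e. a ~7% rise in applications
sat_range_pts = high_sat - low_sat      # a 30-point swing in satisfaction

per_point = pct_rise_in_apps / sat_range_pts
print(f"~{per_point:.2f}% more applications per satisfaction point")  # ~0.23%
```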
A likely explanation for this result is that league tables, which rank departments on a single scale, are more readily available and more easily understood by applicants than the more complex NSS data. In other words, the format in which information is presented to prospective students matters. But here too the effects are small: a 10-place move up a table of 100 universities increases applications by only around 2-3%. An improvement in position also encourages a slightly more able pool of applicants (as measured by A-level tariff points). The effect of changes in league table position is slightly bigger in subjects and places where there are more providers from which to choose.
Yet, across the board, student demand does not appear to be very responsive to improvements in NSS scores or league table positions. Instead, school leavers seem primarily to base their choices on a rather different set of criteria, related to more persistent factors (e.g. academic reputation, or distance from home - for which see my earlier work). None of this is very good news in the short term for those universities that have invested significantly in measures to improve student satisfaction in the hope of profiting from increased demand. The NSS may nevertheless be good news for students who benefit from these investments through a superior educational experience.