Oxbridgewich and shaky tables

The Oxbridge application deadline for 2013 entry is tomorrow (!) and aspiring students can soon begin the long wait to find out whether they’ll be attending the UK’s most prestigious institutions. But according to UCAS application data from 2011, Oxford and Cambridge combined received fewer applicants than the University of Greenwich, despite a gap of around 100 places between them in one of the UK league tables. Greenwich even got more applicants per place: approximately 7:1, compared with Cambridge’s 5:1 and Oxford’s 6:1. So overall the UoG must be more popular and more selective than these world-renowned institutions; should we now be referring to Oxbridgewich?

Well, probably not. I was cheating by combining Cambridge and Oxford, since you can only apply to one or the other (unless you already hold a degree), and thanks to Greenwich’s relatively lower entry standards it was probably used as a safety application by some who went on to accept a higher-grade offer elsewhere (I don’t mean to single out UoG, by the way). My point is that if seemingly obvious metrics like applications per place aren’t really indicators of an institution’s standards, what can we use instead? Well, there are several well-known league tables, but each uses a different methodology; QS, for example, relies heavily on surveying academics, whereas the Guardian’s effort makes more use of the National Student Survey. Unsurprisingly, they give varying results. The Telegraph’s Andrew Marszal wrote on this subject last week, saying:

“Although nominally answering the same question, they don’t share a methodology, a data set or indeed a winner […] in fact the wildly differing outcomes of these tables make them more, not less, useful.”

University rankings: which world university rankings should we trust? (Oct 4 2012)

He justifies the latter claim by pointing to the expected strengths and weaknesses of the different tables’ methods, implying that if you’re interested in x, then you want table y. I wasn’t completely convinced by this, and browsing the existing tables shows remarkable year-on-year fluctuations. The above distributions show the changes in rank from 2012 to 2013 for over a hundred UK universities. The largest of these jumps was a rise of 38 places (from 82→44 for Brunel University in the Guardian’s table) in a single year! Other big leaps include a fall of 29 places for Leeds Trinity University College and a 30-place rise for Birmingham City (both from the Guardian’s table). These aren’t small universities either: BCU took on over 5,000 new undergraduates in 2011. Furthermore, there was no significant change in the methodology that produced the rankings (for the Guardian: “The methodology employed in the tables has generally remained very constant since 2008” [doc]). So how could academic standards, quality of research, job prospects or any other considered metric vary so widely in 12 months at these universities? Are the student surveys really that fickle?
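For anyone curious how those rank changes can be pulled together, here’s a minimal sketch in Python. It’s not the exact script behind the plot above; the file names and column labels are hypothetical placeholders rather than the real Guardian/THE data files:

```python
# Minimal sketch: compute year-on-year rank changes from two league-table CSVs.
# "guardian_2012.csv" / "guardian_2013.csv" and the column names are hypothetical
# placeholders, not the actual published data files.
import pandas as pd

t2012 = pd.read_csv("guardian_2012.csv")   # columns: Institution, Rank
t2013 = pd.read_csv("guardian_2013.csv")

# Join on institution name; in practice the names need cleaning first
# (e.g. "Leeds Trinity University College" vs "Leeds Trinity").
merged = t2012.merge(t2013, on="Institution", suffixes=("_2012", "_2013"))

# Positive change = a rise up the table, negative = a fall.
merged["change"] = merged["Rank_2012"] - merged["Rank_2013"]

print(merged["change"].describe())      # spread of year-on-year movement
print(merged.nlargest(5, "change"))     # the biggest risers, e.g. Brunel's 38-place jump
```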

In a related vein, I wanted to look at the correlation of university rankings across different tables, but discovered it has already been done in some detail by Sawyer, K. et al. (2012) in “Measuring the unmeasurable” (besides, the inconsistent naming amongst tables is probably too big an ask for my regex abilities). The authors found that while high-ranking institutions were well correlated across tables, those lower down were not. They go on to analogise with financial markets and make somewhat fluffy generalisations about the validity of inference… but nevertheless the correlation analysis seems valid. To see if this differential treatment of lower-ranked institutions held in the 2012–13 change data, I ran a simple linear regression. As you can see, while the regression line itself looks an unconvincing fit, it had a significantly non-zero coefficient (0.064 ± 0.012, p = 1.18 × 10⁻⁶). The amount of variance explained by this trend would be a not-uninteresting 19%, so things do generally seem more unstable down there, or at least they are in this snapshot of the THE table. As evidence in favour of the usefulness of tables, a principal components analysis, again using world rankings [pdf], concluded that the variable with the highest explanatory power was indeed academic performance (R² = 0.48), though that study didn’t stratify high- and low-ranking institutions. In light of the above result, it seems likely that a subset of lower-ranked universities may have a different principal component.
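For completeness, here’s a rough sketch of that regression step. Again, it’s an illustration rather than the exact analysis, and it assumes the hypothetical merged table from the snippet above, with the absolute rank change regressed against the 2012 position:

```python
# Sketch: regress the size of a university's rank change on its starting rank,
# using the hypothetical "merged" table from the previous snippet.
from scipy import stats

x = merged["Rank_2012"]        # position in the 2012 table
y = merged["change"].abs()     # size of the move, ignoring direction

result = stats.linregress(x, y)

print(f"slope = {result.slope:.3f} ± {result.stderr:.3f}")
print(f"p     = {result.pvalue:.2e}")
print(f"R^2   = {result.rvalue ** 2:.2f}")   # fraction of variance explained (~19% in the post)
```

A significantly positive slope here would mean that, on average, institutions further down the table move around more from one year to the next.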

Overall I’m not making an argument against the use of these tables (I know I relied on them when picking out my UCAS choices), but it seems likely that while Oxbridge may have a gentlemanly back-and-forth over the top spots for years to come, the University of Greenwich and its ilk will probably be flying all over the place, and proclamations of ‘this year’s most improved rank’ (e.g. [1], [2], [3]) should be viewed with particular scepticism.
