
Putting On Your Best Researcher Face

20th July 2016 | Jonathan Adams

This post was originally published on the HEFCE website

Which publications best reflect your research achievements? And would you use different evidence to highlight your academic impact and your wider societal and economic impact?


The UK’s Research Assessment Exercise (RAE) is characterised by the opportunity for researchers to select the evidence they present to peer panels, particularly the key ‘four research outputs per person’.

Not every national system works that way. For example, the first UK Research Selectivity Exercise in 1986 asked for full publication lists, and Excellence in Research for Australia requires universities to submit their entire research portfolio.

With the Research Excellence Framework (REF2014), we have a UK database that captures researcher choices in two complementary assessments: the traditional focus on academic excellence, and an entirely new perspective created by the references to underpinning research included in case studies of wider research impact.

Add to this the publications selected by researchers in RAEs between 1988 and 2013 and the available data add up to 921,254 submitted outputs and 36,244 case study references across 25 years and five assessment cycles.

What does this unique longitudinal dataset tell us about the way researchers made their choices in the early RAE cycles? How did these choices change over time? And how similar is the evidence used to support case studies to the evidence of research excellence?

Our report for HEFCE, published on 12 July 2016, is the first analysis to cover this material and tackle these questions.

Not surprisingly, such rich data throw up some fascinating results, including some unexpected patterns. Because of this, our commentary avoids over-interpreting the findings: we recognise that many cross-cutting cultural angles are captured and first impressions may well be wrong.

Consistent history of references

A key headline is the marked degree of overlap between the supporting references in the REF2014 case studies and REF2/RA2 submitted outputs. Although impact case studies were only instituted for REF2014, for every publication year back to 1996, rather more than 40 per cent of the REF2014 case study references had also been submitted as outputs to RAE2001, RAE2008 or REF2014 itself.
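
By way of illustration, here is a minimal sketch of how such a per-year overlap share could be computed. The file names, column names ('doi', 'pub_year') and the DOI-based matching are all assumptions for the example, not the report's actual pipeline.

```python
# A minimal sketch (not the authors' pipeline) of the per-year overlap
# calculation. File names, column names and the DOI-based matching are
# all assumptions for illustration.
import pandas as pd

outputs = pd.read_csv("outputs.csv")      # outputs submitted to RAE2001/RAE2008/REF2014
case_refs = pd.read_csv("case_refs.csv")  # references cited in REF2014 case studies

submitted_ids = set(outputs["doi"].dropna())
case_refs["also_submitted"] = case_refs["doi"].isin(submitted_ids)

# Share of case-study references, per publication year, that also
# appear among the submitted outputs.
overlap_by_year = (
    case_refs.groupby("pub_year")["also_submitted"]
             .mean()
             .mul(100)
             .round(1)
)
print(overlap_by_year.loc[1996:])  # the report finds >40% for every year back to 1996
```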

This suggests that these examples of research impact are built on a consistent thread of research previously selected as excellent in its own right.

Is ‘excellent’ research ‘impactful’?

This is the first time that evidence about the relationship between ‘excellent’ and ‘impactful’ research has been brought together.

There has been speculation in the past about whether publicly funded research judged excellent by peers also then delivers wider benefits to the taxpayers who put up the money.

This result is a strong indication that it does and, while the overlap is higher in more applied areas, it is substantial throughout.

Outputs skewed towards recent publications

There are other patterns in the data that are likely to be a topic of conversation for researchers and managers.

We already had some idea that submitted output types shift towards journal articles across RAE cycles: out of conference proceedings for engineers, out of monographs for social scientists and out of media for arts. These patterns are now much clearer.

What we did not previously know was that the time-spread of submitted outputs is skewed to the most recent publication years in early RAE cycles. It looks as if people decided they were only as good as their last publication!
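
To make the idea of time-skew concrete, here is a small sketch of how the 'age' profile of submitted outputs might be examined per assessment cycle. The table layout and column names are hypothetical.

```python
# A hypothetical sketch of the time-skew analysis: file name and
# column names are assumptions for illustration only.
import pandas as pd

outputs = pd.read_csv("outputs.csv")  # assumed columns: pub_year, cycle, census_year

# 'Age' of an output at submission: census year of the cycle minus
# the year in which the output was published.
outputs["age"] = outputs["census_year"] - outputs["pub_year"]

# If researchers pick mostly their latest work, the age distribution
# piles up near zero; a flatter spread means older work is chosen too.
age_shares = (
    outputs.groupby("cycle")["age"]
           .value_counts(normalize=True)
           .rename("share")
)
print(age_shares)

# One-number summary per cycle: a rising median age across cycles
# would signal the flattening-out reported for science and engineering.
print(outputs.groupby("cycle")["age"].median())
```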

This skewed pattern – which was also described for case study references in a report by King’s College London and Digital Science on the nature, scale and beneficiaries of research impact (http://www.hefce.ac.uk/pubs/rereports/Year/2015/analysisREFimpact/) – was not reported by any previous analysis of RAE outcomes.

Yet, even more intriguingly, this very clear time-skew changes, but only for science and engineering, where it levels out at the same time and in the same way in later cycles. No change in skew is observed for social science or humanities.

Looking at these patterns, we suggest that change might have been driven by increasing citation awareness.

In the earliest cycle for which we have data (RAE1992), there are clear differences in output type between subject groups. Later, there is convergence on journal articles.

The earliest cycles have choices markedly skewed towards recent publications. Later, in UOAs converging on journal articles, the skew flattens out and slightly older publications are selected.

It may be that increasing interest in, and publication of, citation analyses during the 1990s influenced both output choice and date selection. Journal articles are privileged by having citation data that books, proceedings and reports generally lack.

Citation data also make clear the cumulative attention given to older articles as well as to the latest discoveries. More work is needed to explore this.

The power of peer judgement

The choice of outputs for submission is made by about 50,000 researchers across 150 institutions. So, whatever the level of management oversight, the dataset is informed by lots of independent decisions.

Despite this, both the skewed submission patterns and the later changes are remarkably cohesive across disciplinary UOAs and across HEIs.

We see this as a signal of the hidden power of peer judgement: a remarkably uniform consensus about the selection of good evidence of achievement across a very diverse academic community. Metrics cannot approach this kind of experiential synthesis.

Case study references skewed towards recent years

There are other informative outcomes from the analysis. We noted that the overall time-spread for case study references is skewed to recent years, just as it is for submitted outputs.

If we pick out just the earliest of the six references for each impact case study then the pattern changes: it is uniform across the full census period for science and engineering.
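
That 'earliest reference' selection can be sketched as below, again with hypothetical file and column names.

```python
# Hypothetical sketch: keep only the earliest-dated reference for each
# impact case study, then look at its year distribution by panel.
# File and column names (case_study_id, pub_year, panel) are assumptions.
import pandas as pd

case_refs = pd.read_csv("case_refs.csv")

earliest = (
    case_refs.sort_values("pub_year")
             .groupby("case_study_id", as_index=False)
             .first()
)

# A roughly uniform spread of years for science and engineering panels,
# versus a recency skew elsewhere, is the pattern described in the text.
year_counts = earliest.groupby(["panel", "pub_year"]).size().unstack(fill_value=0)
print(year_counts)
```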

This may suggest a continuous flow of research discoveries that deliver impact, which is good news, but it also says that there are no fixed time periods between innovation and application.

That is a challenge to orthodoxies about the lag between research results and their utility. For social science and humanities, the earliest references remain skewed towards recent years. That suggests a difference in the cultural relationship between research and impact in the sciences and the arts. There is evidently a lot more to understand.

So what do we know?

We cannot be certain why RAE submissions were time-skewed, nor why the skew changed, nor how the cohesion across disciplines and institutions is achieved. It certainly wasn’t a product of conscious coordination.

We don’t know enough about the origins of impactful research, but we do now know it is very diverse and that there seems to be a strong link between great research and great impact.

And we also have a series of findings that show science and engineering behaving consistently in one way, and arts, humanities and social sciences behaving consistently in a slightly but distinctly different way.

Very strong cultural traditions continue, they all deliver innovative and impactful research, and the one size that fits all is an unseen hand of shared collegial values about what constitutes excellence.