Interdisciplinary Research: Do We Know What We Are Measuring?

5th December 2016 | Katy Alexander

New Digital Research Report highlights inconsistencies in metrics analyses and a lack of consensus in defining ‘interdisciplinary.’

Today our Consultancy team have published a study of methodologies for the identification and assessment of interdisciplinary research. It follows a series of projects by UK Research and Higher Education Funding Councils into the nature of research and interdisciplinarity.

The report, “Interdisciplinary Research: Methodologies for Identification and Assessment”, summarises a wider project commissioned by RCUK. It was informed by a steering group consisting of senior members of the Medical Research Council (MRC), the Arts & Humanities Research Council (AHRC) and the Higher Education Funding Council for England (HEFCE), together with expert advisers from SPRU (University of Sussex) and INGENIO (Universitat Politècnica de València).

The outcomes reveal that the choice of data, methodology and indicators can produce seriously inconsistent results, even when applied to a common set of disciplines and countries. This raises questions about how interdisciplinarity is identified and assessed. It also reveals a disconnect between the research metadata that analysts typically use and the research activity they assume they have analysed. The report highlights issues around the responsible use of ‘metrics’ and the importance of analysts clarifying the link between indicators and policy targets.

Commenting on the release of the report, Jonathan Adams, Chief Scientist at Digital Science, said:

‘Interdisciplinarity is important to discovery and innovation and adds significant impact to research outcomes. It is important to be able to identify interdisciplinary activity, know where it occurs and ensure it is properly assessed. Today that is not the case.’

Dr Ian Viney, MRC’s Director of Evaluation, said:

‘We expected that some indicators might prove better for suggesting whether work was more or less interdisciplinary, but it was a surprise that some of the indicators gave such conflicting results. The report concludes that common assumptions made about the connection between research metadata and research activity may sometimes be flawed. There is interest in finding better quantitative indicators to support research assessment. However, we want to use metrics responsibly, which means carefully testing assumptions about what it is you are measuring.’

The report concludes that no single indicator of interdisciplinarity can be used alone and makes a number of recommendations:

  1. Quantitative indicators of interdisciplinarity should be used only for consistency checking, and within a framework that defines expectations.
  2. Any analyst of research interdisciplinarity should clarify their interpretation, the relevance of their data and the appropriateness of their data source and methodology.
  3. Text analysis of research proposals and journal articles, using either abstracts or full document text, is a potential indicator of research activity.
  4. Research funders should include the departmental affiliations of all principal investigators, to enable the disciplinary diversity of research teams to be evaluated externally as well as internally.

The report will be presented at a conference exploring the interdisciplinary landscape of UK higher education, co-organised by HEFCE, RCUK and the British Academy, on 8th December 2016.