
Metrics and The Social Contract: Using Numbers, Preserving Humanity

26th July 2016 | Mike Taylor


Ever since Eugene Garfield first began to analyse citation patterns in the academic literature, bibliometrics and scientometrics have been highly pragmatic disciplines. By that, I mean that technological limitations have restricted measurement and analysis to what is possible, rather than what is ideal or theoretically desirable. In the post-digital era, however, those limitations are falling away and the problem has changed: we are no longer limited by what we can measure, but challenged by the question of what we should measure and how we should analyse it.

There are now many more potential ways to derive metrics than ever before. Cloud computing has made terabyte-scale calculations affordable and fast. Cloud research and open science will accelerate this trend.

As science and the process of science become more open, and funders show more interest in how their money is being spent, researchers are coming under ever greater scrutiny. Subject to greater accountability, they need quantitative and qualitative tools to help them demonstrate both academic and broader societal impact. In addition to these new reporting burdens, as funding becomes ever more competitive, successful researchers must predict and plan the social, economic, cultural and industrial impact of the work that they do. This new aspect of academic career progression is a large part of what’s increasingly being called ‘reputation management’.

Whatever your point of view, metrics are becoming increasingly central to a researcher’s career, and we can expect to see growing interest in how they are calculated, what they mean, and what relevance they have. This growing importance can only be sustained by the development of a social contract between the various stakeholders in the research metrics environment:

  • Providers need to understand that the data, analysis and visualizations they provide have a value over and beyond a simple service.
  • Funders need to be responsible in the way that they use metrics, and to resist reducing researchers’ careers to decimal points.
  • Researchers need to learn to use metrics to enhance the narratives that they develop to describe their ambitions and careers.

This raises the question of what role commercial organizations can play in developing new metrics to meet these new researcher needs. How can we advance their adoption, understanding, and use?

Establishing the value of a metric

It seems there are almost infinite ways to calculate metrics, even from a single source. A glance at the list of h-index variants on Wikipedia shows over a dozen variations, each suggesting some improvement on this widely adopted metric. The methods by which a metric acquires the value necessary for adoption vary: a commercial organization may invest in webinars, white papers, and blogs like this one, while an academic organization will invest in outreach efforts, conferences, research and publishing.
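To make that concrete, here is a minimal Python sketch, using invented citation counts, of two of those calculations: the classic h-index and the g-index, one of the better-known variants. Even these two, computed from the same list of per-paper citation counts, reward different publication profiles.

    def h_index(citations):
        """Largest h such that at least h papers have >= h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    def g_index(citations):
        """Largest g such that the top g papers have >= g**2 citations in total."""
        ranked = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, cites in enumerate(ranked, start=1):
            total += cites
            if total >= rank * rank:
                g = rank
        return g

    papers = [25, 8, 5, 3, 3, 1, 0]   # invented citation counts
    print(h_index(papers))            # 3: the top three papers each have >= 3 citations
    print(g_index(papers))            # 6: the top six papers total 45 >= 36 citations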

In both cases, the value of a metric is not derived from the relevance of the data or cleverness of the calculation. Instead, the value accrues as a consequence of the intellectual capital and understanding that users invest in it.

Metrics have to be more than an elegant measure of a specific effect or combination of effects. A successful metric also needs to be highly relevant in a practical way, while also being perceived as academically valid and not a commercial exercise in self-promotion.

Whether academically or commercially driven, those of us who work in research metrics aspire to create tools that accrue value over their lifespan. The overarching goal of scientometricians everywhere is to develop novel ways of understanding the dynamics of the scholarly world.

The innovation roadmap

Until recently, scholarly metrics were relatively simple and citation-based. As I mentioned earlier, this is primarily a legacy of the technical limitations of print publishing. It is only within the last five years that we have seen the meaningful emergence of non-citation-based metrics and indicators of attention.

As we progress to a point where the ‘alt’ falls from ‘altmetrics’ and more complex, broader measures of impact are seen as increasingly legitimate, we will find that there are many more useful and interesting ways to measure the value of academic output in order to make meaningful policy decisions.

Citation and author-based metrics are well-embedded in the scholarly environment and are central to research evaluation frameworks around the world. Their incontestable value has accrued partially as a consequence of investment in research, product development and marketing – but mostly through their adoption by the research community. New data, technologies and techniques mean that the innovation roadmap for research metrics is much more complex than we have seen up to now.

One of the greatest challenges for researchers, bibliometricians and service providers will be to create a common framework in which the so-called alternative metrics can be used alongside legacy metrics.

The lack of correlation between citation-based metrics and these newer indicators, together with the growth of advanced mathematical and technological techniques, supports the belief that multiple metrics are necessary to interpret any phenomenon. As we develop new techniques, and as open science makes more text available for mining, we can expect to see a move from metrics that require interpretation to calculate impact – in all its various forms – to semantic-based metrics that offer a clearer understanding of impact.
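As an illustration of that reasoning, here is a minimal sketch, using invented numbers, of the kind of check that motivates multi-metric evaluation: if two indicators computed over the same set of papers barely correlate, each is capturing a different signal.

    # Invented data: two indicators for the same eight hypothetical papers.
    from scipy.stats import spearmanr

    citations = [120, 45, 3, 67, 0, 15, 88, 9]   # hypothetical citation counts
    mentions = [4, 0, 150, 2, 30, 1, 5, 60]      # hypothetical online attention

    rho, p_value = spearmanr(citations, mentions)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
    # A rho near zero suggests the two indicators measure different things,
    # so neither can stand in for the other in an evaluation.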

Open science will drive innovation

All parts of the innovation process require significant investment: not only in obvious areas like technology, data creation and capture, but also intellectually, both to develop the metrics themselves and, more importantly, to develop and test use cases. By helping people understand and adopt new metrics, we help update the social contract between the elements of the research community.

Policies that drive open science have had an enormous impact, and will continue to do so. Much of the work that the scientometric community are contemplating has been supported by innovations such as ORCID, CrossRef’s metadata API, and the various research data initiatives. Funders who continue to drive the research environment towards increasing openness are enabling this innovation.
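By way of example, CrossRef’s metadata API is openly queryable; the sketch below fetches the record for a single DOI. The DOI shown is a placeholder, and the fields printed are just a sample of what the API returns.

    import requests

    # Placeholder DOI: substitute any real, CrossRef-registered DOI.
    doi = "10.1234/example.doi"
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    work = resp.json()["message"]

    print(work.get("title", []))   # titles come back as a list
    print([a["ORCID"] for a in work.get("author", []) if "ORCID" in a])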

Given the exciting possibilities that are being facilitated by these environmental changes, we predict that the rate of innovation will accelerate over the next five years.

However much technologists and academics innovate in this space, it is absolutely clear that the value will never be realized without the development of a social contract between metric innovators, research evaluators and academics.

The work of the stakeholder community in realizing the potential of these more sophisticated, broader measures of impact is as much about supporting and developing their use and acceptance as it is about mathematical and computing power.

Ultimately, we need to remember that metrics – whether quantitative or qualitative – are numbers about humans: human stories, human ambitions. For some people, the numbers will be enough. For others, their reputation will suffice. For others still, numbers might only be useful as supporting evidence in the course of a narrative.

The academic world is a diverse one, and both the role of metrics and the social contract that develops around them should reflect this diversity.