
Digital Science Joins DORA: A Commitment to Community and an Invitation to Innovate

28th August 2018 | Daniel Hook

Our team has been involved in thinking about research metrics since before Digital Science itself came into existence in 2010. Besides the two authors of this piece, who have spent a significant portion of their professional lives developing tools to support and inform the academic community, Digital Science comprises many more like-minded people who have been, and continue to be, driven to contribute positively to the cultural changes taking place in the higher education information sector.

There are examples of this commitment all across the group, but to name just two colleagues: Euan Adie developed Altmetric, which opened up a whole new way of thinking about the audiences that interact with research, and Mike Taylor, our Head of Metrics Development, has been a constant contributor to the lively debate around metrics in research.

[Image: DORA signatory badge]

The idea of the responsible use of metrics is not a new one within Digital Science, either. Indeed, several of Digital Science’s individual portfolio companies have supported the Declaration on Research Assessment (DORA) for some time. Until now, however, Digital Science itself has not been a formal signatory of DORA. That changed this week, when Digital Science added its name to the list of international supporters of this worthwhile and critical effort and committed to working toward compliance with DORA in our products and in our approach to developing metrics.

With the release of the new version of Dimensions in January 2018, Digital Science is finally in a position to help the research community achieve the innovation that we believe is vital to creating and sustaining a modern, healthy environment in which research and researchers can thrive.

We believe:

  • Placing a piece of research in its context is critical to fully understanding it, its lineage, and its potential or realised outcomes and impacts;
  • Reading and taking the time to understand a piece of research continues to be the best way to evaluate scholarly work, but adding context to the reading experience can help the reader to gain a greater understanding of the relevance, importance and value of the research;
  • In cases where reading a piece of research is impractical due, for example, to volume or time constraints (such as when peer reviewers are involved in large-scale evaluation tasks), it is even more essential to gain a nuanced view of the “location” of the research in question. Simply looking at bare or normalised citation counts (or, worse yet, journal metrics) hides the underlying distribution of the data and gives no indication of a field’s level of development, its funding, or its proximity to application. It also gives no indication of whether the dataset in question is even large enough for statistics to be meaningfully applied to it (a short illustrative sketch follows this list). Having access to these fundamental facts would be considered critical in the practice of research – why not in research evaluation too? Making data transparently and freely available for audit should be a critical part of research evaluation;
  • It is critical for researchers themselves to have full visibility of the metrics that are being used to evaluate them and their work, and to understand and be able to access the elements or underlying data that contribute to those metrics;
  • The research community is best placed to create a set of open, auditable and appropriate metrics for evaluating its research. This means that the community needs the freedom to innovate and to create new metrics using a data source that anyone can access and check for free.
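
To illustrate the point about distributions, here is a minimal, hypothetical sketch in Python. The citation counts are invented for illustration (they are not drawn from any real dataset): two “fields” with identical mean citation counts can look entirely different once the underlying distribution is examined.

```python
# A hypothetical illustration of why a bare or normalised citation average can
# mislead: two invented "fields" share the same mean citation count but have
# very different underlying distributions.
from statistics import mean, median

# Invented citation counts, for illustration only.
field_a = [0, 0, 0, 1, 1, 2, 3, 5, 8, 80]          # highly skewed: one "hit" paper
field_b = [8, 9, 10, 10, 10, 10, 10, 11, 11, 11]   # evenly cited

for name, citations in [("Field A", field_a), ("Field B", field_b)]:
    uncited = sum(1 for c in citations if c == 0)
    print(
        f"{name}: mean={mean(citations):.1f}, "
        f"median={median(citations)}, "
        f"uncited={uncited}/{len(citations)} papers"
    )

# Both fields report a mean of 10.0 citations per paper, yet the medians and
# the share of uncited papers tell very different stories -- which is why the
# full distribution (and the sample size) matters in evaluation.
```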

We recognise that not everything we do at Digital Science yet meets the ambitions stated above, but this is partly a consequence of experimentation. When you are working at the bleeding edge of any area, there will always be technical challenges, and there are plenty of opportunities not to get it right the first time!

We are always careful to consider the ethical implications of what we introduce through our tools and software, and are constantly aiming to improve the availability of data, the discussion around metrics, and the context in which metrics are presented — with a view to aiding their appropriate use.

Although we will of course always strive to be responsible in the metrics we deliver, we believe firmly that the research community is much better positioned than a commercial organisation to realise the DORA principles. It is from this community that we invite guidance and collaboration, with our discussions and input as part of the NISO altmetrics initiative and our development work with the NIH serving as prior examples.

Dimensions has been created to support exactly this agenda, and we feel that this is the ideal time to pledge our wholehearted support to DORA. Dimensions data are freely available to bibliometrics and scientometrics researchers, and we warmly invite the broader research community to innovate by harnessing these data.

Our original concept for Dimensions was that it should be a data platform for the research community, available to everyone who is interested in improving ‘old’ metrics or developing new indicators (which, perhaps, go further than looking solely at citations).
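
To make that invitation concrete, here is a minimal, hypothetical sketch of the kind of indicator a researcher might prototype on top of openly available publication data: a simple field- and year-normalised citation score. The records, field labels and values below are invented for illustration; this is not Digital Science code, nor an example of the Dimensions API.

```python
# A hypothetical sketch of a simple field- and year-normalised citation score:
# each paper's citation count divided by the average for its (field, year) cohort.
from collections import defaultdict

# Invented publication records, for illustration only.
publications = [
    {"id": "p1", "field": "physics", "year": 2016, "citations": 12},
    {"id": "p2", "field": "physics", "year": 2016, "citations": 3},
    {"id": "p3", "field": "history", "year": 2016, "citations": 2},
    {"id": "p4", "field": "history", "year": 2016, "citations": 1},
]

# Average citations per (field, year) cohort.
totals = defaultdict(lambda: [0, 0])  # (field, year) -> [citation sum, paper count]
for pub in publications:
    key = (pub["field"], pub["year"])
    totals[key][0] += pub["citations"]
    totals[key][1] += 1
baselines = {key: total / count for key, (total, count) in totals.items()}

# A score of 1.0 means "cited at the cohort average" for that field and year.
for pub in publications:
    baseline = baselines[(pub["field"], pub["year"])]
    score = pub["citations"] / baseline if baseline else 0.0
    print(f"{pub['id']}: normalised score = {score:.2f}")
```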

In signing DORA, Digital Science is committing to ensuring that Dimensions data are always made available for research purposes to anyone in the research evaluation ecosystem, to improving those data, and to working with the community to power its innovation.