
Thoughts from Digital Science’s US Publisher Day: The Rise of Predictive Analytics

3rd May 2016 | Phill Jones

The spring conference season is in full swing, and April was a particularly big month for me. I spent most of the second half of it in DC at a string of meetings and conferences, starting with the Allen Press Emerging Trends conference last Thursday, at which I rather ambitiously claimed I would tame the concept of open science for society publishers, and culminating with the STM Association annual conference. In the middle somewhere was the Digital Science Publisher Day, a one-day event of talks and workshops on topics like the future of collaborative sharing and the use of metadata and metrics to solve business problems.

As part of the program, there were two speakers whom publishers wouldn't commonly hear from: Digital Science's own Simon Porter, VP of Academic Relations and Knowledge Architecture, and Dr Kirk Baker from the Office of Portfolio Analysis at the NIH.

Simon kicked the day off with a tone-setter about the changing role of academic institutions in scholarly communication. Through the lens of research management administration, he explained how institutions have responded to changing expectations around accountability and reporting by implementing tools that enable them to collate and analyse their own research output. As Simon puts it, 'A piece of academic output isn't static, it moves through systems and processes, picking up extra information like data and metrics'. Until relatively recently, there was no way for institutions to understand or measure that journey. Today, Current Research Information Systems (CRISs) and webometrics analysis tools enable them to understand much more about the impact of the work they facilitate. This enables them not only to curate their output and identify the most interesting pieces, but also to measure the effects of strategic decisions on their ability to fulfil their mission and secure more research funding.
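Simon's point about outputs accreting information is easy to picture as a data structure. Here's a toy sketch in Python (every field and value is invented for illustration) of a single output record gathering identifiers and metrics as it moves through those systems:

```python
# A single research output, as it might look the day it leaves the author's desk.
output = {"title": "A study of X", "authors": ["A. Researcher"]}

# On acceptance, the publisher mints a DOI.
output["doi"] = "10.1000/example.1"

# The institution's CRIS links the output to the grant that funded it.
output["grant_id"] = "G-2016-001"

# Webometrics tools attach usage and attention data over time.
output["metrics"] = {"downloads": 1500, "citations": 12, "altmetric_score": 48}
```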

So, institutions are using web-based metrics to support strategic decision making. Hold that thought.

Kirk Baker from the NIH gave a fascinating presentation about the work he and his colleagues do at the Office of Portfolio Analysis (OPA). Amongst other things, the office has developed tools to enable NIH research funding managers to better understand and prioritize emerging trends in research. Kirk and his colleagues are able to measure things like the time lag between mentions of a new idea in grant applications and the first mentions of that idea in the published literature. By linking the ideas in grants to the publications they give rise to, it's possible to track their eventual impact. Just like the institutions that Simon spoke about, the OPA can complete the information cycle by monitoring the downstream effect of strategic decisions.
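To make that kind of lag measurement concrete, here's a minimal sketch (the records, dates and search term are all invented, and the OPA's actual methods are far more sophisticated) of finding the gap between a term's first appearance in grant text and its first appearance in the literature:

```python
from datetime import date

# Hypothetical corpora: (date, text) pairs for grant applications and papers.
grants = [
    (date(2012, 3, 1), "proposal on crispr genome editing in mice"),
    (date(2013, 7, 15), "crispr screening approaches"),
]
papers = [
    (date(2013, 11, 2), "genome editing with crispr in vivo"),
]

def first_mention(records, term):
    """Return the earliest date on which `term` appears, or None."""
    dates = [d for d, text in records if term in text.lower()]
    return min(dates) if dates else None

def lag_in_days(term):
    """Days between a term's first grant mention and its first paper mention."""
    g, p = first_mention(grants, term), first_mention(papers, term)
    if g is None or p is None:
        return None
    return (p - g).days

print(lag_in_days("crispr"))  # 611 days from grant to literature in this toy data
```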

This all made me think about the recent 'Ask the Chefs Live' panel that I participated in at the London Book Fair. The panel was organized and moderated by Ann Michael, CEO of Delta Think, and also featured David Smith of the IET, Alice Meadows of ORCID and Robert Harington of the American Mathematical Society. During that panel, we were asked what, in our opinion, are the two most important technology trends in scholarly publishing. One of my answers was 'unique identifiers'. In my opinion, the rise of identifiers like ORCID, DOI, FundRef and so on enables metadata analysis that both allows stakeholders to identify important outputs and provides measures of success for policy decisions.
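The value of those identifiers is that linking records across systems becomes a simple key lookup rather than fuzzy matching on author or journal names. A hypothetical example (all records and counts are invented; the funder ID follows the Crossref Funder Registry style):

```python
# Hypothetical records keyed by persistent identifiers (DOI, ORCID, funder ID).
articles = [
    {"doi": "10.1000/xyz123", "orcid": "0000-0002-1825-0097", "funder": "10.13039/100000002"},
]
attention_counts = {"10.1000/xyz123": 42}      # online attention, keyed by DOI
funder_names = {"10.13039/100000002": "NIH"}   # funder registry lookup

# Because every system keys on the same identifiers, joining them is exact.
for a in articles:
    print(a["orcid"], funder_names[a["funder"]], attention_counts[a["doi"]])
```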

Returning to the Publisher Day, Phaedra Cress from the American Society for Aesthetic Plastic Surgery (ASAPS) also spoke. She explored the ways in which ASAPS are going beyond citations and downloads as measures of the impact of their content. Phaedra explained how a varied suite of metrics can help a publisher better understand the context of attention, as well as emerging markets in terms of research fields, geography and demographics. According to Phaedra, as the technologies continue to develop, the key challenge she faces is helping the community, and in particular journal editors, to understand how to use the tools. In other words, helping people understand how to use technology is as important to innovation as creating the technology itself.

Funders and institutions are already making use of metrics and analysis tools to help them make strategic decisions, and many publishers are not far behind. At Digital Science, we've worked with publishers to understand how they're using data and metrics. I've noticed that while publishers still mostly use metrics tools as an author service, we're seeing more and more use of them for internal analysis, for example to select content for anthologies or to monitor the demographics and engagement level of end users.

On the other hand, as Phaedra pointed out, it's not always easy for editors to envision how useful these sorts of analyses can be. For example, consider the work that Kirk Baker and colleagues are doing at the NIH OPA. If it's possible to track the downstream impact of emergent ideas in grant applications, then the reverse should also hold: it should be possible to predict emerging trends by analysing the information in grants, before those ideas reach the published literature.
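In its simplest possible form, that kind of prediction could start with nothing more than term frequencies in grant text over time. A toy sketch (corpus, years and growth threshold all invented) that flags terms growing sharply year on year:

```python
from collections import Counter

# Hypothetical corpus: year -> list of grant-abstract texts.
grants_by_year = {
    2014: ["deep learning for imaging", "mouse models"],
    2015: ["deep learning for genomics", "deep learning diagnostics", "mouse models"],
}

def term_counts(texts):
    """Count whitespace-delimited terms across a list of texts."""
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

def rising_terms(earlier, later, min_growth=2.0):
    """Terms whose frequency grew by at least `min_growth`x between corpora."""
    before, after = term_counts(earlier), term_counts(later)
    return sorted(
        term for term, n in after.items()
        if n >= min_growth * before.get(term, 0.5)  # 0.5 lets brand-new terms qualify
    )

print(rising_terms(grants_by_year[2014], grants_by_year[2015]))
# ['deep', 'diagnostics', 'genomics', 'learning']
```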

I think that we’re going to see more metrics used for internal decision making by institutions, funders and publishers alike. It will be interesting to see how this new trend develops.