Information Infrastructures for Inclusive Science

13th September 2016 | Guest Author

Ismael Ràfols, Ingenio (CSIC-UPV), Universitat Politècnica de València

The infrastructure for information on science and technology has a strong influence on the patterns of communication and the visibility of research. Thus scientific journals have shaped the production, circulation and consumption of knowledge since the birth of modern science in the 17th century.

In recent years, new forms of exchanging information have developed, opening the possibility of more ‘open science’ – a system of scientific communication that is more transparent, cheaper and more accessible to all researchers, stakeholders and the wider public. However, to what extent is ‘open’ science also more ‘inclusive’ science? How can ‘open’ science facilitate wide access to information and knowledge that was previously marginalised in mainstream journals and databases?

Since the 1960s, the visibility of science has been influenced by the bibliographic database of the Institute for Scientific Information (ISI) (now Web of Science). This database was built on Eugene Garfield’s notion that a small set of ‘core journals’ published most of the research of significance, and that the ISI database therefore only needed to cover these to capture most relevant science. These core journals of ‘international’ scope that ‘controlled’ most scientific communication were mainly published in a few Western countries. The databases were often used by managers to stratify science into high-quality cores (top-quartile journals), second-class science and ‘invisible science’.

Since the 1980s, researchers in the global south, such as Hebe Vessuri and colleagues, and in some disciplines, such as the social sciences and humanities, have increasingly voiced discontent with Garfield’s notion of the ‘core’, in particular with its consequences for the invisibility of ‘peripheral’ journals and the effects of journal stratification on knowledge production. For example, there have been concerns about the suppression of research on topics relevant to developing countries or marginalised populations (such as neglected tropical diseases), in particular when it is published in local journals in languages other than English.

The great changes in information and communications technology (ICT) over the last two decades have facilitated the pluralisation of scientific information – and the addition of ‘alternatives’ to the mainstream journals and databases, such as SciELO or Redalyc, that explicitly aim to fill gaps in coverage. Moreover, open access technologies can make ‘local’ journals accessible across the globe. New forms of science dissemination, such as blogs or Twitter, and new forms of publishing (e.g. data sharing), are also making scientific information more diverse.

This succession of transformations would suggest that more ‘open’ science should also be more ‘inclusive’ – in the sense that it allows non-mainstream research to be accessed. However, as we have learned from the world wide web, accessibility is not the same as visibility. The high connectivity of the contemporary world may lead towards concentration rather than diversification. For example, Larivière and colleagues recently showed a sharp increase in the concentration of journals in the hands of an oligopoly of publishers over the last 20 years.

On 14-16 September, we are holding the 2016 S&T Indicators Conference, which this year focuses on indicators at the margins, in ‘peripheral spaces’ – i.e. the topics, geographies, disciplines, or groups that have been overlooked or inappropriately represented by indicators. This flawed representation is often due to the exclusion from the information infrastructure of science from developing countries or of less ‘important’ issues.

The conference will aim to discuss, first, the diverse strategies for developing infrastructure with an open and comprehensive coverage and, second, the governance of the scientific information infrastructure in the face of new forms of communication.

First, it has been shown that the current multidisciplinary databases (Web of Science or Scopus) have limited coverage, while only databases specific to certain sectors (such as CABI) achieve more comprehensive coverage. However, most S&T indicators and benchmarking exercises (e.g. UNESCO’s reports) are based on conventional ‘core’ databases. Should more comprehensive databases be developed, mixing different types of science – e.g. more ‘local’ and more ‘universal’? How should indicators derived from these databases be interpreted? How is open access best provided and maintained?

Second, the development of robust and publicly trusted indicators of the new forms of scientific communication needs an open and transparent data infrastructure. What type of governance should be established for scientific data to ensure public critical analysis? Which types of organisations should manage the data? Should these be distributed or centralised systems?

Previous studies of standards and infrastructure have shown the deep political implications of apparently technical choices. If we aim to make science not only more open, but also more democratic and inclusive, we need to be highly reflective about how we develop the new information infrastructures.

You can follow the conference on Twitter at @STI2016VLC.