Confusion in the Ivory Tower: Do We All Agree on What ‘Excellence’ Means?
First of all, an apology for the lack of a perspectives post last week. I had intended to write something while on vacation in Austin, Texas visiting family but the call of happy hour at Chuy’s and Antone’s ended up being just too loud. Those of you who know Austin know what I’m talking about.
While I was away, I had some time to think about some recent discussions I've had on the subject of research assessment. If you're expecting a great epiphany in the next 600 words or so, I'm afraid that you're going to be disappointed. The only conclusion that I've been able to come up with is that it's all a bit confusing. Put simply, various stakeholders seem to have different perspectives on how research assessment works currently and how it should work in the future. In order to move forward, we must first identify and then address a number of misunderstandings.
As I mentioned in my post of two weeks ago, it's tempting for publishers to think that the reason scholarly communication hasn't changed more quickly is that our ultimate customers, researchers and academics, simply don't want it to. Many say that researchers are interested solely in traditional high impact articles. On the other hand, researchers seem frustrated with their dependence on that one narrow aspect of scientific communication and unsure quite whose fault it is.
My post on researcher frustration, mentioned above, got some attention from the open science community on social media. One of the tweets that caught my eye was from Michael Markie (@MMMarksman), associate publisher at F1000, who said:
Spot on as usual @phillbjones I also find researchers would embrace innovation; they’re waiting for funders etc. to pave the way for change.
— Michael Markie (@MMMarksman) July 1, 2015
So perhaps, if it's not researchers or publishers who are the gatekeepers to change here, it's the funders? Well, if you look at the history of funder mandates and research assessment, funders have been quite progressive in areas like open science and research assessment, so I don't think we're really waiting for them to pave the way; they already are. At least, they are in terms of policy.
With all the talk about funders lately, another driver of academic behaviour seems to be receiving rather less attention; namely the hiring, tenure and grant committees that are populated by senior academics themselves. We've touched on this topic on the perspectives blog before, but I think it deserves more attention because it seems to be a major source of confusion. Going back a couple of years, Michael Eisen responded to criticisms of his call to boycott high impact subscription journals by writing that in his opinion:
The widely held notion that high-impact publications determine who gets academic jobs, grants and tenure is wrong.
Eisen is certainly correct that the view is widely held by academics; I'm just not sure how wrong it really is. In a recent perspective on behalf of the FENS-Kavli Network of Excellence, Dr Tara Spires-Jones wrote about the challenges faced at early and mid-career stages created by the pressure to publish in high impact journals, particularly in order to impress grant review panels. Which raises another question: if funders are saying that they want to assess performance differently, are the reviewers hearing that message? My current working hypothesis is that they're not.
As part of one of the conversations that I cited in my previous post, a senior tenured academic told me that he needed to design his research programs so that his students and postdocs could get high impact articles. To do otherwise would be unfair to their career progression. Almost in the same breath, however, he told me that when he went up before the tenure committee a few years ago, it was the letters of support from international colleagues that were the key factor. So which is it: high impact factor publications or the respect of one's peers? Which is the important target for progression?
Part of the answer is that priorities change as a researcher moves through their career. Traditionally, a high impact article in Cell, Nature or Science can launch a researcher's career. As a result, many senior faculty members tell their early- and mid-career mentees that while high impact papers are no longer so important to their own career progression as senior researchers, such papers are the only surefire route to academic success at earlier stages. The question that I'd like to know the answer to is: is this still true today, or are younger researchers being given advice that's out of date?
I don't know the answer to this, because tenure requirements, like snowflakes, are all different, and not as transparent in practice as we're told they are in theory. Take for example this advice piece from 2010 in the Chronicle of Higher Education, which is fairly typical of the kind of answer you get when talking to people involved in tenure assessment. The article talks a lot about the organizational aspects of the committee and clearly emphasizes letters of support, but there's no definition of what constitutes high achievement or how that is measured.
The sense I get from senior academics and administrators is that they think of 'high quality research' or 'teaching achievements' as self-evident: you can't really explain what those terms mean, but you know it when you see it. The perception amongst more junior academics is that these are code words for publishing lots of high impact papers. The reality is that research quality might be defined in a number of ways, as this pathways to impact page from EPSRC neatly explains. Another good start is this page from the National Institutes of Health, showing their criteria for tenure for intramural researchers. The first bullet point there doesn't fully define research quality, but it does at least state that it should include scientific rationale and experimental design.
I believe that academia itself needs to do a better job of understanding its own goals and ideals when it comes to research excellence. At the same time, those involved in setting policy for research assessment need to do more to inform and educate both senior and more junior academics about how they want quality to be assessed. Without this vital step, academics will continue to make the same assumptions about what is considered valuable, and progressive assessment policies will fail to have their full effect.