I came across this brilliant paper in Proceedings of the National Academy of Sciences (PNAS) this weekend. It is an elegantly written plea for research to be assessed on its quality, not on the impact factor of the journal in which it is published. As the authors state, ‘we must forego using impact factors as a proxy for excellence and replace them with in-depth analyses of the science produced.’ As the article outlines, impact factors were developed by the Institute for Scientific Information (ISI) originally as an aid to librarians making decisions about which journals to purchase. Today the impact factor forms part of many academics’ decision-making about where to publish, being held up as a proxy for journal prestige. As Eve and her colleagues point out, ‘the least important paper published in a journal shares the impact factor with the most important papers in the same journal’; the impact factor of a journal therefore may not accurately reflect the quality of all the work within it, and as such is a flawed proxy.
You only have to go back a couple of years to find a fierce debate about the use of bibliometrics within REF2014, something which has been reduced in the final submission framework to a few select units of assessment where citation data will be used. In fact, the REF codes make an explicit statement that the quality assessment of an output will be made on the basis of the quality of the research, not on any perceived journal ranking system, whether that be impact factors or the ABS list (Association of Business Schools). This is to be applauded, but can you take natural journal prejudices, based on things like the ABS list, impact factors or, for that matter, subject convention, out of the academics undertaking the reviews? Having now chaired one of our mock assessment panels, I am left wondering whether you can. If one cannot, it will pose a serious challenge to the objectivity and credibility of the REF.
Despite this reservation, the plea made by Eve and her colleagues is to be welcomed: research should be published where it is best suited, where it will get read by the people within one’s discipline who need to read it, and where it will encourage debate and in turn drive further research. This does not make the decision of where to publish any easier for early career academics, but I would encourage all those involved in advising them to read the impassioned plea made by Eve and her colleagues, and to move away from default references to impact factors and ranking lists.