Library corridor

“Impact factor is a scam”, argues Curt Rice

Curt Rice, the head of the Board for Current Research Information System in Norway (CRIStin), recently published an interesting article on his blog, discussing the uses and abuses of the impact factor. This is reproduced, by kind permission, below:


 

Quality control in research: the mysterious case of the bouncing impact factor

Research must be reliable and publication is part of our quality control system. Scientific articles get reviewed by peers and they get screened by editors. Reviewers ideally help improve the project and its presentation, and editors ideally select the best papers to publish.

Impact factor is a scam. It should no longer be part of our quality control system.

Perhaps to help scientists through the sea of scholarly articles, an attempt has been made to quantify which journals are most important to read — and to publish in. This system — called impact factor — is used as a proxy for quality in decisions about hiring, grants, promotions, prizes and more. Unfortunately, that system is deeply flawed.

What is impact factor?

A journal’s impact factor is assigned by Thomson Reuters, a private corporation, and is based on the listings they include in their annual Journal Citation Reports.

To calculate the impact factor for a journal in 2014 we have to know both the number of articles published in 2012 and 2013 and the number of citations those articles received in 2014; the latter is then divided by the former. If a journal publishes a total of 100 articles in 2012 and 2013, and if those articles collectively garner 100 citations in 2014, then the impact factor for 2014 is 1. Björn Brembs illustrates it like this.

[Slide: Björn Brembs’ illustration of the impact factor calculation]
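For concreteness, here is the same arithmetic as a minimal sketch in Python (the function and variable names are my own, purely illustrative, not part of any official calculation):

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """The 2014 impact factor: citations received in 2014 by items published
    in 2012-2013, divided by the number of items published in 2012-2013."""
    return citations_this_year / articles_prev_two_years

# The example from the text: 100 articles published in 2012-2013 that
# collectively receive 100 citations in 2014 give an impact factor of 1.0.
print(impact_factor(citations_this_year=100, articles_prev_two_years=100))  # 1.0
```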

Impact factor in the humanities

Impact factors in the natural sciences are much higher than those in the humanities. Journals in medicine or science can have impact factors as high as 50. In contrast, Language, the journal of the Linguistic Society of America, has an impact factor under 2 and many humanities journals are well under 1.

If impact factor indicates readership, this may be accurate. Journals in medicine or science may well have 50 times the readership of even the biggest humanities journals. But when impact factor accords prestige and even becomes a surrogate for quality, this variation can give the impression that the research performed in medicine and the sciences is of a higher quality or more important than the research performed in the humanities. I would wager that many political debates at universities are fed by such attitudes.

Fortunately, the explanation for low impact factors in the humanities is much simpler. While articles in top science journals often run to just a few pages, those in the humanities are more likely to run to a few dozen. Naturally, it takes more time to review or revise a long article. As a result, many top journals in the humanities take 2-3 years from initial submission to publication. 2-3 years! This means that the window of measurement for the impact factor calculation often closes before a paper has even been cited once.

What counts as an article?

Impact factor can be changed in two ways, and both of them sometimes get gamed. One option is to increase the number of citations. Editors have been known to practice coercive citation, as I wrote about in How journals manipulate the importance of research and one way to fix it.

The second way to increase impact factors is to shrink the number of articles in the equation. In addition to articles, journals might include letters, opinion pieces, or replies. These are rarely cited, and sometimes editors have to negotiate with Thomson Reuters about which of them should be excluded from the count. The impact factor game provides an amusing description of this process.

Current Biology saw its impact factor jump from 7 to almost 12 from 2002 to 2003. In a recent talk, Brembs reveals how this happened.

In the calculation of the impact factor for Current Biology in 2002, it was reported that the journal published 528 articles in 2001. But for the calculation in 2003 — for which 2001 is still relevant — that number had been reduced to 300. No wonder the impact factor took a hop! They can’t both be right and I wouldn’t be surprised if negotiations were involved.

[Slide: Current Biology’s impact factor and reported article counts, 2002 vs. 2003]
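Plugging the two reported article counts into the same toy arithmetic makes the sensitivity obvious. For simplicity this treats the 2001 count as the whole denominator, which the real calculation does not, and the citation figure is a hypothetical round number chosen only to show the effect of shrinking the denominator:

```python
citations_2003 = 3600          # hypothetical citation count, for illustration only

print(citations_2003 / 528)    # about 6.8 with the originally reported 2001 article count
print(citations_2003 / 300)    # 12.0 with the reduced count used in the 2003 calculation
```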

We must build an infrastructure for research that delivers genuine quality control. Ad hoc windows that treat different fields differently and systems in which importance gets confounded with commercial interests cannot be part of this system.

And if we succeed in finding new ways to determine quality, impact factor will surely get bounced.

Many of the points in Brembs’ speech, When decade-old functionality would be progress: the desolate state of our scholarly infrastructure, deserve the attention of those who think about making scientific communication better; in addition to the slides, Brembs and his colleagues Katherine Button and Marcus Munafò have an important paper called Deep impact: unintended consequences of journal rank, which I’ve also discussed at The Guardian in Science research: 3 problems that point to a communications crisis. Brembs’ speech was made at the 2014 Munin Conference at the University of Tromsø.


This article was originally published on Curt Rice – Science in Balance. Read the original article. | The featured image is by Anna Creech (eclecticlibrarian @ flickr) and is shared under a CC BY-NC-SA license.


Comments

2 responses to ““Impact factor is a scam”, argues Curt Rice”

  1. Hi Curt – to expand on our brief Twitter discussion: I agree that there may be some evidence that this particular proprietary manifestation of the concept of an impact factor is at best opaque and at worst potentially open to manipulation. Nonetheless, the concept of an impact factor is a useful one in many contexts.

    We should not confuse the bibliographic concept (that the number of citations a journal receives divided by the number of papers it publishes over a similar time period can provide a useful comparative measure of the relative importance of said journal when compared to others in its field) with a commercial product that employs the concept.

    The title of your post suggests that using ‘impact factors’ in any way is a deceitful thing to do; I disagree, although you will find many academics who hold that position for ideological reasons, which is fine. If the title of your article were “JCR Impact Factors ™ are a scam”, I might be more inclined to agree with your premise (though not of course the potentially litigious wording thereof which is given here for entertainment and educational purposes only and in no way represents the author’s opinion on said product &c and so forth).

    To repeat something I had to express in <100 characters: most people agree that spreadsheets are a good idea, but many people find MS Excel very difficult to use; most people would agree that a 'portable document format' that allows easy sharing of camera-ready copy is a good idea, but find Adobe products most distasteful. This does not render spreadsheets and/or "pdfs" bad in and of themselves. The same should apply to the "impact factor", as opposed to the "Impact Factor".

    Hope that makes sense now. Cheers – Phil.

    1. Hi Phil,

      Thanks for sharing these interesting thoughts, which add nuance and depth to the discussion of IFs. Such insights are very welcome here, but it might be easier for Prof. Rice to respond to any feedback if it were posted directly to his blog, where this post originally appeared (follow the link at the beginning of the post).

      Cheers!

