How to assess research: some alternatives for Greece

A couple of weeks ago, I blogged about how research assessment in Greece, and perhaps in other, similar contexts, seems to foster predatory publishing. In brief, I wrote that the systems in place seem too inclusive and do not differentiate sufficiently between better publications and unambitious ones. As a result, I argued, producing a large number of low-quality publications seems to be the most efficient publishing strategy, at least if one’s aspirations do not extend outside Greece. In this post, I want to explore how things might be done differently. What follows is not intended as a fully worked-out solution. Rather, it’s an attempt to imagine different ways of assessing research output in Greek education and higher education.

Option 1: Ranking journals

The most straightforward solution, I think, would be to devise a journal ranking system that differentiates between publications of different quality. This might be done by creating a master index with different categories of journals, and weighting publications according to the journal in which they appeared. The top category might comprise ISI-indexed journals, along with excellent regional journals that are not indexed due to language barriers. This might be followed by a second category, comprising professional and student journals that operate a demonstrably rigorous system of peer review; and so on. At the very least, there needs to be an index of bogus journals that are excluded from consideration altogether; this could easily be done by drawing on resources such as Beall’s list of predatory publishers. Such a system has several advantages. To start with, it’s efficient. Moreover, it should be easy to understand and accept, as it extends rudimentary distinctions that are already made. Lastly, it can be used to align research activity with strategic priorities: for example, if we agree that promoting open access is a worthy goal, articles published in open access journals might be weighted more heavily.
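To make the weighting idea concrete, here is a minimal sketch in Python of how such a scoring scheme might work. The tier names, the weights, and the open-access bonus are all hypothetical values of my own, chosen only to illustrate the mechanics, not taken from any existing system.

    # A minimal sketch of journal-tier weighting. All tiers, weights and the
    # open-access bonus are hypothetical values chosen for illustration.

    TIER_WEIGHTS = {
        "indexed_or_excellent_regional": 1.0,  # e.g. ISI-indexed journals
        "rigorous_peer_review": 0.5,           # professional/student journals
        "other_legitimate": 0.2,               # everything else that is genuine
        "predatory": 0.0,                      # excluded from consideration
    }

    OPEN_ACCESS_BONUS = 0.1  # optional bump to align with strategic priorities


    def score_publication(tier: str, open_access: bool = False) -> float:
        """Return the weighted credit for a single publication."""
        weight = TIER_WEIGHTS.get(tier, 0.0)
        if weight > 0 and open_access:
            weight += OPEN_ACCESS_BONUS
        return weight


    def score_researcher(publications: list) -> float:
        """Sum the weighted credit over a researcher's publication list."""
        return sum(
            score_publication(p["tier"], p.get("open_access", False))
            for p in publications
        )


    # Ten predatory-journal articles score less than one well-placed article:
    many_weak = [{"tier": "predatory"}] * 10
    one_strong = [{"tier": "indexed_or_excellent_regional", "open_access": True}]
    assert score_researcher(many_weak) < score_researcher(one_strong)

The point of the sketch is the final assertion: once bogus journals carry zero weight, no volume of publications in them can outscore a single well-placed article, so sheer quantity stops being the efficient strategy.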

Although the previous proposal seems simple and intuitive, it has at least two important weaknesses. First, the correlation between journal quality and article quality is far from perfect. To put it more simply, there are lots of great articles hidden in obscure journals, and the better journals do at times publish unimpressive articles. Similarly, the assumption that you can rank research by type of publication does not always hold up under scrutiny. Curt Rice, head of the Board of the Current Research Information System in Norway, notes that:

…it’s a lot harder to get published in a good journal than in a good book. But I’m far less certain that it’s just as hard to get published in a bad journal as in a good book.

Lastly, any rankings produced by such a system would likely be skewed by disciplinary differences in publication norms. That is to say, different fields publish in different ways and at different rates. I understand that in rapidly changing fields, such as computer studies, conferences are the main venue for presenting new research; by contrast, in the humanities and social sciences, journal articles are considered much more valuable, and getting a paper published can take many months. When comparing output within a discipline, such differences might not come into play. However, they need to be accounted for when making cross-disciplinary comparisons, as was the case when my previous academic department, a language studies unit, was compared against other academic units, which focussed on accounting, tourism administration, or nursing.

Option 2: Appraising quality

The problem with such metric-based systems is that they attempt to evaluate publications without having anyone actually read them. A different possibility would be to take a qualitatively oriented approach, such as the Research Excellence Framework (REF) in the UK. In such a system, academics (and teachers, if the government insists on its demand that teachers be evaluated for research activity) might submit a small number of publications for evaluation by a panel of experts. In the REF, publications were assigned to categories, such as ‘internationally excellent’ or ‘recognised nationally’, using criteria like originality, significance, rigour and impact. For example, an ‘internationally excellent’ publication in education was defined as having the following features:

  • an important point of reference in its field or sub-field;
  • a contribution of important knowledge, ideas and techniques that are likely to have a lasting influence;
  • application of robust and appropriate research design and techniques of investigation and analysis, with intellectual precision;
  • generation of a substantial, coherent and widely admired data set or research resource.

There are many criticisms of the REF, not least its cost, which alone would make wholesale adoption of the framework impractical for Greece. Besides, it would be injudicious to uncritically adopt an assessment instrument that was designed for a different academic system. Even so, I think that the underlying principle, i.e., focusing on quality descriptors rather than volume of output, is an alternative that needs to be considered by the competent authorities.

And more?

As I said in the introduction, the purpose of this post was not to lay out a plan of what must be done in order to improve research assessment in Greece. Rather, what I wanted to do was show that alternative ways of assessing research output are possible. I have no doubt that others might have ideas that are more efficient, creative or practical than what I have suggested, and I would be very happy to read about such proposals. Despite the recent change of government in Greece, I am not terribly confident that the systems currently in place will change any time soon. Nevertheless, I do think it important to raise awareness that the current systems are neither the only nor the best ways of conducting such assessments, and perhaps to raise the question of why these systems have resisted change so far.


Featured image: ‘Library corridor’ by Anna Creech (eclecticlibrarian @ Flickr) | CC BY-NC-SA
