How are we encouraging predatory publishers?

Recently, Scholarly Open Access, an authoritative blog that tracks the activity of predatory publishers, issued a warning (link no longer active) about The International Journal of English Language, Literature & Humanities (I used to have a link to them as well, but I decided they don’t deserve one), a fraudulent journal that seems to target ELT professionals. In what is, sadly, a very common practice, the journal offered to publish articles in four days (!) in exchange for a $100 article processing charge. The journal promised that the articles would be peer-reviewed (clearly a false claim, given the time-frame involved), which would help authors further their career plans, or at the very least flatter their vanity.

Jeffrey Beall, the author of Scholarly Open Access, notes:

I am seeing an increase in the number of questionable open-access journals on TESL. There are many TESL professors around the world, including many needing to publish to earn tenure and promotion.

In the same post, he attributes the proliferation of predatory publishers to the fact that the criteria for assigning academic credit in some higher education systems are too inclusive. This is an insight consistent with my experience: in the paragraphs that follow, I shall present some examples showing how academic publications are used as assessment instruments in Greek education and higher education. This paves the way for a discussion about how things might be done differently, which will be the topic of a future post.

Assessing university lecturers

When I worked at the Epirus Institute of Technology, we were required from time to time to submit a form listing our scholarly output for the past five years (below). The form had different columns for books, refereed journal articles, non-refereed journal articles, contributions to refereed and non-refereed conference proceedings, chapters in edited collections, refereed conference presentations, non-refereed conference presentations, and other publications. Each of these categories was assigned a different number of points, and their sum was used (along with other criteria) to rank adjunct lecturers, who competed for a limited number of posts every semester. If I am not mistaken, the points were also used, collectively, to compare university departments.

Research assessment grid

There are two things to note about this assessment grid. Firstly, it appears to have been designed to showcase the volume of research output at the university. This was achieved by including, in the list of publications, research output that barely meets scholarship criteria (e.g., non-refereed conference presentations). At the time, there were plans under way to restructure the Higher Education system in Greece, mainly by merging or abolishing under-performing departments, so it was important to project the impression of a vibrant scholarly community. Secondly, although there was some attempt to differentiate research according to quality, the criteria used seemed to discriminate best among the least ambitious scholarly contributions: for instance, there were four different types of conference outputs. By contrast, the top categories conflated many very dissimilar types of publication: scholarly monographs, trade books, textbooks and self-publications were all listed as ‘books’, and ‘refereed journal articles’ did not distinguish between ISI-indexed journals, graduate student journals, predatory publishers, and in-house journals that had been set up in some academic departments to print otherwise unpublishable work. It seems that there were simply not enough publications at the top end to warrant finer distinctions.

Because of the way the points were awarded, the most efficient publication strategy was to produce a large number of publications in a short amount of time. Typically, this involved compressing preparation time and submitting to journals that were not too selective, had quick turnaround times, and could claim to be ‘refereed’. In other words, the system created a niche that predatory publishers were quick to fill.
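To make the incentive concrete, here is a minimal sketch of how such a points-based ranking works. The category names and point values are hypothetical (I am not reproducing the actual weights from the form), but the structure mirrors the grid described above: points are simply summed across categories, with no quality distinctions within a category.

```python
# Hypothetical weights per publication category; the real form's values differed,
# but the quality-blind summing logic is the same.
POINTS = {
    "book": 10,
    "refereed_journal_article": 8,
    "chapter_in_edited_collection": 6,
    "refereed_proceedings": 5,
    "non_refereed_journal_article": 4,
    "non_refereed_proceedings": 3,
    "refereed_presentation": 2,
    "non_refereed_presentation": 1,
}

def score(output: dict[str, int]) -> int:
    """Sum points over all publication counts; quality within a category is invisible."""
    return sum(POINTS[category] * count for category, count in output.items())

# One solid, carefully prepared journal article...
careful = {"refereed_journal_article": 1}

# ...versus four rushed submissions to undemanding 'refereed' journals, plus two informal talks.
prolific = {"refereed_journal_article": 4, "non_refereed_presentation": 2}

print(score(careful), score(prolific))  # 8 vs 34 -- volume wins every time
```

Under any such weighting, the arithmetic rewards whoever can get the most items stamped ‘refereed’ in the least time, which is precisely the service predatory publishers sell.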

Teacher assessment

The short-lived teacher assessment framework in the Greek state education system suffered from the same weakness. The framework used analytical ranking criteria to assign teachers to ranks such as ‘exceptional’, ‘very good’, ‘adequate’ and ‘deficient’. One of the criteria, ‘scientific development’ (επιστημονική ανάπτυξη), was assessed by taking into account “contributions to conference proceedings”, “articles published in refereed journals” and “books authored or edited, and editorship of conference proceedings” (Π.Δ. 152/2013 [ΦΕΚ 240/2013 τ.Α’], αρ. 6, παρ. 4, υποπαρ. vi, vii, xiii). In this case, too, there seemed to be no differentiation between types of publication, and the design of the assessment instrument seemed to reward volume, rather than quality, of scholarly output.

Fixing the system would mean that the preferred teachers wouldn’t be able to meet the promotion criteria.

During my tenure at the Ioannina Model/Experimental School, we had to pilot an early version of the assessment framework. On that occasion, I noted that the instrument was too blunt to be of much use, since virtually every article eventually gets published somewhere. At the very least, I suggested, there should be some provision for eliminating publications in journals that are known to be predatory operations. It was explained to me that the system was meant to encourage teachers to engage in research, even if such research was not ground-breaking; and besides, if some publishers were eliminated, then too few teachers would be able to reach the top ranks of the assessment framework. It took me a while to understand that this argument really meant that the preferred teachers wouldn’t be able to meet the criteria.

So what?

Both examples above offer some insight as to why teachers might look to predatory publishers to further their career prospects. Although both examples are taken from the Greek context, I think that similar considerations may also apply to other, similarly structured, systems. This is, in my view, a problem for at least three reasons.

  • First, this system does not sufficiently reward the best research output, and therefore promotes mediocrity. Put differently, there’s no incentive for a researcher to invest time, effort and funding into producing one solid paper, if they have to compete against colleagues who produce as many as four papers in a fortnight (sadly, this is not a made-up example). While I very strongly believe in teacher-driven research as a driver for excellence, I feel that such a culture of mediocrity can only undermine any benefits of research activity.
  • Secondly, in the absence of rigorous quality standards, there is a danger of contaminating the scientific record with research that is useless, wrong, unethically obtained or even fabricated. Such academic misconduct can only result in a loss of trust in science and foster science denial; in the field of education, in particular, which is already beset by an unfortunate divide between ‘theory’ and ‘practice’, it is likely to increase scepticism towards academic work, and provide practitioners with an excuse for disregarding empirical evidence that challenges questionable pedagogical practices.
  • Finally, I believe that most schools and teaching-oriented universities are already straining under the pressure of providing good quality instruction with ever-diminishing resources. Under the circumstances, it seems unethical to make hiring and promotion decisions conditional on a publication model that creates unrealistic output expectations for honest researchers, and profit opportunities for unscrupulous publishers.

Can this situation be fixed?

I admit that I don’t have a complete picture of how things might be done differently, but there are a number of directions one might pursue, if one were really interested in improving research assessment, at least in the Greek context. What follows is not intended as a fully worked-out solution. Rather, it’s an attempt to imagine different ways of assessing research output in Greek education and higher education.

Option 1: Ranking journals

The most straightforward solution, I think, would be to devise a journal ranking system that would differentiate between publications of different quality. This might be done by creating a master index with different categories of journals, and weighting publications according to the journal in which they appeared. The top category might comprise ISI-indexed journals, along with excellent regional journals that are not indexed due to language barriers. This might be followed by a second category, to include professional and student journals that operate a demonstrably rigorous system of peer review; and so on. At the very least, there needs to be an index of bogus journals that should be excluded from consideration. This could easily be done by drawing on resources such as Beall’s list of predatory publishers (update: currently defunct). There are several advantages to such a system: to start, it’s efficient. Moreover, it should be easy to understand and accept, as it is an extension of rudimentary distinctions that are already made. Lastly, the system can be used to align research activity with strategic priorities: for example, if we can agree that promoting open access is a worthy goal, articles published in open access journals might be weighted more heavily.
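To illustrate, here is a rough sketch of how such a tiered index might work. The journal names, tier assignments, weights, and the open-access bonus are all invented for the example; a real index would have to be compiled and maintained by the assessing authority, drawing on resources like the ones mentioned above.

```python
# Hypothetical tier weights; a real scheme would set these by policy.
TIER_WEIGHTS = {
    "tier_1": 3.0,  # ISI-indexed journals, plus excellent unindexed regional journals
    "tier_2": 1.5,  # professional/student journals with demonstrably rigorous peer review
    "other": 0.5,   # everything not (yet) classified
}

OPEN_ACCESS_BONUS = 1.2  # optional multiplier to align output with strategic priorities

# Invented example entries: journal name -> (tier, is_open_access)
JOURNAL_INDEX = {
    "Example Indexed Journal": ("tier_1", False),
    "Example Regional Journal": ("tier_1", True),
    "Example Student Journal": ("tier_2", True),
    "Example Predatory Journal": ("predatory", True),  # blacklisted: scores zero
}

def weighted_score(publications: list[str]) -> float:
    """Weight each article by its journal's tier; blacklisted journals count for nothing."""
    total = 0.0
    for journal in publications:
        tier, open_access = JOURNAL_INDEX.get(journal, ("other", False))
        if tier == "predatory":
            continue  # excluded from consideration entirely
        weight = TIER_WEIGHTS[tier]
        if open_access:
            weight *= OPEN_ACCESS_BONUS
        total += weight
    return total

print(weighted_score(["Example Indexed Journal", "Example Predatory Journal"]))  # 3.0
```

The key design choice is that a predatory-journal article does not merely score low: it is excluded outright, so there is nothing to gain from padding one’s list with such publications.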

Although the previous proposal seems simple and intuitive, it has at least two important weaknesses. First, the correlation between journal quality and article quality is far from perfect. Or, to put it more simply, there are lots of great articles hidden in obscure journals, and the better journals sometimes print unimpressive articles. Similarly, the assumption that you can rank research depending on the type of publication does not always hold up to scrutiny. Curt Rice, head of the Board for Current Research Information System in Norway, notes that:

…it’s a lot harder to get published in a good journal than in a good book. But I’m far less certain that it’s just as hard to get published in a bad journal as in a good book.

Lastly, any rankings produced by such a system would likely be skewed by disciplinary differences in publication norms. That is to say, different fields publish in different ways and at different rates. I understand that in rapidly changing fields, such as computer science, conferences are the main venue for presenting new research; by contrast, in the humanities and social sciences, journal articles are considered much more valuable, and getting a paper published can take many months. When comparing output within a discipline, such differences might not come into play. However, they need to be accounted for when making cross-disciplinary comparisons, as was the case when my previous academic department, a language studies unit, was compared against other academic units, which focussed on accounting, tourism administration, or nursing.

Option 2: Appraising quality

Metric-based systems attempt to evaluate publications without having anyone actually read them.

The problem with such metric-based systems is that they attempt to evaluate publications without having anyone actually read them. A different possibility would be to take a qualitatively oriented approach, such as the Research Excellence Framework (REF) in the UK. In such a system, academics (and teachers, if the government insists on its demand that teachers be evaluated for research activity) might submit a small number of publications for evaluation by a panel of experts. In the REF, publications were assigned to categories, such as ‘internationally excellent’ or ‘recognised nationally’, using criteria like originality, significance, rigour and impact. For example, an ‘internationally excellent’ publication in education was defined as having the following features:

  • an important point of reference in its field or sub-field;
  • contributing important knowledge, ideas and techniques which are likely to have a lasting influence;
  • application of robust and appropriate research design and techniques of investigation and analysis, with intellectual precision;
  • generation of a substantial, coherent and widely admired data set or research resource.

There are many criticisms of the REF, including some focusing on its cost, which would make the wholesale adoption of the framework impractical for Greece. Besides, it would be injudicious to uncritically adopt an assessment instrument that was designed to assess a different academic system. Even so, I think that the underlying principle, i.e., focusing on quality descriptors rather than volume of output, is an alternative that needs to be considered by the competent authorities.

And more?

As I said in the introduction, the purpose of this post was not to lay out a plan of what must be done in order to improve research assessment in Greece. Rather, what I wanted to do was show that alternative ways of assessing research output are possible. I have no doubt that others might have ideas that are more efficient, creative or practical than what I have suggested, and I would be very happy to read about such proposals. Despite the recent regime change in Greece, I am not terribly confident that the systems currently in place will change any time soon. Nevertheless, I do think it important to raise awareness that the current systems are not the only, or the best, ways of conducting such assessments, and perhaps raise the question of why these systems have resisted change so far.


Update (September 2018): I consolidated this post by merging two previous posts that had been published separately. I removed broken links. I contemplated how little things have changed since the original posts were written and decided I needed a drink.


Comments


  1. Nadezda

    Your observations are impeccable. The situation is very much alike here: the points and everything, the five-year contracts… As for the solution, you agree it is a matter of the entire system, top to bottom? If you are even slightly familiar with how the system works from the inside, my impression is, unfortunately, that everything you are saying is acknowledged by everyone involved; what is needed, alas, is for the individual people in the chain of decision-making to do what the entire academic community is telling them to do. That is the dark zone. There is no ‘interest’ in not doing it, just the ‘comfort’ of inaction. I know I am being harsh, but you touched upon a painful issue for the working class of academia. And so much really good work is lost, not given a chance…

    1. I think we both agree, then, that this is not a problem with a simple solution. But I do think it is moderately encouraging that such issues are increasingly being discussed, and that misconceptions about research output are being challenged. Such consciousness-raising stops short of a solution, no doubt, but it may be an important first step.
