Recently, Scholarly Open Access, an authoritative blog that tracks the activity of predatory publishers, issued a warning about The International Journal of English Language, Literature & Humanities, a fraudulent journal that seems to target ELT professionals. In what is, sadly, a very common practice, the journal offered to publish articles in four days (!) in exchange for a $100 article processing charge. The journal promised that the articles would be peer-reviewed (clearly a false claim, given the time-frame involved), which would help authors further their career plans, or at the very least flatter their vanity.
Jeffrey Beall, the author of Scholarly Open Access, notes:
I am seeing an increase in the number of questionable open-access journals on TESL. There are many TESL professors around the world, including many needing to publish to earn tenure and promotion.
In the same post, he attributes the proliferation of predatory publishers to the fact that the criteria for assigning academic credit in some higher education systems are too inclusive. This is an insight consistent with my experience: in the paragraphs that follow, I shall present some examples showing how academic publications are used as assessment instruments in Greek education and higher education. This paves the way for a discussion about how things might be done differently, which will be the topic of a future post.
Assessing university lecturers
When I worked at the Epirus Institute of Technology, we were periodically required to submit a form listing our scholarly output for the previous five years (below). The form had different columns for books, refereed journals, non-refereed journals, contributions to refereed conference proceedings and non-refereed ones, chapters in edited collections, refereed conference presentations, non-refereed conference presentations, and other publications. Each of these categories was assigned a different number of points, and their sum was used (along with other criteria) to rank adjunct lecturers, who competed for a limited number of posts every semester. If I am not mistaken, the points were also used, collectively, to compare university departments.
There are two things to note about this assessment grid. Firstly, it appears to have been designed to showcase the volume of research output at the university. This was achieved by including, in the list of publications, research output that barely meets scholarship criteria (e.g., non-refereed conference presentations). At the time, there were plans under way to restructure the Higher Education system in Greece, mainly by merging or abolishing under-performing departments, so it was important to project the impression of a vibrant scholarly community. Secondly, although there was some attempt to differentiate research according to quality, the criteria used seemed to discriminate best among the least ambitious scholarly contributions: for instance, there were four different types of conference outputs. By contrast, the top categories conflated many types of very dissimilar publications, e.g., scholarly monographs, trade books, textbooks and self-publications were all listed as ‘books’, and ‘refereed journal articles’ did not distinguish between ISI-indexed journals, graduate student journals, predatory publishers, and in-house journals that had been set up in some academic departments to print otherwise unpublishable work. It seems that there were not enough publications in these categories to warrant finer distinctions.
Because of the way the points were awarded, the most efficient publication strategy was to produce a large number of publications in a short amount of time. Typically, this involved compressing preparation time, and submitting to journals that were not too selective, had quick turn-around times, and could claim to be ‘refereed’. In other words, the system created a niche that predatory publishers were quick to fill.
The new teacher assessment framework in the Greek state education system seems to suffer from the same weakness. The framework uses analytical ranking criteria to assign teachers to ranks such as ‘exceptional’, ‘very good’, ‘adequate’ and ‘deficient’. One of the criteria, ‘scientific development’ (επιστημονική ανάπτυξη), is assessed by taking into account “contributions to conference proceedings”, “articles published in refereed journals” and “books authored or edited, and editorship of conference proceedings” (Presidential Decree 152/2013 [Government Gazette 240/2013, vol. A΄], art. 6, par. 4, subpars. vi, vii, xiii). In this case too, there seems to be no differentiation between various types of publication, and the design of the assessment instrument seems to reward volume, rather than quality, of scholarly output.
During my tenure at the Ioannina Model/Experimental school, we had to pilot an early version of the assessment framework. On that occasion, I noted that the instrument was too blunt to be of much use, since virtually every article eventually gets published somewhere. At the very least, I suggested, there should be some provision for eliminating publications in journals that are known to be predatory operations. It was explained to me that the system was meant to encourage teachers to engage in research, even if such research was not ground-breaking; and besides, if some publishers were eliminated, then too few teachers would be able to reach the top ranks of the assessment framework. It took me a while to understand that this argument really meant that the preferred teachers wouldn’t be able to attain the criteria.
Both examples above offer some insight into why teachers might look to predatory publishers to further their career prospects. Although both examples are taken from the Greek context, I think that similar considerations may also apply to other, similarly structured, systems. This is, in my view, a problem for at least three reasons.
- First, this system does not sufficiently reward the best research output, and therefore promotes mediocrity. Put differently, there’s no incentive for a researcher to invest time, effort and funding into producing one solid paper, if they have to compete against colleagues who produce as many as four papers in a fortnight (sadly, this is not a made-up example). While I very strongly believe in teacher-driven research as a force for excellence, I feel that such a culture of mediocrity can only undermine any benefits of research activity.
- Secondly, in the absence of rigorous quality standards, there is a danger of contaminating the scientific record with research that is useless, wrong, unethically obtained or even fabricated. Such academic misconduct can only result in a loss of trust in science and foster science denial; in the field of education, in particular, which is already beset by an unfortunate divide between ‘theory’ and ‘practice’, it is likely to increase scepticism towards academic work, and provide practitioners with an excuse for disregarding empirical evidence that challenges questionable pedagogical practices.
- Finally, I believe that most schools and teaching-oriented universities are already straining under the pressure of providing good quality instruction with ever-diminishing resources. Under the circumstances, it seems unethical to make hiring and promotion decisions conditional on a publication model that creates unrealistic output expectations for honest researchers, and profit opportunities for unscrupulous publishers.
I admit that I don’t have a fully worked out idea of how things might be done differently, but there are a number of directions one might pursue, if one were really interested in improving research assessment, at least in the Greek context. I hope to blog about them at some point in the near future, but in the meanwhile, if you have any ideas about how this might be done, or if you have similar examples of assessment practices that encourage predatory publishing, you are very welcome to leave a comment below.
Featured image credit: The Leaf Project @ Flickr | CC BY-SA 2.0