Call for Papers: Third ASSE International Conference on British and American Studies

One of the first conferences at which I presented as a novice PhD student, and one that I remember very fondly, was organised by a group of academics in Vlora, Albania. If memory serves, these academics later became founding members of the Albanian Society for the Study of English (ASSE), and they have been very active ever since, organising regular conferences and editing a journal, in esse, which publishes articles on English literature and linguistics. It therefore gives me considerable pleasure to write about the next conference they are organising:


The Third ASSE International Conference on British and American Studies will be held on 26–28 November 2015 in Tirana, Albania. The conference, which is organised in collaboration with the Corporate Training and Continuing Education Center at the Canadian Institute of Technology, will focus on the impact of technology on literature, communication, translation, linguistics and language education.

Keynote speakers include:

  • Prof. Marina Bondi (University of Modena and Reggio Emilia, Italy)
  • Prof. Albert Doja (University of Lille, France)

Call for papers:

Submissions are invited for papers (20 minutes) on a range of topics, including but not limited to:

  • Representations of technology in language, literature and culture
  • Technological realities and fantasies in literature
  • The role of literature in a world of technology
  • Science and literature: bridging the gap
  • Human imagination and technology
  • How technology is transforming language, literature, and culture
  • Applications of technology in linguistics
  • The influence of technology on human communication
  • ICT in language learning and teaching
  • Educational technology in English Studies: Issues and trends
  • Pedagogical innovations in education
  • Instructional design and innovative pedagogy
  • Online learning, distance learning, e-learning, blended learning
  • Technological literacy
  • Emerging educational technologies

Abstracts (about 250 words) should be sent as MS Word attachments to wt[at]assenglish[dot]org by 15 July 2015.

Selected papers will be published in the society’s journal in esse: English Studies in Albania, Vol. 6, Nos. 1 and 2.

Important dates

Deadline for abstract submission: 15 July 2015
Notification of acceptance: 20 August 2015
Early registration deadline: 15 September 2015
Late registration deadline: 30 September 2015

Featured image by Dungodung (CC-BY-SA-3.0), via Wikimedia Commons

On being misquoted

Although I am not a statistician, through some quirk of Google’s search algorithm it appears that I have been promoted to go-to internet expert on Likert scales. This is sometimes awkward, especially when a less-than-perfect blog post is cited in a peer-reviewed publication, but I can live with that. I find it rather more frustrating, on the other hand, when my writings are misunderstood and misquoted – and the purpose of this post is to set the record straight after one such instance.

[Image: Phelps et al. (2015). Pairwise Comparison Versus Likert Scale for Biomedical Image Assessment. American Journal of Roentgenology, 204(1), 8–14.]

It was recently brought to my attention that my views on Likert scaling have been cited by Dr Carolyn J. Hamblin in her PhD thesis (or dissertation, to go by US usage). In the methodology chapter, Hamblin states that “[s]ome scholars, such Kostoulas (2015), asserted that any numerical calculation applied to the data [produced by Likert scales] are [sic] invalid in all cases” (p. 57). After a “comparison of medians and interquartile ranges (Kostoulas, 2015) with means and standard deviations” (p. 58), Hamblin concludes that it is quite safe to ignore my recommendations, since her calculations (mean and standard deviation) produced similar results to mine (median and interquartile range) most of the time.

Before engaging with Hamblin’s argument in a more substantive manner, I want to correct a minor point. The in-text citations to Kostoulas (2015) are, as far as I can tell, references to two distinct blog posts written in 2013 and 2014. Of these, only one is listed in the bibliography, with an incorrect date and URL.

[Image: Hamblin, C. J. (2015). How Arizona Community College Teachers Go About Learning to Teach. Unpublished PhD thesis, Utah State University. http://digitalcommons.usu.edu/etd/4283 (p. 118)]

Moving on to a less trivial issue: I never stated that Likert scale data cannot be subjected to any kind of numerical calculation. I have emphatically claimed that “ordinal data cannot yield mean values”, which is, I should think, an uncontroversial thing to say. I have stated that, in my opinion, Likert-type items produce ordinal data, but I have also written that Likert scales (which are composites of several items) allow for more flexibility. Elsewhere, I have explained that:

Some very well-designed Likert scales can, indeed, produce data that are suitable for calculating means, or running statistical tests that rely on the mean. These scales are the product of careful weighting and extensive testing across large numbers of respondents.
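
To make this distinction concrete, here is a minimal sketch in Python (with invented responses, purely for illustration) contrasting the ordinal-friendly summary I recommend with the interval-level one:

```python
import numpy as np

# Hypothetical responses to a single 5-point Likert-type item
responses = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])

# Ordinal-friendly summary: median and interquartile range
median = np.median(responses)
q1, q3 = np.percentile(responses, [25, 75])
print(f"Median: {median}, IQR: {q3 - q1}")

# Interval-level summary: mean and standard deviation
# (defensible, in my view, only for carefully weighted and
# validated multi-item scales, not for single ordinal items)
print(f"Mean: {responses.mean():.2f}, SD: {responses.std(ddof=1):.2f}")
```

On a single item like this, the two summaries may well point in the same direction, which is precisely why agreement of the kind Hamblin reports does not, by itself, settle the measurement question.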

In all, I think that the selective presentation of my writings in Hamblin’s thesis does little justice to either my views or her research.

This is not the only instance where Hamblin is being disingenuous. Later in the same paragraph, she writes that: “Grace-Markin [sic] (2008) argued that under certain circumstances numerical [I think she means “parametric”] calculations are acceptable. The scale should be at least 5 points, which is what this survey used.” Readers may want to read this statement against what Grace-Martin actually recommends:

At the very least, insist that the item have at least 5 points (7 is better), that the underlying concept be continuous, and that there be some indication that the intervals between points are approximately equal. Make sure the other assumptions (normality & equal variance of residuals, etc.) be met.

That is to say, Grace-Martin suggests that the data produced by Likert scales can be used in parametric calculations, as long as at least five criteria are met (number and equidistance of points, continuity of the underlying construct, and normality and equal variance of the data). Of these, Hamblin ignores the final four and re-interprets the one that remains to fit her research.
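
For readers curious what checking the last two of those criteria might look like in practice, here is a rough sketch (again with invented data, not drawn from either text under discussion; strictly speaking, Grace-Martin’s normality condition concerns the residuals of the fitted model rather than the raw scores):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical composite scale scores (sums of four 5-point items)
# for two groups of respondents
group_a = rng.integers(1, 6, size=(50, 4)).sum(axis=1)
group_b = rng.integers(1, 6, size=(50, 4)).sum(axis=1)

# Shapiro-Wilk test: a small p-value signals departure from normality
print(stats.shapiro(group_a))

# Levene's test: a small p-value signals unequal variances
print(stats.levene(group_a, group_b))
```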

So, what is one to do upon finding out that one’s work has been distorted through careless reading and ‘refuted’ through selective and creative recourse to the literature? At minimum, one can always fall back on Alan Greenspan’s quip: “I know you think you understand what you thought I said, but I’m not sure you realise that what you heard is not what I meant”. Beyond that, one feels compelled to register profound frustration at the variability of what is considered to be doctoral work across the world.


Featured Image by Michael Kwan [CC BY-NC-ND]

Using YouTube to communicate research findings

A frequent, and fair, criticism of academic research is that it is often inaccessible, either because of the way in which it is written, or because it is locked away behind paywalls. As a result, alternative formats, such as academic blogs, social media and podcasts, have become increasingly popular, but they are still far from mainstream and are sometimes viewed with scepticism.

Maria Jesus Inostroza, a PhD candidate from the University of Sheffield, recently showed me yet another original and creative way to share her research: an animated YouTube video. Maria has been using Complexity Theory to understand the challenges faced by ELT professionals in Chile, and in the video that follows she talks about her PhD research.

I thought it was quite interesting, and it certainly motivated me to learn more about her research. (And, to be perfectly honest, I also felt slightly envious that my digital skills are not quite as sophisticated as hers. There, I’ve said it!). So, what do you think about this way of reaching out? Can it be used to disseminate research findings in more detail, or is it best used as an attention-getting technique? Can, and should, universities encourage researchers to experiment more with such alternative formats? Are there any possible implications for the ways in which junior researchers are perceived?


Featured Image by Patrick Breitenbach @ flickr, CC-BY-2.0 https://www.flickr.com/photos/29205886@N08/2743534799/

Call for Papers: NATESOL 2015

The Northern Association of Teachers of English to Speakers of Other Languages (NATESOL) is a Manchester-based teachers’ association that has been supporting the professional development of English language teachers since 1984. They hold several events throughout the year, including an annual conference. This year’s conference will be held on Saturday 20th June 2015 at Salford City College, and if you happen to be around the North West of England at the time, I very strongly recommend attending.

The theme of this year’s conference is ‘Continuous and continuing: Professional Development through Teaching, Learning and Research’. To quote the conference organisers:

Professional development should be at the heart of every teacher’s practice, but it can have many different stimuli. Teaching itself can be a developmental process if we experiment with new ideas and reflect carefully on the outcomes. Sharing teaching ideas with our colleagues and others can also help us reshape our thinking. Secondly, our own learning experiences can be extremely fruitful, whether they are on formal courses specifically related to language teaching, at shorter sessions or conferences such as those offered by teaching associations like NATESOL, or through being learners ourselves, of another language or indeed of any subject. Finally, we can develop our knowledge and skills through research. If that is conducted in the classroom, the experiences of our own learners can be catalysts for our development. If it is through larger research projects, we can make a more wide-ranging contribution to the development of professional policy and practice.

Papers are invited on any aspect of professional or teacher development. Prospective presenters are asked to complete a proposal form, and return it to Mike Beaumont (michael.beaumont@zen.co.uk), copied to Keith Gould (k.gould@salford.ac.uk). The deadline for proposals is Friday 15 May 2015.


Featured image by Dungodung (Own work) [CC-BY-SA-3.0], via Wikimedia Commons

Recently read: Sexist Peer Review; Non-replicable Science; and Flexible Job Searches

In case you missed them, here are some interesting academic news items and posts from last week. Topics covered in this post include: (a) Can you get away with sexist remarks in peer review? (b) Is it really a problem if no-one can reproduce your research findings? and (c) What does it take to find a first academic job?

Is peer review sexist? Well, sometimes…

Articles sent to academic journals for publication are usually reviewed by academic experts, whose job it is to decide whether the article makes a useful contribution to the literature, and to suggest how it may be improved. However, not all recommendations are particularly helpful, as Fiona Ingleby found out last week. Dr Ingleby, who is a post-doctoral researcher at the University of Sussex, co-authored an article discussing gender effects on the professional development of scientists in her field. One of the reviewers of the article was, apparently, not impressed, and suggested that:

It would probably also be beneficial to find one or two male biologists to work with (or at least obtain internal peer review from, but better yet as active co-authors), in order to serve as a possible check against interpretations that may sometimes be drifting too far away from empirical evidence into ideologically based assumptions.

Retraction Watch has more details on the story, including the information that the journal will no longer be calling on the expertise of the reviewer in question. Writing on the same topic, Neuroskeptic raises the question of editorial responsibility, remarking that:

Yes, this peer reviewer, whoever they are, wrote a terrible review. But they didn’t send this review to Ingleby and [her co-author] Head. Reviewers don’t communicate with authors directly. The reviewer sent it to the editor who was handling the paper, and then he or she sent it to the authors. Quite simply the editor should have refused to accept this review, and should not have passed it on, commissioning another reviewer if necessary to make up the numbers.

It is unclear to me why the reviewer had knowledge of the authors’ identities. Commonly, reviews like this are conducted under a double-blind system, meaning that manuscripts are given to reviewers anonymously, and reviewers are not required to sign their reviews.

In a formal apology issued by Damian Pattinson, the Editorial Director of PLOS ONE, the journal in question, it is suggested that the solution to such problems is increased transparency in reviewing:

[W]e are working on new features to make the review process more open and transparent, since evidence suggests that review is more constructive and civil when the reviewers’ identities are known to the authors.

Whether that solves the problem of bias, in addition to the problem of civility, is something that remains to be seen.

More to read: Erik Schneiderhan, writing in the Chronicle of Higher Education, notes that traditional peer review tends to be unnecessarily mean. A balanced discussion of the question of anonymity can be found in this article by David Pontille and Didier Torny, which also looks into the history and variants of the peer review system.

Replicating research findings

Ideally, when a study is published in an academic journal, the methods and results should be described with enough clarity to allow other scientists to replicate the findings. However, it is one of the dirty little secrets of science that not all the work published in the literature passes the replicability test. This is not always due to fraud, poor research design or unclear writing: some findings are unique to their context, and that is perfectly fine. On other occasions, however, especially in controlled experimental studies, poor replicability is harder to explain.

According to a report recently published in Nature, a study that attempted to replicate the findings in 100 published psychology papers only managed to do so in 39 cases. Here’s an extract:

The results should convince everyone that psychology has a replicability problem, says Hal Pashler, a cognitive psychologist at the University of California, San Diego, and an author of one of the papers whose findings were successfully repeated.  “A lot of working scientists assume that if it’s published, it’s right,” he says. “This makes it hard to dismiss that there are still a lot of false positives in the literature.”

There are, I think, two main implications, if the findings of the study are taken at face value. The first is that it is increasingly hard to be confident about any finding that has only been reported in a single paper. This is perhaps worth remembering when engaging with reports in the press about the latest “ground-breaking” discovery.

The second implication is that many research findings appear to be bound to a specific context, and that subtle changes in the parameters of the study might alter the findings entirely. Here’s an example, from the same report:

One non-replicate was an Israeli study about factors that promote personal reconciliation. The original study posed a scenario involving a colleague who had to miss work because of maternity leave or military duty. But vignettes prepared for a replication study in the United States involved a wedding and a honeymoon.

To me at least, this seems to suggest a need to move away from unconvincing and misleading claims to generalisability, and to focus instead on understanding the particularities of specific contexts.

More to read: It has been suggested that the problem of replicability is symptomatic of a broader crisis in science. Perhaps disturbingly, belief in certain findings seems to persist even after repeated failed attempts to replicate them, argue Kimmo Eriksson and Brent Simpson.

Finding your first job

Changing the topic – here’s a timely article I came across while procrastinating instead of preparing my application portfolio. Nick Hopwood, whose blog is a treasure trove of sound advice on research and academic life, reminds prospective applicants of the need to be flexible as they look for their first academic job:

It’s probably safest to assume the following: the academic job closely related to your doctoral work almost certainly doesn’t exist, at least not on your continent, and if it does, someone else will probably get it anyway. […] It will help if you are flexible. […] You might have to be ready to move away from where you did your doctoral studies – geographically. Not only are some funders very keen on post-doctoral movement between institutions, some are rather wary of a narrowness that might result from staying put for too long. […] You might have to be ready to move away from your topic. The number of jobs with the title you’d give your ideal postdoc is probably zero. The number of jobs you could do is considerably larger. […] You might have to be ready to teach almost anything. Including stuff you really don’t know.

And on that note, I think it’s time I went back to preparing my job applications. After this latest post of mine, I think it’s a safe bet that I needn’t bother sending one to the University of Athens…


Image Credit: The Leaf Project @ Flickr | CC BY-SA 2.0 
