[Image: Student consulting a dictionary and writing in a notebook]

Recently read: Sexist Peer Review; Non-replicable Science; and Flexible Job Searches

In case you missed them, here are some interesting academic news items and posts from last week. Topics I cover in this post include: (a) Can you get away with sexist remarks in peer review? (b) Is it really a problem if no one can reproduce your research findings? And (c) what does it take to find a first academic job?

Is peer review sexist? Well, sometimes…

Articles sent to academic journals for publication are usually reviewed by academic experts, whose job it is to decide whether the article makes a useful contribution to the literature, and to suggest how it might be improved. However, not all recommendations are particularly helpful, as Fiona Ingleby found out last week. Dr Ingleby, who is a post-doctoral researcher at the University of Sussex, co-authored an article discussing gender effects on the professional development of scientists in her field. One of the reviewers of the article was, apparently, not impressed, and suggested that:

It would probably also be beneficial to find one or two male biologists to work with (or at least obtain internal peer review from, but better yet as active co-authors), in order to serve as a possible check against interpretations that may sometimes be drifting too far away from empirical evidence into ideologically based assumptions.

Retraction Watch has more details on the story, including the information that the journal will no longer request the expertise of the reviewer in question. Writing on the same topic, Neuroskeptic raises the question of editorial responsibility, and remarks that:

Yes, this peer reviewer, whoever they are, wrote a terrible review. But they didn’t send this review to Ingleby and [her co-author] Head. Reviewers don’t communicate with authors directly. The reviewer sent it to the editor who was handling the paper, and then he or she sent it to the authors. Quite simply the editor should have refused to accept this review, and should not have passed it on, commissioning another reviewer if necessary to make up the numbers.

It is unclear to me why the reviewer had knowledge of the authors' identities. Commonly, reviews like this are conducted under a double-blind system, meaning that manuscripts are given to reviewers with the authors' names withheld, and reviewers, in turn, are not required to sign their reviews.

In a formal apology issued by Damian Pattinson, the Editorial Director of PLOS ONE, the journal in question, it is suggested that the solution to such problems is increased transparency in reviewing:

[W]e are working on new features to make the review process more open and transparent, since evidence suggests that review is more constructive and civil when the reviewers’ identities are known to the authors.

Whether that solves the problem of bias, in addition to the problem of civility, is something that remains to be seen.

More to read: Erik Schneiderhan, writing in the Chronicle of Higher Education, notes that traditional peer review tends to be unnecessarily mean. A balanced discussion of the question of anonymity can be found in this article by David Pontille and Didier Torny, which also looks into the history and variants of the peer review system.

Replicating research findings

Ideally, when a study is published in an academic journal, the methods and results should be described with such clarity as to allow other scientists to replicate the results. However, it is one of the dirty little secrets of science that not all the works published in the literature pass the replicability test. This is not always due to fraud, poor research design or unclear writing: some findings are unique to their context, and that is perfectly fine. On other occasions, however, especially in controlled experimental studies, poor replicability is harder to explain.

According to a report recently published in Nature, a study that attempted to replicate the findings in 100 published psychology papers only managed to do so in 39 cases. Here’s an extract:

The results should convince everyone that psychology has a replicability problem, says Hal Pashler, a cognitive psychologist at the University of California, San Diego, and an author of one of the papers whose findings were successfully repeated.  “A lot of working scientists assume that if it’s published, it’s right,” he says. “This makes it hard to dismiss that there are still a lot of false positives in the literature.”

There are, I think, two main implications, if the findings of the study are to be taken at face value. The first is that it is hard to be confident about any finding that has only been reported in a single paper. This is worth remembering when engaging with reports in the press about the latest "ground-breaking" discovery.

The second implication is that many research findings appear to be bound to a specific context, and that subtle changes in the parameters of the study might alter the findings entirely. Here’s an example, from the same report:

One non-replicate was an Israeli study about factors that promote personal reconciliation. The original study posed a scenario involving a colleague who had to miss work because of maternity leave or military duty. But vignettes prepared for a replication study in the United States involved a wedding and a honeymoon.

To me at least, this seems to suggest a need to move away from unconvincing and misleading claims to generalisability, and to focus instead on understanding the particularities of specific contexts.

More to read: It has been suggested that the problem of replicability is symptomatic of a broader crisis in science. Perhaps disturbingly, belief in certain findings seems to persist even after repeated failed attempts to replicate them, argue Kimmo Eriksson and Brent Simpson.

Finding your first job

Changing the topic – here's a timely article I came across as I was procrastinating instead of preparing my application portfolio. Nick Hopwood, whose blog is a treasure trove of sound advice on research and academic life, reminds prospective job applicants of the need to be flexible as they look for their first academic job:

It’s probably safest to assume the following: the academic job closely related to your doctoral work almost certainly doesn’t exist, at least not on your continent, and if it does, someone else will probably get it anyway. […] It will help if you are flexible. […] You might have to be ready to move away from where you did your doctoral studies – geographically. Not only are some funders very keen on post-doctoral movement between institutions, some are rather wary of a narrowness that might result from staying put for too long. […] You might have to be ready to move away from your topic. The number of jobs with the title you’d give your ideal postdoc is probably zero. The number of jobs you could do is considerably larger. […] You might have to be ready to teach almost anything. Including stuff you really don’t know.

And on that note, I think it’s time I went back to preparing my job applications. After this latest post of mine, I think it’s a safe bet that I needn’t bother sending one to the University of Athens…

Image Credit: The Leaf Project @ Flickr | CC BY-SA 2.0 
