Writing effective questionnaire items in language education research (Part 2)

In my previous blog post about questionnaire construction, I wrote about the pitfalls of using long and complicated items, double-barrelled questions and negatively phrased items. In this second instalment of the series, I shall discuss three more ways to make questionnaire items more effective. In this post, we will look into:

  1. How to avoid biased questions
  2. How to use closed-response items intelligently
  3. How to avoid irrelevant questions

Avoid bias

Shortly before the 2012 national elections in Greece, there were reports in the press of a survey (carried out by a senior university professor, no less), which included questions such as the following:

If one of the major contending parties, which have been responsible for the decline of your standard of living, promises a better future, will you trust them and vote for them again?

It should be obvious that such questions encourage a certain answer, which is great if you are doing a survey to provide a veneer of statistical credibility to your preconceptions. However, if you want to be able to project findings from your sample to a population, then the questions you use must be as neutral as possible.

Apart from using loaded language, as in the example above, some other kinds of bias-inducing items, which you should be on the lookout for, include:

  • Leading questions, such as “Would you be in favour of a new external evaluation procedure for teachers, as a means for countering widespread underperformance?” (from an internal survey conducted by the Greek Ministry of Education in October 2012)
  • Prestige questions, such as “Have you ever read about ‘language learning strategies’?” (found in a questionnaire addressed to language teachers). Such questions trigger ‘social desirability bias’: this means that many respondents answer in ways that make them appear in a positive light, regardless of factual accuracy.

If an item is potentially problematic for any of the reasons above, you should either consider withdrawing it from the survey, or re-phrase it in such a way as to minimize bias.


Use closed-response items intelligently

Closed-response items, in which respondents must choose one of the options provided by the researcher, are often used in questionnaire surveys because they facilitate coding and analysis. However, unless carefully constructed, such items run the risk of containing non-exhaustive or overlapping response options.

Non-exhaustive lists

A non-exhaustive list of categories is one in which the options provided by the researcher do not cover all the potential responses; such a list restricts the diversity of data that the survey generates.

Non-exhaustive lists are problematic for at least two reasons. At minimum, they result in a lack of analytical detail. More importantly, they might lead to misleading results. Here’s an example: among other data, schools in Greece ask students what languages they prefer to study as part of their Modern Foreign Languages provision. Typically, students are asked to choose between French and German (in addition to English, which is taught by default). However, in April 2013 my colleagues and I added an open field to the form (“Other language: please explain…”), and we found that French was actually a much less popular option than Spanish, Italian, Albanian and Chinese. These data were noted by the local school authorities, and we were strongly advised to use the standard, closed-response form in future surveys.

Category overlap

Another possible problem with poorly designed closed-response items is category overlap. The following is an example I have come across far too often:

What is your age (circle):  20-30,  30-40,   40-50,  50-60,  60+

Faced with these options, respondents at the category boundaries (e.g., 40-year-olds) fall into two categories at once (in practice, I suspect most of us would opt for the lower one!).
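The fix is to make the boundaries explicit, so that every age falls into exactly one band and no age is left without one. As a quick illustration (a Python sketch; the band labels and the `age_band` helper are my own, not from any survey tool):

```python
def age_band(age: int) -> str:
    """Assign an age to exactly one band.

    The bands do not share boundary values, so ages such as 30 or 40
    fall into exactly one category, and '60+' catches everything above.
    """
    bands = [(20, 29), (30, 39), (40, 49), (50, 59)]
    for low, high in bands:
        if low <= age <= high:
            return f"{low}-{high}"
    if age >= 60:
        return "60+"
    return "under 20"  # exhaustive: no respondent is left without a category

# A 40-year-old now has exactly one option:
print(age_band(40))  # 40-49
```

The same principle applies on paper: write the options as "20-29, 30-39, 40-49, 50-59, 60+" rather than "20-30, 30-40, …".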

Sometimes, category overlap is less obvious: James Brown (2001: 48) tells a story about a questionnaire where educators were asked what percentage of their time they spent in various activities, including ‘classroom teaching’ and ‘teacher training’. This created a lot of confusion, because as it turned out, some of the respondents happened to be university lecturers involved in teacher education, and for them the two categories were identical in meaning.

How to avoid such problems

There are two strategies one might employ to avoid these types of problems. The first is to pilot the questionnaire extensively (asking a friend or family member to go over it is a good start, but it is rarely enough to spot all the potential problems). The second is to always include an “other” option, where unanticipated responses can be recorded, unless your categories are logically exhaustive.


Avoid irrelevant questions

A hallmark of poorly-constructed questionnaires is that they tend to include large numbers of items that are not relevant to all individual respondents. In addition to wasting the respondents’ time, irrelevant questions can damage rapport and undermine one’s credibility as a researcher, and this can make participants reluctant to engage with the questionnaire.

In one extreme case, a questionnaire addressed to English Language teachers in Greek public schools contained a full page of questions on the listening component of a newly introduced coursebook: the teachers were asked to comment on the quality of the recordings, the relevance of the texts, the density, relevance and teachability of the vocabulary, and they were invited to provide suggestions for the improvement of the listening component. What the researcher did not know, however, was that the listening materials had never been produced due to funding cuts, and she rather unfortunately became the target of more than a few vitriolic remarks, in which respondents vented their frustration against the policy planners.

Branching

One way to avoid asking irrelevant questions is to use branching, which directs respondents to the questions that are more relevant to them. Here’s an example:

5. Have you used the European Language Portfolio in class?   YES  NO
(if you have answered NO, please proceed directly to Question 9)

Branching instructions should be concise and clear, and —in my experience— it helps to use formatting options such as bolding, larger fonts and the like to attract the respondents’ attention. It is also good practice to organise the questionnaire in a linear way, so as to avoid backtracking. Used judiciously, branching can significantly reduce questionnaire completion time, and it also creates the reassuring impression that the researcher has a good understanding of how complicated the topic can be.
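In an online questionnaire, the same branching instruction becomes a skip rule that the software applies automatically. Here is a minimal sketch in Python (the question numbers mirror the example above, but the data structure is my own invention, not the format of any particular survey platform):

```python
# Each question maps an answer to the next question to display.
# Answers without a rule simply continue to the next question in order.
skip_rules = {
    5: {"NO": 9},  # Q5: respondents who have not used the Portfolio skip to Q9
}

def next_question(current: int, answer: str) -> int:
    """Return the number of the next question to display."""
    rule = skip_rules.get(current, {})
    return rule.get(answer, current + 1)

print(next_question(5, "NO"))   # 9: skips the irrelevant Questions 6-8
print(next_question(5, "YES"))  # 6: proceeds linearly
```

Keeping all the skip rules in one table like this also makes it easy to check that the branching never sends respondents backwards, which is the digital equivalent of organising a paper questionnaire linearly.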

Parallel versions

Another strategy is to use parallel versions of the questionnaire, which are directed to different respondent profiles.

In a recent study, my colleagues and I were interested in finding information about the after-school clubs organised at a certain school. We felt that it would be useful to compare the perspectives of participating pupils and those of their parents, so we needed to address both groups of respondents. However, we were also aware that parents would not have direct experience of the actual activities in the club, and that pupils would be unable to comment on the quality of parent-school communication, both of which were important to our research. Rather than use branching, which we felt would be too complicated, we designed two questionnaires (one addressed to parents and one to pupils), which had overlapping sections for comparison, as well as more focused sections specifically addressed to each group of respondents.

When using parallel versions of a similar questionnaire, I have found it helpful to clearly identify the questionnaire version on every sheet of paper (e.g., with a letter in the top-right corner), or to use coloured sheets of paper in order to readily identify versions.

More to read

For more advice on writing effective questionnaire items, I recommend consulting the following resources:

  • Brown, J. D. (2001). Using surveys in language programs. Cambridge: Cambridge University Press. (pp. 44-55)
  • Cohen, L., Manion, L. & Morrison, K. (2007). Research methods in education (6th edn.). New York: Routledge. (pp. 334-336)
  • Dörnyei, Z. (2007). Research methods in applied linguistics: Quantitative, qualitative, and mixed methodologies. Oxford: Oxford University Press. (pp. 102-109)

I hope that you found the advice in this post helpful. If you arrived here while preparing for a student project, I wish you good luck with your work, and do share this post with other students who might find it useful. If you have any other questions that I might be able to answer, feel free to ask by posting a comment.
