Achilleas Kostoulas


Writing effective questionnaire items in language education research (Part 2)

This post contains practical advice that will help you formulate and sequence the items in your questionnaire. It is mainly addressed to students working in language education, but much of the content applies to most quantitative research.


This article is part of a series of blog posts with advice on writing effective questionnaires for language education research. In my previous blog post about questionnaire construction, I wrote about the pitfalls of using long and complicated items, double-barreled questions and negatively phrased items. This second installment in the series discusses three more ways to make questionnaire items more effective.

I have written what follows primarily for language teachers and students in applied linguistics, TESOL/ELT and similar courses, because this is where my main expertise lies. Most of the advice below is likely useful for other audiences as well, but if you are doing supervised research, e.g., as part of a course, you should always confirm things with your tutors.


Avoid bias in questionnaire items

A common problem with questionnaire items is that they may introduce bias into the findings. That is, they can influence participants to respond in ways that do not reflect their true opinions. This usually happens unintentionally, but sometimes dishonest researchers use bias deliberately to get the results they want.

An example of bias in questionnaire design

Here’s an example: Shortly before the 2012 national elections in Greece, there were reports in the press of a survey,1 which included questions such as the following:

If one of the major contending parties, which have been responsible for the decline of your standard of living, promises a better future, will you trust them and vote for them again?

It should be obvious that such questions encourage a certain answer, which is great if you are doing a survey to provide a veneer of statistical credibility to your pre-conceptions. However, if you want to be able to project findings from your sample to a population, then the questions you use must be as neutral as possible.

Other types of bias

Apart from loaded language, as in the example above, some other kinds of bias-inducing items to be on the lookout for include:

  • Leading questions, such as “Would you be in favour of a new external evaluation procedure for teachers, as a means for countering widespread underperformance?” (from an internal survey conducted by the Greek Ministry of Education in October 2012)
  • Prestige questions, such as “Have you ever read about ‘language learning strategies’?” (found in a questionnaire addressed to language teachers). Such questions trigger ‘social desirability bias’: this means that many respondents answer in ways that make them appear in a positive light, regardless of factual accuracy.

If an item is potentially problematic for any of the reasons above, you should either consider withdrawing it from the survey, or re-phrase it in such a way as to minimize bias.


Use closed-response items intelligently

Questionnaire surveys often use closed-response items, in which respondents have to choose one of the options provided by the researcher, because this makes coding and analysis easier. However, unless they are carefully constructed, such items run the risk of containing non-exhaustive or overlapping response options.

Avoid non-exhaustive lists

A non-exhaustive list of categories is one in which the options provided by the researcher do not cover all the potential responses, which restricts the diversity of the data that the survey generates.

[Image: example of a non-exhaustive list of responses]

Non-exhaustive lists are problematic for at least two reasons. At a minimum, they result in a lack of analytical detail. More importantly, they can produce misleading results. Here’s an example: among other data, schools in Greece ask students which languages they would prefer to study as part of their Modern Foreign Languages provision. Typically, students have to choose between French and German (in addition to English, which is obligatory). However, in April 2013 my colleagues and I added an open field to the form (“Other language: please explain…”), and we found that French was actually a much less popular option than Spanish, Italian, Albanian and Chinese.2

Avoid category overlap

Another possible problem with poorly-designed closed response items is category overlap. The following is an example I have come across far too often:

What is your age:  20-30, 30-40, 40-50, 50-60, 60+

Faced with these options, respondents at the borders of the categories (e.g., 40-year-olds) fall into two categories at once (in practice, I suspect most of us would opt for the lower one!).
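For readers building online questionnaires, the fix is straightforward: make the bands mutually exclusive and exhaustive. Here is a minimal Python sketch of a corrected version of the age item above (the band labels and function name are my own illustration, not part of any survey platform):

```python
# Non-overlapping, exhaustive age bands: every age maps to exactly one option.
AGE_BANDS = [(20, 29), (30, 39), (40, 49), (50, 59)]  # plus open-ended "60+"

def age_band(age):
    """Return the label of the single band that contains `age`."""
    for lo, hi in AGE_BANDS:
        if lo <= age <= hi:
            return f"{lo}-{hi}"
    return "60+" if age >= 60 else "under 20"

# A 40-year-old now falls into exactly one category.
print(age_band(40))  # 40-49
```

The same principle applies on paper: phrase the options as 20-29, 30-39, and so on, so that no respondent can belong to more than one category.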

Sometimes, category overlap can be less obvious. James Brown (2001: 48) tells a story about a questionnaire which asked educators what percentage of their time they spent in various activities, including ‘classroom teaching’ and ‘teacher training’. This created a lot of confusion, because, as it turned out, some respondents happened to be university lecturers involved in teacher education, and for them the two categories were identical in meaning.

How to avoid such problems

There are two strategies one might employ to avoid these types of problems.

  • One is to pilot the questionnaire extensively. Asking a friend or family member to go over it is a good start. However, it is rarely enough to spot all the potential problems, so make sure you find some additional respondents to help.
  • The other is to include an “other” option, unless your categories are logically exhaustive, so that participants can record any unanticipated responses.
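The second strategy can be sketched in a few lines of Python, for anyone scripting an online form. The option labels and helper function below are my own illustration (echoing the languages example above), not a prescribed implementation:

```python
# Closed-response options plus an open "Other" field keep the list exhaustive.
OPTIONS = ["French", "German", "Other (please specify)"]

def record_response(choice, other_text=""):
    """Store a response, capturing unanticipated answers verbatim."""
    if choice == "Other (please specify)":
        # Keep whatever the respondent typed, rather than forcing a category.
        return other_text.strip() or "Other (unspecified)"
    return choice

print(record_response("Other (please specify)", "Spanish"))  # Spanish
```

Note that the open field should be analysed before coding: as the languages example showed, the “other” responses may turn out to be more frequent than some of the pre-set options.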


Avoid irrelevant questions

A hallmark of poorly-constructed questionnaires is that they tend to include large numbers of items that are not relevant to all individual respondents. Apart from wasting the respondents’ time, irrelevant questions can damage rapport and undermine one’s credibility as a researcher. This is not just embarrassing, but can also make participants reluctant to engage with the questionnaire.

In one extreme case, a questionnaire addressed to English language teachers in Greek public schools contained a full page of questions on the listening component of a newly introduced coursebook. In the questionnaire, teachers had to comment on the quality of the recordings, the relevance of the texts, the density, relevance and teachability of the vocabulary, and they were invited to provide suggestions for the improvement of the listening component. What the researcher did not know, however, was that the listening materials had never been produced due to funding cuts. Rather unfortunately, this made her the target of more than a few vitriolic remarks in the open-ended questionnaire responses.

Use branching

One way to avoid asking irrelevant questions is to use branching. This is a questionnaire design technique that directs respondents to the questions that are more relevant to them. Here’s an example:

5. Have you used the European Language Portfolio in class?   YES  NO
(if you have answered NO, please proceed directly to Question 9)

Branching instructions should be concise and clear. In my experience, it also helps to use formatting options (e.g., bold, larger fonts) to attract the respondents’ attention. It is also good practice to organise the questionnaire in a linear way, so as to avoid backtracking. Used judiciously, branching can significantly reduce questionnaire completion time, and it also creates the reassuring impression that the researcher has a good understanding of how complicated the topic can be.
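In an online survey, branching is usually implemented as skip logic: each closed-response answer can name the next question to display. A minimal sketch of this idea, assuming a simple question-by-question flow (the question ids mirror the example above, but the data structure and wording of question 9 are my own illustration):

```python
# Skip logic: each answer may point to the next question to show.
QUESTIONS = {
    5: {"text": "Have you used the European Language Portfolio in class?",
        "next": {"YES": 6, "NO": 9}},  # "NO" skips questions 6-8
    6: {"text": "How often have you used it?", "next": {}},
    9: {"text": "Which other resources do you use in class?", "next": {}},
}

def next_question(current, answer):
    """Return the id of the next question; default to the one that follows."""
    branches = QUESTIONS[current]["next"]
    return branches.get(answer, current + 1)

print(next_question(5, "NO"))   # 9
print(next_question(5, "YES"))  # 6
```

Most survey platforms offer this kind of conditional routing through their settings, so no actual coding is needed; the sketch simply makes the underlying logic explicit.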

Use parallel questionnaire versions

Another strategy is to prepare parallel versions of the questionnaire, which is helpful when respondents have different profiles.

In a recent study, my colleagues and I wanted to find information about the after-school clubs organised at a certain school. To do this, we thought it would be useful to compare the perspectives of participating pupils and those of their parents, so we needed to address both groups of respondents. However, we were also aware that parents would not have direct experience of the actual activities in the club, and that pupils would be unable to comment on the quality of parent-school communication, both of which were important to our research. Because branching proved to be too complicated, we designed two questionnaires: one for parents and one for pupils. These had overlapping sections for comparison, as well as more focused sections specifically for each group of respondents.3


Some final words

If you arrived here while preparing for a student project, I wish you good luck with your work. You may also want to use the social sharing buttons at the end of the post to forward this content to other students who might find it useful. If you have any other questions that I might be able to answer, feel free to ask by posting a comment or using this form.


Notes

  1. There used to be a link here, which has since become defunct. So much for scripta manent… ↩︎
  2. The local school authorities, to whom I was accountable at the time, took due note of these findings, and we were strongly advised to use the standard, closed-response form in future surveys. ↩︎
  3. An earlier version of the post also included the following instruction: “When using parallel versions of a similar questionnaire, I have found it helpful to clearly identify the questionnaire version on every sheet of paper (notice the “M” on the top-right corner of this document), or to use coloured sheets of paper in order to readily identify versions.” As internet-facilitated surveying has become the norm now, this suggestion seems somewhat quaint. ↩︎

More to read

I hope that you found the advice in this post helpful. You might also want to take a look at the following posts on questionnaire design.

How to use Likert scales effectively

Many questionnaires use Likert items & scales to elicit information about language teaching and learning. In this post, I discuss how to use these instruments effectively, by looking into the difference between items and scales, and explaining how to analyse the data that they produce.

For more advice on writing effective questionnaire items, I also recommend consulting the following resources:

  • Brown, J. D. (2001). Using surveys in language programs. Cambridge University Press. (pp. 44-55)
  • Cohen, L., Manion, L. & Morrison, K. (2007). Research methods in education (6th edn.). Routledge. (pp. 334-336)
  • Dörnyei, Z. (2007). Research methods in applied linguistics: Quantitative, qualitative, and mixed methodologies. Oxford University Press. (pp. 102-109)

About me

Achilleas Kostoulas is an applied linguist and language teacher educator at the University of Thessaly, Greece. He holds a PhD and an MA in TESOL from the University of Manchester, UK, and a BA in English Studies from the University of Athens, Greece. He has extensive experience teaching research methods courses and supervising student research projects, and has published widely on TESOL and language education research.

About this post

This post was originally written in January 2012. It has since been revised, most recently in February 2026 (copyediting). The content of the post does not represent the views of my present or past employers. The featured image is used with license from Adobe Stock.
