Designing better questionnaires: Writing more effective questions (2)

In my previous blog post about questionnaire construction, I wrote about the pitfalls of using long and complicated items, double-barrelled questions, and negatively phrased items. In this second instalment in the series, I shall discuss three more ways to make questionnaire items more effective.

Image by Nick Piggott (http://www.flickr.com/photos/nickpiggott/2508490437/), shared under a CC Attribution-NonCommercial-NoDerivs 2.0 Generic license

Avoid bias

Shortly before the 2012 national elections in Greece, there were reports in the press of a survey (carried out by a senior university professor, no less), which included questions such as the following:

If one of the major contending parties, which have been responsible for the decline of your standard of living, promises a better future, will you trust them and vote for them again?

It should be obvious that such questions privilege a certain answer, which is great if you are doing a survey to provide a veneer of statistical credibility to your pre-conceptions. However, if you want to be able to project findings from your sample to a population, then the questions you use must be as neutral as possible.

Apart from using loaded language, as in the example above, some other kinds of bias-inducing items, which you should be on the lookout for, include:

  • Leading questions, such as “Would you be in favour of a new external evaluation procedure for teachers, as a means for countering widespread underperformance?” (from an internal survey conducted by the Greek Ministry of Education in October 2012)
  • Prestige questions, such as “Have you ever read about ‘language learning strategies’?” (found in a questionnaire addressed to language teachers). Such questions trigger ‘social desirability bias’: this means that many respondents answer in ways that make them appear in a positive light, regardless of factual accuracy.

If an item is potentially problematic for any of the reasons above, you should either consider withdrawing it from the survey, or re-phrase it in such a way as to minimize bias.

Use closed-response items intelligently

Closed-response items, in which respondents have to choose one of the options provided by the researcher, are often used in questionnaire surveys because they facilitate coding and analysis. However, unless carefully constructed, such items run the risk of containing non-exhaustive or overlapping response options.

A non-exhaustive list of categories is one in which the options provided by the researcher do not cover all the potential responses, and therefore restricts the diversity of data that the survey generates. Non-exhaustive lists are problematic not just because they result in a loss of analytical detail, but also because they can lead to misleading results.

Here’s an example: among other data, schools in Greece ask students what languages they prefer to study as part of their Modern Foreign Languages provision. Typically, students are asked to choose between French and German (in addition to English, which is taught by default). These data are used in curriculum planning, and courses in French and German are created on the strength of this information. However, when in April 2013 my colleagues and I added an open field to the form (“Other language: please explain…”), we found that French ranked in actual popularity behind Spanish, Italian, Albanian and Chinese – the implication being that the French language courses offered in our schools may not be a good match with many learners’ needs. For the record, we were strongly encouraged to use the standard, closed-response form in future surveys.

Another possible problem with poorly-designed closed response items is category overlap. The following is an example I have come across far too often:

What is your age (circle):  20-30,  30-40,   40-50,  50-60,  60+

Respondents at the borders of the categories (e.g., 40-year-olds) would fall into two categories at once (in practice, I suspect most of us would opt for the lower one!). Sometimes, category overlap is less obvious: J. D. Brown (2001, p. 48) reports on a questionnaire in which educators were asked to indicate what percentage of their time they spent on various activities, including ‘classroom teaching’ and ‘teacher training’. As it turned out, at least some of the respondents happened to be university lecturers involved in teacher education, for whom the two categories were identical in meaning, and this seems to have caused some confusion.

There are two strategies one might employ to avoid these types of problems. The first is to pilot the questionnaire extensively (asking a friend or family member to go over it is a good start, but rarely enough to spot all the potential problems). The second is, unless your categories are logically exhaustive, always to include an “other” option, where unanticipated responses can be recorded.

Avoid irrelevant questions

A hallmark of poorly-constructed questionnaires is that they tend to include large numbers of items that are not relevant to all individual respondents. In one extreme case, a questionnaire addressed to English Language teachers in Greek public schools contained a full page of questions on the listening component of a newly introduced coursebook: the teachers were asked to comment on the quality of the recordings, the relevance of the texts, the density, relevance and teachability of the vocabulary, and they were invited to provide suggestions for the improvement of the listening component. What the researcher did not know, however, was that the listening materials had never been produced due to funding cuts, and she rather unfortunately became the target of more than a few vitriolic remarks, in which respondents vented their frustration against the policy planners. In addition to wasting the respondents’ time, irrelevant questions can damage rapport and undermine one’s credibility as a researcher.

One way to avoid asking irrelevant questions is to use branching, which directs respondents to the questions that are more relevant to them. Here’s an example:

5. Have you used the European Language Portfolio in class?   YES  NO
(if you have answered NO, please proceed directly to Question 9)

Branching instructions should be concise and clear, and, in my experience, it helps to use formatting options such as bolding, larger fonts and the like to attract the attention of respondents. It is also good practice to organise the questionnaire in a linear way, so as to avoid backtracking. Used judiciously, branching can significantly reduce questionnaire completion time, and it also creates the reassuring impression that the researcher has a good understanding of how complicated the topic under research can be.

Another strategy is to use parallel versions of the questionnaire, which are directed to different respondent profiles. In a recent study, my colleagues and I were interested in finding information about the after-school clubs organised at a certain school. We felt that it would be useful to compare the perspectives of participating pupils and those of their parents, so we needed to address both groups of respondents. However, we were also aware that parents would not have direct experience of the actual activities in the club, and that pupils would be unable to comment on the quality of parent-school communication, both of which were important to our research. Rather than use branching, which we felt would be too complicated, we designed two questionnaires (one addressed to parents and one to pupils), which had overlapping sections for comparison, as well as more focused sections specifically addressed to each group of respondents. When using parallel versions of a similar questionnaire, I have found it helpful to clearly identify the questionnaire version on every sheet of paper (notice the “M” on the top-right corner of this document), or to use coloured sheets of paper in order to readily identify versions.

~

This post concludes my comments on item construction. For more advice on writing effective questionnaire items, I recommend consulting the following resources:

  • Brown, J. D. (2001). Using surveys in language programs. Cambridge: Cambridge University Press. (pp. 44-55)
  • Cohen, L., Manion, L. & Morrison, K. (2007). Research methods in education (6th edn.). New York: Routledge. (pp. 334-336)
  • Dörnyei, Z. (2007). Research methods in applied linguistics: Quantitative, qualitative, and mixed methodologies. Oxford: Oxford University Press. (pp. 102-109)

The next post in this series will focus on problems and solutions regarding the demographics section of questionnaires. In subsequent posts, I shall discuss how to use scales effectively and how to make best use of questionnaire layout. Till next time!
