Category Archives: Questionnaires Design

Designing better questionnaires: Questionnaire layout

In this final instalment of my five-post series on questionnaires, I’d like to share some tips on questionnaire production. I shall assume that you have already written out your questions, and I’ll therefore turn my attention to how you can put this content together into a coherent research instrument.

Incidentally, there is lots of excellent advice in the research methods literature on how to format a questionnaire (see, for instance, Chapter 20 in Cohen, Manion and Morrison’s Research Methods in Education, or Dörnyei’s Questionnaires in Second Language Research). So, in this post I shall try to avoid duplicating what is already commonplace; rather, I shall selectively focus on a few tips and tricks which I found particularly useful – some of which I had to learn the hard way.

Source: (c) Nick Piggott, CC BY-NC-ND


When it comes to answering your research questions, it is tempting to assume that more data is always better. In my experience, however, any questionnaire that takes more than 30 minutes to answer will probably generate respondent fatigue, and should thus be avoided. Taking up more of the respondents’ time is also ethically questionable, unless respondents are invested in the topic, or you have provided a suitable incentive or compensation.

You can get an estimate of how many questions can be answered within the 30-minute timeframe by piloting the questionnaire. As a rule of thumb, 4-6 pages is a reasonable length for a questionnaire that mostly consists of closed-response factual questions. Of course, you will need to adjust this number, depending on the type of questions you ask, and factors such as the respondents’ age, reading comprehension skills etc.

If you need more data than can be accommodated in a 4-6 page questionnaire, it is a very bad idea to try to cram in more items by using smaller fonts or margins. Such tricks will only make the questionnaire less reader-friendly, and respondents will likely be discouraged from completing it. Rather, you might have to consider administering a second survey to a different sample with similar demographic features.

Using booklets

Most questionnaires I’ve come across consist of several sheets of paper, stapled together at the top left corner, which is – I will concede – a fairly sensible design. Personally, I prefer creating A5-sized booklets, which look less intimidating, and offer a workable compromise between compactness and readability. You can easily produce such booklets by printing two pages side-by-side on each A4 sheet of paper, which you can then fold across the middle (Figure 1).

Questionnaire booklet
Figure 1. Example of a questionnaire booklet

I usually aim for an eight-page questionnaire (i.e., two sheets of A4, printed front and back). In addition to six pages of questions, which is my preferred length, this format allows for a front and back cover, which shield responses from prying eyes when the completed questionnaires are collected.

Your printer is likely to have a ‘print booklet’ setting, which you will find after clicking on Print>Properties and looking around. If not, you will have to zoom manually (Figure 2), and to change the order in which your pages are printed, so that they appear correctly when you fold the printed sheets. For an eight-page booklet printed on two sheets of paper (front and back), the correct order is: 8 (back cover), 1 (front cover), 2, 7; 6, 3, 4, 5.

Screenshot showing how to select zooming
Figure 2. Zooming in manually

For a more elaborate twelve-page booklet, the correct printing order is: 12 (back cover), 1 (front cover), 2, 11; 10, 3, 4, 9; 8, 5, 6, 7. If you use this format, consider using pages 2 and 11 for recording consent and demographic information respectively. The first sheet of paper can then be detached from the substantive parts of the questionnaire, which you can process separately.
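The printing orders above follow a simple pattern: the outermost remaining pair of pages goes on the front of each sheet, and the next pair inward goes on the back. If you ever need the order for a longer booklet, it can be generated mechanically. Here is a minimal sketch in Python (the function name is my own, purely for illustration):

```python
def booklet_order(n_pages):
    """Page order for printing an n-page fold-in-the-middle booklet.

    Pages are printed two-up, front and back, on each sheet of paper,
    so n_pages must be a multiple of 4.
    """
    if n_pages % 4 != 0:
        raise ValueError("a folded booklet needs a multiple of 4 pages")
    order = []
    lo, hi = 1, n_pages
    while lo < hi:
        order.extend([hi, lo])          # front of sheet: outermost pair
        order.extend([lo + 1, hi - 1])  # back of sheet: next pair inward
        lo += 2
        hi -= 2
    return order

print(booklet_order(8))   # [8, 1, 2, 7, 6, 3, 4, 5]
print(booklet_order(12))  # [12, 1, 2, 11, 10, 3, 4, 9, 8, 5, 6, 7]
```

Each group of four numbers corresponds to one sheet of paper (two pages on the front, two on the back).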


Moving to the actual content of the questionnaire, here are some more things to bear in mind:

Consent: it’s always a good idea to include a consent-affirming question in the questionnaire. While this may sound excessive (consent is implicit when one chooses to complete a questionnaire), I believe that it helps to establish trust between respondents and the researcher.

Consent Affirming Question
Figure 3. Here’s an example of how to re-affirm consent.

Instructions: Preface each questionnaire section with simple instructions. Many respondents are sophisticated enough to answer your questions on their own, but I am constantly surprised at how creatively some respondents re-imagine the question format. That said, try to keep instructions brief: long instructions take up valuable real estate in the questionnaire, and readers may be tempted to skip them if they look too complicated.

Numbering: It may seem intuitive to number your questionnaire items sequentially: DON’T. Rather, re-start numbering the items in every new section. This will help reinforce the impression that the questionnaire is not too long. However, you may find it helpful to have a master index for your own reference, where each questionnaire item is linked to your analytical codes.

Questionnaire index
Figure 4. Extract from a questionnaire index

Final production

My final set of tips is about the actual production of the questionnaire. If you work on several different computers and word-processors, you may find that each processes files in subtly different ways. This can mean that they will surreptitiously change the margins or fonts of your document, thus wreaking havoc with your carefully formatted questionnaire. Here are some tips that can help you maintain control over the format of your document.

Only format at the very end: During production, my documents are plain text, with occasional metacomments (e.g., ‘insert an image here’). I have found that by formatting the document after I have finalized the text, not only do I work more efficiently, but also the formatting is more consistent. Once I am happy with the overall look of the document, I save it as a .pdf file which prevents further modification.

Avoid uncommon fonts: I readily admit that I find fonts such as Arial and Times New Roman visually unappealing (and the less said about Comic Sans the better!), but there are two good reasons for using them. First, many respondents might find unusual fonts, such as the ones shown in Figure 5, distracting or hard to read. Secondly, less common fonts might not be available on the computer from which you print your final document, and the replacement font could radically change the distribution of text across the document at the very last moment.

Questionnaire section with unusual fonts
Figure 5. Best avoid this!

If you find that unimaginative typography puts you off too much, you may want to embed the fonts you used in the document, but bear in mind that this will substantially increase the size of the file.


This post concludes the series of posts on questionnaire design. Previous posts, which you may find useful, include:

Designing Better Questionnaires: Using scales

This is the fourth in a series of five introductory blog posts on questionnaire design. In previous posts, I talked about question wording, possible bias, responses and sequencing of questions, and I discussed the demographics section of questionnaires. In this post, I will look into how Likert items, Likert scales, and forced choice can be used to elicit information from respondents.

"A volunteer fills out a questionnaire"
“A volunteer fills out a questionnaire” (c) Plings [CC-attribution license]

What are Likert items?

Likert items consist of a statement followed by a number of responses that are bivalent and symmetrical. These are often anchored to numerical descriptors, in the form of consecutive integers. Here’s an example:

Example 1
Matt Smith was the best Doctor to date.
1=Strongly Agree, 2=Agree, 3=Not sure, 4=Disagree, 5=Strongly Disagree

In the example above, the scale extends towards ‘agreement’ and ‘disagreement’, i.e., it is bivalent, and responses are symmetrically arranged around a neutral value (‘not sure’). Note that I have used ‘item’, rather than ‘scale’, for reasons that will become clearer later on.

Some statisticians argue that responses should be ‘evenly’ spaced. This makes intuitive sense, and it is a necessary assumption for running a number of useful statistical tests. However, to do this you would need to accept that attitudes can be measured with precision, that such precision can be linguistically mapped, and that all respondents will interpret the descriptors in a similar way. In my opinion, all this is just wishful thinking, so rather than worry about spacing responses evenly, I suggest that we treat the information these scales produce as ordinal data, i.e., adjust our research methods to reality, rather than vice versa.

Likert items work best in groups

Like most quantitative methods, Likert items can efficiently generate lots of data; on the other hand, they are very sensitive to the wording of the statements in the questionnaire. To illustrate by means of a classic example: there’s research going back to the 1940s showing that support for free speech in the US is much higher when the questions contain the word ‘forbid’ rather than ‘not allow’, even though the words are logical opposites. That is, respondents seem to be against ‘forbidding’ free speech, but are not as strongly opposed to ‘not allowing’ some forms of expression (a phenomenon known as the ‘forbid/allow asymmetry’). To moderate the effect of item wording, it is best to use several variants of the same item in a questionnaire, and derive a composite score from the responses. Here’s one way to do this:

Example 2
4. I enjoy science fiction shows.
1=Strongly Agree, 2=Agree, 3=Disagree, 4=Strongly Disagree
12. I watch science fiction shows whenever I can.
1=Strongly Agree, 2=Agree, 3=Disagree, 4=Strongly Disagree
16. I am a science fiction fan.
1=Strongly Agree, 2=Agree, 3=Disagree, 4=Strongly Disagree
17. I dislike science fiction.
1=Strongly Agree, 2=Agree, 3=Disagree, 4=Strongly Disagree

A cluster of such related items, which probe the same underlying construct, produces a Likert scale. The items that make up a Likert scale, by the way, don’t need to be clustered together. In fact, it may be advantageous to spread them out across the questionnaire, so that their sequencing does not influence responses.

To derive the score of a Likert scale, you would need to (a) reverse any negative items, (b) remove from the scale any item that systematically generates different responses from the others, and (c) calculate the central tendency of the responses each participant provided. If you assumed that the options were equidistant, you might calculate the mean, but I strongly suggest using the median instead. Using the example above, let us assume that a participant responded with 1, 2, 1, 4. After reversing item 17, which is negatively worded, the median of the responses is 1.
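The scoring procedure can be summarised in a few lines of code. The sketch below assumes a four-point scale like the one in Example 2; the function name and input format are my own illustrations, not taken from any statistics package:

```python
from statistics import median

def likert_score(responses, reverse_items, scale_max=4):
    """Median score for one participant on a Likert scale.

    responses:     dict mapping item number -> raw response (1..scale_max)
    reverse_items: numbers of negatively worded items, to be flipped
    """
    adjusted = []
    for item, value in responses.items():
        if item in reverse_items:
            value = scale_max + 1 - value  # on a 1-4 scale, 4 becomes 1
        adjusted.append(value)
    # The median makes no assumption that the options are equidistant
    return median(adjusted)

# The worked example above: items 4, 12, 16 and the reversed item 17
print(likert_score({4: 1, 12: 2, 16: 1, 17: 4}, reverse_items={17}))  # prints 1.0
```

Note that items which systematically diverge from the rest of the scale (step (b) above) should be removed before this calculation, based on an inspection of the full data set.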

Using forced choice

Most commonly, Likert items contain five (or seven) options which are arranged around a neutral response such as ‘neither agree nor disagree’. This beautifully symmetric format can give rise to the ‘central tendency bias’, which is what happens when participants systematically select the uncontroversial middle option. Whether this is due to respondent fatigue, or constitutes a deliberate strategy to avoid expressing an opinion, such responses give very few usable insights, so we need to discourage them.

One of the simplest ways to counteract the central tendency bias is to use scales with an even number of responses. In Example 2, above, I used forced-choice (or ipsative) items: these are items from which the ‘neutral’ option has been removed, thus forcing participants to either agree or disagree with the statement. Forced-choice items with a small number of responses are very effective in eliciting attitudes that participants might otherwise feel inclined to suppress.

Further reading

In this blog, I have written extensively about Likert scaling. Some relevant posts are:

Other online resources which you may wish to consult are listed below:


In the next, final, post to this series, I will discuss ways to make your questionnaire layout more effective. Till then!

Designing better questionnaires: Demographic data

Continuing the series of posts on how to write more effective questionnaires (previous posts: 1, 2), I would now like us to tackle the often-problematic ‘demographic data’ section. The demographics section is where respondents are asked to provide details about their age, the make-up of their family, their parents’ jobs and income, and so on.

What struck me, when I was vetting such questionnaires, was that although most student researchers could speak intelligently about all the items in the surveys they had designed, they seemed unable to provide a rationale for the demographics section. All too often, I was told that this section was there ‘because all questionnaires must have it’. So in this post, I want to go into the purposes that the demographics section serves and how to best use it.

"A volunteer fills out a questionnaire"
“A volunteer fills out a questionnaire” (c) Plings [CC-attribution license]

What is the demographics section for?

There are two primary reasons why a researcher might want to include demographic questions in a questionnaire survey: (a) because demographic data help to answer the research questions, and (b) because they help to describe the sample.

In the first case, there is a clear way in which the information about the participants informs the findings: for example, a researcher might be interested in finding out how family income or parental educational attainment impacts learning; or in finding the relation between teacher qualifications and teaching style. When such relations between the data collected and the expected findings are explicit, that’s great. What is less great is a tendency to collect any data that might conceivably be relevant, in the hope that an effect might show up in analysis: such an approach is very often a waste of respondents’ and the researchers’ time.

In the second case, the use of demographic data is ancillary: researchers collect and report data about their sample, so that readers might be able to account for similarities and differences across studies. In such cases, where researchers are interested in a summary description rather than individual responses, it makes sense to check whether such data is already available. For instance, it might not be necessary or efficient to ask every participant about their family income, if you already know that the school’s catchment area is a middle-class neighbourhood.

Such data are surprisingly easy to find: In our school, for example, summary information about the demographics of the students (age groups, family size, parental education, and family income) was available to researchers upon request. Useful information can also be found in census reports, commercial databases such as ACORN, or, perhaps, from the local education authorities. Published research which explicitly describes the demographics of geographical areas may also be found in the literature. In addition to saving time, using information from such sources ensures that the data are easier to compare across studies. So, in short, before embarking on data collection, just ask!

What’s wrong with collecting demographic data from scratch?

Including a demographics section in a questionnaire survey is associated with three potential problems: It risks alienating the respondents, it generates respondent fatigue, and it creates possible liabilities.

When personal questions are included in a survey, especially at the beginning of a questionnaire, they risk unduly alienating or alarming respondents. Even when the usual reassurances about confidentiality and anonymity are provided, some respondents may be reluctant to share information that they consider sensitive, or information through which they feel that they might be identified. In my experience, information about family income is considered sensitive by many Greeks, and students often avoid answering such questions. Other respondents may be uncomfortable sharing information about their family status, religious affiliation, languages spoken at home, etc. Asking such questions has a way of creating distrust, and should be handled tactfully.

Secondly, it seems like a bad idea to waste the respondents’ time, energy and good will, by making them fill out long forms with information that may not be strictly necessary. Long demographics sections cause respondent fatigue, which means that respondents might either quit the questionnaire before it is completed, or engage with the last sections in a very superficial way (e.g., by selecting the same answer in all items).

Finally, demographics sections risk making respondents more self-aware, or even identifying them. This is especially true in small-scale surveys, such as the ones typical of student research. To offer a personal example, I was the only male MFL specialist in the last school where I was employed, so when handed questionnaires to complete, awareness that I was identifiable influenced which questions I answered and how I answered them. In addition, it has been my experience that many student researchers seem blithely unaware of the responsibilities involved in collecting personal and sensitive data, and they usually lack the experience and resources to comply with legal requirements for processing them.

How can a demographics section be improved upon?

From the paragraphs above, it should be clear that you had best avoid collecting information that you don’t strictly need. When it comes to collecting information that is indeed necessary, here are some tips:

  • Avoid embarrassing the respondents: For example, some respondents may feel uncomfortable placing themselves in the highest age-group, or the lowest educational attainment group. You can easily avoid such problems by adding more possible responses to your questions: For instance, I often recommended that questions asking for the respondents’ age include the options ‘51-60’ and ‘over 60’, rather than just ‘over 50’, so as to avoid putting respondents on the spot.
  • Allow respondents to opt out: Respondents should always be given the option of not answering any or all of the questions in the demographics section. At minimum, a ‘prefer not to say’ option should be included in every item. You may also want to include a statement reaffirming consent at the top of the demographics section. Here’s a possible format: “This section asks questions about you. This information is necessary [insert brief justification]. The data you share with us will not be used to personally identify you, and will not be passed on to anyone else. If you prefer not to answer these questions, tick the following box …”
  • Place the demographics section at the end of the questionnaire. This will help to minimize the effects of respondent fatigue (see above).
  • If you need the demographic information for descriptive purposes only, i.e., if you do not plan to analyse it in conjunction with other questions in the survey, consider placing the demographics section on a separate page. This can then be detached from the rest of the questionnaire and analysed independently from the other data, as an additional anonymity safeguard.

In summary, demographic sections in questionnaires should be designed on a strict ‘need-to-know’ basis; alternative sources of data must be considered before personal or sensitive data are collected; and their format and sequencing needs to be such that it does not impact other sections of the questionnaire.


This is the third in a series of five blog posts on designing more efficient questionnaire surveys. Previous posts looked into the wording, bias, structure and sequencing of questionnaire items; in the posts that follow I will describe how scale items can be used to elicit information and give some tips on overall questionnaire layout. Till next time!

Designing better questionnaires: Writing more effective questions (2)

In my previous blog post about questionnaire construction, I wrote about the pitfalls of using long and complicated items, double-barrelled questions and negatively phrased items. In this second instalment to the series, I shall discuss three more ways to make questionnaire items more effective.

(c) Nick Piggott, CC Attribution-NonCommercial-NoDerivs license

Avoid bias

Shortly before the 2012 national elections in Greece, there were reports in the press of a survey (carried out by a senior university professor, no less), which included questions such as the following:

If one of the major contending parties, which have been responsible for the decline of your standard of living, promises a better future, will you trust them and vote for them again?

It should be obvious that such questions privilege a certain answer, which is great if you are doing a survey to provide a veneer of statistical credibility to your pre-conceptions. However, if you want to be able to project findings from your sample to a population, then the questions you use must be as neutral as possible.

Apart from using loaded language, as in the example above, some other kinds of bias-inducing items, which you should be on the lookout for, include:

  • Leading questions, such as  “Would you be in favour of a new external evaluation procedure for teachers, as a means for countering widespread underperformance?” (from an internal survey conducted by the Greek Ministry of Education in October 2012)
  • Prestige questions, such as “Have you ever read about ‘language learning strategies’?” (found in a questionnaire addressed to language teachers). Such questions trigger ‘social desirability bias’: this means that many respondents answer in ways that make them appear in a positive light, regardless of factual accuracy.

If an item is potentially problematic for any of the reasons above, you should either consider withdrawing it from the survey, or re-phrase it in such a way as to minimize bias.

Use closed-response items intelligently

Closed-response items, in which respondents have to choose one of the options provided by the researcher, are often used in questionnaire surveys, because they facilitate coding and analysis. However, unless carefully constructed, such items run a risk of containing non-exhaustive or overlapping responses.

A non-exhaustive list of categories is one in which the options provided by the researcher do not cover all the potential responses, and which therefore restricts the diversity of data that the survey generates. Non-exhaustive lists are problematic, not just because they result in a loss of analytical detail, but also because they can lead to misleading results. Here’s an example: among other data, schools in Greece ask students what languages they prefer to study as part of their Modern Foreign Languages provision. Typically, students are asked to choose between French and German (in addition to English, which is taught by default). These data are used in curriculum planning, and courses in French and German are created on the strength of this information. However, when in April 2013 my colleagues and I added an open field to the form (“Other language: please explain…”), we found that French ranked in actual popularity behind Spanish, Italian, Albanian and Chinese – the implication being that the French language courses offered in our schools may not be a good match with many learners’ needs. For the record, we were strongly encouraged to use the standard, closed-response form in future surveys.

Another possible problem with poorly-designed closed response items is category overlap. The following is an example I have come across far too often:

What is your age (circle):  20-30,  30-40,   40-50,  50-60,  60+

Respondents at the borders of the categories (e.g., 40-year-olds) would have to select two categories (In practice, I think most of us would opt for the lower one!). Sometimes, category overlap is less obvious: James Brown reports on a questionnaire where educators were asked to indicate what percentage of their time they spent in various activities, including ‘classroom teaching’ and ‘teacher training’. As it turned out, at least some of the respondents happened to be university lecturers involved in teacher education, for whom the two categories were identical in meaning, and this seems to have caused some confusion (2001, p. 48).
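Overlap of this kind is easy to prevent if you generate the category labels systematically rather than writing them by hand. A small illustrative sketch (the function name is my own):

```python
def age_bands(start, stop, width):
    """Non-overlapping age-band labels, closed at both ends."""
    bands = [f"{lo}-{lo + width - 1}" for lo in range(start, stop, width)]
    bands.append(f"{stop}+")  # open-ended top category
    return bands

print(age_bands(20, 60, 10))  # ['20-29', '30-39', '40-49', '50-59', '60+']
```

Because each band ends one year before the next begins, a 40-year-old falls unambiguously into ‘40-49’.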

There are two strategies one might employ to avoid these types of problems: One is to pilot the questionnaire extensively (asking a friend or family member to go over it is a good start, but rarely enough to spot all the potential problems). Secondly, unless your categories are logically exhaustive, you should always include an “other” option, where unanticipated responses can be recorded.

Avoid irrelevant questions

A hallmark of poorly-constructed questionnaires is that they tend to include large numbers of items that are not relevant to all individual respondents. In one extreme case, a questionnaire addressed to English Language teachers in Greek public schools contained a full page of questions on the listening component of a newly introduced coursebook: the teachers were asked to comment on the quality of the recordings, the relevance of the texts, the density, relevance and teachability of the vocabulary, and they were invited to provide suggestions for the improvement of the listening component. What the researcher did not know, however, was that the listening materials had never been produced due to funding cuts, and she rather unfortunately became the target of more than a few vitriolic remarks, in which respondents vented their frustration against the policy planners. In addition to wasting the respondents’ time, irrelevant questions can damage rapport and undermine one’s credibility as a researcher.

One way to avoid asking irrelevant questions is to use branching, which directs respondents to the questions that are more relevant to them. Here’s an example:

5. Have you used the European Language Portfolio in class?   YES  NO
(if you have answered NO, please proceed directly to Question 9)

Branching instructions should be concise and clear, and, in my experience, it helps to use formatting options such as bolding, larger fonts and the like to attract the attention of respondents. It is also good practice to organise the questionnaire in a linear way, so as to avoid backtracking. Used judiciously, branching can significantly reduce questionnaire completion time, and it also creates the reassuring impression that the researcher has a good understanding of how complicated the topic under research can be.

Another strategy is to use parallel versions of the questionnaire, which are directed to different respondent profiles. In a recent study, my colleagues and I were interested in finding information about the after-school clubs organised at a certain school. We felt that it would be useful to compare the perspectives of participating pupils and those of their parents, so we needed to address both groups of respondents. However, we were also aware that parents would not have direct experience of the actual activities in the club, and that pupils would be unable to comment on the quality of parent-school communication, both of which were important to our research. Rather than use branching, which we felt would be too complicated, we designed two questionnaires (one addressed to parents and one to pupils), which had overlapping sections for comparison, as well as more focused sections specifically addressed to each group of respondents. When using parallel versions of a similar questionnaire, I have found it helpful to clearly identify the questionnaire version on every sheet of paper (notice the “M” on the top-right corner of this document), or to use coloured sheets of paper in order to readily identify versions.


This post concludes my comments on item construction. For more advice on writing effective questionnaire items, I recommend consulting the following resources:

  • Brown, J. D. (2001). Using surveys in language programs. Cambridge: Cambridge University Press. (pp. 44-55)
  • Cohen, L., Manion, L. & Morrison, K. (2007) Research methods in education (6th edn.). New York: Routledge. (pp. 334-336)
  • Dörnyei, Z. (2007). Research methods in applied linguistics: Quantitative, qualitative, and mixed methodologies. Oxford: Oxford University Press. (pp. 102-109)

The next post in this series will focus on problems and solutions regarding the demographics section of questionnaires. In subsequent posts, I shall discuss how to use scales effectively and how to make best use of questionnaire layout. Till next time!

Designing better questionnaires: Writing more effective questions (1)

In my previous job assignment in an ‘experimental’ school affiliated to a university, part of my job involved liaising with university students who wanted to conduct research at the school, and vetting their research protocols. As one might expect from students who were keen but inexperienced, many of the research instruments they used were somewhat inexpertly designed. Even so, in most cases major improvements were possible with only minor changes.

This is the first in a series of five blog posts in which I want to share a few pointers that may be of help in avoiding some of the most common mistakes I came across. In writing this post, and the ones that will follow, I shall make some assumptions about you: I assume that you are a trainee researcher, such as an undergraduate working on an assignment, or maybe a professional in a field where small-scale research can be used to inform practice (e.g., a teacher doing an action research project). I also assume that you are working in the fields of education or linguistics, because this is where my expertise lies, although you may find that much of what I have to say can be applied equally well elsewhere. If the above applies to you, shall we read on?

Fragebogen zur Wikipedia - YOU Berlin 2008 (6566)

We shall begin by looking into three tips that can make questions (or, to use a slightly more technical term, questionnaire items) more effective.

Keep the questions simple!

It is important that questionnaire items are kept short and straightforward: By making sure that questions can be read and answered easily, you may include more items in the questionnaire, and respondents are less likely to give up before completing the survey. Zoltán Dörnyei helpfully suggests 20 words as a maximum (2007: 108), but you may need to adjust this rule of thumb depending on your respondents’ reading skills and the language in which the questionnaire is written.

It is often easy to lose sight of this rule: sometimes we write overly long sentences when trying too hard to be clear; or we might use academic or technical language to establish our credibility as researchers. I have been guilty of this myself: in a draft questionnaire addressed to 9-12 year old students, I included questions such as:

  • The number of students in your class is so large that it disrupts learning activities (i.e., it is difficult for the teacher to carry out the lesson, and it prevents you from focusing on your tasks).      YES    NO

On review, the item was shortened to “There are too many children in your class”.

In addition to long, complex structures, you should try to avoid jargon, acronyms and abbreviations, and other technical terms with which readers will be unfamiliar. You should also bear in mind that your respondents’ reading skills may be less sophisticated than yours, as was the case with the draft questionnaire I mentioned above.

Avoid double-barrelled questions

A double-barrelled question is an item which actually contains more than one question. The following question, from a survey an MA student designed, is a good example of what I mean:

  • In science lessons, learners should spend more time on experiments and project work than on memorizing theory.
    Strongly Agree, Agree, No Opinion, Disagree, Strongly Disagree

By parsing this item, we find that it consists of at least two propositions: (a) learners should do more experiments and (b) learners should do more project work (a third, implicit, proposition is “learners should spend less time memorising theory”). Double-barrelled questions are difficult to answer, and the responses are even harder to interpret. In the example above, ‘disagree’ could mean that the respondent believes that memorising theory is effective, or that they prefer experiments but not projects, or maybe they’d rather do more projects but not experiments. Essentially, there is no way of using these data. To avoid double-barrelled questions, you should scrutinise your questionnaire and check every instance where words such as ‘and’ or ‘or’ appear – these are good indicators of potential problems.

Questionnaire extract
How might you re-write these questions?

Avoid negative constructions

Negatively phrased questionnaire items are problematic for two reasons: First, many respondents fail to notice the negative words; secondly, expressing disagreement with a negative proposition is too complicated. Consider how confusing the following questionnaire item is:

  • Test scores are not the only criterion you use for student evaluation.   YES    NO

Note that a negative meaning can be expressed syntactically (as was the case above) and semantically (e.g., “Teaching pronunciation is uninteresting”). You will want to go over your questionnaire carefully and check whether negative statements can be re-phrased in a more straightforward way: “Students do not make noise”, for instance, can be changed to “Students are quiet”. If there is no way to avoid a negative construction, you will likely need to format the negative words with bold or underlining to make them more visible.


In later posts, I shall discuss some more tips about writing questionnaire items; I’ll look into ways to improve the demographics section of questionnaires; I’ll explain how to use scale items effectively; and I shall talk about questionnaire layout. Till next time!