I have found it moderately bemusing that my post on Likert scales has received so much attention, considering that I had only intended to mention statistics in passing, in order to make what I perceived to be a more broadly relevant remark.

Because my authorial intention was not to educate readers in statistics, the views I expressed in that post do not reflect the full extent of scholarship on Likert scaling, and were stated in more absolute terms than I would otherwise have used.

I do stand by my basic premise: a ranking scale consisting of five verbal descriptors (e.g., strongly disagree, disagree, undecided, agree, strongly agree) can only produce ordinal data. The same applies when the descriptors are replaced by numerical codes, or any other kind of shorthand. But what about composite scales, which synthesise many such items? And what about ranking scales that are anchored only at the two ends of the continuum? There are many instruments in the research methods toolkit, and they all have slightly different uses.
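To make the premise concrete, here is a small sketch (the labels follow the example above, but the responses are invented for illustration). The point is that the numeric codes are shorthand for ranked labels, so a rank-based summary such as the median is defensible, while the mean quietly assumes that the interval between adjacent codes is always the same:

```python
# A single five-point Likert item yields ordinal data: the codes 1..5 are
# shorthand for ranked labels, not measurements on an equal-interval scale.
from statistics import median, mean

labels = ["strongly disagree", "disagree", "undecided", "agree", "strongly agree"]
codes = {label: i + 1 for i, label in enumerate(labels)}  # 1..5 shorthand

# Invented responses, purely for illustration
responses = ["agree", "agree", "undecided", "strongly agree", "disagree"]
coded = sorted(codes[r] for r in responses)  # [2, 3, 4, 4, 5]

print("median code:", median(coded))  # rank-based: safe for ordinal data -> 4
print("mean code:", mean(coded))      # assumes equal intervals -> 3.6
```

The median only relies on the ordering of the responses; the mean additionally treats the distance from "disagree" to "undecided" as identical to the distance from "agree" to "strongly agree", which is exactly the assumption at issue.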

I am grateful to Florentina Taylor, who took the time to provide a somewhat different perspective and has kindly permitted me to quote her in this post. Here’s what she had to say:

Hello, Achilleas!

Thank you for a useful and informative post (which I’ve just discovered) – though I disagree with your strong view of Likert scales always eliciting ordinal data.

*The* problem is assuming that the interval between two adjacent response options is always the same. This doesn’t make sense when labelling all the options, as this clearly makes the data ordinal (or nominal). However, if only the first and the last response options are labelled and the respondent is asked for the strength of their reported opinions/feelings (e.g., on a scale of 1-6, where 1=very bad and 6=very good), then the intervals can be assumed to be equal.

I am not hoping to persuade you – I just think it is fair that this alternative point of view is added to this discussion.

Copy-pasting below something I wrote about this a while ago, with some references:

‘There has been some controversy regarding the nature of the data produced by self-reported scales, these being considered a grey area between ordinal and continuous variables (Field, 2009; Kinnear & Gray, 2008). Although attitudes and feelings cannot be measured with the same precision of pure scientific variables, it is generally accepted in the social sciences that self-reported data can be regarded as continuous (interval) and used in parametric statistics (Agresti & Finlay, 1997; Pallant, 2007; Sharma, 1996). […] Blunch (2008, p. 83) maintains that treating self-reported scales as interval/ continuous variables is most realistic if the scales have at least 5 possible values and the variable distribution is “nearly normal”.’

- Agresti, A., & Finlay, B. (1997). Statistical methods for the social sciences (3rd ed.). Upper Saddle River, NJ: Pearson Education.
- Blunch, N. J. (2008). Introduction to structural equation modelling using SPSS and AMOS. London: SAGE.
- Field, A. (2009). Discovering statistics using SPSS (3rd ed.). London: SAGE.
- Kinnear, P. R., & Gray, C. D. (2008). SPSS 15 made simple. Hove: Psychology Press.
- Pallant, J. (2007). SPSS survival manual: A step by step guide to data analysis using SPSS for Windows (3rd ed.). Maidenhead: Open University Press.
- Sharma, S. (1996). Applied multivariate techniques. New York: Wiley.

Image Credit: Michael Kwan @ Flickr [CC BY-NC-ND]

That a practice is generally accepted in the social sciences does not make it correct; that argument suffers from what I call the “Brooklyn Bridge” fallacy: the fact that everyone else does something does not make that something correct (if everyone else jumped off the Brooklyn Bridge, that would not make it a good idea). I would also suggest that the idea of a normal distribution on a 5-option item seems rather silly, for lack of a better term.

Hi! Thanks for adding your thoughts to this discussion!