Every now and then I tend to get questions about statistics from readers of this blog — this is due to a somewhat ill-deserved reputation Google seems to have bestowed on me as an ‘expert’ in Likert scale measurement. Many of the answers you need can be found in this post, and this set of slides, but I am also happy to answer other questions, such as the one below.

## How to analyse Likert scale data

The following (slightly modified) question was posted as a comment here, but I felt that the answer was too lengthy for the comments section.

Our questionnaire is composed of items with a 5 point scale, ranging from “1=strongly disagree” to “5=strongly agree”. For example, we are trying to find out if the respondents agree with [a topic]. The number of respondents who ‘strongly disagree’ are 2, those who ‘disagree’ are 9, those who ‘are undecided’ are 24, those who ‘agree’ are 18 and those who ‘strongly agree’ are 7. How do I interpret this data?

There are two types of statistical analysis, descriptive and inferential statistics. If you want to find out what respondents believe about a topic, you need to do descriptive statistics. This involves, for example, finding the central tendency (what most respondents believe) and the spread / dispersion of the responses (how strongly respondents agree with each other).

Because Likert scales produce what are called ordinal data, I suggest that you calculate the **median** and **Inter-Quartile Range** (IQR) of each item. The median (i.e., the number found exactly in the middle of the distribution) is a measure of central tendency: very roughly speaking, it shows what the ‘average’ respondent might think, or the ‘likeliest’ response. The IQR is a measure of spread: it shows whether the responses are clustered together or scattered across the range of possible responses.

You can find some instructions on how to calculate these metrics with SPSS on this page (the procedure is the same for both). If you only have access to Excel, here are links to a couple of videos demonstrating how to calculate the median and the IQR. For small datasets, such as the one that prompted this question, it is easy to calculate the median and IQR manually. In the next two sections, I shall show how this can be done, using the example data. If you don’t want to read these, you can skip to the bottom, for some advice about how to report the findings.

### Calculating the median

First, you arrange the numbers in order, from smallest to largest, like this:

1,1,2,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,5,5,5,5,5,5,5

To compute the median, you then delete one number from each end of the line, and repeat until you are left with just one number (or two that are the same). This ‘middle’ number is your median. If you are left with two *different* numbers in the end, the median is half-way between them. This will produce a decimal (e.g., 2.5), which might seem odd, but that’s ok. Using the data you provided, the median is 3 (the two middle values, in positions 30 and 31 of the 60 responses, are both 3).
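If you prefer to let a computer do the counting, the same calculation can be sketched in a few lines of Python (the counts below are the ones from the example question):

```python
from statistics import median

# Reconstruct the 60 individual responses from the reported counts
# (1 = strongly disagree ... 5 = strongly agree)
counts = {1: 2, 2: 9, 3: 24, 4: 18, 5: 7}
responses = [value for value, n in counts.items() for _ in range(n)]

print(median(responses))  # 3.0
```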

### Calculating the IQR

The IQR is slightly more complicated, but not too hard. Your starting point will be the same arrangement of responses that we used above. When you divide this line into four equal parts, the ‘cut-off’ points are called **quartiles**. I have used brackets to mark out the four quarters in the dataset below.

[1,1,2,2,2,2,2,2,2,2,2,3,3,3,3] [3,3,3,3,3,3,3,3,3,3,3,3,3,3,3] [3,3,3,3,3,4,4,4,4,4,4,4,4,4,4] [4,4,4,4,4,4,4,4,5,5,5,5,5,5,5]

The IQR is the difference between the first and third quartile. In the example, this is: Q3 – Q1 = 4 – 3 = 1.
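For larger datasets, the quartiles can also be worked out programmatically. Here is a minimal sketch in Python, using the example data; note that software packages use slightly different conventions for locating quartiles, so their output may occasionally differ from the manual method by a small amount:

```python
from statistics import quantiles

# Reconstruct and sort the 60 example responses
counts = {1: 2, 2: 9, 3: 24, 4: 18, 5: 7}
responses = sorted(v for v, n in counts.items() for _ in range(n))

q1, q2, q3 = quantiles(responses, n=4)  # the three quartile cut-off points
print(q1, q3, q3 - q1)  # 3.0 4.0 1.0
```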

A relatively small IQR, as was the case above, is an indication of consensus. By contrast, larger IQRs might suggest that opinion is polarised, i.e., that respondents tend to hold strong opinions either for or against this topic.

### Reporting the findings

When your findings suggest consensus, your write-up should focus on describing the median (i.e., what most respondents seem to believe). One way to describe this is by writing something like: *“most respondents indicated agreement with the idea that… (Mdn=4, IQR=0)”*.

By contrast, when opinion is polarised, your write-up should emphasise the dissonance of opinion: the median is perhaps not so important. To help you understand this, consider a hypothetical case where half of your respondents hate a new textbook, and half love it. If you were to simply report that the respondents are, on average, undecided, that would be a statistical distortion of the data. Here’s a possible way to report the data more accurately: “*Opinion seems to be divided with regard to… . Many respondents (N=28, 47%) expressed strong disagreement or disagreement, but a roughly equal number (N=26, 43%) indicated that they agreed or strongly agreed (Mdn=3, IQR=3).*“
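The counts and percentages in a write-up like the one above can be worked out mechanically. A quick sketch, using made-up counts similar to the hypothetical polarised example (the labels and numbers are illustrative only):

```python
# Made-up counts for a polarised sample (illustrative only)
counts = {
    "disagree / strongly disagree": 28,
    "undecided": 6,
    "agree / strongly agree": 26,
}
total = sum(counts.values())  # 60 respondents

for label, n in counts.items():
    print(f"{label}: N={n} ({n / total:.0%})")
```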

### A final caveat

One last thing: I would caution you against placing too much faith in findings that were generated from a single Likert-type item. If at all possible, I’d try to cluster similar items together and compare / merge their results. If the findings are broadly consistent, that gives us confidence in them. If they are not, it might mean that one of the items did not function properly (e.g., respondents may have been confused by the wording), and you may have to discard it from the dataset.

### More to read

*I hope that this information was helpful, but if there’s anything that was not clear, feel free to drop a line in the comments below. You may also want to check out some more posts I have written on quantitative research, including:*

- On Likert scales, levels of measurement and nuanced understandings
- Designing better questionnaires: Using scales
- Four things you probably didn’t know about Likert scales

*If you arrived at this page while preparing for one of your student projects, I wish you all the best with your work. There’s a range of social sharing buttons below, in case you feel like sharing this information among fellow students who might also find it useful. Also feel free to ask any other questions you may have, using the contact form.*

Hi!

I’m just wondering how large is “A large IQR would suggest that opinion is polarised”?

Thanks!

I am reluctant to give you a definite “cut-off” point, as that depends a lot on your data. For instance, scales with four possible responses behave very differently from ones with five or nine. The way the items are weighted also makes a difference. For a scale like the one in the example, I’d tentatively say that anything equal to or larger than 2 warrants a close look.

A quick and easy way to make such a judgement is to use a visual display like a bar chart. If the data looks like a U, i.e., with many responses towards the extremes, and few in the middle, this is an indication of polarisation. I realise, incidentally, that my original wording may have been somewhat misleading. I think that “polarisation” is best viewed in terms of degree rather than in absolute terms. That is to say, your data might indicate that opinion is “somewhat” or “extremely” polarised. Reporting the exact IQR helps to show this degree.
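If you don’t have charting software to hand, even a rough text-based frequency display can reveal a U-shaped (polarised) distribution. A toy sketch in Python, using invented responses:

```python
from collections import Counter

# Invented responses on a 5-point scale, chosen to look polarised
responses = [1, 1, 1, 5, 5, 5, 1, 5, 3, 1, 5, 5, 1, 1, 5]

# Print a crude horizontal bar chart of the frequencies
freq = Counter(responses)
for value in range(1, 6):
    print(f"{value}: {'#' * freq[value]}")
```

The heavy bars at 1 and 5, with almost nothing in the middle, are the U-shape described above.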

Hi Mr. Kostoulas.

For the following sample question, would you suggest the median or a chi-square test as a tool for statistical analysis?

How cost-effective is social recruiting to your organization?

Extremely cost effective

Somewhat Cost Effective

Neutral

Costly

Extremely Costly

Number of respondents is 55 (10+15+23+6+1).

Kindly help me.

The procedures you suggest are very different and lend themselves to answering different questions. Which procedure you use will depend on what you want to find out (your research question). Chances are, you may have to do both.

Dear Kostoulas,

I am trying to find which factors are most effective, least effective and not effective for a particular event to happen, from responses gathered through a questionnaire based on ordinal Likert-scale data, where the questionnaire is composed of items with a 5 point scale, ranging from “1=strongly disagree” to “5=strongly agree”. The factors are independent of each other, so there is no relationship between the factors.

Please could you advise how to interpret whether one factor is more effective than another.

Thanks in anticipation.

Regards,

Somesh

I’m afraid it’s difficult for me to understand what you are trying to do without context. Someone more familiar with your research questions and data would probably be more helpful in advising you.

The research aims to find the effectiveness of factors which help in knowledge transfer, where the factors are independent of each other. Each factor has a Likert-scale ordinal question (5 point scale, ranging from “1=strongly disagree” to “5=strongly agree”).

Please could you advise how to interpret the response to rate the factors

Dear Achilleas,

How would you deal with Likert-style (ordinal!) questions which belong to one construct, and with the comparison of medians across groups? For instance, imagine I measure some construct by means of multiple questions, and I want to compare respondents’ answers to constructs across two different kinds of groups.

For instance, let’s say I am measuring the construct ‘happiness’ with the recent decisions made by a political party, and I am measuring happiness by means of 10 questions each.

I interview 20 respondents in total, 10 of which appear to like party A and 10 of which like party B. I want to provide an answer to the question whether followers of party A are happier with their political party’s decisions than the followers of party B are. In other words, I want to know whether there exists a difference in the construct happiness among respondents of two different political parties.

In total I get [20 respondents] * [10 questions per respondent] = 200 answers.

How would you analyse this dataset in terms of medians?

Would it be fine to calculate the median of all the respondents’ answers related to one construct in one political party’s followers sample, say I calculate the median of 10×10=100 values belonging to the construct ‘happiness’ of party A followers (or B, whichever you choose)?

Hi George,

Let me see if I understand your question: You are saying that you have a set of ten questions (I’ll call these variables from now on), and a sample of twenty participants, who are divided into two groups (supporters of different political parties). You have a hunch that the responses given by your participants are different depending on which group they belong to, and you want to test this statistically. Am I getting this right?

One way to do this would be to conduct a cross-tabulation and chi-squared test for each variable. This will show you how participants from each group responded, compare the results against what might be expected if the responses were random, and tell you if the difference is statistically significant. However, the test might be skewed due to your small sample size (if you are using SPSS, it will flag possibly skewed results).

Another way would be to merge the ten variables into one super-variable (‘happiness’), and calculate each participant’s ‘average’ response. This can only work if the ten questions elicit similar responses: e.g., participants who responded with a ‘strongly agree’ in question 1 should, ideally, respond with ‘strongly agree’ in question 2, 3 and so on. To check if this is true, you should calculate Cronbach’s alpha score for these variables. This is a metric ranging from 0 to 1.0, and the higher it is the more homogeneous your composite scale. If the Cronbach alpha is low, then you can try removing one or more of the questions from the composite variable, and see whether it is improved. Again, SPSS will calculate this metric for you, and it will tell you what the alpha would be if you removed any one question. (There are also more sophisticated methods for establishing whether the ten variables ‘cluster’ in one or more groups, but let’s not go into that now).
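For readers without SPSS, Cronbach’s alpha is simple enough to compute by hand. A sketch in Python, using invented scores for three items; the formula is k/(k−1) × (1 − Σ item variances / variance of the totals):

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per question, respondents in the same order."""
    k = len(items)
    totals = [sum(row) for row in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Invented scores for three highly consistent items (six respondents)
q1 = [5, 4, 4, 2, 1, 3]
q2 = [5, 4, 5, 2, 1, 3]
q3 = [4, 4, 4, 1, 1, 2]
print(round(cronbach_alpha([q1, q2, q3]), 2))  # 0.98
```

Because the three invented items elicit very similar responses, the alpha comes out close to 1, which is exactly the situation where merging them into a composite variable makes sense.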

Once you have created the new composite variable, and calculated the central tendency for each respondent, you’d need to compare the responses of the two groups. Although each variable produces ordinal data, it has been argued that the composite variable may have properties of ‘interval’ data. This is controversial, and depending on where you stand on this question you’d work in slightly different ways. If you treat the data as interval, and the distribution of responses is normal, you could use an independent t-test to compare the responses of the two groups. This will tell you the difference between the ‘average’ response in each group, and whether it is statistically significant. If you still treat the data as ordinal, you’d run the Mann-Whitney U-test, which does pretty much the same thing.
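If you end up working in Python rather than SPSS, both comparisons are one-liners in SciPy. A sketch of the ordinal route, with invented composite scores for the two groups (`scipy.stats.ttest_ind` would be the interval-data equivalent):

```python
from scipy.stats import mannwhitneyu

# Invented composite 'happiness' scores for two groups of ten respondents
group_a = [4.2, 3.8, 4.5, 4.0, 3.9, 4.4, 4.1, 3.7, 4.3, 4.0]
group_b = [3.1, 2.8, 3.5, 3.0, 2.9, 3.6, 3.2, 2.7, 3.3, 3.4]

# Mann-Whitney U-test: do the two groups differ in their typical response?
u, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```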

Dear Achilleas,

Thank you for your comprehensive reply.

To answer your first question: you are right about the set of ten questions and the sample of twenty participants, who are divided into two groups (supporters of different political parties). Furthermore, I do have a hunch that the responses given by my participants are different depending on which group they belong to.

However, since the sample size is really small, I think that testing this statistically does not make any sense. For this reason, I am thinking of ‘just’ comparing the medians of the ‘happiness’ constructs.

However, in light of using median values, I do not know how to deal with the fact that the ‘happiness’ construct is measured by means of 10 variables (questions). I forgot to mention that a factor analysis already proved, convincingly, that this set of 10 questions indeed load on one factor.

One way to deal with the fact that ‘happiness’ is measured by means of 10 variables (questions) is, as you mentioned, creating one super-variable (‘happiness’). However, I am reluctant to create this variable because there is an extensive discussion about whether you can create this variable in an ordinal-scale setting.

My solution, therefore, would be to rank the responses (of one group) to all 10 questions, and, subsequently, calculate the median value (out of 10×10=100 values). The next step would be to calculate the median value of the other group’s responses. In the final step, I would compare the median values of both groups (knowing that I cannot say anything about significant differences because I am not applying any statistical tests; the median comparisons would merely serve as an indication that a difference might exist).

What do you think?

In my last post I forgot to thank you in advance, so, hereby, many thanks in advance.

Kind regards,

George

Hi again,

You can certainly just compare the medians of the two groups to find out some insights about your sample. What a statistical test such as the chi-squared would tell you is whether these insights can be projected from the sample to the population. Of course, you are right in pointing out that the modest size of your sample would make such projections difficult.

Another thing to bear in mind is that calculating a median condenses, and to a certain degree distorts, data. So, for instance, if one group was polarised (lots of extreme views) and the respondents in the other group were all clustered around a central value, a comparison of the medians would mask this difference.

So, I guess that what I am trying to say is that what you are suggesting does make sense, but I recommend that you also consider the Mann-Whitney U-test.

I am pleased to get answers to my questions, even though I was not in a position to ask them via this website. One more question I would ask: is there a possibility in SPSS to add sub-variables together to make one super-variable, in order to calculate the degree of agreement expressed by respondents on average?

Yes, you can merge variables in SPSS. Here’s one way to do it: http://www.ehow.com/how_8453917_combine-variables-spss.html

Hey Achilleas! Have a few questions for you on my survey data. My survey uses a Likert Scale for questions that are supposed to determine the commitment level of the respondents based upon hypothetical situations. My survey has the respondents go through a scenario and then asks them something like “after reading the above scenario, you would be ________ to take another position at another company.”

1 Highly unlikely

2 Unlikely

3 Neither

4 Likely

5 Highly Likely

My question is what is the best method to analyze these types of questions. The mean doesn’t tell me much and the median is almost always 3, except for a couple of questions.

I do have an abundance of demographic data, but that is not the main focus of the study. I am a new researcher and to be honest a little bit lost. Thanks!

That’s not very encouraging… The kinds of data analysis procedures you’d use depend on your research questions. Perhaps all there is to say is that, based on the data you were able to collect, in most cases people are undecided as to whether such-and-such a scenario would affect their career plans. If you want to push further, here are two options you might consider:

a) A median of 3 could indicate many different things: it might mean that most respondents selected the middle option, or it could mean that there are equal numbers on either end of the scale. Maybe the InterQuartile Range could give you a clearer insight into what your sample thinks.

b) It may be the case that, while there are no clear patterns in the entire sample, there are differentiations according to population segment. Maybe more experienced respondents are more likely to give a firm answer compared to people at the start of their career… If you are doing exploratory research, maybe you could try doing some cross-tabulations using the demographic criteria, and see if any patterns emerge. As I said before, whether you ‘have to’ do this depends on the scope of your research questions.

Best of luck…

Hi Kostoulas,

One simple question: to verify how young people see social and political participation, using a Likert scale, what kind of analysis do you think will give more reliability: a rating average (X1W1+…+XnWn / TOTAL) or the mode?

I mean, sometimes I think the mode, or the highest concentration of answers, will be clearer, but most sites, like SurveyMonkey, suggest the use of the rating average… And in my research these two show different answers for every variable.

I curse myself every time I remember that I slept in class statistics. LOL

I think the median is your best choice for individual items.

But what do you attribute the difference between the rating average and the mode to? Why do the two methods give different results for the same data?

Each set of data can have up to three different types of ‘average’ (measures of central tendency, to be more technical): the mean, the median and the mode. These are calculated in different ways, and are likely to be different. Sometimes, the data are evenly distributed, and in such a case it happens that the three measures coincide. On other occasions there may be unusual features in the data, such as an outlying value, which result in large differences across the mean, mode and median. It all depends on your data.
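A tiny Python example illustrates how the three measures can drift apart on skewed data (the numbers are invented):

```python
from statistics import mean, median, mode

# Invented, skewed responses: most answers cluster low, a few sit high
data = [1, 1, 1, 1, 2, 2, 3, 3, 4, 5]

print(mean(data), median(data), mode(data))  # 2.3 2.0 1
```

The handful of high responses pulls the mean above the median, while the mode stays at the most common answer.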

Hi,

Doing a survey on community residents in 17 different areas that represent the North, East, West, and Southern portions (4 areas) of my state. These are anonymous surveys asking questions based on a 10-question, 7-point Likert scale. The scale is as follows: Strongly disagree = 1, 2, 3; Neither = 4, 5; and Strongly agree = 6 or 7. I do not know how many people will actually participate or decline the voluntary survey. I had plans of keeping a tally on the total number of people asked and the total number that declined, so I could be aware of how many people in one area were polled. So this is most likely going to be a very large data set.

What I do need to know is what to do once I have all the data and have totalled the responses. Just presenting the results is not enough. In short, what other statistical functions should I perform once I have the results? What types of tests should I perform to compare all 4 areas to each other? And are there any other statistical analyses that need to be done to further substantiate the validity of the findings?

Statistics has never been my strong suit. And I would appreciate any advice. Thank you!

Hi,

The types of procedures to run depend on your research questions, i.e., what it is that you’re trying to find out.

One thing you will want almost certainly want to do is calculate the median and interquartile range for each item. The post above has some advice on how to do this.

You may also want to estimate Cronbach’s alpha to see if the ten questions tend to yield similar responses. If they do, it might make sense to conflate the data from each of the ten questions into a single score.

If you want to compare the results across the four geographical areas, the Kruskal–Wallis test can tell you whether the differences you observe are statistically significant. Alternatively, you might be able to use a chi-square test. This is a useful resource to guide you in selecting the best procedure.
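For what it’s worth, the Kruskal–Wallis test is also available outside SPSS. A sketch in Python with SciPy, using invented scores from the four areas (the variable names and data are illustrative only):

```python
from scipy.stats import kruskal

# Invented item scores from the four geographical areas
north = [5, 6, 4, 7, 5, 6]
east = [3, 4, 2, 3, 4, 3]
west = [5, 5, 6, 4, 6, 5]
south = [4, 3, 4, 5, 3, 4]

# Kruskal-Wallis H-test across the four independent groups
h, p = kruskal(north, east, west, south)
print(f"H = {h:.2f}, p = {p:.4f}")
```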

Best of luck with your project!

Achilleas,

I am just trying to determine if there is a difference in community perception based on the geography of where they reside. Then relate that data to theories that correspond to different government administration practices.

Thanks for the advice. It is much appreciated! I feel less lost!

Hi Sir,

Please help me out with the best way to analyse and interpret the findings of the question:

11. Rate on the scale of 1 -5, your preference/common purposes for mobile phone usage, where 1 is for most preferred purpose:

Talking with friends/family

Social networking

Playing games

Check email

Listen to music or radio

Others (please specify)

70 respondents (33 females, 37 males) results have been recorded in EXCEL. Please let me know what best information can I get from the data.

As stated in the post above, you can calculate the median and interquartile range for each response. You can also display the results in a bar chart. The types of analysis you use depend on your research question.

SO D*MN LUCKY I FOUND YOUR PAGE SIR!

Tons of thanks for ALL this juicy information about Likert, Spearman, and ordinal data.

My Problem is this:

My study ought to determine the EFFECTIVENESS of a particular radio program (let it be : “Program X”) in ENFORCING its ADVOCACY.

Now, by that, I mean to test it via “relationship” of:

(A) effectiveness of advocacy integration in terms of its radio programming, and;

(B) effectiveness of advocacy promotion in terms of the radio DJs’ performance.

I used a 3-point Likert Scale “3-agree” “2-disagree” and “1-undecided”.

(my survey questionnaire validator highly advised that it be changed from my original 5-point scale “5-strongly agree” “4-agree” “3-undecided” “2-disagree” “1-strongly disagree” into a 3-point one, because my questions are highly specific already.)

Would you be so kind Sir to help me out with an appropriate statistical treatment and interpretation technique if possible?

I plan to test the effectiveness of each variable A and B first by determining the MODE and RANGE (?) [or inter-quartile range?]

Then, I plan to use SPEARMAN rho for the correlation of A and B.

would these work? could you please enlighten me more?

Please bear with me T_T

Great Thanks!

What you propose sounds ok, as far as I can tell. The range and mode of each item can give you some insights into individual variables (you can also use the median, instead of or in addition to the mode). Spearman’s rho will then tell you if the two variables (A and B) are linked.
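Spearman’s rho is also easy to obtain outside a dedicated statistics package. A sketch in Python with SciPy, using invented paired ratings on the 3-point scale (the data are illustrative only):

```python
from scipy.stats import spearmanr

# Invented paired ratings for variables A and B (3-point scale)
a = [3, 2, 3, 1, 2, 3, 3, 1, 2, 3]
b = [3, 2, 2, 1, 2, 3, 3, 2, 1, 3]

# Spearman's rank correlation: are high ratings on A linked to high ratings on B?
rho, p = spearmanr(a, b)
print(f"rho = {rho:.2f}, p = {p:.4f}")
```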

Thank you for the quick response Sir!

Can I ask few more things if you don’t mind? :)

1. My respondents are only 30 in total and I am afraid this may result in unreliability due to the very small sample size. With a 5% margin of error, do you think this QUANTITATIVE DESCRIPTIVE type of study would yield reliable findings?

2. Can you “teach” or at least provide some tips on how to establish or make a good data interpretation for my updated 3-point scale? My old version of 5-point scale is like this with 0.79 range:

Average Scale Interpretation

4.20-5.00 Very High Effectiveness

3.40-4.19 High Effectiveness

2.60-3.39 Average Effectiveness

1.80-2.59 Low Effectiveness

1.00-1.79 Very Low Effectiveness

The truth may sound d*mb but I actually don’t really know how to ADJUST the range of data interpretation scale with my updated 3-point scale. How do I adjust the range? I’m sorry could you help me out more?:(.

With just 30 participants, you will have to be very careful about the kinds of claims you make. Of course, that depends on the size of the actual population you are studying. In general, you should be able to estimate the statistical significance of the findings (the p value) using SPSS or by looking it up in statistical tables (many statistics manuals have these in the appendices, or at least the older editions used to). Whether or not quantitative methods are appropriate is now a moot point, since you have already done your study. You just have to report your findings, and comment on whether they are statistically significant or not.

I cannot comment on how to transform the scale, because I do not know how it was constructed, or whether it was validated. At first sight, it appears that the range from 1-5 was divided into five equal segments, each covering .8 (i.e., 1/5th of the range). So, I guess that you could just do the same with a range from 1-3 (i.e., 2 × 1/5 = .4, hence 1.00-1.39, 1.40-1.79 and so on). I am not sure whether you need such a fine-grained breakdown though. I would also remind you that your Likert-type items produce ordinal data, so normally you should not calculate mean values (a.k.a. ‘averages’). You should use the median instead.

I just finished tabulating my scores from the surveys in Minitab but I have no clue what they mean. Could you please help?

```
Kruskal-Wallis Test on Likert Scores

Survey    N  Median  Ave Rank      Z
1         3   40.00      46.7  -0.26
2        48   38.00      45.9  -1.68
3        20   45.50      56.9   1.00
4        30   45.00      55.8   1.06
Overall 101              51.0

H = 3.14  DF = 3  P = 0.371
H = 3.14  DF = 3  P = 0.370 (adjusted for ties)

* NOTE * One or more small samples
```

Sorry, no. To be able to answer, I’d need to know what your research is all about, your research questions, how the data were generated etc. If you are in an academic programme, your advisor will be able to help you more.

Maybe my post wasn’t clear. To clarify: I’m not asking for answers or for your interpretation of my data. I’m using a statistical formula you suggested a couple of months ago. Now that I have my results, I’m asking what the categories mean. For example: what do N, Average Rank, and Z represent in the equation? I’ve tried looking up examples of this formula everywhere but I’ve yet to find what these mean.

The test seems to be telling you that there are small differences in the distribution of each group, but there’s no way of knowing whether they represent real differences in the population or if they are just random fluctuations. There are two parts in the output: the table and the report. I will explain both, but the one you need to focus on is the report.

N is the number of participants in each of your groups. Of interest here is that group 1 has only 3 participants, which is something of a problem because the test will underreport significance when small (N<5) groups are included in the calculation. For each group, you are also given a median, an average rank, and a Z value showing how much the average rank of that group diverges from the average rank of all observations. This information is not particularly useful, and you can pretty much disregard it.

Below that, you are given an H value and the degrees of freedom (dF) for the data. In the olden days, you'd use these to look up the p value in statistical tables. This tells you whether the results are statistically significant. Nowadays, the p-value is calculated automatically for you (p=0.371). This is way too high, because for the difference to be significant you'd need a value lower than .05. This information should go in your report, more or less like this:

“Using the Kruskal-Wallis test of statistical significance, it was established that the differences among the groups were not statistically significant (H=3.14, dF=3, p=.370, adjusted for ties)”.

Thank you for the great article!

Sorry for my English, first.

I am researching the methods used in schools in my country to estimate parent satisfaction levels, and trying to find the best one. I can see that the common procedure is:

1. Parents fill in a form with 20-30 questions. The answer to each question is a number from 0 (bad) to 4 (excellent).

2. A new composite variable is created: the sum of all the answers is divided by the total count of answers (for example, with 20 questions and 100 respondents, they sum all the numbers in the table and divide by 2,000).

3. This number, they say, is the “parent satisfaction level”.

I think that this is some kind of mean and cannot be used with ordinal data. Is that right?

In this case, when we must aggregate a few ordinal variables and end up with one number, what method of aggregation would be more appropriate? Some authors say that “11 distinct points on a scale is sufficient to approximate an interval scale”; others say that the best way is cluster analysis and centroid analysis (the mean of the centroid coordinates, maybe).

Moreover, I think that every question has a weight, so we should use it. But how?

Thank you.

1. Your scale is ordinal, so you can calculate a median and a mode, but not a mean. There is advice on how to do this here.

2. I am not sure what you mean by ‘every question has weight’.

3. It is true that an ordinal scale with many points (e.g., eleven or more) might approximate an interval scale, but since your scale only has five points, this is not relevant to your case. Cluster analysis might be a better option in this case, depending on what you are trying to find out.

Hi! I am doing an experimental study about the perception of students on two different documentaries with different approaches/styles. I made them answer a questionnaire with a 4-point Likert scale. The thing is, how do I compare the results if I am not calculating the mean? The study aims to know which of the two documentaries affected them more, which of the two is less boring, etc. I have already computed the IQR and Mdn, but I don’t understand how I will compare the same IQR of 0 and the same Mdn of 4. Please help me. Thank you!

If the Median and the IQR are the same, there are no visible differences between the effects of the two documentaries. This might mean that they had the same effect or that your instrument was not sufficiently sensitive.

Hello Sir. Thank you for the thorough explanation and the linked articles. But I have a question. What if I have a total of 55 Likert-scale items which are divided into 5 subsections, and I have to use all the items to determine whether the general attitude is positive or negative? Should I analyse each item one by one for its median and IQR? It is going to be messy. And how am I supposed to discuss it in my study and conclude, based on all the items, that the attitude is positive/negative? It is a 4-point Likert scale. Another statistician told me to analyse descriptively and find the mean, then see whether the mean is above or below the midway point. If it is above, then I can say it is a positive attitude, and if it is below, then I can say it is a negative attitude. Could you please help me, because I am confused? Thank you Sir.

Thanks for your kind comment. I would use the words ‘thorough’ and ‘transparent’ rather than ‘messy’ to describe this kind of detailed analysis. If you prefer to summarise many items into a single scale, you can find some instructions here. Good luck with your project!

Thank you Sir for your kind response. Yes, it would be thorough and transparent, but how do I conclude that the attitude is positive or negative when there is no single table showing the final outcome, just one table for each item and 55 tables overall? It would be inconclusive, and there could still be points of argument. I’m sorry if it is a silly question, but I really need your expertise. I’ve looked at the article you linked, and I think it is better to analyse the data that way, because I have 5 subsections and I could generate new variables from them. But when I compute a new variable, this is what I get in the Output sheet:

```
GET
  FILE='C:\Documents and Settings\CompaqUser5\My Documents\Untitled2.sav'.
DATASET NAME DataSet1 WINDOW=FRONT.
COMPUTE Cognitive_Engagement=MEDIAN(q1,q2,q3,q4,q5,q6,q7,q8).
EXECUTE.
```

So how do I use it for my next analysis? There is no table showing the median of the new variable. Please help me; your site is the clearest of any I have found, and I now have a much clearer view of statistics and SPSS.

This is very helpful Mr. Kostoulas. Thanks for this. I have a question though.

My case is very similar to the one in your article. Our survey is also composed of items with a 5-point scale, ranging from 1 = strongly disagree to 5 = strongly agree. I have already calculated the median and the IQR of my first Likert-type item: 3 and 2 respectively. I’m having problems interpreting them. I don’t really understand the IQR and how to explain it. What does a value of 2 mean? Or if I get 3, how do I explain that?

I have a total of 100 respondents, and the items are statements (e.g. ‘I often watch gay-themed films’). I’m looking forward to your response.

Thanks for your kind remarks! Usually, the kind of dispersion you are describing is a sign that the sample is polarised. This means that respondents tend to have very strong views either for or against the statement. A bar chart can be useful in visualising this: it’s likely that there will be concentrations of responses at the edges of the scale, and few responses in the centre.

A high IQR also alerts you to the fact that the median is likely a misleading metric (e.g., if you have 50 respondents who strongly agree and another 50 who strongly disagree with a statement, this doesn’t mean that, on average, they are undecided/indifferent).

So, when reporting your data, you may want to foreground the dispersion of responses, rather than the median. One way to do this is: “Item x indicates that opinion is divided regarding issue y (mdn=…, IQR=…). Most respondents stated that they disagree or strongly disagree (N=…), but a large number of respondents indicated agreement or strong agreement (N=…).”
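To make the extreme case above concrete, here is a short Python sketch (with made-up numbers: 50 respondents at each end of a 5-point scale) showing how the median can suggest indifference while the IQR flags the polarisation:

```python
from statistics import median, quantiles

# Made-up polarised sample: 50 'strongly disagree' (1) and 50 'strongly agree' (5)
responses = [1] * 50 + [5] * 50

q1, _, q3 = quantiles(responses, n=4, method="inclusive")
print(median(responses))  # 3.0 -- looks 'undecided', although nobody chose 3
print(q3 - q1)            # 4.0 -- the maximum possible IQR flags the polarisation
```

This is why the IQR should always be reported alongside the median: on its own, the median of 3 would be actively misleading here.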

Thank you, Sir!

Dear Achilleas,

First of all, I would like to thank you for the great article and all the useful information that you provide.

I have a question regarding my survey. I used a standardised Likert-type questionnaire to measure the usefulness, ease of use, ease of learning and satisfaction for my website.

Link to the questionnaire:

http://hcibib.org/perlman/question.cgi?form=USE

For each variable (usefulness etc.) there is a set of questions, and I have a number of participants (N) who answered the questionnaire. So, in order to gain insight into the perceived usefulness, ease of use, ease of learning and satisfaction, what I am thinking of doing is to:

calculate the mean for every single question in every group, and then calculate the mean of those means, so as to have a single value for every group (usefulness etc.). If this mean is above a threshold, it means that the perceived usefulness of the website is high.

Do you think this is good practice?

Thank you very much :)

George

Thanks for the kind words George! What you’re describing seems ok, although I would prefer to use the median rather than the mean.

Many statisticians strongly believe that Likert items produce ordinal data, for which it would be wrong to calculate a mean value. This is a fairly contentious point, so you might want to stay on the safe side by calculating medians.

Here’s some advice on how to do this using SPSS: https://achilleaskostoulas.com/2014/12/15/how-to-summarise-likert-scale-data-using-spss/
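For readers working outside SPSS, one median-based alternative to the mean-of-means approach can be sketched in Python (the response data below are invented for illustration): calculate a summary score per respondent, then summarise the new variable with its median:

```python
from statistics import median

# Made-up data: one list of item responses (q1..q8) per respondent
respondents = [
    [4, 5, 4, 3, 4, 4, 5, 4],
    [2, 2, 3, 2, 1, 2, 2, 3],
    [3, 4, 3, 3, 4, 3, 3, 4],
]

# Per-respondent subscale score, analogous to SPSS's
# COMPUTE Cognitive_Engagement=MEDIAN(q1, ..., q8).
scores = [median(items) for items in respondents]
print(scores)          # [4.0, 2.0, 3.0]

# Group-level summary of the new variable
print(median(scores))  # 3.0
```

The IQR of the per-respondent scores can be reported alongside this group median, just as for single items.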

I’m learning how to use the Likert scale and would really appreciate the assistance of a specialist. I’m in the process of completing a case study where I must use the Likert scale to analyse my information. The topic I am dealing with is misbehaviour in the classroom. I am trying to find information on teachers’ perceptions of this topic at grade eight level in secondary schools.

I am not sure I understand your question: do you want help with your statistics, or guidance compiling a literature review?

Hi, my name is Eki, a student from Indonesia.

I’m conducting a research project titled “Critical Success Factor of Stakeholder Management in procurement phase in EPC projects”.

I collected 48 variables from various sources, such as journals and theses. I designed the research around a questionnaire containing a Likert scale from 1 (not important) to 5 (most important), and asked 30 respondents to give their views on each variable.

What I want to ask is: how can I analyse which variables can be categorised as CSFs? If I use the median, what cut-off value should I apply?

Thanks in advance

Hi Eki! I don’t know enough about management to be able to advise you. The scale you’re describing seems like a generic interval scale, rather than a Likert item.

Hi,

I would be grateful if anyone could help me with my analysis. I conducted a study on the relationship between low income (independent variable) and stress level (dependent variable). I used three instruments to collect data: 1. a biofeedback stress card, 2. the Perceived Stress Scale, and 3. a six-item scale measuring financial stress. I have 134 respondents; the six-item and the four-item self-report questionnaires provided ordinal data, while the biofeedback reading gave nominal data. I used Spearman’s rank correlation to look at the significance of the relationship between general stress and financial stress, and I also used chi-square to look at each item within the questionnaire separately. My question is: can I conduct descriptive statistics on the six-item and the four-item scales?

I would be grateful if you can help

Shereen

Depends on what you mean by ‘descriptive statistics’. You should use measures appropriate to the level of measurement.

Hello,

I am doing a satisfaction study based on ordinal data. Since my study deals with many variables and the links among those variables, I want to know how to treat the ordinal data. If I use the median, will the median values still be considered ordinal? If so, then I can’t use one-way MANOVA or any other technique that assumes continuous data. Kindly suggest.

The medians of ordinal data are still ordinal, so no, you cannot use MANOVA or any other statistical technique that assumes continuous measurement.

I LOVE YOU!

Dear Achilleas,

It is me again – you offered very kind help to me in some comments on another of your posts.

I have a couple of other questions as I draw to the end of my write-up. I’ve tried searching for the answer but it’s tricky to know what search terms to use.

1) The estimated size of the population I’m working with (i.e. UK-based yoga teachers) is 10,000. I put this figure into an online sample size calculator, along with a 5% margin of error and a 95% confidence level. The resulting target number of participants for my questionnaire was 370, assuming a 20% response rate from 1850 invites. I achieved this, so that is no problem.

My question concerns the next steps I took. I have interpreted my Likert-type scale data as ordinal, and used the Kruskal-Wallis test and the post-hoc Dunn-Bonferroni test to determine whether there were any significant differences in the respondents’ median attitude scores between different dietary groups, and if so, between which dietary groups these significant differences lay. SPSS states the results of the Kruskal-Wallis test and the post-hoc Dunn-Bonferroni test to be significant at the 0.05 level. But can I assume that, just because my sample size is sufficient for the overall UK-based yoga teacher population, the number of individuals representing each dietary group within my sample is sufficient or representative of the wider population?

My sample was sourced from the Yoga Teachers UK Facebook group. I’m thinking that, because of my unbiased source (i.e., the aforementioned Facebook group is a general group and not specific to any dietary group amongst yoga teachers), I can assume that the proportion of my sample representing each dietary group could 1) be considered representative of the wider UK-based yoga teacher population and 2) be considered of an adequate size. Do you agree? Is my interpretation correct?

2) Secondly, as I assumed a 95% confidence level when using the online sample size calculator, I thought that all statistical tests I performed would be valid at the 0.05 level, which was the case for the tests I detailed in point one above. However, I used the Spearman’s rho to test for a correlation between a set of beliefs and a set of attitudes and SPSS told me the results were true at the 0.01 level. Should I ignore this and still interpret the result as true only at the 0.05 level (as SPSS does not know the confidence level that I set on the sample size calculator)?

So many thanks to you for any advice you can give 😊

Jenny 😊

Hi Jenny,

Good to know that your project is moving along so well. To start with your second question: a p value of 0.01 is actually even better than 0.05 (the lower the value, the better!). This means that if something is significant at the 0.01 level, it is also significant at the 0.05 level. So no problems there.

The answer to the other question is slightly more complicated. Strictly speaking, your sample was self-selected, i.e. not truly random, which means that most claims about generalisability have to be made with a certain degree of caution. This doesn’t detract from the value of what you did and found; it’s just a feature of most questionnaire-based studies. What you want to do, in the write-up, is demonstrate an awareness of these limitations, while at the same time showing confidence in the value of what you did.

One way to do this, in the write-up, is to repeat all the information that you just shared, explaining why you think that the sample gives insights into each dietary group, and listing all the steps you took to ensure generalisability / external validity. Towards the end of the thesis, you will likely add a couple of paragraphs discussing the limitations of the study, and there you can say something along the lines of: ‘Of course, these findings need to be interpreted with caution, because of the sampling strategy, which… Although the findings convincingly show that… we cannot assume that they prove… however, they do have the potential to inform…’

Hope that helps!

Dear Achilleas,

Thank you so very much. That is super helpful.

I wish I could offer you some help in return :)

Many, many thanks :)

Jenny :)

I’m glad that was useful! Best of luck with your project :)