I would like a quick example of data analysis for a likert scale questionnaire, I will have two data sets from two groups that have answered the same questions.
"So, if researchers in your area routinely create scales by summing up a set of Likert-scale items, then you should follow their lead."
Although I well see the advantange of this advice I still have to point out that this is also one of the safest ways prevent any major progress in science.
I'd say: do not just follow the "reserachers in your area". You may have a look at what they do. But then first understand the method/analysis. And then think hard if this is appropriate (especially for your particular problem). Then think if there is possible a better way to adress your particular problem. And then you can decide what to do.
There's an entire debate going on whether you can use inferential statistical measures to analyze Likert data. Some of the purists out there would argue that Likert 1-5 data is discrete data. I can see their argument using the 1-5 scale.
However, I argue, and have heard others argue, that:
1) if you expand the scale to 0-10, the distinction becomes finer, and
2) if you ask questions that measure the level of agreement with a statement, you are asking for information that theoretically lies on a continuous scale (perceptions and attitudes about agreement probably aren't limited to exactly 1, 2, 3, 4, or 5; the degree to which someone agrees or disagrees can certainly be fractional, e.g. 4.46), then
3) you have a set of continuous data, and therefore inferential statistics are appropriate.
The classic approach would be to sum the items to create a variable that is arguably interval-level, which in your case you would then use as the dependent variable. For example, you could do a t-test to examine differences between your two groups.
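A minimal sketch of that approach, using made-up responses and assuming SciPy is available (the group sizes, item counts, and data here are purely illustrative):

```python
import numpy as np
from scipy import stats

# Simulated responses: each row is one respondent, each column one Likert item.
rng = np.random.default_rng(0)
group_a = rng.integers(1, 6, size=(30, 8))  # 30 respondents, 8 items, values 1-5
group_b = rng.integers(3, 6, size=(30, 8))  # a group that tends to answer higher

# Sum the items into one scale score per respondent
scores_a = group_a.sum(axis=1)
scores_b = group_b.sum(axis=1)

# Independent-samples t-test (Welch's variant) on the summed scores
t_stat, p_value = stats.ttest_ind(scores_a, scores_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With real data you would replace the simulated matrices with your two groups' item responses (respondents in rows, items in columns).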
This approach assumes that all your items are measures ("indicators") of the same underlying concept. If so, they should be highly inter-correlated, which you can assess by calculating Cronbach's alpha, which you will find in SPSS via: Analyze > Scale > Reliability Analysis.
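If you want the same reliability check outside SPSS, Cronbach's alpha is straightforward to compute directly from its definition. This is a sketch with simulated, deliberately inter-correlated items (all data here is made up), assuming NumPy is available:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulate 50 respondents answering 6 correlated items: a shared underlying
# rating plus small item-level noise, clipped back to the 1-5 range.
rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=(50, 1))
items = np.clip(base + rng.integers(-1, 2, size=(50, 6)), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Because the simulated items share a common underlying rating, alpha comes out high; uncorrelated items would drive it toward zero.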
Note that purists will object to this approach because you are taking ordinal level data and summing it to create an interval level variable. The fundamental analogy here is to a bundle of sticks, each of which would be too weak on its own for your purposes, but together they are strong enough.
There is an enormous amount of technical literature devoted to this issue under the heading of "measurement theory," but you don't need to get into that. Instead, you need to adhere to the standards in your field. So, if researchers in your area routinely create scales by summing up a set of Likert-scale items, then you should follow their lead.
"So, if researchers in your area routinely create scales by summing up a set of Likert-scale items, then you should follow their lead."
Although I can well see the advantage of this advice, I still have to point out that this is also one of the safest ways to prevent any major progress in science.
I'd say: do not just follow the "researchers in your area". You may have a look at what they do. But first understand the method/analysis. Then think hard about whether it is appropriate (especially for your particular problem). Then think about whether there is possibly a better way to address your particular problem. And then you can decide what to do.
Might you consider playing with your data? Statistics offers a huge variety of different approaches to data analysis. Some of them are clearly wrong given a specific application, but then there are a large number of other "not-wrong" choices.
So first do a literature search for Likert scale (I get 18,000 examples in a couple of seconds). Take a few examples from your field and run your analysis exactly as they describe. Then look at what others do in other disciplines, and try that. Pay attention to sample sizes and any assumptions that are listed. Maybe even go to the user manual for your favorite statistics package and try a few things.

Now you can play, because I would guess that you will favor some answers over others. Why are some approaches better than others? If I do an LSD test I get significant differences, but if I do a Tukey's test there are no significant differences. Why might that happen? Is a mean comparison procedure even valid? Do I get more from a cluster analysis, or maybe a discriminant analysis? Do I even have enough data to try this sort of thing? Do I only care that there is a difference between two groups, or might I like to know how far apart they are? What are the data trying to tell me?
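To make the LSD-versus-Tukey contrast concrete, here is a sketch on three made-up groups of summed scale scores, assuming SciPy 1.8+ (for `scipy.stats.tukey_hsd`); unadjusted pairwise t-tests play the role of Fisher's LSD:

```python
import numpy as np
from scipy import stats

# Three hypothetical groups of summed scale scores
rng = np.random.default_rng(2)
g1 = rng.normal(20, 4, 25)
g2 = rng.normal(22, 4, 25)
g3 = rng.normal(24, 4, 25)

# Unadjusted pairwise t-tests (LSD-style: no correction for multiple tests)
for (a, b), name in [((g1, g2), "1 vs 2"), ((g1, g3), "1 vs 3"), ((g2, g3), "2 vs 3")]:
    t, p = stats.ttest_ind(a, b)
    print(f"t-test {name}: p = {p:.4f}")

# Tukey's HSD adjusts for the whole family of comparisons, so its
# p-values tend to be larger; borderline pairs may lose significance.
res = stats.tukey_hsd(g1, g2, g3)
print(res.pvalue)
```

Comparing the two sets of p-values for the same pairs shows exactly the phenomenon described above.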
This approach takes a great deal of time. It is a different learning style than a class on 101 ways to do regression analysis. If you are out of time, you could consult a statistician. However, there too you will get an answer "do this." It will be based on the experiences of the statistician that you talk to. You will gain the benefit of their education and effort, but you will have to do some exploration on your own to develop your own knowledge base. Without that we all just become parrots.
I think of it this way: The type of experiment I design is a function of the number of ways that I can think of to analyze the data. If all I know is ANOVA and multiple comparison procedures, then all my experiments will fit this mold. If I know more, then I have more choices, or at least I will be able to ask the statistician a more sophisticated set of questions. We might end up in the same place anyway, but there is a small chance that something new will arise.
You have ordinal data, so you can use a Chi-Square test for comparisons, and you can use a t-test to compare the means of the two groups. If you need help doing the analysis, I can help.
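A sketch of the chi-square comparison, on a hypothetical contingency table of response counts per group (the numbers are made up, and SciPy is assumed):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of each Likert response (1-5) in the two groups
table = np.array([
    [10, 15, 20, 30, 25],  # group 1
    [25, 30, 20, 15, 10],  # group 2
])

# Chi-square test of independence: do the two groups differ in how
# their responses are distributed across the five categories?
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

Unlike the t-test on summed scores, this compares the whole response distributions rather than just the means.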