Every tutorial and guide I can find for scale analyses in SPSS is specifically about Likert scales. My study is not using a Likert scale; it uses a 0 - 100 scale instead.
What reliability analysis is best suited to such a scale?
There are many ways to estimate reliability. They differ, first and foremost, with respect to what dimension(s) one is considering for appraising score consistency.
These can include consistency over time (as in test-retest); over trials, items, or stimuli (as in internal consistency); over conditions, raters, or any number of other such dimensions; or combinations of dimensions.
I presume you're asking about internal consistency reliability estimates. If your measure has multiple items belonging to the same scale, the SPSS procedure RELIABILITY (Analyze > Scale > Reliability Analysis) offers Cronbach's coefficient alpha by default, which will serve your needs.
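Outside SPSS, coefficient alpha is also easy to compute directly from its definition: alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score). Here is a minimal Python sketch with made-up 0 - 100 item scores (the data and function name are illustrative, not from any real study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 0-100 ratings from 5 respondents on 3 items
scores = np.array([
    [80, 75, 85],
    [60, 55, 65],
    [90, 95, 85],
    [40, 45, 50],
    [70, 65, 75],
])
print(round(cronbach_alpha(scores), 3))  # → 0.975
```

Note the `ddof=1` arguments: alpha is conventionally computed from sample (unbiased) variances, which is also what SPSS reports.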
Items that are measured on a 0 - 100 scale might make it even easier to assess reliability because they can (potentially) be treated as (quasi-)continuous (metrical, interval scale) variables, whereas Likert items are, strictly speaking, only ordinal in nature, often requiring special treatment in psychometric analyses.
If you have multiple items that are supposed to measure one or more factors/latent variables, the best course of action would be to run a confirmatory factor analysis (CFA) with the items as indicators of one or more latent factors to test the hypothesized factor structure first. If you find that the hypothesized factor model fits your data well/is appropriate, you can directly use the reliability estimates that are provided as part of a CFA (R-squared values for the items). In addition, composite reliability indices (reliability of the aggregate [sum or mean] of the items for a given factor) can be inferred as well from CFA. Depending on the assumptions made in the specific factor model, this may be, for example, Spearman-Brown, Cronbach's alpha, or McDonald's Omega.
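For the one-factor case, the composite reliability mentioned above reduces to a simple formula in the CFA parameters: omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances). A small sketch with hypothetical standardized loadings (these numbers are invented for illustration, not output from an actual CFA):

```python
import numpy as np

def mcdonald_omega(loadings, error_vars):
    """McDonald's omega (composite reliability) for a single factor:
    omega = (sum lambda)^2 / ((sum lambda)^2 + sum theta)."""
    loadings = np.asarray(loadings, dtype=float)
    error_vars = np.asarray(error_vars, dtype=float)
    common = loadings.sum() ** 2          # variance due to the common factor
    return common / (common + error_vars.sum())

# Hypothetical standardized loadings from a one-factor CFA of four items
lam = [0.8, 0.7, 0.75, 0.6]
theta = [1 - l ** 2 for l in lam]         # error variances under standardization
print(round(mcdonald_omega(lam, theta), 3))  # → 0.807
```

With equal loadings and uncorrelated errors this formula coincides with Cronbach's alpha, which is one way to see alpha as a special case of the CFA-based estimates.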
I don't think there is anything magical about using a scale that runs from 0-100 rather than 1-10 or 1-5 etc. In particular, I disagree with Christian Geiser that using an extended range increases reliability -- the format is still ordinal.
So, if there is a single item, there is no way to assess its reliability, but if there are several items, then they can be tested as a scale in the usual fashion using coefficient alpha, exploratory factor analysis, etc.
David L Morgan I'm not saying that the extended range necessarily increases reliability. What I meant to say is that the extended range makes it (potentially) more plausible/appropriate to treat these items as (quasi-)continuous variables and apply factor analytic methods and reliability estimation techniques that are designed for metrical/continuous observed variables. For example, maximum likelihood (ML) estimation (which is standard in confirmatory factor analysis) requires a multivariate normal distribution. A 4-point Likert variable by definition cannot be normally distributed. A 0 - 100 variable has a better chance of approximating a normal distribution, making ML estimation more appropriate.
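A quick simulation illustrates the point (all numbers here are hypothetical, chosen only to show the mechanics): the same underlying continuous trait scored on a 4-point format can take only four values, whereas a 0 - 100 format can track the shape of the latent distribution far more closely.

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(50, 15, size=10_000)        # underlying continuous trait

# Coarse format: collapse into 4 ordered categories (a 4-point Likert item)
likert4 = np.digitize(latent, bins=[35, 50, 65]) + 1

# Fine-grained format: 0-100 rating (rounded and clipped to the scale range)
slider = np.clip(np.round(latent), 0, 100)

# 4 distinct values vs. roughly 100 distinct values
print(len(np.unique(likert4)), len(np.unique(slider)))
```

No discrete variable is literally normal, but a variable with ~100 support points can approximate normality well enough for ML estimation, while one with 4 support points cannot.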
In summary, what I was trying to express is that I feel more comfortable applying psychometric techniques designed for continuous/metrical response variables/items to reasonably symmetrically distributed 0 - 100 variables than to 4-point Likert items.
I would say that 4-point Likert-scored items are a low standard. Personally, I consider 5 points a minimum and 7 points preferable. But I certainly don't believe that people are going to waver between scores of, say, 63 versus 64 on a 100-point scoring system, which makes it hardly worth the trouble.
Maybe I'm missing some major trend in measurement, but if 100 point scoring systems really had added value, wouldn't they be in widespread use by now? Can it be that no one has ever experimented with this before?