I definitely would not treat it as an interval scale. Can it be done? A quick search shows that some researchers do, but the statistics are likely to be misleading. Why not consider methods better suited for ordinal scales? There are quite a few.
I thought it was an ordinal scale, but some published results, in which the analysis included means with standard deviations, were misleading to me. Therefore I asked the question to be sure, and in order to use proper non-parametric methods.
A lot of researchers report means and standard deviations where it makes very little sense. In many publications you will find analyses that are suited neither to the data nor to the scientific question. This is a very common problem, even in high-ranking journals like Nature, Science, Cell, ... Hence, from the viewpoint of "good scientific practice": do not just look at what others did; think it through and find a solution yourself (you may discuss it with a statistician!). Even if you do look around, take what you find as "options to think about" rather than "probably suitable solutions", and do not restrict your thinking to what you read in publications. From the viewpoint of "career progress" it is sufficient to look at what others do and follow the mainstream. It may not be sound, but it guarantees the fewest problems getting your work published.
I am not sure what you mean by "non-parametric methods". The stochastic part of ordered logistic regression models is the binomial distribution model, and this has a parameter (a proportion value).
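To make that concrete, here is a minimal sketch of a proportional-odds (cumulative logit) ordered regression using statsmodels' OrderedModel. The data and the `treatment` covariate are made up purely for illustration; only the modelling step is the point.

```python
# Minimal sketch: ordered (proportional-odds) logistic regression for a 0-10 score.
# The pain scores and the "treatment" covariate below are simulated, hypothetical data.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 200
treatment = rng.integers(0, 2, n)                    # hypothetical binary covariate
latent = 4 + 1.5 * treatment + rng.logistic(size=n)  # latent tendency, just to fake data
pain = np.clip(np.round(latent), 0, 10).astype(int)  # observed 0-10 ordinal pain score

df = pd.DataFrame({
    "pain": pd.Categorical(pain, ordered=True),      # treat the score as ordered categories
    "treatment": treatment,
})

# Cumulative-logit model: the score is modelled as ordered categories,
# not as interval-scaled numbers.
model = OrderedModel(df["pain"], df[["treatment"]], distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())
```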
As I have read, the numerical pain scale ranges from 0 to 10 (it's not clear whether the scale is discrete or whether fractional numbers are allowed, too, since the patients can draw a mark at any position on the line, also between two marks).
Treating such data as ordinal is not the best strategy. Due to the high "granularity" of the scale, such data may better be treated as quasi-interval-scaled. So there is a trade-off: having too many (ordinal) levels makes the analysis hard to interpret, reducing the number of levels wastes information (which can be worth it, but which can also be very bad), and treating the data as interval-scaled can (strongly) distort the meaning of the results. It is hard to say which option is "best"; it depends on the subject matter and the experience of the experts.
However, *if* you decide to treat the data as interval-scaled, then you should also acknowledge the fact that the data (or the residuals from some model) won't have a symmetric distribution, because the values are bounded (in the interval [0,10]). So it is highly recommended that you use an appropriate error structure (and the normal probability model is NOT appropriate here). One possibility I am aware of is to re-scale the values to [0,1] and then assume beta-distributed errors (-> "beta regression"). As a work-around you can logit-transform the re-scaled values and use a normal regression model.
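Here is a minimal sketch of that work-around (with made-up data and hypothetical variable names): rescale the 0–10 scores to the unit interval, squeeze the boundary values into (0,1) with the common (y·(n−1)+0.5)/n adjustment so the logit is finite, logit-transform, and fit an ordinary normal-errors regression. A beta regression would start from the same rescaling step.

```python
# Minimal sketch of the logit-transform work-around; the scores and the
# "treatment" covariate are simulated, hypothetical data.
import numpy as np
import statsmodels.api as sm
from scipy.special import logit

rng = np.random.default_rng(1)
n = 150
treatment = rng.integers(0, 2, n)
pain = np.clip(np.round(5 + 2 * treatment + rng.normal(0, 2, n)), 0, 10)

y01 = pain / 10.0                    # rescale from [0, 10] to [0, 1]
y01 = (y01 * (n - 1) + 0.5) / n      # squeeze into (0, 1) so logit(0)/logit(1) cannot occur
y_logit = logit(y01)                 # logit transform

X = sm.add_constant(treatment)       # intercept + covariate
res = sm.OLS(y_logit, X).fit()       # normal-errors regression on the transformed scale
print(res.summary())
```

Keep in mind that the coefficients are then on the logit scale of the rescaled score, so they need to be back-transformed for interpretation.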
Thank you very much for such a quick and exhaustive answer. I will discuss this with the researchers, as they should be aware of the possible methods for further analysis of the results.
Aleksander, you may be interested in Ronán Conroy's Stata Journal article in which he argues very convincingly that the Wilcoxon-Mann-Whitney test is not properly called a nonparametric test: In fact, it does estimate a useful parameter, viz., the proportion of cases in which a randomly sampled score from population 0 is higher than a randomly sampled observation from population 1. HTH. ;-)
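As a quick numerical illustration of that parameter (with made-up samples, not data from the article): the Mann-Whitney U statistic divided by the product of the group sizes estimates the probability that a randomly drawn score from group 0 exceeds one from group 1, with ties counted as one half.

```python
# Illustration of the "parameter" behind the Wilcoxon-Mann-Whitney test,
# using simulated, hypothetical 0-10 pain scores for two groups.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
group0 = rng.integers(0, 11, 60)   # hypothetical scores, group 0
group1 = rng.integers(0, 11, 80)   # hypothetical scores, group 1

u, p = mannwhitneyu(group0, group1, alternative="two-sided")
prob_superiority = u / (len(group0) * len(group1))  # estimate of P(X0 > X1), ties as 1/2
print(f"U = {u:.1f}, p = {p:.3f}, estimated P(X0 > X1) = {prob_superiority:.3f}")
```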