Standard deviation is universally accepted as the ideal measure of the dispersion of data.

However, the standard deviation, though regarded as the ideal measure of dispersion, is not based on the deviations between all pairs of observations.

A measure of dispersion can, in the true sense, be regarded as a proper measure of dispersion only if it is based on the deviations between all pairs of observations; conversely, a measure defined on the basis of all pairwise deviations should, logically, qualify as a proper measure of dispersion.

The point, in this case, is whether such a measure would be better than the standard deviation as a measure of dispersion.

Therefore, the question is:

"If a measure of dispersion is defined on the basis of the deviations between all pairs of observations, will it be superior to the standard deviation?"
