Both the variance and the SD (the square root of the variance) are measures of dispersion (scatter) of the data around a central value such as the mean.
Measures of central tendency like the mean, median and mode represent the point where the data are centered. (For symmetric data, the mean, median and mode coincide.) But not all data points lie at this value; some observations are larger and some are smaller than the mean. In other words, the data are scattered around the mean (or another measure of central tendency).
Measures of dispersion show how far the observations are from the mean (or another measure of central tendency). The variance is the most commonly used measure of dispersion.
Mathematically, it is the average of the squared deviations of the observed values from their mean. So first we calculate the deviation of each observed value from the mean (the difference between the observed value and the mean of the data set); this may be positive, negative or zero.
Then we square each deviation (making them all non-negative) and take the average; this is the variance.
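As a concrete illustration of these steps, here is a minimal Python sketch (the data values are made up for the example; dividing by n gives the population variance, while the sample variance divides by n - 1):

# Steps described above: deviations from the mean, squared deviations, their average.
data = [4, 8, 6, 5, 3, 7]              # made-up observations
mean = sum(data) / len(data)           # centre of the data
deviations = [x - mean for x in data]  # each may be positive, negative or zero
variance = sum(d ** 2 for d in deviations) / len(data)  # population variance
sd = variance ** 0.5                   # standard deviation = square root of variance
print(mean, variance, sd)              # 5.5, ~2.92, ~1.71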
Similarly, if we take the average of the absolute deviations of the observed values from their mean, we get another measure of dispersion, known as the mean absolute deviation (MAD) about the mean.
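The MAD about the mean can be computed the same way, replacing the square with the absolute value (same made-up data as in the sketch above):

data = [4, 8, 6, 5, 3, 7]
mean = sum(data) / len(data)
mad = sum(abs(x - mean) for x in data) / len(data)  # average absolute deviation
print(mad)  # 1.5 for this data set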
Though both are measures of variability, the SD is generally used as a descriptive tool for describing the nature of the data, whereas analysis of variance (either one-way or two-way) is commonly used as an inferential tool for determining whether there is a significant difference between two or more independent groups.
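To make that distinction concrete, here is a small sketch, assuming SciPy is available; the three groups and their values are invented purely for illustration:

from statistics import mean, stdev
from scipy.stats import f_oneway

group_a = [12.1, 13.4, 11.8, 12.9, 13.0]
group_b = [14.2, 13.8, 14.9, 15.1, 14.4]
group_c = [12.5, 12.9, 13.1, 12.2, 12.8]

# Descriptive use: the SD summarises the spread within each group.
for name, g in [("A", group_a), ("B", group_b), ("C", group_c)]:
    print(name, round(mean(g), 2), round(stdev(g), 2))

# Inferential use: one-way ANOVA tests whether the group means differ significantly.
f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f_stat, p_value)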
All the answers are good, but I want to add something: the variance can be thought of as an "area" measurement, while the standard deviation is a "length" measurement, because it is in the same units as the data. When comparing data it is better to compare quantities in the original units rather than their squares; for example, if the data are heights in metres, the variance is in square metres, but the SD is back in metres. Hope it will help you.
Quite often, variance is used to mean the square of a standard error, not the square of the standard deviation of the population. So be sure you are clear on the difference between the standard deviation of a population, and, say, the standard error of a mean. The first is a fixed value, whereas the second is reduced with increased sample size.
Sigma is used for the standard deviation of a population, even for estimated residuals in regression (or the random factors of the estimated residuals for weighted least squares regression). A population's sigma (standard deviation) will generally be estimated better with a larger sample size, but it is a fixed value.
Cheers - Jim
PS - If someone says "variance of a population," then they must mean the square of the standard deviation of the population. But if they just say "variance," then they may mean the square of the standard error of a parameter (or random variable in prediction), and in my experience, the latter is usually what they actually do mean.
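To illustrate the distinction Jim describes, here is a small Python sketch (the population mean and SD are assumed values): the estimated SD settles around the fixed population value, while the standard error of the mean keeps shrinking as the sample size grows.

import math
import random

random.seed(0)
population_sd = 10.0  # fixed property of the (assumed) population

for n in (10, 100, 1000):
    sample = [random.gauss(50.0, population_sd) for _ in range(n)]
    m = sum(sample) / n
    est_sd = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))  # estimates sigma
    se_mean = est_sd / math.sqrt(n)                                  # shrinks as n grows
    print(n, round(est_sd, 2), round(se_mean, 2))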