There’s a very nice description on Wikipedia at https://en.wikipedia.org/wiki/Variance. But for an intuitive feel, variance is a measure of the amount of variation in your measurements. The higher the variance, the more variation in your data. It is completely different from the mean, which is a measure of central tendency of your data—central tendency being the fancy way of saying the center of your data.
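To see that the two measures capture different things, here is a minimal Python sketch (the numbers are made up) with two samples sharing the same mean but having very different variances:

```python
import numpy as np

# Two made-up samples with the same center but different spread
a = np.array([9.0, 10.0, 11.0])   # tightly clustered around 10
b = np.array([1.0, 10.0, 19.0])   # widely spread around 10

print(np.mean(a), np.var(a))      # 10.0, small variance (~0.67)
print(np.mean(b), np.var(b))      # 10.0, large variance (54.0)
```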
The two numbers together, mean and variance, are sufficient to make all sorts of inferences about your observations. For example, if you know the means of two samples and their respective variances, you can make inferences about whether or not the samples were drawn from populations with the same mean, or from populations with different means. This is the basis of the t-test (which for most people is where we start to understand statistical inference).
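As a sketch of that kind of inference, SciPy can run a two-sample t-test directly from each sample's mean, standard deviation, and size; the summary numbers below are invented for illustration:

```python
from scipy import stats

# Invented summary statistics for two samples
mean1, sd1, n1 = 5.2, 1.1, 30
mean2, sd2, n2 = 4.6, 1.3, 30

# Two-sample t-test computed from the summary statistics alone
t_stat, p_value = stats.ttest_ind_from_stats(mean1, sd1, n1,
                                             mean2, sd2, n2,
                                             equal_var=True)
print(t_stat, p_value)
```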
Variation in your data is due to the effect on your measured response of changes in the levels of input factors that influence the response output but cannot be, or are not, controlled during the experiment.
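One way to picture this is a toy simulation (the model and numbers are purely illustrative): even with every controlled input held fixed, an uncontrolled factor still produces variance in the response.

```python
import numpy as np

rng = np.random.default_rng(0)

controlled_input = 2.0                          # held fixed for every run
uncontrolled = rng.normal(0.0, 0.5, size=100)   # factor we cannot control

# Measured response: effect of the controlled input plus the uncontrolled factor
response = 3.0 * controlled_input + uncontrolled

print(response.var())   # nonzero variance even though the input never changed
```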
An often-used illustration of the concepts of variance and bias in two dimensions is throwing darts at a dartboard. The distribution of your throws may center on a point substantially different from the bullseye; that offset is indicative of your bias. The "spread" of your throws about that center is due to variance.
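A small simulation of the dartboard picture, with arbitrary offsets and spreads, might look like this: the mean landing point reflects bias, and the scatter around that mean reflects variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Dart throws: aimed at the bullseye (0, 0) but systematically off by (1.5, -0.5)
bias = np.array([1.5, -0.5])
throws = bias + rng.normal(0.0, 0.8, size=(500, 2))

print(throws.mean(axis=0))   # roughly (1.5, -0.5): the bias
print(throws.var(axis=0))    # roughly 0.64 in each direction: the variance
```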
Here you could say you are looking at a one-dimensional dartboard: your data fall on a number line passing through the center, and you are interested in how spread out the points are around that center. This is your distribution. Because it is only a sample drawn from a population, the distribution may be highly asymmetric. The standard deviation (the square root of the variance) is one piece of information, but people often confuse it with the standard error, which is a special case of a standard deviation: the standard deviation of a distribution of sample means, not of the population distribution itself.
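To make the distinction concrete, here is a sketch with an arbitrary simulated population: the standard error is the standard deviation of a distribution of sample means, roughly the population standard deviation divided by the square root of the sample size.

```python
import numpy as np

rng = np.random.default_rng(2)

population = rng.exponential(scale=2.0, size=100_000)  # an asymmetric population
n = 25

# Standard deviation of many sample means = the standard error
sample_means = [rng.choice(population, size=n).mean() for _ in range(2_000)]
print(np.std(sample_means))              # empirical standard error
print(population.std() / np.sqrt(n))     # theoretical standard error
```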
So, variance helps describe your population distribution, along with the mean and other "moments", which include measures of skewness and kurtosis that help describe the 'shape' of the distribution. (Note that your sample is sometimes not very representative of your population; this is most likely with small samples from large populations.)
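If you want to compute those higher moments, SciPy provides them directly; the right-skewed simulated sample below is just an example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.exponential(scale=2.0, size=10_000)   # a right-skewed example

print(np.mean(data), np.var(data))   # first two moments: center and spread
print(stats.skew(data))              # third moment: asymmetry
print(stats.kurtosis(data))          # fourth moment: tail weight (excess kurtosis)
```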
Variance of a set of numerical observations is defined as the arithmetic mean of the squares of the deviations of the observations from their arithmetic mean.
On the other hand, standard deviation of a set of numerical observations is defined as the positive square root of the arithmetic mean of the squares of the deviations of the observations from their arithmetic mean.
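In symbols, for observations $x_1, \dots, x_n$ with arithmetic mean $\bar{x}$, those two definitions read:

$$
\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2,
\qquad
\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}.
$$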