When SPSS calculates the expected values (EV) for a Chi Square Test of Independence, it's giving you one of the key steps along the way to calculating your chi-square statistic and p-value. The link below shows you how to calculate a Chi Square by hand (note step 4). More conceptually, you can think of the EV table as the null hypothesis that your actual observations are being compared to. Notice how, after you find the EVs, you take the difference between each observation and its EV and square it; it's like the Pythagorean Theorem because you're finding your observations' distances from the EV (the null). The larger those differences, the larger your chi-square statistic, and the smaller your p-value. More practically, you can compare your observation table to the EV table and see the direction in which your results differ from the null. Best wishes with your research, Witold. ~ Kevin
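For concreteness, here is a small Python sketch of those steps (the 2x3 table of counts is made up purely for illustration, and numpy/scipy are assumed to be available):

```python
import numpy as np
from scipy.stats import chi2

# Illustrative observed counts for a 2x3 contingency table
observed = np.array([[30, 20, 10],
                     [20, 30, 40]])

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
grand_total = observed.sum()

# Expected count for each cell under the null: (row total * column total) / grand total
expected = row_totals @ col_totals / grand_total

# Each cell's squared distance from its expected value, scaled by the expected value
chi_sq = ((observed - expected) ** 2 / expected).sum()

df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
p_value = chi2.sf(chi_sq, df)  # upper-tail probability of the chi-square distribution

print(expected)
print(chi_sq, df, p_value)
```

Comparing `observed` to `expected` cell by cell shows the direction of departure from the null, exactly as described above.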
@Kevin, thank you for a very precise answer. However, since this is a purely statistical issue, I reckon there is little practical use in knowing the expected values ... Is that correct?
I assume you are talking about Pearson's chi-square in the context of contingency tables. This is an approximate test, which means that the chi-square distribution with df = (r-1)(c-1) approximates the sampling distribution of the test statistic under a true null hypothesis. How good that approximation is depends on the size of the expected frequencies (E) and on the degrees of freedom. A summary of the conditions under which the approximation is good can be found on one of my web-pages (see link below). If the approximation is not good for your particular situation, you'll need to consider using some other test, or combining some categories with low counts, etc. HTH.
NOTE: In the foregoing, r and c stand for the number of rows and number of columns in the contingency table.
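If it helps, the expected frequencies and the df = (r-1)(c-1) referred to above can be inspected directly, for example with scipy.stats.chi2_contingency in Python (the table values below are again only illustrative):

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[12, 5, 7],
                     [9, 14, 3]])

chi_sq, p_value, dof, expected = chi2_contingency(observed, correction=False)

print("df = (r-1)(c-1) =", dof)
print("smallest expected frequency:", expected.min())  # useful for judging the approximation
```

Checking the smallest expected frequency is one quick way to see whether the chi-square approximation is likely to be adequate for your table.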
As for the practical answer, the "expected" value is literally how many instances/cases you would expect in each cell IF THERE WERE NO RELATIONSHIP BETWEEN THE TWO VARIABLES. That can be incredibly useful information.
For example, I have data from 800 university professors, half of whom are female (.50). In this data set, roughly 25% of the professors (200) have achieved tenure. If there is no relationship between gender and tenure, I would expect those 200 tenured professors to be split evenly between men and women - an expected count of 100 in each cell. However, in fact I observe 25 tenured female professors and 175 tenured male professors. How far each observed count falls from what we would expect (if there were no relationship) is a useful talking point.
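That example can be laid out as a 2x2 table (rows: female/male; columns: tenured/not tenured), filling in the untenured cells implied by the totals in the post. A short Python sketch, again assuming scipy is available:

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[ 25, 375],   # female: 25 of 400 tenured
                     [175, 225]])  # male: 175 of 400 tenured

chi_sq, p_value, dof, expected = chi2_contingency(observed, correction=False)

print(expected)             # [[100. 300.], [100. 300.]] -- counts expected under no relationship
print(observed - expected)  # direction and size of the departure from the null
```

The expected table shows the 100-per-tenured-cell figure from the example, and the difference table makes the under-representation of tenured women immediately visible.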
In general, the expected value is about as interesting as the issue driving the question of whether these two measures are related.