It does not make much sense to test assumptions. Assumptions should be plausible. If you don't know from theory or from common sense that an assumption might or should be plausible for your data, then plot the data and look at it. If it screams that an assumption is definitively implausible then you should re-think your model.
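For example, here is a minimal Python sketch (with made-up data) of the kind of "plot and look" meant here: a histogram and a normal Q-Q plot for the normality assumption, and group-wise boxplots for the homogeneity-of-variance assumption.

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(10, 1, size=60)   # hypothetical group 1
b = rng.normal(10, 3, size=60)   # hypothetical group 2 (larger spread)

fig, axes = plt.subplots(1, 3, figsize=(11, 3))
axes[0].hist(np.concatenate([a, b]), bins=20)                       # overall distribution
axes[0].set_title("Histogram")
stats.probplot(np.concatenate([a, b]), dist="norm", plot=axes[1])   # normal Q-Q plot
axes[2].boxplot([a, b], labels=["group 1", "group 2"])              # compare spreads
axes[2].set_title("Group-wise spread")
plt.tight_layout()
plt.show()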
Tests on assumptions are sensitive to deviations from the ideal, but in an unhelpful way: for small data sets the power is too low, so the p-values may be too large to alarm you about a relevant deviation, while for large data sets the p-values will practically always be very small, no matter whether the deviation is of any practical relevance. Using a criterion like p > 0.05 to conclude that the assumption is met (or reasonably well approximated) is plain nonsense (sorry). It does not become right or sensible just because it is chanted over and over again in statistics courses and books (sadly). Statisticians have warned more than once that "non-significant p-values" (from significance tests) do NOT justify ANY conclusion - but here this is made the basis of the whole argument: "non-significant" results are taken as an indication that the data are "permissible" or that "the assumptions are met". That's a bewildering logic.
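To see this behaviour concretely, here is a small simulation sketch in Python (with invented numbers): the same mild variance difference between two groups is mostly "non-significant" by Levene's test at n = 15 per group but almost always "significant" at n = 5000 per group, although the deviation itself is identical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
for n in (15, 5000):
    pvals = []
    for _ in range(1000):                     # repeat to see the typical behaviour
        a = rng.normal(0.0, 1.0, size=n)      # group 1: SD = 1.0
        b = rng.normal(0.0, 1.2, size=n)      # group 2: SD = 1.2 (mild heterogeneity)
        pvals.append(stats.levene(a, b).pvalue)
    print(f"n = {n}: proportion of p < 0.05 = {np.mean(np.array(pvals) < 0.05):.2f}")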
And another remark: if you use your data to decide which test (or transformation) you will apply, a p-value calculated from this very same data with the chosen test or transformation no longer has its intended meaning! If your aim is to calculate a valid p-value, you must either rely on assumptions that are plausible a priori, or find out what analysis may be appropriate on one set of data and then calculate the p-value on a different, independent set of data.
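A rough Python sketch of the second route (with hypothetical data and a hypothetical choice of test): the first half of the data is used only to decide on the analysis, and the p-value is then computed on the untouched second half.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(10, 2, size=200)   # hypothetical measurements, group 1
y = rng.normal(11, 3, size=200)   # hypothetical measurements, group 2

# exploration half: plots, assumption checks, choice of test/transformation
x_explore, x_confirm = x[:100], x[100:]
y_explore, y_confirm = y[:100], y[100:]
print("SD ratio in the exploration half:", np.std(y_explore) / np.std(x_explore))

# suppose the exploration half suggests unequal variances -> choose Welch's test,
# and calculate the p-value on the independent confirmation half only
print(stats.ttest_ind(x_confirm, y_confirm, equal_var=False))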
To illustrate Jochen's (as usual) great answer, please find attached a picture I made for a question close to yours (https://www.researchgate.net/post/Why_the_assumption_of_normality_of_residuals_ANOVA_is_still_violated_after_the_log_transformation?_tpcectx=qa_overview_following&_trid=1Zw8obCYZADLZZKu8zU0bsM6_).
This is for the normality assumption, but I think it shows well that we are better off looking at the data than testing them.
Cyril, the post you refer to involves 1000+ data points. This is a beautiful example of a case where a test (on real data) will always give very low p-values, no matter how irrelevant the deviation of the empirical distribution from the ideal(!) of a normal distribution is.
Jochen has given you excellent advice. I want to follow up on this point that he made:
" If [a plot of the data] screams that an assumption is definitively implausible then you should re-think your model."
You asked about homogeneity of variance. If it really is an implausible assumption, and given that you appear to be using SPSS (you listed it as a topic), consider estimating your model via the MIXED procedure. It allows you (via the /REPEATED sub-command) to model heterogeneous variances. Google to get started.
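For readers not working in SPSS: a rough Python analogue of the same idea (a sketch only, not the MIXED /REPEATED machinery) is to allow group-specific residual variances, e.g. via a two-step weighted least squares fit. Note that this simple sketch ignores the uncertainty in the estimated weights, which the mixed-model approach handles more properly.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "y": np.concatenate([rng.normal(10, 1, 50), rng.normal(12, 3, 50)]),
    "group": ["A"] * 50 + ["B"] * 50,
})

s2 = df.groupby("group")["y"].var()   # group-specific variance estimates
w = 1.0 / df["group"].map(s2)         # observation weights = 1 / s_g^2

fit = smf.wls("y ~ C(group)", data=df, weights=w).fit()
print(fit.summary())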