Theory says that correlation between -0.2 and 0.2 is barely existing (if existing at all) and SPSS says that 0.162 Spearman is a significant correlation at the 0.01 level (2-tailed). What am I missing?
The meaning of a p-value depends on the number of cases. Do not trust the claim that "a correlation between -0.2 and 0.2 barely exists (if it exists at all)"; whoever wrote it does not understand introductory inferential statistics. Test the same correlation on 10 cases and the p-value will come out differently.
Yes, it is useful to re-visit the definition of p-value. A p-value tells you nothing about the size of the effect (r, in this case).
The p-value answers only one question: given the null hypothesis "there is no correlation" (i.e., r is equal to zero), do you have enough evidence to reject it?
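For concreteness, the standard test of that null hypothesis (the form used for Pearson's r, and approximately for Spearman's rho with moderate sample sizes) converts the observed correlation into a t statistic:

$$ t = r\sqrt{\frac{n-2}{1-r^2}}, \qquad \text{df} = n-2 $$

Note that n appears explicitly, so for a fixed r, a larger sample gives a larger |t| and therefore a smaller p-value.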
If the sample size is large, even a relatively small effect can garner a small (significant) p-value.
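You can see this directly with a minimal sketch (assuming SciPy is available): the same observed r = 0.162 tested against the null hypothesis of zero correlation at several sample sizes.

```python
# Same r = 0.162, different n: the p-value shrinks as n grows.
# Test statistic: t = r * sqrt((n - 2) / (1 - r^2)), with n - 2 df.
import numpy as np
from scipy import stats

r = 0.162
for n in (30, 100, 300, 1000):
    t = r * np.sqrt((n - 2) / (1 - r**2))
    p = 2 * stats.t.sf(t, df=n - 2)      # two-tailed p-value
    print(f"n = {n:4d}:  t = {t:5.2f},  p = {p:.4f}")
```

With r fixed at 0.162, the two-tailed p-value drops below 0.01 somewhere around a few hundred cases, which is presumably how a "barely existing" correlation came out as significant at the 0.01 level.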
So, if your result is statistically significant, the next issue is the ecological or practical importance (strength) of the association between the variables: that is what the "rules of thumb" for interpreting r (e.g., "barely exists") are for.
Reporting the confidence interval is often useful as well. As for relating the size of r to its importance, that depends on the context: importance, effect size, and achieved significance level are all very different things. The attached plot, which I published (I think with clearer fonts), is about interpreting an effect size from statistical output; importance requires more context, so it is not part of that scale.
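For the confidence-interval point, here is one common approach (a rough sketch, not the only option): an approximate interval for the correlation via Fisher's z transform. For Spearman's rho the coverage is only approximate, and some authors inflate the standard error slightly.

```python
# Approximate 95% CI for an observed correlation via Fisher's z transform.
# For Spearman's rho this is approximate; some authors use a slightly
# larger standard error, e.g. sqrt(1.06 / (n - 3)).
import numpy as np
from scipy import stats

def fisher_ci(r, n, level=0.95):
    z = np.arctanh(r)                    # Fisher z transform of r
    se = 1.0 / np.sqrt(n - 3)            # approximate standard error of z
    zcrit = stats.norm.ppf(0.5 + level / 2)
    return np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)

print(fisher_ci(0.162, n=500))   # roughly (0.075, 0.246) for n = 500
```

The point is that even a "significant" r = 0.162 is estimated to lie in a range that, by the same rules of thumb, still describes a weak association.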