Hi! Have you used the Student-Newman-Keuls (SNK) test or the Tukey HSD test? I am not sure what the next steps or the explanation would be, but I would first try the Tukey test and then check whether the SNK also shows no differences, just to be sure.
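If it helps, here is a minimal sketch of running Tukey's HSD with SciPy. The three groups below are invented for illustration; substitute your own samples.

```python
# Hedged sketch: Tukey's HSD on three illustrative groups using SciPy.
# The data are made up for demonstration purposes only.
from scipy import stats

group1 = [4.1, 5.0, 4.8, 5.2, 4.6]
group2 = [5.9, 6.1, 5.5, 6.4, 6.0]
group3 = [5.0, 5.3, 4.9, 5.6, 5.1]

res = stats.tukey_hsd(group1, group2, group3)
print(res)  # table of pairwise mean differences with adjusted p-values
```

The result object also exposes a `pvalue` matrix if you want to inspect individual pairwise comparisons programmatically.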
When reporting ANOVA results in APA format, you should include the following information: the between-groups and within-groups degrees of freedom (in parentheses), the F value (also known as the F statistic), and the p value. For example, if you were reporting the results of an ANOVA on reading ability scores, you might write something like this:
“An ANOVA was conducted to compare reading ability scores between groups. Results indicated a significant effect of group on reading ability scores, F(2, 27) = 4.32, p < .05.”
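To make the reporting concrete, here is a small sketch that computes a one-way ANOVA with SciPy and assembles the APA-style string. The reading-score data are invented; with three groups of ten, the degrees of freedom work out to F(2, 27) as in the example above.

```python
# Hedged sketch: one-way ANOVA with SciPy, formatted in APA style.
# The reading-score data below are invented for illustration.
from scipy import stats

groups = [
    [12, 15, 14, 10, 13, 11, 14, 12, 13, 15],  # group A
    [16, 18, 17, 15, 19, 16, 17, 18, 16, 17],  # group B
    [13, 14, 12, 15, 13, 14, 12, 13, 15, 14],  # group C
]

f_stat, p_val = stats.f_oneway(*groups)
df_between = len(groups) - 1                            # k - 1
df_within = sum(len(g) for g in groups) - len(groups)   # N - k

print(f"F({df_between}, {df_within}) = {f_stat:.2f}, p = {p_val:.4f}")
```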
If your ANOVA results show significance but post hoc comparisons do not show any significant pairwise differences, you should report this finding in your results section. You may also want to discuss possible explanations for this result in the discussion section of your paper.
As for what should be done further, it may be helpful to examine the data more closely and consider other factors that may be influencing the results. It may also be useful to consult with a statistician to determine the best course of action.
Try to give more power to your post hoc test: check homogeneity of variance and normality, and use larger and more comparable sample sizes. You can also consider pooling some of your samples if they are too small. Nonparametric tests are less powerful than parametric ones, so I don't think using them will give any better results.
To the generally very good advice given by Lyudmil Antonov, I'd recommend that you report the computed p-value for the anova F-ratio (e.g., p = .0391, or whatever the value was) rather than the largely uninformative "p < .05" form.
Why could an overall anova yield a significant result, but pairwise post hoc tests fail to identify any differences?
1. Each tests a different hypothesis. The overall anova tests the hypothesis that no linear combination of means yields a difference other than zero. The pairwise tests compare only two of those means at a time. It is possible that the nature of the true difference in the population is not pairwise in form. For example, it could be that (Mean2 + Mean3)/2 is different from Mean4, and no other difference exists. Your garden-variety pairwise comparisons would fail to detect this.
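This point can be made concrete with a custom contrast test. The sketch below assumes four groups with invented data and tests (Mean2 + Mean3)/2 against Mean4 using the pooled within-group error term from the anova; pairwise tests would not examine this combination directly.

```python
# Hedged illustration: a complex contrast, (Mean2 + Mean3)/2 vs Mean4,
# tested against the pooled ANOVA error term. All data are invented.
import numpy as np
from scipy import stats

groups = [np.array(g, dtype=float) for g in (
    [10, 11, 9, 10, 12],    # group 1
    [12, 13, 12, 14, 13],   # group 2
    [13, 12, 14, 13, 12],   # group 3
    [10, 9, 11, 10, 9],     # group 4
)]

k = len(groups)
n = np.array([len(g) for g in groups])
N = n.sum()
means = np.array([g.mean() for g in groups])

# Pooled within-group mean square (the ANOVA error term), df = N - k
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - k)

# Contrast weights for (Mean2 + Mean3)/2 - Mean4; group 1 gets weight 0
c = np.array([0.0, 0.5, 0.5, -1.0])
estimate = (c * means).sum()
se = np.sqrt(mse * (c ** 2 / n).sum())
t = estimate / se
p = 2 * stats.t.sf(abs(t), df=N - k)
print(f"contrast = {estimate:.2f}, t({N - k}) = {t:.2f}, p = {p:.4f}")
```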
2. Each has a different sampling distribution (and, therefore, a different critical value for a given alpha level). The one-way anova sampling distribution is F(k, N - k), whereas the pairwise post hoc tests can be evaluated as t-tests with different error df (depending on choice of test).
3. Most post hoc tests, to avoid unnecessary inflation of the aggregate risk of Type I error, are more conservative, and therefore less powerful, tests. Perhaps the "worst" of the lot, as regards power, is the Scheffé post hoc test. The "best" of the lot, regarding power, would be Fisher's least significant difference (LSD) test, which requires an initial significant result from the anova to justify running.
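The power ordering in point 3 can be seen by comparing the critical values each procedure applies on the t scale. A sketch under assumed numbers (k = 4 groups, N = 40 total, alpha = .05): Fisher's LSD uses an ordinary t critical value, Tukey's HSD the studentized range divided by sqrt(2), and Scheffé sqrt((k - 1) * F_crit).

```python
# Hedged sketch: critical values (on the t scale) for three post hoc
# procedures, assuming k = 4 groups, N = 40 total, alpha = .05.
from math import sqrt
from scipy import stats

k, N, alpha = 4, 40, 0.05
df_error = N - k

lsd = stats.t.ppf(1 - alpha / 2, df_error)                             # Fisher's LSD
tukey = stats.studentized_range.ppf(1 - alpha, k, df_error) / sqrt(2)  # Tukey HSD
scheffe = sqrt((k - 1) * stats.f.ppf(1 - alpha, k - 1, df_error))      # Scheffe

print(f"LSD t-crit:     {lsd:.3f}")
print(f"Tukey t-crit:   {tukey:.3f}")
print(f"Scheffe t-crit: {scheffe:.3f}")
```

The larger the critical value a procedure demands, the harder it is for any given pairwise difference to reach significance, which is exactly the conservatism-versus-power trade-off described above.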