If the concern is only reporting, results should be reported as they were obtained.
Example: suppose we are comparing the mean body mass index (BMI) in three age groups, say A, B, and C, and the difference was not significant in the overall analysis, but the post-hoc test shows significant pairwise differences. Then we can report that the mean BMI was not significantly different across the age groups overall, but that the difference in mean BMI between age groups A and B, and between A and C, was statistically significant.
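A minimal sketch of that BMI example in Python, using simulated placeholder data (the group means and sample sizes are made up for illustration): run the omnibus one-way ANOVA, then Tukey's HSD for the pairwise comparisons.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
bmi_a = rng.normal(24.0, 3.0, 40)  # hypothetical age group A
bmi_b = rng.normal(26.0, 3.0, 40)  # hypothetical age group B
bmi_c = rng.normal(25.5, 3.0, 40)  # hypothetical age group C

# Omnibus one-way ANOVA across the three age groups
f_stat, p_overall = stats.f_oneway(bmi_a, bmi_b, bmi_c)
print(f"overall ANOVA: F = {f_stat:.2f}, p = {p_overall:.3f}")

# Pairwise post-hoc comparisons (Tukey HSD)
values = np.concatenate([bmi_a, bmi_b, bmi_c])
groups = ["A"] * 40 + ["B"] * 40 + ["C"] * 40
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```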
Several discussions on related topics are available on the internet.
What type of post-hoc test did you use? How are you comparing groups?
If you have groups A, B, C, and D, the post-hoc tests will compare every single pair of groups you have. This is where the inflated family-wise error rate Charles mentions comes from. If all you are doing is comparing a control (group A) to three experimental groups (B, C, and D), then you use a different post-hoc test method (e.g., Dunnett's test, which compares each treatment only against the control).
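To make the inflation concrete, here is a quick sketch of the usual back-of-the-envelope calculation: with k groups there are k*(k-1)/2 unadjusted pairwise tests, and, treating the tests as independent, the chance of at least one false positive grows as 1 - (1 - alpha)^m.

```python
from math import comb

alpha = 0.05
for k in (3, 4, 5):
    m = comb(k, 2)                 # number of pairwise comparisons
    fwer = 1 - (1 - alpha) ** m    # approximate family-wise error rate
    print(f"{k} groups -> {m} pairs, FWER ~ {fwer:.2f}")
```

With 4 groups that is 6 comparisons and a family-wise error rate of roughly 0.26 at alpha = 0.05, which is why post-hoc procedures adjust for multiplicity.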
I think Vinay's suggestion is very practical and good, and it is wise to consider Charles' and Andrew's points. I suggest that you review your computations and try to find out whether there are matters you have overlooked.
I would advise not interpreting the post-hoc tests in this case. To me it seems strange to consider the overall differences between groups and then, if there are no statistically significant differences, to go further and search for differences anyway.
Perhaps the procedure makes sense for your particular question, but in general I don't recommend it.
Dear Frederik, I agree with you completely. If the overall ANOVA test is not significant, then you have to conclude that there are no differences between the groups, and you must not perform post-hoc tests.
A conservative approach would be to conduct (and interpret) post-hoc analyses only when the overall omnibus F is significant. There are, however, several benefits associated with post-hoc tests -- including contributing to one's (after the fact) understanding of a phenomenon (which can then inform the development of future a priori tests).
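A minimal sketch of that conservative procedure, again on simulated placeholder data: compute the omnibus F first, and only run (and interpret) Tukey's HSD when it is significant.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = {g: rng.normal(25.0, 3.0, 30) for g in ("A", "B", "C")}

# Step 1: omnibus one-way ANOVA
f_stat, p_overall = stats.f_oneway(*groups.values())

# Step 2: post-hoc tests only if the omnibus test is significant
if p_overall < 0.05:
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
else:
    # Under the conservative rule, stop here and report only the omnibus result
    print(f"omnibus F not significant (p = {p_overall:.3f}); no post-hoc tests")
```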
I have the same situation: the main test fails but Tukey's gives significant differences. The p-value in the two-way ANOVA is non-significant, but the graphical output clearly shows different confidence intervals between the groups, which Tukey flags as different. That's why I don't understand why the p-value in the two-way ANOVA is not significant.
OK, I understand my analysis ends here... it's just that I find it strange that, when looking at the ANOVA graph, the confidence intervals are completely different, not even touching one another... I always thought that that, by itself, was a sign of significant differences. But OK, I think I will do as you suggest. I will try a t-test just to be on the safe side :)
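For what it's worth, non-overlapping 95% confidence intervals generally do indicate a significant pairwise difference, while the omnibus ANOVA answers a different, pooled question, which may explain the discrepancy. A quick sketch of the sanity check suggested above, on simulated placeholder data (not the actual measurements): per-group 95% CIs plus a Welch two-sample t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group1 = rng.normal(10.0, 2.0, 25)  # hypothetical group 1
group2 = rng.normal(12.0, 2.0, 25)  # hypothetical group 2

# Per-group 95% confidence intervals for the mean
for name, x in (("group1", group1), ("group2", group2)):
    ci = stats.t.interval(0.95, len(x) - 1, loc=x.mean(), scale=stats.sem(x))
    print(f"{name}: mean = {x.mean():.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")

# Welch's t-test (does not assume equal variances)
t_stat, p_val = stats.ttest_ind(group1, group2, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```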