This usually happens when data are analyzed with independent t-tests and the results come out significantly different, yet an ANOVA with a post hoc test gives p-values that are very different from the t-tests and non-significant. Which would be the better tool for analyzing such data?
Tukey's HSD controls the family-wise error rate (FWER); individual t-tests don't. So if you want to control the FWER, you must use Tukey's HSD. If not, then individual t-tests may still not be a good option: it's almost always better to use a pooled variance estimate. Such tests are known as Fisher's LSD. If you think that pooling variances is not okay for your data, then you should ask yourself whether comparing means (and doing t-tests in general) really makes sense.
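For illustration, here is a minimal sketch in Python using statsmodels' pairwise_tukeyhsd, which controls the FWER at the chosen alpha; the three groups and their parameters are hypothetical placeholders:

```python
# A minimal sketch of Tukey's HSD with statsmodels; the group data
# below are hypothetical placeholders.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)
a = rng.normal(10.0, 2.0, 15)  # hypothetical group A
b = rng.normal(11.0, 2.0, 15)  # hypothetical group B
c = rng.normal(12.5, 2.0, 15)  # hypothetical group C

values = np.concatenate([a, b, c])
groups = np.repeat(["A", "B", "C"], 15)

# pairwise_tukeyhsd controls the family-wise error rate at alpha
result = pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05)
print(result)  # pairwise mean differences, adjusted p-values, CIs
```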
The Student's t-test is, of course, used to compare two means. As soon as you go beyond two (i.e. compare three or more means), ANOVA using F values is the more rigorous method. Multiple t-tests between more than two means are not well supported because, as Jochen indicates, the pooled variance cannot be assessed. It is also worth making sure that your data are 'normal' in the sense of being Gaussian: if they stray too far (i.e. are skewed), you will not get rigorous results.
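As a quick sketch, assuming SciPy is available, a one-way ANOVA together with a per-group normality check could look like this (the group data are made up):

```python
# A minimal sketch: one-way ANOVA across three group means, plus a
# quick Shapiro-Wilk normality check; all data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = [rng.normal(mu, 1.5, 20) for mu in (5.0, 5.5, 6.5)]

# Shapiro-Wilk on each group: a small p suggests departure from normality
for i, g in enumerate(groups):
    w, p = stats.shapiro(g)
    print(f"group {i}: Shapiro-Wilk p = {p:.3f}")

# One-way ANOVA: a single F-test across all three means at once
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```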
You misunderstood me. T-tests compare means. If you have more than two groups and you want to compare their means, you will simply have a number of t-tests. ANOVA is not comparing means. Instead, ANOVA is comparing models in which several coefficients are restricted together (it reduces to a t-test when only a single coefficient is restricted). That's something different. The ANOVA may or may not be "significant"; either way, you still don't know whether the data are sufficient to claim a mean difference between any two particular groups.
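To make the model-comparison view concrete, here is a sketch using statsmodels: with two groups (a single restricted coefficient), the ANOVA F statistic equals the squared t statistic of that coefficient. The data frame and column names are hypothetical:

```python
# Sketch: ANOVA as model comparison. With one restricted coefficient
# (two groups), F equals t squared. Data and names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "y": np.concatenate([rng.normal(0, 1, 30), rng.normal(0.8, 1, 30)]),
    "group": ["A"] * 30 + ["B"] * 30,
})

fit = smf.ols("y ~ C(group)", data=df).fit()
anova_table = sm.stats.anova_lm(fit)      # F-test on the group term
t_stat = fit.tvalues["C(group)[T.B]"]     # t-test on the single coefficient
print(anova_table)
print(f"t^2 = {t_stat**2:.4f}  (matches the F value above)")
```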
And another important correction: the pooled variance can be assessed, and it should be assessed. If there are several groups (and the typical assumptions of normality and variance homogeneity are reasonable), the t-tests should be done using that pooled variance. That's what Fisher's LSD does.
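A minimal sketch of what Fisher's LSD does, with hypothetical data: the pairwise t-tests use the pooled variance (the ANOVA mean squared error) and the residual degrees of freedom from all groups, not just from the pair being compared:

```python
# Sketch of Fisher's LSD: pairwise t-tests using the pooled variance
# (MSE) from *all* groups. The data are hypothetical.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(2)
groups = {name: rng.normal(mu, 2.0, 12)
          for name, mu in zip("ABC", (10.0, 11.0, 13.0))}

# Pooled variance = within-group sum of squares / residual df
n_total = sum(len(g) for g in groups.values())
df_resid = n_total - len(groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
mse = ss_within / df_resid

for (na, ga), (nb, gb) in combinations(groups.items(), 2):
    se = np.sqrt(mse * (1 / len(ga) + 1 / len(gb)))
    t = (ga.mean() - gb.mean()) / se
    p = 2 * stats.t.sf(abs(t), df_resid)  # two-sided, pooled df
    print(f"{na} vs {nb}: t = {t:.2f}, p = {p:.4f}")
```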
---
PS: Using ANOVA to control the FWER is discussed at length in the literature. Above, I am talking about the t-tests, not about the control of the FWER. Don't get confused by the fact that Fisher's LSD was (in former times) often used in combination with an ANOVA in an attempt to control the FWER (doing further t-tests *only* when the ANOVA is significant). That has been shown to be okay for exactly 3 groups, but not for more. If controlling the FWER is a concern, there are better options today (e.g. Tukey's HSD, which is very similar to t-tests, only using an adjusted sampling distribution, the "studentized range distribution").
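To make the "adjusted sampling distribution" concrete, here is a sketch, with hypothetical numbers, of how an HSD critical value is obtained from SciPy's studentized_range distribution (equal group sizes assumed):

```python
# Sketch of where Tukey's HSD adjustment comes from: the critical value
# is taken from the studentized range distribution instead of the t
# distribution. The numbers (k, n, MSE) are hypothetical.
import numpy as np
from scipy.stats import studentized_range

k, n = 4, 10              # 4 groups, 10 observations each
df_resid = k * (n - 1)    # residual df from the pooled (ANOVA) variance
mse = 2.5                 # hypothetical pooled variance estimate

# 95% critical value of the studentized range for k means
q_crit = studentized_range.ppf(0.95, k, df_resid)

# Honest Significant Difference: any pair of means differing by more
# than this is declared significant with FWER <= 0.05
hsd = q_crit * np.sqrt(mse / n)
print(f"q = {q_crit:.3f}, HSD = {hsd:.3f}")
```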
I've always understood that multiple t-tests are an invalid measure of significance because the pooled variance is not measured across all combinations; hence the ANOVA plus a preselected post hoc test replaces multiple t-tests. This is why people see significant p-values for each of their t-tests but get a non-significant result for their ANOVA. They doubt the ANOVA result because it doesn't provide the result they want, and that isn't the way to approach statistics. The point of the post hoc test, where the ANOVA is significant, is that it then examines each pairwise probability and indicates which pairs of means are significantly different. A sketch of that workflow follows below.
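Here is a minimal sketch of that workflow with hypothetical data: run the ANOVA first, and only proceed to the post hoc pairwise comparisons if it is significant:

```python
# Sketch of the ANOVA-then-post-hoc workflow described above;
# all data are hypothetical.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
a, b, c = (rng.normal(mu, 2.0, 15) for mu in (20.0, 21.0, 23.5))

# Step 1: the omnibus F-test
f_stat, p_anova = stats.f_oneway(a, b, c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Step 2: post hoc pairwise comparisons, only if the ANOVA is significant
if p_anova < 0.05:
    values = np.concatenate([a, b, c])
    labels = np.repeat(["A", "B", "C"], 15)
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
else:
    print("ANOVA not significant: no post hoc comparisons performed")
```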