Although others may have different opinions, I would make the case that effect sizes are a better alternative because they are far less dependent on sample size than p-values and confidence intervals. Confidence intervals certainly give you more information than the p-value alone; however, the width of the confidence interval is similarly a function of sample size. So, if the interpretation of the p-value is affected by a very large or very small sample, confidence intervals will be similarly affected, whereas effect sizes will not be (at least not to as large a degree).
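To make that dependence on sample size concrete, here is a small simulation sketch (the true standardized difference of 0.4 and the sample sizes are arbitrary illustrations): the p-value and the CI width change dramatically with n, while the Cohen's d estimate hovers around the true value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d = 0.4  # true standardized mean difference (illustrative)

for n in (20, 200, 2000):
    x = rng.normal(0.0, 1.0, n)
    y = rng.normal(true_d, 1.0, n)

    # Welch t-test p-value
    p = stats.ttest_ind(y, x, equal_var=False).pvalue

    # 95% CI for the mean difference (Welch)
    diff = y.mean() - x.mean()
    vx, vy = x.var(ddof=1) / n, y.var(ddof=1) / n
    se = np.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx**2 / (n - 1) + vy**2 / (n - 1))
    half_width = stats.t.ppf(0.975, df) * se

    # Cohen's d with pooled SD
    sd_pooled = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
    d = diff / sd_pooled

    print(f"n={n:5d}  p={p:.4f}  CI width={2 * half_width:.3f}  d={d:.3f}")
```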
I essentially agree with Daniel, but I want to stress that the CI (giving both limits, not only its width) is also an estimate of the effect size, because it is centered around the point estimate of the effect. Therefore I'd say that a CI is as reliable a statistic as the effect size estimate as far as the location is concerned, and it additionally gives some (less reliable) information about the "quality" of the estimate. Well, sure, it's trivial that two numbers (lower and upper limit) give more information than a single number (point estimate).
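To illustrate that the CI carries the point estimate with it, here is a tiny sketch (the interval and the degrees of freedom are made-up numbers): for a standard symmetric interval of the form estimate ± t·SE, the midpoint recovers the estimate and the width tells you about its precision.

```python
from scipy import stats

# A hypothetical reported 95% CI for a mean difference, based on df = 28
lower, upper, df = 0.8, 4.2, 28

estimate = (lower + upper) / 2                        # midpoint = point estimate
se = (upper - lower) / (2 * stats.t.ppf(0.975, df))   # approximate standard error

print(f"point estimate = {estimate:.2f}, approx. SE = {se:.2f}")
```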
What remains a little ugly is that the (width of the) CI and the p-value are actually not about the effect size but about the data(!) *given* an effect size (i.e. a null hypothesis). Therefore, effect-size estimates on the one side and "confidence statements" like the CI and p on the other side are not really comparable.
Further, p-values (or the widths of CIs) do not have anything like a reliability, since they refer to the data given a hypothesis. And if that hypothesis is "true", these statistics contain no information (e.g. the p-values then have a uniform distribution!).
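A quick simulation sketch of that parenthetical claim (the sample size and number of replications are arbitrary choices): when the null hypothesis is exactly true, the p-values from repeated experiments are spread uniformly over [0, 1].

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# 10,000 two-sample t-tests in which H0 is exactly true (both groups N(0, 1))
pvals = np.array([
    stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
    for _ in range(10_000)
])

# Under a true null the p-values are Uniform(0, 1):
# roughly 10% land in each decile, and about 5% fall below 0.05.
print(np.histogram(pvals, bins=10, range=(0, 1))[0])
print(f"fraction below 0.05: {(pvals < 0.05).mean():.3f}")
```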
It seems to me that the important question here is: how much information do I have? If you have a great deal of information, you should be able to 'detect' small differences between the truth and a hypothesis. If your information is lacking, say from a small sample size, then you cannot 'detect' even a large difference. That is what is wrong with a single p-value: it is a function of sample size and needs a power analysis or other sensitivity analysis alongside it. Using an effect size can help tell what magnitude of difference we may be seeing, and will be more useful than a p-value, but how do we interpret an effect size? Can we always visualize what it is telling us? Clearly it is better than an isolated p-value, but it might just give you a fuzzy feeling.
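As one concrete way to put a number on "magnitude of difference", here is a sketch of Cohen's d for two groups (the data are simulated purely for illustration); Cohen's rough benchmarks of 0.2 / 0.5 / 0.8 for small / medium / large effects are one common, if crude, way to read it.

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(2)
treated = rng.normal(10.5, 2.0, 50)   # illustrative data
control = rng.normal(10.0, 2.0, 50)

d = cohens_d(treated, control)
print(f"Cohen's d = {d:.2f}")  # compare against the rough 0.2 / 0.5 / 0.8 benchmarks
```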
Whenever you can use a confidence interval, that appears to me to be the winner in the what-is-practical-and-useful-to-use competition. A confidence interval tells you, for practical purposes, how much information your data contain. It is customary to point out that the intuitive interpretation of confidence intervals is formally incorrect, but from a practical standpoint you will come to the same conclusions/decisions. In somewhat informal language, a confidence interval about a difference in two means, for example, can help you determine how different they might be, based on the amount of "confidence" you have from the information you have collected. If you collect more data, you can be more "confident" of a given absolute difference. Your intuitive interpretation of your confidence interval will likely be practical and useful.
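As an illustration of that last point (numbers simulated, assuming roughly normal data and a Welch-style interval): the same underlying difference gives a much narrower 95% interval once more data are collected.

```python
import numpy as np
from scipy import stats

def welch_ci(x, y, level=0.95):
    """Welch confidence interval for the difference in means of two independent samples."""
    diff = np.mean(x) - np.mean(y)
    vx, vy = np.var(x, ddof=1) / len(x), np.var(y, ddof=1) / len(y)
    df = (vx + vy) ** 2 / (vx**2 / (len(x) - 1) + vy**2 / (len(y) - 1))
    half = stats.t.ppf(0.5 + level / 2, df) * np.sqrt(vx + vy)
    return diff - half, diff + half

rng = np.random.default_rng(3)
for n in (15, 500):                      # small pilot vs. larger follow-up
    a = rng.normal(5.3, 1.0, n)          # illustrative group means 5.3 vs 5.0
    b = rng.normal(5.0, 1.0, n)
    lo, hi = welch_ci(a, b)
    print(f"n per group = {n:3d}:  95% CI for the difference = ({lo:.2f}, {hi:.2f})")
```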
So when you can use a confidence interval, that should generally be preferable, I think. If you cannot assume normality or any other standard distributional form, you can still use the Chebyshev inequality for some practical interpretation.
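Here is a rough sketch of what I mean (it treats the sample standard deviation as if it were the true one, which is an extra approximation): Chebyshev's inequality bounds the chance that the sample mean is more than k standard errors from the true mean by 1/k^2, so for at least 95% coverage you need k of about 4.47 instead of the normal-theory 1.96. The interval is wider, but it makes no distributional assumption.

```python
import numpy as np

def chebyshev_interval(x, coverage=0.95):
    """Distribution-free interval for the mean via Chebyshev's inequality.

    Uses the sample SD in place of the true SD (an extra approximation),
    with k chosen so that 1/k**2 <= 1 - coverage.
    """
    k = np.sqrt(1.0 / (1.0 - coverage))          # e.g. sqrt(20) ~ 4.47 for 95%
    mean = np.mean(x)
    half = k * np.std(x, ddof=1) / np.sqrt(len(x))
    return mean - half, mean + half

rng = np.random.default_rng(4)
data = rng.exponential(scale=2.0, size=80)       # deliberately non-normal data
lo, hi = chebyshev_interval(data)
print(f"Chebyshev >=95% interval for the mean: ({lo:.2f}, {hi:.2f})")
```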
However, it is still usually far preferable even just to present a point estimate and its standard error than to present a p-value alone. It is not hard to improve, in a very practical sense, on the often misleading/misinterpreted isolated p-value.