If you are trying to estimate a conditional probability in a Bayesian setup, I think MAP is useful; if you are trying to estimate a joint probability, then MLE is useful. Both methods assume that you have a sufficiently large amount of data for modeling; if not, the EM algorithm can help.
Keep in mind that MLE is the same as MAP estimation with a completely uninformative prior. If you have any useful prior information, then the posterior distribution will be "sharper" or more informative than the likelihood function, meaning that MAP will probably be what you want.
Theoretically, then, one could argue that you should always use MAP (possibly with an uninformative or minimally-informative prior). Given a tool that does MAP estimation you can always put in an uninformative prior to get MLE. Theoretically.
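This reduction can be sketched with the Beta-Bernoulli coin model, where the MAP estimate has a closed form: under a Beta(a, b) prior, the posterior mode is (heads + a - 1)/(n + a + b - 2), and the uniform Beta(1, 1) prior makes it collapse to the MLE. The function names below are illustrative, not from any particular library.

```python
def mle(heads, n):
    """Maximum-likelihood estimate of the heads probability: sample frequency."""
    return heads / n

def map_estimate(heads, n, a, b):
    """Posterior mode under a Beta(a, b) prior (assumes a, b >= 1)."""
    return (heads + a - 1) / (n + a + b - 2)

heads, n = 7, 10
print(mle(heads, n))                 # 0.7
print(map_estimate(heads, n, 1, 1))  # 0.7 -- uniform prior: MAP equals MLE
print(map_estimate(heads, n, 5, 5))  # 11/18, about 0.611 -- prior pulls toward 0.5
```

With an informative prior the estimate is shrunk toward the prior mean; with the uninformative one you get MLE back, which is the "always use MAP" argument in code form.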
In practice, prior information is often lacking, hard to put into pdf form, or (worst of all) incorrect. It can be easier to just implement MLE in practice. As Fernando points out, whether MAP is better depends on the prior pdf containing actual correct information about the true state. Whether that's true or not is situation-specific, of course. If your prior is dubious or hard to formulate, discard it (or set it to an uninformative pdf in the MAP framework, if you can do that) so as to trust the data, and use MLE. If that doesn't give you a good enough answer, it's often cheaper, easier, and quicker to collect better (more informative) data than to mess around with expressing prior information you don't really have.
If one has enough data, then MLE and MAP will converge to the same value. When data is scarce, it is better to apply MAP; of course, the prior has to be good, as others have mentioned.
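The convergence is easy to see in the same Beta-Bernoulli sketch: hold a (hypothetical, deliberately informative) Beta(5, 5) prior fixed and grow the sample while keeping the observed heads fraction at 0.7.

```python
def estimates(heads, n, a=5, b=5):
    """MLE and MAP (posterior mode under a Beta(a, b) prior) for a coin's bias."""
    mle = heads / n
    map_est = (heads + a - 1) / (n + a + b - 2)
    return mle, map_est

for n in (10, 100, 10_000):
    heads = int(0.7 * n)
    mle, map_est = estimates(heads, n)
    # On small n the prior pulls MAP toward 0.5; on large n MAP approaches the MLE.
    print(f"n={n:6d}  MLE={mle:.4f}  MAP={map_est:.4f}")
```

The prior's pseudo-counts (a + b - 2 = 8 here) are fixed, so their influence is swamped once n is large, which is exactly the "enough data" regime where the two estimators agree.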