In my experience, "decision tree" (or classification and regression tree) methods are exploratory, model-building methods, built from the specific data set to which they are applied. No a priori hypothesis is required. They tend to be very opportunistic (capitalizing on chance features of the sample), so much consideration must be given to how to muster validation evidence and thereby avoid over-fitting.
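As a minimal sketch of what that validation evidence might look like (assuming Python with scikit-learn; the toy data and the depth setting are my own illustrative choices, not part of any particular application), cross-validation reports out-of-sample performance rather than the optimistic fit to the training data alone:

```python
# Minimal sketch: cross-validating a decision tree to guard against over-fitting.
# Assumes scikit-learn; X (features) and y (outcome) are stand-ins for real data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Toy data standing in for an actual data set
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)

# 10-fold cross-validated accuracy: evidence from held-out folds,
# not from the same observations used to grow the tree
scores = cross_val_score(tree, X, y, cv=10)
print("Mean CV accuracy: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))
```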
Path analysis is a confirmatory method, in which a proposed/hypothesized model of how a set of variables inter-relate is evaluated to see how well it can account for the relationships among the observed variables in a data set. An a priori hypothesis is absolutely required.
There are published examples of how classification/regression tree methods have been used to identify decision/treatment protocols. However, there are also plenty of theory- and research-based models which have been successfully developed for the same applications.
Obviously, one concern in such instances would be whether all relevant confounding variables had been accounted for, lest people apply the models in settings or populations in which those variables do not function in a similar manner. So, again, generalizability is a concern.
For identifying models which may be worth further exploration (with additional data), CART methods (classification and regression trees) are one useful approach. Formal regression or path models could then serve as the means for that subsequent confirmation.
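One hedged sketch of that two-stage workflow (again in Python; the variable names and the simulated data are invented, and an ordinary least-squares regression in statsmodels stands in for a fuller path/SEM model):

```python
# Sketch of the two-stage idea: (1) exploratory tree on one half of the data to
# surface candidate predictors, (2) a formal regression fit on the held-out half
# to confirm the hypothesized model. Data and names are purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(400, 5)), columns=list("abcde"))
df["outcome"] = 2 * df["a"] - df["c"] + rng.normal(size=400)

explore, confirm = train_test_split(df, test_size=0.5, random_state=0)

# Stage 1: exploratory CART on the exploration half
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(explore[list("abcde")], explore["outcome"])
importances = pd.Series(tree.feature_importances_, index=list("abcde"))
candidates = importances[importances > 0.10].index.tolist()

# Stage 2: confirmatory regression of the resulting model on fresh data
X = sm.add_constant(confirm[candidates])
model = sm.OLS(confirm["outcome"], X).fit()
print(model.summary())
```

The key design choice is the split: the data used to generate the hypothesis are never reused to test it, which is exactly the discipline the exploratory-then-confirmatory sequence is meant to enforce.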
The bigger picture here is: what are useful methods for sifting among many variables in the hope of identifying useful relationships? The data-mining folks have amassed a great many such options (CART among them).