You might be interested in this blog post by Cathy O'Neil. Sometimes the "interpretability" of a model is required by governmental regulations, as in the case of credit card application rejections. A decision tree is exactly the kind of model that is easy to interpret.
Hi Oliver, thanks for your reply. This post was informative about interpretability, but there are so few cases of doing this kind of interpretation. I'm still looking for real-life examples that do this... besides my own work, of course.
I found interesting work with good examples that shows why we do need models whose results are interpretable: http://alumni.cs.ucr.edu/~lexiangy/shapelet.html
In the cases where I have needed classification, I usually got the best results with other methods, but had to use decision trees exactly BECAUSE people wanted to be able to see and interpret the decision rules. So in my limited experience the answer would be "every time".
I think that people are interested in the interpretation of decision trees if the trees are simple enough. That is why, for the sake of interpretation, simpler models should be induced at the cost of poorer performance. The split boundaries can reveal interesting patterns; for example, we determined benchmarks for the quality of raw milk in cheese production from the boundaries in a decision tree.
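To make the idea concrete, here is a minimal sketch in Python with scikit-learn: cap the tree depth to force a simple (more interpretable) model, then print the learned split boundaries as rules. The iris dataset stands in for domain data like the milk-quality measurements mentioned above, which I don't have access to.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# max_depth=2 deliberately trades accuracy for a tree small enough
# to read off the decision boundaries by eye.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints each split as a human-readable rule, e.g.
# "petal width <= <threshold>" — these thresholds are the boundaries
# one would inspect for domain benchmarks.
rules = export_text(
    tree,
    feature_names=["sepal length", "sepal width", "petal length", "petal width"],
)
print(rules)
```

The printed thresholds are exactly the kind of boundary values that can be checked against domain knowledge, which is where the interpretive payoff comes from.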
In medical applications it is very important to know how a classification decision is reached. Thus, rule extraction is a key to explanation. Note also that rules can be extracted from neural networks. If you are interested in reading some papers, you can have a look at my publications or search on Google (just type "rule extraction").