I ran across a regression technique that is almost as good as gradient boosting but at the same time highly interpretable (you can read the decision rules straight off the model). The way it works is by training gradient-boosted trees, extracting the most useful decision rules from that forest, and then training a linear regression model on top of those rules. In other words, the result is a weighted combination of rules that is fairly easy to interpret. I can't find the paper now, though...
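In case it helps jog anyone's memory, here is a rough sketch of the pipeline as I remember it, using scikit-learn. This is only my own approximation, not the paper's method: the toy dataset, the leaf-indicator encoding of "rules", and the Lasso penalty are all placeholders I picked for illustration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso

# Toy regression data (placeholder for whatever data you actually have)
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

# Step 1: train a gradient-boosted ensemble of shallow trees
gbm = GradientBoostingRegressor(n_estimators=50, max_depth=3,
                                learning_rate=0.1, random_state=0)
gbm.fit(X, y)

# Step 2: turn every leaf of every tree into a binary "rule" feature:
# rule_j(x) = 1 if x lands in that leaf, else 0.
# (The paper presumably extracts the readable path conditions; leaf
# indicators are just the simplest stand-in for that idea.)
def leaf_indicator_features(gbm, X):
    leaves = gbm.apply(X)           # leaf index per sample, per tree
    if leaves.ndim == 3:            # some estimators return an extra axis
        leaves = leaves[:, :, 0]
    cols = []
    for t in range(leaves.shape[1]):
        for leaf_id in np.unique(leaves[:, t]):
            cols.append((leaves[:, t] == leaf_id).astype(float))
    return np.column_stack(cols)

R = leaf_indicator_features(gbm, X)

# Step 3: sparse linear fit on the rule features; the non-zero
# coefficients pick out the "useful" rules, so the final model is a
# weighted sum of rules.
lasso = Lasso(alpha=0.5, max_iter=10000)
lasso.fit(R, y)

print("rules kept:", int(np.sum(lasso.coef_ != 0)), "out of", R.shape[1])
```

If someone recognizes which paper formalizes this (including how exactly the rules are pruned and weighted), that is what I'm after.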
