Decision trees enhance regression analysis by capturing non-linear relationships between the predictors and the target variable, offering intuitive interpretations of model decisions, and being relatively robust to outliers in the predictors. They handle mixed data types with little preprocessing, perform feature selection implicitly, and can be extended through ensemble methods such as Random Forests and gradient-boosted trees to further improve predictive performance and reduce overfitting. Overall, decision trees offer flexibility, interpretability, and reliability, making them valuable tools for regression analysis, especially when the relationships are complex and the data types are diverse.
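As a minimal sketch (assuming scikit-learn and a synthetic dataset, neither of which comes from the text above), a single regression tree can approximate a non-linear relationship with a piecewise-constant fit:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic non-linear data: y = sin(x) plus noise (illustrative assumption)
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 6, size=200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

# A shallow tree approximates the sine curve with a piecewise-constant fit
tree = DecisionTreeRegressor(max_depth=4, random_state=0)
tree.fit(X, y)
predictions = tree.predict(X)
```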
Decision trees are also the building blocks of ensemble models. You can use a bagging method such as a random forest (many trees are fitted independently on bootstrap samples of the data and their predictions are averaged) or a boosting method (trees are fitted sequentially, each one weighting and correcting the errors made by the previous ones). However, the more complex these models become as you tune their hyperparameters (e.g. tree depth, number of trees, how strongly each tree learns from the previous trees' errors), the harder they are to interpret. Simple regressions are more often used to understand how the predictors affect your target variable, but they tolerate multicollinearity between predictors less well, which usually means reducing the number of predictors. Boosted models will generally make better predictions.
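The sketch below contrasts the two ensemble strategies using scikit-learn's RandomForestRegressor (bagging) and GradientBoostingRegressor (boosting); the dataset, hyperparameter values, and scoring choice are illustrative assumptions, not prescriptions:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic regression data (hypothetical, chosen only to make the example run)
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

# Bagging: each tree is grown independently on a bootstrap sample,
# and the forest averages their predictions.
forest = RandomForestRegressor(n_estimators=200, random_state=0)

# Boosting: trees are added one at a time, each fitted to the
# residual errors left by the trees before it.
boosted = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05,
                                    max_depth=3, random_state=0)

for name, model in [("random forest", forest), ("gradient boosting", boosted)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {scores.mean():.3f}")
```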
The right model depends on your research question and on the bias-variance tradeoff, i.e. the balance between the complexity of your model, how well it fits the data it was trained on, and how well it can predict new, unseen data.
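One way to see the tradeoff in practice is to vary a single complexity parameter, here the maximum tree depth, and watch the cross-validated score; the dataset and depth values below are assumed purely for illustration:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Hypothetical data; the point is the pattern across depths, not the numbers.
X, y = make_regression(n_samples=300, n_features=5, noise=20.0, random_state=0)

# Shallow trees underfit (high bias); very deep trees fit the training data
# closely but can generalize worse (high variance). Cross-validation shows
# roughly where the tradeoff lies for this dataset.
for depth in [1, 2, 4, 8, 16]:
    model = DecisionTreeRegressor(max_depth=depth, random_state=0)
    cv_r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"max_depth={depth}: cross-validated R^2 = {cv_r2:.3f}")
```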