This is a really interesting question. A similar question was posted here: https://www.quora.com/How-do-random-forests-and-boosted-decision-trees-compare
Shamelessly copying the excellent answer from http://stats.stackexchange.com/questions/18891/bagging-boosting-and-stacking-in-machine-learning:
//begin
Bagging:
parallel ensemble: each model is built independently
aim to decrease variance, not bias
suitable for high variance low bias models (complex models)
an example of a tree-based method is the random forest, which develops fully grown trees (note that RF modifies the growing procedure to reduce the correlation between trees)
Boosting:
sequential ensemble: each new model is added to do well where the previous models fall short
aim to decrease bias, not variance
suitable for low variance high bias models (simple models): an example of a tree-based method is gradient boosting
//end
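To make the parallel-vs-sequential distinction concrete, here is a minimal hand-rolled sketch of both schemes for regression, assuming scikit-learn's DecisionTreeRegressor; names like n_models and lr are illustrative, and this is only meant to show the structure, not to be a tuned implementation:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)

n_models = 50

# Bagging: fit each deep tree independently on a bootstrap sample,
# then average the predictions (variance reduction).
bagged = []
for _ in range(n_models):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
    tree = DecisionTreeRegressor()  # fully grown tree: high variance, low bias
    tree.fit(X[idx], y[idx])
    bagged.append(tree)
bag_pred = np.mean([t.predict(X) for t in bagged], axis=0)

# Boosting: fit each shallow tree sequentially on the residuals of the
# current ensemble (bias reduction); lr is the shrinkage / learning rate.
lr = 0.1
boost_pred = np.zeros(len(X))
for _ in range(n_models):
    stump = DecisionTreeRegressor(max_depth=1)  # decision stump: low variance, high bias
    stump.fit(X, y - boost_pred)  # target the remaining error
    boost_pred += lr * stump.predict(X)
```

Note how the bagged trees never see each other, so they could be fit in parallel, while each boosted stump depends on the predictions of everything fit before it.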
I would just add: consider the decision tree family.
if you use fully developed trees, you have a high variance / low bias family of models and you'll want to use bagging
if you use decision stumps (or short trees), you have a low variance / high bias family of models and you'll want to use boosting
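In scikit-learn this pairing shows up directly in the off-the-shelf estimators; a minimal sketch, with illustrative hyperparameters:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Bagging side: random forest over fully developed trees
# (max_depth=None lets each tree grow until the leaves are pure).
rf = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)
rf.fit(X, y)

# Boosting side: AdaBoost, whose default base learner is a depth-1
# tree, i.e. a decision stump.
ada = AdaBoostClassifier(n_estimators=100, random_state=0)
ada.fit(X, y)
```

So the defaults of both libraries' estimators reflect the rule of thumb above: deep trees for bagging, stumps (or short trees) for boosting.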