Bagging and Boosting are both ensemble methods in Machine Learning.
Both build several training data sets by random sampling, but only Boosting assigns weights to the observations so that the most difficult cases carry more influence in later rounds.
Bagging merges predictions from models of the same type that are trained independently and in parallel, with every model's vote counting equally. Boosting also combines models of one type, but trains them sequentially and weights each model's contribution by how well it performs. Bagging decreases variance, not bias, and so helps with over-fitting; Boosting decreases bias, not variance, and can itself over-fit if pushed too far.
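As a rough sketch of this contrast, the two ensembles can be applied to the same kind of base learner with scikit-learn's BaggingClassifier and AdaBoostClassifier. The dataset, tree depths, and other parameters below are illustrative choices, not prescriptions, and the parameter names follow recent scikit-learn releases (where `estimator` replaced `base_estimator`):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Bagging: deep (low-bias, high-variance) trees trained independently on
# bootstrap samples; averaging their votes reduces variance.
bagging = BaggingClassifier(
    estimator=DecisionTreeClassifier(max_depth=None),
    n_estimators=100, random_state=0)

# Boosting: shallow (high-bias, low-variance) stumps trained sequentially,
# each focusing on the examples the previous ones got wrong; this reduces bias.
boosting = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=100, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

Note the asymmetry in the base learners: bagging is usually paired with flexible models whose variance it can average away, while boosting is usually paired with weak models whose bias it can reduce.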
In the context of machine learning (ML), Bagging (bootstrap aggregating) reduces the variance of a prediction by generating additional training sets from the original data through sampling with replacement, producing multiple bootstrap samples of the original data. Boosting, by contrast, is a sequential technique that adjusts the weight of each observation depending on the outcome of the previous prediction/classification: when an observation is misclassified, its weight is increased so that the next model concentrates on it. Boosting is mainly adopted to build stronger predictive models in ML.
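The re-weighting step can be sketched directly. The toy example below, written with NumPy only, fits a simple decision stump under the current observation weights and then applies an AdaBoost-style update; the stump-fitting helper, the synthetic data, and the number of rounds are illustrative assumptions rather than a full boosting implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1))
# Noisy labels in {-1, +1}, so no single stump classifies everything correctly.
y = np.where(X[:, 0] + rng.normal(scale=0.7, size=300) > 0.0, 1, -1)

def fit_stump(X, y, w):
    """Pick the threshold and sign with the lowest *weighted* error."""
    best = (0.0, 1, np.inf)
    for t in np.unique(X[:, 0]):
        for sign in (1, -1):
            pred = np.where(X[:, 0] > t, sign, -sign)
            err = w[pred != y].sum()
            if err < best[2]:
                best = (t, sign, err)
    return best

w = np.full(len(y), 1.0 / len(y))            # start with uniform weights
for round_ in range(5):
    t, sign, err = fit_stump(X, y, w)
    pred = np.where(X[:, 0] > t, sign, -sign)

    # Model weight: accurate stumps (low weighted error) get a larger say.
    alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-10))

    # Misclassified points get heavier, correct ones lighter, then renormalize
    # so the weights remain a probability distribution.
    w *= np.exp(-alpha * y * pred)
    w /= w.sum()
    print(f"round {round_}: weighted error = {err:.3f}, alpha = {alpha:.3f}")
```

Each round the previous stump's mistakes dominate the weighted error, so the next stump is forced to handle the cases that are still being misclassified, which is exactly the bias-reducing behavior described above.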