Dear RG-community,

I am curious how exactly the training process for a random forest model works when using the caret package in R. In trainControl() we have the option to specify a resampling method (e.g. "oob" or "boot"). A random forest is, of course, happy without any additional resampling, since roughly 1/3 of the training data is set aside for each tree to obtain an unbiased estimate of the classification error (the out-of-bag error). What happens if I specify another resampling method instead of "oob" (e.g. "boot")? How is the classification error estimated then? Is the OOB sampling still running in the background? If not, what data is passed down the individual trees (e.g. 100 bootstrap samples = 100 trees)?
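For reference, the two setups I am comparing look like this (a minimal sketch using the built-in iris data as a placeholder for my actual data set; the seed and number of resamples are arbitrary choices):

```r
library(caret)
library(randomForest)

# Placeholder data set standing in for my real training data
data(iris)

# Option 1: rely only on the random forest's out-of-bag error estimate
ctrl_oob <- trainControl(method = "oob")

# Option 2: external bootstrap resampling (25 resamples, caret's default)
ctrl_boot <- trainControl(method = "boot", number = 25)

set.seed(42)
fit_oob  <- train(Species ~ ., data = iris, method = "rf", trControl = ctrl_oob)
fit_boot <- train(Species ~ ., data = iris, method = "rf", trControl = ctrl_boot)

# Both report Accuracy/Kappa per mtry value, but the estimates come
# from different procedures (OOB samples vs. held-out bootstrap samples)
fit_oob$results
fit_boot$results
```

My question is essentially what happens internally in the second case.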

Thanks for your help.

René
