Hello there! I have a random forest model with 92% accuracy, and I have generated some adversarial examples. But how can I actually measure the robustness of this model? Thanks in advance.
Using k-fold cross-validation is a good way to reduce bias. What k-fold does is split the dataset into k folds, train the model on k−1 of them, and test it on the remaining fold, repeating until every fold has served as the test set. This is a good way to check how well the model performs across different train/test splits. The convention is to use 10 folds and average the per-fold results (accuracy, F1 score, precision, loss, etc.), which gives a more reliable estimate than a single split.
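As a minimal sketch of the procedure above with scikit-learn (using a synthetic dataset as a stand-in for your own data; the dataset parameters here are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Synthetic binary-classification data as a placeholder for your dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = RandomForestClassifier(random_state=0)

# Evaluate on 10 folds, scoring each held-out fold on several metrics.
scores = cross_validate(clf, X, y, cv=10,
                        scoring=["accuracy", "f1", "precision"])

# Average the per-fold results, as described above.
print("mean accuracy :", scores["test_accuracy"].mean())
print("mean f1       :", scores["test_f1"].mean())
print("mean precision:", scores["test_precision"].mean())
```

Looking at the spread of the per-fold scores (e.g. `scores["test_accuracy"].std()`) is also useful: a large variance across folds suggests the single 92% figure may be optimistic for some splits.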