I have a dataset for predicting customer dropout (yes/no), with 5 numerical features and 2 categorical features. I applied a scaler to the numerical data and transformed the categorical features into dummy variables, which produced 29 features, so the dataset has a shape of 6552 rows by 34 features. What is the recommended approach to tuning the parameters of XGBClassifier, given that I created the model with the default values, i.e., model = XGBClassifier()? Should I use a brute-force search, looping over values of some parameters until I find an optimal prediction score (see the sketch below)? If so, what is recommended?
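To make the question concrete, here is a rough sketch of the kind of brute-force search I have in mind, using scikit-learn's GridSearchCV. The parameter names are real XGBoost parameters, but the value ranges are just placeholders, and X and y stand for my feature matrix and dropout labels:

    # Sketch of an exhaustive (brute-force) hyperparameter search with
    # cross-validation; the grid values below are illustrative, not tuned.
    from sklearn.model_selection import GridSearchCV
    from xgboost import XGBClassifier

    model = XGBClassifier()

    # Example grid over a few common XGBoost parameters
    param_grid = {
        "n_estimators": [100, 300, 500],
        "max_depth": [3, 5, 7],
        "learning_rate": [0.01, 0.1, 0.3],
    }

    # 5-fold cross-validated search over every combination in the grid
    search = GridSearchCV(model, param_grid, cv=5, scoring="accuracy", n_jobs=-1)
    search.fit(X, y)  # X: (6552, 34) feature matrix, y: dropout labels

    print(search.best_params_)
    print(search.best_score_)

Is this a sensible way to go, or is there a better-suited strategy (e.g., a randomized search) for a dataset of this size?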

Thanks!
