In general, an optimized model should give more accurate results than the local (un-tuned) model, but in my case the traditional regression model shows a higher R2 value than the optimized model. Why might that happen? Thank you in advance for your kind feedback.
I think the model itself is not optimized properly. You should define the hyperparameter search space carefully, let the search algorithm give you the optimized parameters, and then use those optimized parameters in your specific ML model (such as XGBoost).
I agree with Md Nasir Uddin's suggestion. In my experience, grid search cross-validation reliably finds the hyperparameter combination that minimizes the error function (e.g., RMSE or MSE for regression tasks, or maximizes accuracy for classification tasks). I also suggest comparing RMSE and R2 on both the train and test datasets; a well-performing model shows only a small gap between train and test errors (a minimal sketch of this check is below). I don't know the details of the regression task you performed, but another possibility is that the evaluation metric on the training dataset got worse after optimization precisely in order to shrink the gap between train and test. In that case, the optimization you performed actually did a good job.
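To illustrate, here is a minimal sketch using scikit-learn; the dataset, estimator, and parameter grid are illustrative placeholders, not your actual setup:

```python
# Sketch: tune with grid-search CV, then compare RMSE/R2 on train vs. test.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3, 4]},
    scoring="neg_root_mean_squared_error",  # CV picks the lowest RMSE
    cv=5,
)
search.fit(X_train, y_train)
best = search.best_estimator_

for name, X_, y_ in [("train", X_train, y_train), ("test", X_test, y_test)]:
    pred = best.predict(X_)
    rmse = np.sqrt(mean_squared_error(y_, pred))
    print(f"{name}: RMSE={rmse:.2f}  R2={r2_score(y_, pred):.3f}")
# A small train/test gap matters more than the raw training score.
```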
However, I am using a metaheuristic algorithm, such as GWO (Grey Wolf Optimizer), to optimise my model. Even with the best hyperparameters it found, the model showed good results on some datasets but not on others.
Niaz Muhammad Shahani I think I see your difficulty. Indeed, GWO is a good metaheuristic approach, as it can escape local minima in many cases. However, if the best hyperparameters you found give good results on some datasets and not on others, it means those hyperparameters only perform well on a certain range of data patterns. No matter how hard we try to build a general model, that limitation always exists. How about investigating which data patterns performed well and which did not? Then transform the poorly performing data so that its distribution resembles the data that performed well in the first place (one way to do this is sketched below). Sorry though, I can't help much more, as I don't have much experience with metaheuristic algorithms.
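If it helps, here is one hedged way to implement that suggestion: classic quantile mapping with scikit-learn's QuantileTransformer, so the poorly performing data takes on the marginal distribution of the well-performing data. X_good and X_bad are illustrative placeholders, not your actual datasets:

```python
# Sketch: map a poorly performing feature set onto the distribution of the
# well-performing one via quantile mapping. X_good / X_bad are placeholders.
import numpy as np
from sklearn.preprocessing import QuantileTransformer

rng = np.random.default_rng(0)
X_good = rng.normal(size=(1000, 3))                # pattern the model handles well
X_bad = rng.lognormal(sigma=1.0, size=(1000, 3))   # skewed pattern it handles badly

# Send X_bad through its own empirical CDF to uniform ranks, then through the
# inverse CDF learned from the well-behaved data.
to_uniform = QuantileTransformer(output_distribution="uniform").fit(X_bad)
from_uniform = QuantileTransformer(output_distribution="uniform").fit(X_good)

X_bad_matched = from_uniform.inverse_transform(to_uniform.transform(X_bad))
# X_bad_matched now has (approximately) the same marginal distribution as X_good.
```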
Mostly the results vary because of the fitness (objective) function. GWO, PSO, GA, and ABC are all used to optimize, i.e., to find the input values that give the minimum or maximum of an objective. So if your results are inconsistent, you have to reconsider how your fitness function is defined (see the sketch after this post).
That said, if you are optimizing the model itself using GWO, I have no experience with that.
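To make the point about the fitness function concrete, here is a hedged sketch of what such a function typically looks like for hyperparameter tuning: it maps a candidate position vector to a cross-validated RMSE that the metaheuristic (GWO, PSO, GA, ABC) then minimizes. The estimator, bounds, and encoding are illustrative assumptions, not a specific GWO implementation:

```python
# Sketch of a fitness function a metaheuristic would minimize.
# The estimator and hyperparameter encoding are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=8, noise=5.0, random_state=0)

def fitness(position):
    """Map a candidate position vector to cross-validated RMSE (lower is better)."""
    n_estimators = int(round(position[0]))  # e.g. bounded in [50, 500] by the optimizer
    max_depth = int(round(position[1]))     # e.g. bounded in [2, 20]
    model = RandomForestRegressor(
        n_estimators=n_estimators, max_depth=max_depth, random_state=0
    )
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()  # flip sign so the metaheuristic minimizes RMSE

# Any GWO/PSO/GA library would repeatedly call fitness() on candidate positions;
# if this function is mis-specified (wrong sign, wrong metric), results will vary.
print(fitness([200, 6]))
```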
GridSearchCV is a technique for searching for the best parameter values within a given grid of parameters. It is essentially an exhaustive search combined with cross-validation: the model and the parameter grid are fed in, the best parameter values are extracted, and then predictions are made with the refitted model.
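For example (a minimal sketch; the estimator and grid are placeholders):

```python
# Minimal GridSearchCV sketch: feed in the model and the parameter grid,
# extract the best parameters, then predict with the refitted estimator.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=5, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

grid = GridSearchCV(
    SVR(),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]},
    cv=5,
)
grid.fit(X_train, y_train)

print(grid.best_params_)            # best parameter values from the grid
predictions = grid.predict(X_test)  # predictions use the refitted best model
```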
One interesting suggestion is to tune metaheuristic parameters using a meta-metaheuristic approach. This concept is introduced in section 1.6 of the book "Metaheuristics: From Design to Implementation" (see link).
Another option is BONESA, an open-source, user-friendly interface for tuning the numerical parameters of metaheuristics. This package includes a multi-objective parameter tuning algorithm and visualizations of the performance landscape.