If we define the generalization gap in machine learning as the difference in error (or score) between the training and test data, do lasso and ridge regularization necessarily decrease the generalization gap on a given dataset?

That is, I want to compare the generalization gap when the model is fit on the full feature set versus when lasso or ridge regularization is applied.
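One way to run this comparison empirically is sketched below, assuming scikit-learn and a synthetic regression dataset; the dataset shape, `alpha` values, and random seed are illustrative choices, not recommendations, and the gap is measured as test MSE minus train MSE per the definition above.

```python
# Hedged sketch: compare the train/test gap of unregularized OLS
# against lasso and ridge on synthetic data. All hyperparameters
# (n_samples, n_features, noise, alpha) are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Lasso, Ridge
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                       noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          random_state=0)

def generalization_gap(model):
    """Fit on the training split; return test MSE minus train MSE."""
    model.fit(X_tr, y_tr)
    train_mse = mean_squared_error(y_tr, model.predict(X_tr))
    test_mse = mean_squared_error(y_te, model.predict(X_te))
    return test_mse - train_mse

for name, model in [("OLS (full features)", LinearRegression()),
                    ("Lasso", Lasso(alpha=1.0)),
                    ("Ridge", Ridge(alpha=10.0))]:
    print(f"{name}: gap = {generalization_gap(model):.2f}")
```

Note that the answer will generally depend on the regularization strength and the data: a single run like this shows the gap for one configuration, not a guarantee that regularization always shrinks it.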
