Hi,

Please, I need to know how I can use cross-validation to evaluate a wrapper feature selection method.

I already do this by splitting my data set into three different sets:

a training set, a validation set (used to evaluate every candidate feature subset during the search, since it is a wrapper method), and a test set (used for the final evaluation with the selected features).
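For concreteness, here is a minimal sketch of the three-way split described above. The use of scikit-learn, logistic regression, and a greedy forward-selection wrapper are my assumptions for illustration, not part of the question:

```python
# Sketch of the three-way split for wrapper feature selection.
# scikit-learn, logistic regression, and greedy forward selection
# are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=10, random_state=0)

# Split: 60% train, 20% validation, 20% test (all disjoint).
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

def val_score(features):
    """Fit on the training set, score a feature subset on the validation set."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train[:, features], y_train)
    return accuracy_score(y_val, clf.predict(X_val[:, features]))

# Greedy forward selection: repeatedly add the feature that most
# improves validation accuracy; stop when no feature helps.
selected, best = [], 0.0
improved = True
while improved:
    improved = False
    for f in range(X.shape[1]):
        if f in selected:
            continue
        score = val_score(selected + [f])
        if score > best:
            best, best_f, improved = score, f, True
    if improved:
        selected.append(best_f)

# Final evaluation with the selected features on the untouched test set.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train[:, selected], y_train)
test_acc = accuracy_score(y_test, clf.predict(X_test[:, selected]))
print(sorted(selected), round(test_acc, 3))
```

The key point the sketch makes explicit is that the test set is never touched during the subset search; only the validation set guides the wrapper.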

I know that the three sets must be disjoint.

How can I apply cross-validation in this context in order to avoid overfitting and obtain a correct evaluation?

For example, can I use the training set and the validation set to evaluate the feature subsets, and then apply cross-validation on that same training set for the final evaluation (with the final selected features)?
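To make that proposal concrete, a sketch of its second step, cross-validating the final selected features on the training data. scikit-learn, the classifier, and the particular subset are hypothetical placeholders:

```python
# Sketch of the proposed final step: cross-validate the selected
# features on the training data. scikit-learn, the classifier, and
# the subset [0, 3, 5] are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=10, random_state=1)
selected = [0, 3, 5]  # hypothetical subset returned by the wrapper search

# 5-fold cross-validation restricted to the selected features.
scores = cross_val_score(LogisticRegression(max_iter=1000),
                         X[:, selected], y, cv=5)
print(round(scores.mean(), 3), round(scores.std(), 3))
```

Note that if `X, y` here were the same training data already used to choose `selected`, the cross-validated score would be optimistically biased, which is exactly the concern raised in the question.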

I hope my question is clear enough.

Thank you very much.
