Hello everyone,

I have a query. I have come across some papers that randomly divide the data set into train/test splits (70/30 or 90/10) using different random seeds (typically 10-15 seeds). In other words, each seed produces a different randomized selection of train/test data. The results are then averaged over the seeds to get a robust estimate of accuracy. Is this an accepted way of doing cross-validation? (though technically it is not a 10-fold cross-validation :))
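Concretely, here is a minimal sketch of the procedure as I understand it, assuming scikit-learn; the iris data, the logistic regression model, and the number of seeds are just placeholders for illustration:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # placeholder dataset

seeds = range(10)  # e.g. 10 different seed values
scores = []
for seed in seeds:
    # each seed gives a different randomized 70/30 split
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y
    )
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores.append(accuracy_score(y_te, model.predict(X_te)))

# average over the seeds for the final accuracy estimate
print(f"mean accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```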

Thanks!

SR
