I am training an MLP for intrusion detection, without any dropout. I split my data into 60% training, 20% validation, and 20% testing; the dataset contains 2 million rows. Is it possible for the validation error to be lower than the training error for a while, followed by the reverse behaviour? Does this indicate a problem?
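For context, a minimal sketch of the 60/20/20 split described above, using synthetic stand-in data (the array shapes, feature count, and RNG seed are assumptions, not details from the actual dataset):

```python
import numpy as np

# Hypothetical data standing in for the 2-million-row intrusion dataset
rng = np.random.default_rng(0)
n_rows = 10_000
X = rng.normal(size=(n_rows, 20))          # 20 features is an assumption
y = rng.integers(0, 2, size=n_rows)        # binary intrusion label

# Shuffle once, then cut into 60% train / 20% validation / 20% test
perm = rng.permutation(n_rows)
n_train = int(0.6 * n_rows)
n_val = int(0.2 * n_rows)

train_idx = perm[:n_train]
val_idx = perm[n_train:n_train + n_val]
test_idx = perm[n_train + n_val:]

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]
X_test, y_test = X[test_idx], y[test_idx]

print(len(X_train), len(X_val), len(X_test))  # 6000 2000 2000
```

With this setup, comparing the per-epoch loss on `X_train` versus `X_val` is what produces the train/validation curves the question is asking about.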
