While training a deep learning model, I generally look at the training loss, validation loss, and accuracy to check for overfitting and underfitting. Different researchers have different opinions on this: some argue that training loss > validation loss is better, while others say that validation loss > training loss is better. For example, in the attached screenshot, how should I decide whether the model is overfitting or underfitting? Is there a rule of thumb or an intuition that can help when deciding on early stopping, or any research study that discusses this trade-off?
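
For reference, below is a minimal sketch of the patience-based early-stopping rule I have in mind, which watches only the validation loss. It is illustrative rather than from any particular library: the loss values are simulated stand-ins for what a real training loop would produce each epoch, and the variable names are my own.

import random

patience = 5                 # epochs to wait for a new best validation loss
best_val_loss = float("inf")
epochs_without_improvement = 0

for epoch in range(100):
    # Stand-in for one real training epoch: the validation loss falls at first,
    # then plateaus with noise (roughly where overfitting would typically begin).
    val_loss = max(0.3, 1.0 - 0.05 * epoch) + random.uniform(0.0, 0.02)

    if val_loss < best_val_loss:
        best_val_loss = val_loss
        epochs_without_improvement = 0   # a real loop would also checkpoint the weights here
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print(f"Early stop at epoch {epoch}; best validation loss {best_val_loss:.3f}")
            break

My question is essentially how to choose the stopping criterion (and the relationship between training and validation loss at that point) in a principled way.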
