I have a question about underfitting and overfitting in ML.
When the training accuracy is higher than the test accuracy, this is called overfitting, and in that case we cannot trust the model for classification or clustering. But when the test accuracy is higher than the training accuracy, what is this case called scientifically, and is it considered a good sign for the model or not?
In addition to that, how can we detect underfitting from knowing only the training and test accuracy?
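To make the question concrete, here is a minimal sketch of the kind of comparison I mean. It assumes scikit-learn, its built-in iris dataset, and a decision tree classifier, all chosen only for illustration; the gap/accuracy thresholds are informal heuristics, not standard definitions:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy setup for illustration: iris data split into train and test sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
train_acc = model.score(X_train, y_train)  # accuracy on data the model saw
test_acc = model.score(X_test, y_test)     # accuracy on held-out data

# Rough, informal heuristics based only on the two accuracies:
if train_acc - test_acc > 0.1:
    print("large gap: possible overfitting")
elif train_acc < 0.7 and test_acc < 0.7:
    print("both low: possible underfitting")
else:
    print("no obvious problem from accuracies alone")
```

My question is essentially whether heuristics like these are the right way to think about it, and what the name is for the case where `test_acc` comes out higher than `train_acc`.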
Thanks in advance.