If a model is overfitted, meaning there is a sizeable gap between the training curve and the testing/validation curve, but it still achieves good precision and recall scores, does that indicate the model is decent?
Overfitting refers to a model that fits the training data too closely: when it is given new data, its predictions are poor. If you see low error on the training set and high error on the test and validation sets, you have likely over-fitted the model. If both are low, test your model in the wild, on genuinely unseen data.
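As a minimal sketch of that check (assuming a small Keras binary classifier on toy placeholder data, not your actual model), you can track the gap between the training and validation curves during training:

```python
import numpy as np
from tensorflow import keras

# Toy data standing in for the real dataset (assumption: binary classification).
X = np.random.rand(1000, 20).astype("float32")
y = (X[:, 0] + 0.1 * np.random.randn(1000) > 0.5).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hold out 20% of the data so training and validation curves can be compared.
history = model.fit(X, y, validation_split=0.2, epochs=20, verbose=0)

# A widening gap between these two numbers is the usual sign of over-fitting.
train_acc = history.history["accuracy"][-1]
val_acc = history.history["val_accuracy"][-1]
print(f"train accuracy: {train_acc:.3f}, validation accuracy: {val_acc:.3f}")
```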
What is clear is that the usefulness of a trained network is judged by its prediction capability on new data, which matters more than its performance during the training phase. Hence, the results of an over-fitted network cannot be relied on: the training samples are fitted almost exactly, while the test samples reveal poor estimation capability.
Moreover, you may be asked what you did to prevent over-fitting. When a network is susceptible to over-fitting, a third group of data, called "validation data", can be set aside and used to decide when to stop training; this is easy to set up in ANN and ANFIS frameworks, as sketched below.
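For example, a minimal sketch of this idea in Keras (assuming the same kind of toy binary classifier as above; the patience value is an arbitrary choice) uses a validation split plus early stopping, so training halts once the validation loss stops improving:

```python
import numpy as np
from tensorflow import keras

# Placeholder data (assumption); replace with your own training set.
X = np.random.rand(1000, 20).astype("float32")
y = (X[:, 0] > 0.5).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stop training when the validation loss has not improved for 3 epochs
# and roll back to the best weights seen so far.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True
)

model.fit(X, y, validation_split=0.2, epochs=100,
          callbacks=[early_stop], verbose=0)
```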
How would you assess this model: https://www.kaggle.com/kmader/tensorflow-data-keras-for-tuberculosis
Validation accuracy is around 87% and training accuracy is around 97%, which is a sizeable gap between the training and validation curves (indicating overfitting), but the classification report shows high precision, recall, and F1 scores. Do you think this model will generalize well?
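For context, the report in question is produced with something like the following; this is a toy sketch with a stand-in classifier, not the actual Kaggle notebook code, and the scores only say something about generalization if they are computed on the held-out split rather than the training data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Toy data and a simple classifier standing in for the Kaggle CNN (assumptions).
X = np.random.rand(1000, 20)
y = (X[:, 0] > 0.5).astype(int)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)

# Report precision, recall and F1 on the validation split, not the training split.
print(classification_report(y_val, clf.predict(X_val)))
```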
The model couldn't be considered a good one. High accuracy and precision on your own data doesn't mean the model will work well on external data (outside of your dataset). Use grid search to tune the hyperparameters and apply k-fold cross-validation to reduce overfitting, as sketched below.
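A minimal sketch of that combination (assuming a scikit-learn classifier as a stand-in for the Keras model; the parameter grid and fold count are arbitrary example choices):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Toy binary-classification data standing in for the real dataset (assumption).
X = np.random.rand(500, 20)
y = (X[:, 0] > 0.5).astype(int)

# Candidate hyperparameters to search over (arbitrary example values).
param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [3, 5, None],
}

# 5-fold cross-validation: every candidate is scored on held-out folds,
# so the chosen hyperparameters are less likely to over-fit one split.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="f1")
search.fit(X, y)

print("best parameters:", search.best_params_)
print("best cross-validated F1:", round(search.best_score_, 3))
```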