Hey Colleagues,

I hope you are healthy and safe during this quarantine.

For any machine learning model, we evaluate performance based on several criteria, and the loss is among them. We all know that an ML model:

1- Underfits, when the training loss itself remains high, i.e., the model fails to fit even the training data well.

2- Overfits, when the training loss is much smaller than the testing loss.

3- Performs very well when the training loss and the testing loss are both low and very close to each other (a rough sketch of these checks follows this list).
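For concreteness, here is a minimal sketch in plain Python of how I am reading these three cases. The thresholds high_loss and gap_tol are arbitrary values I picked purely for illustration, not established cut-offs; sensible values depend on the task and the scale of the loss:

```python
def diagnose_fit(train_loss, test_loss, high_loss=1.0, gap_tol=0.15):
    """Rough heuristic for reading final losses.

    high_loss and gap_tol are placeholder thresholds for illustration;
    what counts as "high" or "close" depends on the task, the loss
    function, and its scale (my initial loss here was about 2.5).
    """
    if train_loss >= high_loss:
        return "underfitting: the model cannot fit even the training data"
    if test_loss - train_loss > gap_tol:
        return "overfitting: testing loss is well above training loss"
    return "good fit: both losses are low and close together"

# My current numbers: gap = 0.65 - 0.55 = 0.10
print(diagnose_fit(0.55, 0.65))
```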

My question concerns the third point. I am running a DL model (a 1D CNN) and have the following results (note that my initial loss was 2.5):

- Training loss = 0.55

- Testing loss = 0.65

Nevertheless, I am not quite sure whether these results are acceptable, since the training loss is still somewhat high (0.55). I tried to lower the training loss by giving the model more capacity (increasing the number of CNN and MLP layers); however, this is a very tricky process: whenever I increase the complexity of the architecture, the testing loss increases and the model easily overfits.
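For reference, this is roughly the kind of change I mean, sketched with the Keras API. The layer counts, filter sizes, dropout rate, and loss function below are placeholders rather than my exact architecture; dropout and early stopping are standard counter-measures against the overfitting I am describing, not necessarily what my current model uses:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(n_conv=3, n_dense=2, input_len=128, n_channels=1, n_classes=10):
    """1D CNN whose depth can be varied; all sizes are illustrative."""
    model = models.Sequential()
    model.add(layers.Input(shape=(input_len, n_channels)))
    for _ in range(n_conv):  # more conv layers = more capacity
        model.add(layers.Conv1D(64, kernel_size=3, activation="relu"))
        model.add(layers.MaxPooling1D(pool_size=2))
    model.add(layers.Flatten())
    for _ in range(n_dense):  # the MLP head
        model.add(layers.Dense(128, activation="relu"))
        model.add(layers.Dropout(0.5))  # regularization against overfitting
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Early stopping: stop when the validation loss stops improving,
# instead of hand-tuning the depth until the testing loss creeps up.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)
```

The intent is to let the extra layers add capacity while dropout and early stopping keep the testing loss from drifting away from the training loss.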

Finally, to say that a model performs very well, do we need a low training loss (say, less than 0.1), or is my case still considered good?

I look forward to hearing from you,

Thanks and regards,
