I am using data augmentation and decaying the learning rate after each epoch via a callback. If I disable data augmentation but keep the callback, training accuracy reaches 99.65%, though validation accuracy does not follow. Conversely, if I remove the learning-rate-decay callback but keep data augmentation, training accuracy again improves to about 99%, but validation accuracy still lags. Why does training get stuck with the current configuration (learning rate decay + data augmentation)?
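For reference, here is a minimal, runnable sketch of the kind of setup I mean. I am assuming Keras/TensorFlow; the toy CNN, dummy data, decay factor, and augmentation ranges are placeholders, not my exact values.

# Sketch of the configuration: per-epoch LR decay callback + data augmentation.
# Assumptions: Keras/TensorFlow, toy CNN, placeholder hyperparameters.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import LearningRateScheduler
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Dummy data standing in for the real dataset.
x_train = np.random.rand(256, 32, 32, 3).astype("float32")
y_train = np.random.randint(0, 10, size=(256,))
x_val = np.random.rand(64, 32, 32, 3).astype("float32")
y_val = np.random.randint(0, 10, size=(64,))

# Toy CNN; the real model is larger.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Learning rate decay applied after each epoch (factor 0.9 is a placeholder).
def decay_schedule(epoch, lr):
    return lr * 0.9

lr_callback = LearningRateScheduler(decay_schedule)

# Data augmentation (ranges are illustrative only).
datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)

model.fit(datagen.flow(x_train, y_train, batch_size=32),
          validation_data=(x_val, y_val),
          epochs=5,
          callbacks=[lr_callback])

One thing worth noting about this combination: augmentation makes each epoch effectively a harder, noisier task, so the model needs more epochs to fit it, while the per-epoch decay keeps shrinking the learning rate at the same time. If the decay is aggressive, the learning rate may become too small before the model has fit the augmented data, which would match the stalling behavior described.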

What could be the reason for this problem when data augmentation is enabled?
