Overfitting occurs when a model learns the training set too well, picking up random fluctuations in the training data as if they were real concepts. These noise-driven patterns do not carry over to new data and hurt the model's ability to generalize.
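To make that concrete, here is a toy sketch (my own illustration, not from the thread): fitting a high-degree polynomial to a few noisy samples of a sine curve typically gives near-zero training error but a much larger test error. The degree, sample sizes, and noise level are arbitrary choices.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 20)[:, None]
y_train = np.sin(2 * np.pi * x_train).ravel() + rng.normal(0, 0.2, 20)
x_test = rng.uniform(0, 1, 200)[:, None]
y_test = np.sin(2 * np.pi * x_test).ravel() + rng.normal(0, 0.2, 200)

# A degree-15 polynomial has enough capacity to memorize the 20 noisy points.
model = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
model.fit(x_train, y_train)

print("train MSE:", mean_squared_error(y_train, model.predict(x_train)))
print("test MSE: ", mean_squared_error(y_test, model.predict(x_test)))
# Typically the training MSE is tiny while the test MSE is much larger:
# the model has learned the random fluctuations, not the underlying curve.
```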
I know that adding regularization, such as LASSO, or related sparse methods like SISSO, can work depending on the problem, but many other approaches can be found in the literature.
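For example, a minimal LASSO (L1) sketch with scikit-learn; the synthetic data and the alpha value are placeholders you would tune for your own problem, e.g. by cross-validation:

```python
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic data with many irrelevant features, a setting where L1 tends to help.
X, y = make_regression(n_samples=100, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)
lasso = Lasso(alpha=1.0).fit(X_tr, y_tr)  # alpha is a hypothetical value

print("OLS test R^2:  ", r2_score(y_te, ols.predict(X_te)))
print("Lasso test R^2:", r2_score(y_te, lasso.predict(X_te)))
# The L1 penalty drives most coefficients to exactly zero, pruning the
# irrelevant features that an unregularized fit would use to chase noise.
print("nonzero Lasso coefficients:", (lasso.coef_ != 0).sum())
```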
Having a solid validation set that represents your data well helps you see when overfitting starts, and knowing when to stop training will produce a model that generalizes well. Other things you can try are reducing the complexity of the model, introducing regularization, adding dropout, doing cross-validation, and, definitely, data augmentation can help!
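As a sketch of the "knowing when to stop" part, assuming a Keras/TensorFlow setup; the toy data, layer sizes, and patience value are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.random((1000, 20)).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")  # toy binary labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Stop once validation loss stops improving and roll back to the best weights:
# the held-out split is what tells you when memorization begins.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

model.fit(X, y, validation_split=0.2, epochs=200,
          callbacks=[early_stop], verbose=0)
```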
Use a representative training and validation set, and a well-tuned dropout rate: https://www.datascience.us/neural-net-dropout-dealing-overfitting/
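One simple way to parameterize dropout is to sweep the rate and compare validation loss; a rough sketch, again with toy data and arbitrary candidate rates (not values recommended by the linked article):

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
X = rng.random((1000, 20)).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

def build(rate):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(rate),  # randomly zeroes activations during training
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

for rate in (0.2, 0.5, 0.8):
    hist = build(rate).fit(X, y, validation_split=0.2, epochs=30, verbose=0)
    print(f"dropout={rate}: best val_loss={min(hist.history['val_loss']):.3f}")
```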
On the data side, data augmentation is good for avoiding overfitting. On the loss side, you can add a regularization loss such as L1, L2, or any prior loss from the practical problem. On the training side, early stopping is good. On the model side, small models and model ensembles also work well, I think.
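A compact Keras sketch combining the data, loss, and model aspects mentioned above: augmentation layers, an L2 weight penalty added to the loss, and a deliberately small model. The input shape and the regularization strength are hypothetical values.

```python
import tensorflow as tf

# Data side: augmentation layers that randomly perturb each training image,
# so the model rarely sees the exact same input twice (active only in training).
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

# Model side: a deliberately tiny classifier. Loss side: kernel_regularizer
# adds lambda * ||W||^2 to the training loss (L2 regularization).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),  # hypothetical image shape
    augment,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Training several such small models with different random seeds and averaging their predictions would give the model-ensemble variant of the same idea.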