You can divide your training set into ten folds, train on nine folds and evaluate on the held-out fold, and compute the MSE for each fold. Then either take the average of the ten MSEs or take the model from the fold with the lowest MSE.
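A minimal sketch of that 10-fold procedure, assuming a hypothetical `fit_predict` callback that trains on one split and predicts on another (here a simple least-squares fit, just for illustration):

```python
import numpy as np

def kfold_mse(X, y, fit_predict, k=10, seed=0):
    """Split the data into k folds; for each fold, train on the other
    k-1 folds and compute the MSE on the held-out fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    mses = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        preds = fit_predict(X[train], y[train], X[test])
        mses.append(float(np.mean((y[test] - preds) ** 2)))
    return np.mean(mses), mses

# Hypothetical model: ordinary least squares with an intercept term
def linfit(X_tr, y_tr, X_te):
    w, *_ = np.linalg.lstsq(np.c_[X_tr, np.ones(len(X_tr))], y_tr, rcond=None)
    return np.c_[X_te, np.ones(len(X_te))] @ w
```

The average of `mses` is the usual cross-validation estimate; the per-fold list lets you inspect the spread.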
If I understand your question correctly, you are looking for ways to define an early-stopping criterion to avoid overfitting? The most common strategies are based on detecting when the validation error stops decreasing. You can try googling "early stopping criteria".
Well, that's the general problem in ML. There are different scenarios in which you might not be satisfied with performance on the validation set. If the error on the validation set is much higher than the error on the training set, then you might still be overfitting the training data and may need to regularize your model. If the validation error is simply not satisfactory overall, then several options are available, including increasing the amount of data, better tuning of the training hyper-parameters, increasing the complexity of the model (number of free parameters), or trying a different learning model. These are only very general suggestions; much will depend on your actual application.
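That train/validation comparison can be captured in a crude rule of thumb. A sketch, where the `gap_ratio` threshold is an arbitrary assumption, not a standard value:

```python
def diagnose(train_err, val_err, gap_ratio=1.5):
    """Crude diagnostic: a validation error much higher than the training
    error suggests overfitting; otherwise the model may need more data,
    hyper-parameter tuning, or more capacity."""
    if val_err > gap_ratio * train_err:
        return "overfitting: regularize, or get more data"
    return "underfitting or noise: more data, tuning, or a richer model"
```

This is only a heuristic starting point; which remedy actually helps depends on the application.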