There is unlikely to be a single acceptable value for any of these criteria: RMSE, SSE, or R-squared. They are better interpreted and applied comparatively rather than absolutely.
I don't think there is a universally acceptable value for Root Mean Square Error (RMSE) or Sum of Squares due to Error (SSE). For Adjusted R-squared it depends on what software was used to obtain the value; if it is MINITAB, the Adjusted R-squared should be above 65% for the data to be usable for any statistical evaluations.
There is no fixed threshold for RMSE or R-squared. It is always better to have RMSE as low as possible. If the RMSE on the training data is lower than the RMSE on the test data, we have overfit the model, and underfit if the opposite happens.
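To illustrate that train/test comparison, here is a minimal sketch (the synthetic data, the linear model, and the 70/30 split are hypothetical choices made only for this example):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical data: y depends linearly on x plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X[:, 0] + rng.normal(0.0, 1.0, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LinearRegression().fit(X_train, y_train)

# RMSE on each split; a train RMSE much lower than the test RMSE points to overfitting
rmse_train = np.sqrt(mean_squared_error(y_train, model.predict(X_train)))
rmse_test = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"train RMSE: {rmse_train:.3f}, test RMSE: {rmse_test:.3f}")
```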
There is no fixed value for SSE either, but we always look for the minimum, since it reflects how well the model has learned: the smaller the SSE, the higher the learning quality.
In terms of RMSE, the lower the better. It is like setting an R2 of 0.999 for calibration: RMSE values of 0.1 or below are very satisfactory. For adjusted R-squared it depends on the criteria set for the model or the test; usually a value of 0.6 or above is fine, while values of 0.8 and above surely depict a very good model, especially when very near to 1.0.
In general, you should look at the adjusted R-squared rather than R-squared. Adjusted R-squared gives a less biased estimate of the fraction of variance explained, since it takes the sample size and the number of variables into account.
However, there are some pitfalls with the Adjusted R-squared as well. Therefore, you should think about the issue more broadly, considering the consequences of the decisions made on the basis of the model.
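For concreteness, here is a minimal sketch of the usual adjusted R-squared formula (the function name adjusted_r2 and its arguments are my own choices for illustration, not from any particular package):

```python
import numpy as np

def adjusted_r2(y_true, y_pred, n_predictors):
    """Adjusted R-squared: R2 penalised for the number of predictors given the sample size."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = len(y_true)
    ss_res = np.sum((y_true - y_pred) ** 2)           # SSE (residual sum of squares)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)
```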
As a rule of thumb, it can be said that RMSE values between 0.2 and 0.5 show that the model can predict the data relatively accurately.
In addition, an Adjusted R-squared above 0.75 is a very good value for indicating accuracy, and in some cases an Adjusted R-squared of 0.4 or more is acceptable as well.
RMSE has the same units as the predicted values, so it has to be interpreted relative to the scale of those values. How can we know whether it is good or not? We can estimate the Scatter Index, which is simply the RMSE divided by the average of the observed values: SI = (RMSE / average observed value) * 100%. This makes it much easier to judge whether the model is good or not; an SI below 10% indicates a good model.
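As a quick sketch of that calculation (the function name scatter_index is just for illustration):

```python
import numpy as np

def scatter_index(observed, predicted):
    """Scatter Index: RMSE expressed as a percentage of the mean observed value."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return 100.0 * rmse / observed.mean()

# Example: by the rule of thumb above, scatter_index(y_observed, y_predicted) < 10
# would be read as indicating a good model.
```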