I am using HYDRUS 2D to build a model that simulates moisture conditions in a soil profile under varying water table depths. Which statistical method is appropriate for validating my model?
For a perfect model the RMSE is zero, but what counts as an acceptable RMSE is data-dependent and debatable. RMSE is recommended for comparing the overall accuracy of results across several models.
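As a minimal sketch (assuming your observed and simulated moisture values are available as equal-length arrays, which is my assumption, not something HYDRUS exports in this form), RMSE can be computed like this:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error between observed and simulated values."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((sim - obs) ** 2))
```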
If you want to compare a single component of the results, use the relative error in percent (e.g., the peak values of the observed and simulated hydrographs).
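For example, a sketch of the percent relative error of the simulated peak against the observed peak (again assuming paired observed/simulated series):

```python
def peak_relative_error(obs, sim):
    """Percent relative error of the simulated peak versus the observed peak."""
    return 100.0 * (max(sim) - max(obs)) / max(obs)
```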
If your target is a comparison of continuous observed/simulated series, the Nash-Sutcliffe efficiency criterion and its modified versions could be more appropriate.
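A sketch of the standard Nash-Sutcliffe efficiency under the same assumption of paired arrays (NSE = 1 for a perfect fit; NSE at or below 0 means the model predicts no better than the mean of the observations):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of simulated values against observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```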
Having a look at the following papers may give you some insight into the range of acceptable values of these efficiency measures.
A good measure to assess the performance of a model against the measured data is the Nash–Sutcliffe efficiency coefficient.
For this, and several other measures of performance, you can refer to:
Moriasi DN, Arnold JG, Van Liew MW, et al. (2007) Model Evaluation Guidelines for Systematic Quantification of Accuracy in Watershed Simulations. Trans ASABE 50:885–900. doi: 10.13031/2013.23153
I am assuming that you are validating your Hydrus model against some known (measured) soil moisture? The key to successfully matching the known and modelled moisture will be the hydraulic conductivity function and its parameters. Hydrus has inverse modelling functionality that will allow you to estimate these from the data you have; it uses a least-squares objective when fitting the two datasets.
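Hydrus does this fitting internally, but to illustrate the idea of a least-squares fit of soil hydraulic parameters outside Hydrus, here is a hedged sketch using the van Genuchten retention function. The measured (h, theta) pairs, the parameter bounds, and the use of scipy are my own illustrative choices, not Hydrus's method or API:

```python
import numpy as np
from scipy.optimize import least_squares

def vg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten retention curve: water content as a function of pressure head h (h < 0)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + np.abs(alpha * h) ** n) ** m

# Hypothetical measured retention data (pressure head in cm, water content in cm3/cm3).
h_obs = np.array([-10.0, -50.0, -100.0, -300.0, -1000.0, -5000.0])
theta_obs = np.array([0.40, 0.36, 0.31, 0.24, 0.17, 0.12])

def residuals(p):
    """Residuals between modelled and measured water contents for parameter vector p."""
    theta_r, theta_s, alpha, n = p
    return vg_theta(h_obs, theta_r, theta_s, alpha, n) - theta_obs

# Initial guess and loose physical bounds for (theta_r, theta_s, alpha, n).
fit = least_squares(residuals, x0=[0.05, 0.43, 0.02, 1.5],
                    bounds=([0.0, 0.3, 1e-4, 1.1], [0.2, 0.5, 1.0, 4.0]))
print("Fitted (theta_r, theta_s, alpha, n):", fit.x)
```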