I need to evaluate numerical model performance using statistical parameters. Which is better for assessing the calibration and validation performance of a river water salinity model, RMSE or NRMSE, and why? References would be appreciated.
There are two different methods mentioned in the doc file you have attached, and I am a bit confused about the significance of each of the two methods.
Would you please kindly explain which is the more reliable method for calculating NRMSE, and why? And what is the physical meaning of using either the mean of the observed data or the maximum possible range of the observed data as the denominator of the NRMSE equation?
NRMSE is the better indicator for assessing model performance because normalizing the RMSE makes it scale-free, for instance by expressing it as a percentage. As a result, model performance becomes comparable across simulated parameters with different units.
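On the denominator question: dividing RMSE by the mean of the observations gives a relative error analogous to a coefficient of variation, while dividing by the observed range (max minus min) expresses the error relative to the spread of the data, which keeps it bounded by the measured variability but makes it more sensitive to outliers. Below is a minimal sketch of both variants, assuming NumPy and hypothetical salinity values in ppt:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error between observed and simulated values."""
    obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
    return np.sqrt(np.mean((sim - obs) ** 2))

def nrmse_mean(obs, sim):
    """NRMSE normalized by the observation mean (a CV-like relative error)."""
    return rmse(obs, sim) / np.mean(obs)

def nrmse_range(obs, sim):
    """NRMSE normalized by the observed range; more sensitive to outliers."""
    obs = np.asarray(obs, dtype=float)
    return rmse(obs, sim) / (np.max(obs) - np.min(obs))

# Hypothetical salinity observations (ppt) and model output, for illustration only
obs = [12.1, 14.3, 13.8, 15.0, 11.7]
sim = [11.8, 14.9, 13.2, 15.6, 12.4]
print(f"RMSE         = {rmse(obs, sim):.3f} ppt")
print(f"NRMSE (mean) = {100 * nrmse_mean(obs, sim):.1f} %")
print(f"NRMSE (range)= {100 * nrmse_range(obs, sim):.1f} %")
```

Whichever variant you choose, state it explicitly when reporting results, since the two can give quite different percentages for the same dataset.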
From a NOAA report: "This parameter, called 'index of agreement' by Willmott, is a relative average error and bounded measure. Perfect agreement between model results and observations will yield a skill of one and complete disagreement yields a skill of zero." [https://tidesandcurrents.noaa.gov/ofs/publications/TBOFS_TechReport.pdf]
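For completeness, here is a minimal sketch of Willmott's index of agreement as it is commonly defined (Willmott, 1981); the function below is an illustration, not code taken from the NOAA report:

```python
import numpy as np

def willmott_skill(obs, sim):
    """Willmott index of agreement: 1 = perfect agreement, 0 = complete disagreement."""
    obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
    obs_mean = obs.mean()
    # d = 1 - sum((P - O)^2) / sum((|P - Obar| + |O - Obar|)^2)
    num = np.sum((sim - obs) ** 2)
    den = np.sum((np.abs(sim - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return 1.0 - num / den
```

Because it is bounded on [0, 1], this skill score is convenient to report alongside RMSE/NRMSE when comparing calibration and validation runs.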