I have a lumped hydrological rainfall-runoff simulation model. I was able to calibrate and validate it with a Nash-Sutcliffe model efficiency (NSE) of 0.75, which is considered very good model performance (other objective functions, such as RMSE, were also used for post-calibration evaluation). The calibrated model is then used to simulate runoff from a river catchment using projected input data (temperature and precipitation). Such datasets are subject to the deep uncertainty of a "predicted future". But how can one evaluate the uncertainty introduced by the model itself, which is not calibrated to give perfectly precise results? How can one distinguish between the uncertainty of the projected future datasets and the uncertainty that stems from the calibrated model? Suggestions for good, relevant publications are appreciated. Thanks.
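To make the question concrete, below is a minimal sketch of one approach that is often used for this kind of separation: run a full factorial of climate-forcing scenarios against a GLUE-style behavioural parameter ensemble, then apply an ANOVA-style variance decomposition of the output to attribute spread to inputs vs. model parameters. Everything here is illustrative, not my actual setup: `toy_model`, its parameters `k` and `tmelt`, and the synthetic forcings are all hypothetical placeholders standing in for a real calibrated model and real climate projections.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(precip, temp, k, tmelt):
    """Hypothetical lumped rainfall-runoff model (placeholder only):
    a single linear reservoir with a crude temperature-driven melt term.
    Stands in for the actual calibrated model."""
    storage, runoff = 0.0, []
    for p, t in zip(precip, temp):
        melt = max(0.0, t - tmelt) * 0.1   # toy snowmelt contribution
        storage += p + melt
        q = k * storage                    # linear reservoir outflow
        storage -= q
        runoff.append(q)
    return np.array(runoff)

# 1. Behavioural parameter sets (GLUE-style): draws that survived
#    calibration, representing model/parameter uncertainty.
n_params = 50
params = [(rng.uniform(0.2, 0.4), rng.uniform(-1.0, 1.0))
          for _ in range(n_params)]

# 2. Climate scenarios: an ensemble of projected forcings,
#    representing input ("predicted future") uncertainty.
n_days, n_scen = 365, 8
scenarios = [
    (rng.gamma(2.0, 2.0, n_days),                               # precip
     10 + 8 * np.sin(np.linspace(0, 2 * np.pi, n_days))
        + rng.normal(0, s, n_days))                             # temp
    for s in np.linspace(0.5, 3.0, n_scen)
]

# 3. Full factorial: every scenario run with every parameter set.
mean_q = np.empty((n_scen, n_params))
for i, (p, t) in enumerate(scenarios):
    for j, (k, tmelt) in enumerate(params):
        mean_q[i, j] = toy_model(p, t, k, tmelt).mean()

# 4. ANOVA-style decomposition of mean runoff over the balanced grid:
#    total variance = scenario (input) + parameter (model) + interaction.
var_scenario = mean_q.mean(axis=1).var()   # spread across climate inputs
var_params   = mean_q.mean(axis=0).var()   # spread across parameter sets
var_total    = mean_q.var()
var_interact = var_total - var_scenario - var_params

print(f"input (scenario) share:  {var_scenario / var_total:.1%}")
print(f"model (parameter) share: {var_params / var_total:.1%}")
print(f"interaction/residual:    {var_interact / var_total:.1%}")
```

The behavioural-ensemble idea follows the GLUE framework (Beven & Binley), and the variance partitioning mirrors ANOVA-based uncertainty decompositions used in hydrological climate-impact studies; with a balanced scenario-by-parameter design, the two main-effect variances and the interaction term sum exactly to the total variance, which is what makes the attribution well defined.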
