Model applications require calibration and validation of models, which is time consuming. Can we omit the model validation step? Under which conditions? In which instances can calibration alone be used?
I would guess that if only your calibration period presents good efficiency measures, you can only explain what is happening during that period. So you could not use this model for setting up future scenarios.
Without a validation set, you are not able to test your model. As a result, you cannot confirm whether your model only applies to the conditions present in the calibration period. This means there is considerable uncertainty when using the model for future scenarios (as mentioned by Gloria), but also in looking at what the impacts of different management options would be. Basically, all you would be able to do is confirm whether the model could be calibrated to the data, and how well the model would work under calibration (also mentioned by Gloria).
It comes down to a question of what you are using the model for. Applying the model to a validation period is not really that time consuming, as this is a straightforward run of the model. Most of the time goes into calibrating the model, as this requires a considerable number of model runs, as well as the time needed for checking the results and thinking about whether you are using the correct objective function, etc. The main issue is how much data is needed to provide a reasonable calibration of the model, and how much data you have.
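To make that cost asymmetry concrete, here is a minimal sketch, assuming a toy one-parameter linear-reservoir model and a Nash-Sutcliffe objective of my own choosing (not anything Barry specified): calibration loops over many model runs, while validation is a single run of the calibrated model on the held-out period.

```python
import numpy as np

def linear_reservoir(rain, k):
    """Toy one-parameter model: storage drains at a rate k per time step (illustrative only)."""
    storage, flow = 0.0, np.empty_like(rain)
    for t, r in enumerate(rain):
        storage += r
        flow[t] = k * storage
        storage -= flow[t]
    return flow

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 5.0, size=730)                         # two years of synthetic rainfall
obs = linear_reservoir(rain, 0.3) + rng.normal(0, 0.5, 730)  # synthetic "observations"
cal, val = slice(0, 365), slice(365, 730)                    # split-sample periods

# Calibration: many model runs searching the parameter space
candidates = np.linspace(0.01, 0.99, 200)
scores = [nse(obs[cal], linear_reservoir(rain[cal], k)) for k in candidates]
k_best = candidates[int(np.argmax(scores))]

# Validation: a single extra run with the calibrated parameter
print("calibration NSE:", max(scores))
print("validation NSE :", nse(obs[val], linear_reservoir(rain[val], k_best)))
```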
Without validation, calibration has no significance. The relationship calibrated in one period may change in another period; that is why testing is necessary, to check whether the relationship or parameters obtained in the calibration period are robust by validating them against another set of data.
Model validation is in reality an extension of the calibration process. Its purpose is to assure that the calibrated model properly assesses all the variables and conditions which can affect model results, and to demonstrate the model's ability to predict field observations for periods/conditions separate from the calibration effort.
The International Atomic Energy Agency (1982) defines a validated model as one which provides "a good representation of the actual processes occurring in a real system."
Vogel and Sankarasubramanian (2003) in their paper entitled "Validation of a watershed model without calibration" stated that model hypothesis testing (validation) should be performed prior to, and independent of, parameter estimation (calibration), contrary to traditional practice in which watershed models are usually validated after calibrating the model.
Thus without validation, calibration is worthless, and so is uncertainty estimation.
Klemes (1986), "Operational testing of hydrological simulation models", Hydrol. Sci. J., 31, 13-24, introduced a hierarchical scheme for the validation of hydrologic models which tests a model's ability to make predictions outside the calibration period (split-sample), on different basins (proxy-basin), and under different climate regimes (differential split-sample).
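For illustration, a small sketch of the data partitioning behind the differential split-sample test, with invented years and rainfall totals; the ordinary split-sample and proxy-basin variants only change how the two sets are chosen.

```python
import numpy as np

# Invented record: decide which years go into calibration and validation under a
# differential split-sample test (calibrate on one climate regime, validate on the other).
rng = np.random.default_rng(1)
years = np.arange(1990, 2010)
annual_rain = rng.normal(800, 200, size=years.size)   # synthetic annual rainfall totals (mm)

median_rain = np.median(annual_rain)
wet_years = years[annual_rain >= median_rain]          # calibration regime
dry_years = years[annual_rain < median_rain]           # validation regime

print("calibrate on wet years:", wet_years)
print("validate on dry years :", dry_years)
# A plain split-sample test would split by time instead (e.g. first half vs second half),
# and a proxy-basin test would calibrate on one catchment and validate on a neighbouring one.
```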
I agree with the paper by Vogel and Sankarasubramanian; however, this should not replace the validation method described by Klemes. Rather, it is an additional check of the model's suitability for the purpose. Both forms of validation are necessary for good modelling practice.
Regarding uncertainty estimation, this depends on how it is being done. If the model residuals are being used to estimate the uncertainty, then a poor model will lead to a problem with the uncertainty estimation. Further, using model residuals to estimate the uncertainty is problematic even if you have a good model, unless you appropriately penalise model complexity (e.g. using one of the various information criteria that have been developed, such as Akaike's, the Bayesian, or Young's; though here you still need to be careful that the assumptions match the situation).
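As a deliberately simplified illustration of penalising complexity, here is a sketch of Akaike's criterion computed from residuals assumed to be i.i.d. Gaussian (the residuals and parameter counts below are invented):

```python
import numpy as np

def aic_gaussian(residuals, n_params):
    """Akaike Information Criterion for i.i.d. Gaussian residuals.

    AIC = 2k - 2 ln(L); with the maximum-likelihood variance estimate this reduces to
    n * ln(SSE / n) + 2k, up to an additive constant that cancels when comparing models.
    """
    n = residuals.size
    sse = np.sum(residuals ** 2)
    return n * np.log(sse / n) + 2 * n_params

rng = np.random.default_rng(2)
resid_simple = rng.normal(0, 1.2, 365)    # pretend residuals from a 3-parameter model
resid_complex = rng.normal(0, 1.1, 365)   # slightly better fit from a 12-parameter model

print("AIC, simple model :", aic_gaussian(resid_simple, 3))
print("AIC, complex model:", aic_gaussian(resid_complex, 12))
# The lower AIC is preferred: a small improvement in fit may not justify the extra parameters,
# and the Gaussian/i.i.d. assumption itself must hold for the comparison to mean anything.
```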
However, this is not necessarily the case if the uncertainties in the model inputs are propagated through the model to give an estimate of the uncertainty in the modelled outputs. This requires an understanding of the model inputs (e.g. rainfall, PET), as well as of the uncertainty in the observed output that you will be calibrating against. This means that in addition to collecting the data for the catchment, you need to understand how those data were collected and what factors might affect the data quality.
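A minimal Monte Carlo sketch of that idea, assuming a purely illustrative 10% multiplicative rainfall error and a toy rainfall-runoff model of my own; in practice the error magnitudes would have to come from knowing how the data were collected:

```python
import numpy as np

def toy_model(rain, k=0.3):
    """Stand-in rainfall-runoff model (one-parameter linear reservoir, illustrative only)."""
    storage, flow = 0.0, np.empty_like(rain)
    for t, r in enumerate(rain):
        storage += r
        flow[t] = k * storage
        storage -= flow[t]
    return flow

rng = np.random.default_rng(3)
rain_obs = rng.gamma(2.0, 5.0, size=365)      # "observed" rainfall (synthetic)

# Propagate an assumed 10% multiplicative rainfall error through the model
n_samples = 500
sims = np.empty((n_samples, rain_obs.size))
for i in range(n_samples):
    rain_sample = rain_obs * rng.normal(1.0, 0.10, size=rain_obs.size)
    sims[i] = toy_model(rain_sample)

# 90% band for the modelled flow due to input uncertainty alone
lower, upper = np.percentile(sims, [5, 95], axis=0)
print("mean width of the 90% band:", np.mean(upper - lower))
```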
If you wish to develop a model in order to understand a physical phenomenon which you have already investigated in the laboratory, you do not need to validate your model. Since your purpose is to find the parameters affecting the phenomenon, a sensitivity analysis alone would be sufficient to identify the underlying factors. However, if you aim to apply a developed model to a specific area or a specific period of time, you must validate the model in order to be able to make predictions with it. If this is the case, as Barry mentioned, not much effort and time is required for the validation stage in comparison to the calibration stage. Furthermore, sometimes the validation stage even allows you to calibrate your model more accurately.
Calibration only makes sense if all model parameters have a physical meaning. For example, the calibration of conductance (or the linkage coefficient) using groundwater-based surface-groundwater interaction models (too many such models are available; the most used are the MODFLOW-based models) makes sense only if there is a less permeable layer between the surface and subsurface waters. When the surface water and groundwater are in direct connection (see the definition of direct and indirect connection in the attached ppt), the use of MODFLOW-based models amounts to an abuse of these excellent models, which are supposed to be applicable only to cases of indirect connection, and under the assumption that the surface water flow can be approximated with diffusive wave approaches.
Calibration only tests whether the input values and the distribution of the aquifer parameters are reliable. Without model validation, the model is considered "invalid" for prediction purposes, which limits its use as an "EIA tool".
I always found this article amusing and thought provoking.
"Six (or So) Things You Can Do with a Bad Model" (1991) by James S. Hodges (link below). Operations Research 39(3) 355-365.
I have recently been working with Bayesian methods where the uncertainty of the model inputs, structure and data are considered (although not always modelled explicitly). The concepts of calibration and validation are a little different in a Bayesian framework; calibration ("inference") involves identifying the models (parameter sets) which are consistent with the data on the basis of a likelihood function (error model) that incorporates the uncertainty. Validation is not usually done: there is a view that if the distribution of calibrated model residuals matches the assumed distribution of model residuals then (the model is neither over- nor under-fitted and) we can make predictions without further ado.
My own small contribution to this is linked below. The associated journal paper is in review.
If you want to use your results in practice (e.g. resource planning, decision making), validation is necessary to test your model's accuracy and reliability. Calibration itself is not enough to provide reliable quantitative outputs if you are simulating a specific area over a given time period.
Actually, if you do Bayesian calibration, you can sometimes get away with calibration only, since the procedure yields all the parameter sets that are consistent with the model and the data. So after you check the assumptions (e.g. residual distribution, autocorrelation), you can do inference with that. But if you want to do prediction/extrapolation, you still need to do some kind of validation first.
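A hedged sketch of the kind of assumption checking mentioned above, using synthetic residuals and an assumed Gaussian error model: compare the standardised residuals with the assumed distribution and look at the lag-1 autocorrelation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Pretend these are residuals from the best (maximum-posterior) parameter set;
# the small sine term mimics unmodelled structure.
residuals = rng.normal(0.0, 1.0, size=365) + 0.2 * np.sin(np.arange(365) / 20)

# 1. Does the residual distribution match the assumed (here Gaussian) error model?
#    Standardising with the sample mean/std makes this only a rough check.
z = (residuals - residuals.mean()) / residuals.std(ddof=1)
ks = stats.kstest(z, "norm")

# 2. Are the residuals approximately uncorrelated in time?
lag1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]

print(f"KS test vs N(0,1): statistic={ks.statistic:.3f}, p-value={ks.pvalue:.3f}")
print(f"lag-1 autocorrelation: {lag1:.3f}")
# A small p-value or a large autocorrelation suggests the likelihood (error model)
# is misspecified, so the posterior should not be trusted for prediction.
```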