In climate change downscaling with SDSM and LARS-WG, there is a step where you calibrate the model against a baseline dataset (say, observed precipitation for 1961-1990) and then validate its performance against the rest of the observed record. If the performance is within an acceptable range, you can "assume" the model works well and use the same model to downscale the future dataset.
Unfortunately, all models rest on some "assumptions", and therefore there is always scope for contribution in science.
In case you are wondering which downscaling method is best, you can read this paper and get some ideas:
Article: Comparison of LARS-WG and SDSM for Simulation and Downscalin...
Also, I noticed you downloaded the AR4 dataset; why don't you download the IPCC AR5 dataset instead?
Also, you can validate your downscaled data the same way you validate model outputs. In modelling, you compare a model's predictions with observed data via goodness-of-fit indices such as R2, RMSE, NRMSE (%RMSE), MBE, MAE, or the Willmott index of agreement (d). For validating downscaled data you can use the same approach: compare the LARS-WG outputs with the baseline data using these indices and evaluate the validity of the downscaled data.
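A minimal NumPy sketch of these indices (the function name and array inputs are just illustrative; `obs` and `sim` would be matching observed and downscaled series):

```python
import numpy as np

def gof_indices(obs, sim):
    """Goodness-of-fit indices for comparing downscaled output (sim)
    against observed baseline data (obs), both 1-D arrays of equal length."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))
    return {
        "R2": np.corrcoef(obs, sim)[0, 1] ** 2,   # squared Pearson correlation
        "RMSE": rmse,
        "NRMSE%": 100.0 * rmse / np.mean(obs),    # RMSE as % of the observed mean
        "MBE": np.mean(err),                       # mean bias error
        "MAE": np.mean(np.abs(err)),               # mean absolute error
        # Willmott index of agreement (d); 1 = perfect agreement
        "d": 1.0 - np.sum(err ** 2)
             / np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2),
    }

# e.g. print(gof_indices(observed_1961_1990, larswg_1961_1990))
```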
Saber Aradpour I would recommend using RCM data rather than GCM data, as RCMs are available at a finer spatial scale. The data still need to be downscaled/bias-corrected to station scale (if you are working with station-based records). Here is the link on how to access the RCM datasets: http://www.cordex.org/data-access/
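One common way to do that station-scale bias correction is empirical quantile mapping. A minimal sketch, assuming `rcm_hist` and `obs_hist` are overlapping historical series at the station and `rcm_future` is the projection to correct (a production version would treat seasons, and wet-day frequency for precipitation, separately):

```python
import numpy as np

def quantile_map(rcm_hist, obs_hist, rcm_future):
    """Map each RCM value's quantile in the historical RCM distribution
    onto the observed distribution at the same quantile."""
    q = np.linspace(0.01, 0.99, 99)
    rcm_q = np.quantile(rcm_hist, q)   # historical RCM quantiles
    obs_q = np.quantile(obs_hist, q)   # observed quantiles at the station
    # Interpolate: where does each future value sit in the RCM
    # distribution, and what observed value corresponds to that rank?
    return np.interp(rcm_future, rcm_q, obs_q)
```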
There are a number of different ways to validate statistically downscaled output, but it should always be validated. One way is the one Proloy Deb described: divide the historical observations into a continuous training/baseline dataset and an independent validation dataset not used to train the downscaling model, e.g. 1950-1990 for training and 1991-2010 for validation. Depending on the available historical record, either period could be longer or shorter, but the training period should cover at least half the dataset, preferably more, such as an 80%/20% training/validation split, as sketched below.
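In pandas, assuming `obs` is a daily observed Series with a DatetimeIndex (the dates are illustrative), the split is just:

```python
# Continuous training/validation split
train = obs.loc["1950":"1990"]   # calibrate the downscaling model on this
valid = obs.loc["1991":"2010"]   # hold out, never seen during calibration

# or an 80%/20% split by record length:
n = int(len(obs) * 0.8)
train, valid = obs.iloc[:n], obs.iloc[n:]
```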
Another way is to train on odd years and validate on even years (or vice versa). This ensures that you don't inadvertently have an abnormal climate (such as a very strong ENSO or monsoon) confined to the validation period, which the training could fail to capture if nothing similar occurred in the training years, since the GCM output doesn't match observations day-to-day. For example:
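```python
# Odd/even year split, again assuming obs has a DatetimeIndex;
# swap the masks for the reverse experiment.
train = obs[obs.index.year % 2 == 1]   # odd years
valid = obs[obs.index.year % 2 == 0]   # even years
```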
A third way, which is our preferred method, is K-fold cross validation. Here you train the downscaling model on the entire historical period except the first year, which you leave out for validation. You then repeat the process, leaving the next year out, and iterate until you have validated output for every single year, which you can combine into a time series and compare against the raw observations. This method is more work, but it gives you much more data to train the model on, and you can validate output over the entire observed period, which is a good test since there will most likely be more variability than in a shorter validation window.
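A sketch of that leave-one-year-out loop, where `fit_downscaling()` and `predict()` are hypothetical placeholders for whatever downscaling model you use, and `gcm` is predictor data aligned with `obs`:

```python
import pandas as pd

pieces = []
for year in sorted(set(obs.index.year)):
    held_out = obs.index.year == year
    # Train on every year except the held-out one...
    model = fit_downscaling(gcm[~held_out], obs[~held_out])
    # ...then downscale the held-out year with that model.
    pieces.append(predict(model, gcm[held_out]))

# Full-period validation series to compare against obs
validated = pd.concat(pieces).sort_index()
```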
After one of these steps you can use methods described by Ahmad Reza Razavi to calculate validation errors, etc.
Saber Aradpour, what is the purpose of the downscaling? Are you also trying to preserve some extreme events? Do you have an objective in mind?
For example, if you use the linear downscaling method, you will get RMSE = 0, R2 = 1, etc.; it will fit perfectly when you compare the observed data to the model output. But when you use the same method for your projection, you will get inaccurate results. I think the objective, not the error values, should be the basis for choosing your downscaling method. I am aware that SDSM uses GCM input data; why don't you try other approaches using RCM inputs?
Oluwafemi Adeyeri: I am going to study the impact of climate change on thermal stratification in the Sabalan dam reservoir in northwestern Iran, and parameters such as temperature and inflow are the most important for me. It is also crucial to have daily data (i.e. predictions), and the accuracy of these parameters is critical.
I have tried RCM data, but I got confused about how to download the data I need.