As Kishore B. Ragi mentioned, the performance of a particular downscaling method depends on the quantity and quality of the available data. Dynamical downscaling (i.e. using regional climate models) allows more spatial detail (e.g. topography) and physical processes (explicit description instead of parameterization) to be incorporated, which will likely make the downscaled climate patterns more physically realistic. Again, however, this also depends on the data used.
For both statistical and dynamical downscaling methods, it is important to use part of the data for calibration and part for validation of the method. For dynamical downscaling, this means that part of the data set is used to parameterize an appropriate bias correction method and the rest is used to validate it; the same applies to statistical downscaling. The validation shows how reliable the statistical or dynamical downscaling of GCM output is, and whether the method can be used under conditions other than those it was calibrated for.
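To make the calibration/validation split concrete, here is a minimal sketch of empirical quantile mapping with a split-sample test. The "observed" and "GCM" series are synthetic placeholders purely for illustration; in practice they would be co-located daily series over a common historical period.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7300                                        # ~20 years of daily values
obs = rng.gamma(2.0, 3.0, size=n)               # synthetic "observed" series
gcm = 0.7 * rng.gamma(2.0, 3.0, size=n) + 1.5   # synthetic, biased "GCM" series

half = n // 2                                   # first half: calibration; second half: validation
obs_cal, obs_val = obs[:half], obs[half:]
gcm_cal, gcm_val = gcm[:half], gcm[half:]

# Calibration: estimate the mapping from GCM quantiles to observed quantiles.
q = np.linspace(0.01, 0.99, 99)
gcm_q = np.quantile(gcm_cal, q)
obs_q = np.quantile(obs_cal, q)

def bias_correct(x):
    # Map each value through the calibrated quantile-quantile relation.
    return np.interp(x, gcm_q, obs_q)

# Validation on the held-out period: the corrected series should match
# the observations much more closely than the raw GCM series does.
print("raw mean bias:      ", gcm_val.mean() - obs_val.mean())
print("corrected mean bias:", bias_correct(gcm_val).mean() - obs_val.mean())
```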
GCM output is produced on a coarse grid of points over the Earth; an estimate for any future time at any location of interest is obtained by interpolating between those grid points. Such values are only indicative and need statistical bias correction. ...
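As a hedged illustration of extracting a point estimate from coarse gridded output, the sketch below bilinearly interpolates an invented temperature field to a location between grid points. The coordinates and values are placeholders; real output would be read from a NetCDF file.

```python
import numpy as np

# Coarse GCM grid (placeholder coordinates and values).
lats = np.array([28.0, 30.0, 32.0])
lons = np.array([76.0, 78.0, 80.0])
temp = np.array([[290.0, 291.0, 292.0],   # surface temperature (K), rows = lats
                 [288.0, 289.0, 290.0],
                 [286.0, 287.0, 288.0]])

def bilinear(lat, lon):
    # Locate the grid cell containing the point and blend its four corners.
    i = np.searchsorted(lats, lat) - 1
    j = np.searchsorted(lons, lon) - 1
    wy = (lat - lats[i]) / (lats[i + 1] - lats[i])
    wx = (lon - lons[j]) / (lons[j + 1] - lons[j])
    return ((1 - wy) * (1 - wx) * temp[i, j] + (1 - wy) * wx * temp[i, j + 1]
            + wy * (1 - wx) * temp[i + 1, j] + wy * wx * temp[i + 1, j + 1])

print(bilinear(30.5, 78.3))   # indicative only; still needs bias correction
```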
Although both statistical and dynamical downscaling methods have critical issues, dynamical downscaling is more reliable if the region you chose has relatively high-resolution observations. In short, you have to look at your chosen region to know how well it is observed and how those observations are assimilated into your downscaled model. Hope this clarifies your doubt.
There is no general answer to that question. First, it depends on which aspects of regional climate you are interested in. There is a discussion of different sources of uncertainty in which it is argued that we understand the thermodynamic aspects of climate change very well, but not the response of the atmospheric circulation. Furthermore, in particular in the Tropics, small-scale convection may influence the large-scale circulation. I guess your examples are from a monsoon case, where circulation is crucial. The performance of different GCMs may vary substantially here; some GCMs, like those in HighResMIP, have a higher resolution and may perform better.
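As a minimal sketch of comparing GCM performance against observations over a historical period, the following ranks hypothetical models by climatological mean bias and a variability ratio. The model names and all numbers are invented placeholders, not an actual evaluation.

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(800.0, 60.0, size=30)              # "observed" annual rainfall (mm)
gcms = {
    "GCM-A": obs + rng.normal(0.0, 30.0, 30),       # small errors
    "GCM-B": 0.8 * obs + rng.normal(0.0, 30.0, 30), # systematic dry bias
    "GCM-C": rng.normal(820.0, 120.0, size=30),     # wrong variability
}

for name, sim in gcms.items():
    bias = sim.mean() - obs.mean()                  # climatological mean bias
    std_ratio = sim.std() / obs.std()               # variability ratio (ideal: 1)
    print(f"{name}: mean bias = {bias:+6.1f} mm, std ratio = {std_ratio:.2f}")
```

Note that free-running GCMs are not synchronized with observed year-to-year variability, which is why the sketch compares distributional statistics rather than correlating the time series.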
If the large-scale circulation matters and is not well represented by a GCM, you may be in trouble, as downscaling typically does not improve the large-scale circulation.
If you have a well-performing GCM (or better, an ensemble of them), downscaling may indeed make sense. Here the choice of approach and method is crucial and again depends on the case. If convection is important, you may require convection-permitting simulations, but this also depends on the time scale of interest (e.g. sub-daily vs. daily rainfall).
Finally, note that bias correction is not downscaling. It is extremely dangerous to apply bias correction directly to GCM output, in particular in the Tropics, where convection is important.
A couple of relevant publications:
T. Shepherd, Nat. Geosci., 2014
A. Hall, Science, 2014
A. Prein et al., Rev. Geophys., 2015
D. Maraun, Curr. Clim. Change Rep., 2016
D. Maraun et al., Nat. Clim. Change, 2016
D. Maraun & M. Widmann, Cambridge Univ. Press, 2018
I investigated various sources of uncertainty in climate change projections, for example in runoff projections. There are four major uncertainty sources: (1) GCMs, (2) downscaling methods, (3) RCPs, and (4) rainfall-runoff models (the last can differ depending on what you are working on). Regarding GCMs, you should first assess which GCM works best in your study area; you can find this out by reading articles on the same case study. The downscaling method is also very important: you need high-quality observation data if you want to work with statistical downscaling models such as SDSM or the change factor method. The downscaling method is a major uncertainty source, investigated for example in "Environmental water demand assessment under climate change conditions".
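Since the change factor method is mentioned above, here is a minimal sketch of its additive (delta) variant for temperature. All climatology numbers are invented placeholders, not results.

```python
import numpy as np

# Observed baseline monthly mean temperature (degC), Jan-Dec (placeholders).
obs_base = np.array([5.0, 7.0, 12.0, 17.0, 21.0, 24.0,
                     25.0, 24.0, 21.0, 16.0, 10.0, 6.0])
# GCM climatologies for the baseline and a future period under some RCP (placeholders).
gcm_base = obs_base + 1.5                        # GCM with a uniform warm bias
gcm_fut = gcm_base + np.linspace(1.8, 2.6, 12)   # projected warming signal

# Additive change factor for temperature; for precipitation, a multiplicative
# factor (gcm_fut / gcm_base applied to obs_base) is the usual choice.
delta = gcm_fut - gcm_base
future_local = obs_base + delta   # the GCM's own bias cancels in the difference
print(np.round(future_local, 1))
```

The design point is that only the simulated *change* is taken from the GCM and added to the observed baseline, so the model's systematic bias cancels out, which is exactly why high-quality local observations are a prerequisite.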
As other researchers here have stated, the uncertainty depends a lot on which downscaling approach is used. Downscaling using RCMs is perhaps the better choice, provided you have enough observed data.
Found an article which might help you:
Kundu, S.; Singh, C. (2018), 'A Comparative Study of Regional Climate Models and Global Coupled Models over Uttarakhand', World Academy of Science, Engineering and Technology, International Science Index, Marine and Environmental Sciences, 12(2), 187.
Himachal Pradesh and Uttarakhand are regions of complex topography. Therefore, only a downscaling method based on a realistic description of high-resolution topography, land use, and soil moisture, such that it resolves fine-scale meteorological patterns (valley winds, turbulence, etc.), will add value to the climate change signal.