Well, from my perspective, there are different sources of uncertainty, and different methods are best suited to deal with each source. Two primary sources of uncertainty are (1) natural variability within the climate system (leading to poor signal-to-noise ratios in climate signals) and (2) structural/physical uncertainty in numerical climate models.

The first source -- natural variability -- would arise even if we had a nearly perfect model of the climate system. This is simply because the system is chaotic, and nature can be thought of as a single realization of the system. To account for this variability, an ensemble approach is ideal, because you'd hope that the model(s) would sample all the possible realizations of the system, so that you could then quantify the probability of given states.

The second source -- model uncertainty -- is due to our inadequate (A) understanding of physical (and biogeochemical) processes in the climate system and (B) representation of subgrid-scale processes in climate models. In this case, we know the models are wrong, and we hope both to improve them and to understand the possible states the models can occupy. For this, perturbed physics experiments and multi-model ensembles are good methods. In perturbed physics experiments, the physics of the model is changed (hopefully in some clever way that covers the uncertainty in parameters/processes) to build an ensemble of possible models and quantify the uncertainty within a given modeling system. This is still best illustrated by the results from climateprediction.net, but many other perturbed physics experiments have been described in the literature. One perspective on uncertainty in models is given by a review by Palmer et al. (http://www.msri.org/people/members/2008cc/Papers/Palmer_ClimateModelUncertainty_AnnRev2005.pdf). A toy sketch of both ensemble ideas follows below.
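To make the two ensemble ideas concrete, here is a minimal sketch (not from the original answer) using the Lorenz-63 system as a toy stand-in for a climate model; the perturbation sizes, parameter range, and step counts are arbitrary assumptions chosen only for illustration:

```python
import numpy as np

def lorenz63_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of Lorenz-63 (crude, but fine for illustration)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def run(state, n_steps, dt=0.01, **params):
    for _ in range(n_steps):
        state = lorenz63_step(state, dt, **params)
    return state

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])
n_members, n_steps = 50, 2000

# (1) Initial-condition ensemble: same model, tiny perturbations to the
# initial state -- this samples the natural (chaotic) variability.
ic_ensemble = np.array([
    run(base + 1e-6 * rng.standard_normal(3), n_steps)
    for _ in range(n_members)
])

# (2) Perturbed-parameter ("perturbed physics") ensemble: same initial
# state, but an uncertain model parameter (here rho) drawn from an assumed
# range -- this samples parametric/structural model uncertainty.
pp_ensemble = np.array([
    run(base.copy(), n_steps, rho=rng.uniform(26.0, 30.0))
    for _ in range(n_members)
])

# Ensemble spread is a crude quantification of each uncertainty source;
# with enough members one can estimate probabilities of given states.
print("IC-ensemble std of x:", ic_ensemble[:, 0].std())
print("PP-ensemble std of x:", pp_ensemble[:, 0].std())
```

Real perturbed physics experiments (e.g., climateprediction.net) do the same thing at vastly larger scale, varying many parameters of a full GCM rather than one parameter of a three-variable toy.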
There is also observational uncertainty, which I haven't touched on but which is substantial. For example, we do not have (despite many, many attempts) good observational constraints on many physical phenomena and climate feedbacks (like the cloud feedback), much less on climate sensitivity.
In addition to the comment by Brian Medeiros: if you are talking about uncertainty in the future climate (in the medium term -- climate scenarios start to diverge around 2030), the course of greenhouse gas emissions is of course also important. Emissions are strongly influenced by how our societies evolve, which in turn is affected by the public debate on climate change, which itself depends on projections of the future climate. In this way a reflexive loop closes, in which the future depends on what we think about the future. An article by Dessai et al. refers to this as human reflexive uncertainty.
If you are talking about the past: are you interested in the variability of the weather? Then you could use standard climate statistics, as in the sketch below.
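As an illustration of what "standard climate statistics" might look like in practice (not from the original answer), here is a minimal sketch computing a climatology, anomalies, and interannual spread; the temperature series is entirely synthetic:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.date_range("1994-01-01", "2023-12-31", freq="D")

# Hypothetical daily temperatures: a seasonal cycle plus weather noise.
seasonal = 10.0 * np.sin(2 * np.pi * (dates.dayofyear.to_numpy() - 80) / 365.25)
temps = pd.Series(15.0 + seasonal + 3.0 * rng.standard_normal(len(dates)),
                  index=dates)

# Monthly climatology over the 30-year record.
climatology = temps.groupby(temps.index.month).mean()

# Anomalies: each day's departure from its month's climatological mean.
anomalies = temps.groupby(temps.index.month).transform(lambda x: x - x.mean())

# Interannual variability of the annual mean.
annual_means = temps.groupby(temps.index.year).mean()

print(climatology.round(1))
print("std of daily anomalies:", anomalies.std().round(2))
print("std of annual means:", annual_means.std().round(2))
```

The standard deviation of daily anomalies characterizes weather variability, while the spread of annual means characterizes interannual climate variability; the same machinery applies directly to real station or reanalysis data.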
Dessai, Suraje, and Mike Hulme. 2004. “Does Climate Adaptation Policy Need Probabilities?” Climate Policy 4 (2) (January): 107–128. doi:10.3763/cpol.2004.0411.
Hi -- without much to add to the previous comments, you may want to check this publication of mine for a method to estimate uncertainty in climate (change) impact.
What is missing in this discussion is the Earth itself: any object on the Earth is uncertain (in its boundaries, dimensions, and other properties). So the uncertainty starts with the interaction between the researcher and the object. The result of that interaction is the research task (probably the first source of uncertainty); then comes the methodology for solving the task (the second source of uncertainty); the mathematical model chosen by the methodology is also a source of uncertainty (or, better said, it is a tool for reducing uncertainty, insofar as we know how well the model reflects our fuzzy object). Then we have to communicate the results, and scientific communication is a huge source of uncertainty in itself (see L. Zadeh, 2005, on the general theory of uncertainty).
Climate itself does not have uncertainty; our knowledge about it does.