The real question behind what you are asking is that of a thorough assessment of the various sources of uncertainty involved in any climate change impact study. Your question concerns what goes by several names: model error, structural uncertainty, epistemic uncertainty. If two GCMs give different results for the precipitation change in 2050 or 2100 for a given region, what does it mean?

The first question to ask is whether this difference can be explained by internal variability alone. The signal-to-noise ratio for precipitation change is low: because interannual rainfall variability is very large, it is still extremely difficult to detect a systematic effect of anthropogenic forcing on mean precipitation change, even on large scales (the signal-to-noise ratio is higher if one is interested in extremes). Recent studies (by Clara Deser among others), using multiple (30) projections made with a single GCM (an initial-condition ensemble, where atmospheric initial conditions are perturbed by a "butterfly wing" so that the spread is due only to unpredictable internal variability), have shown that large uncertainties remain, at continental and even more so at regional or local scales, even for 50-year trends into the future.

The second step is to determine whether the models also differ in their representation of physical processes, focusing on the processes relevant to your study. Models represent physical processes in different ways. Since it is never easy to reject a given model (one needs good reasons to do so), people very often use the so-called ensembles of opportunity (CMIP3, CMIP5) and take all or a subset of models from these ensembles to get a rough estimate of the model spread (or error). Note that, by design of CMIP, one cannot prove that the spread of the full ensemble is an upper or lower bound on the true uncertainty.
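To make the initial-condition-ensemble point concrete, here is a minimal sketch with purely synthetic numbers (the trend, noise level, and ensemble size are illustrative assumptions, not values from any real GCM): members share the same forced signal but differ only in their realisation of internal variability, and the spread of their 50-year trends can be comparable to the forced trend itself.

```python
# Sketch (synthetic data): spread of 50-year precipitation trends across an
# initial-condition ensemble whose members differ only in internal variability.
import numpy as np

rng = np.random.default_rng(0)
n_members, n_years = 30, 50
years = np.arange(n_years)

forced_trend = 0.2   # assumed forced change over 50 years (arbitrary units)
noise_sd = 0.8       # assumed interannual variability (same units)

# Each member: identical forced signal, different realisation of internal noise
members = forced_trend * years / n_years \
    + rng.normal(0.0, noise_sd, (n_members, n_years))

# Least-squares trend for each member, expressed as change over 50 years
trends = np.array([np.polyfit(years, m, 1)[0] for m in members]) * n_years

print(f"forced trend:         {forced_trend:.2f}")
print(f"ensemble-mean trend:  {trends.mean():.2f}")
print(f"trend spread (std):   {trends.std():.2f}")
```

With these assumed numbers the member-to-member trend spread is of the same order as the forced trend, which is the situation Deser and colleagues describe for regional precipitation.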
Note also that looking at CMIP3 or CMIP5, the two types of uncertainty (model error and internal variability) are always mixed.
To be pragmatic: suppose you are interested in the 21st-century precipitation change in a given area. Look at the spread between the various CMIP5 models in terms of the GCM variables you want to use in your impact study. Check whether the model spread is greater than that due to internal variability (long control simulations can be used for this, under some hypotheses). If you identify different classes of model behaviour and cannot exclude any class on physical grounds, then select a couple of models from each class and use these to carry out your impact study.
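The comparison in that pragmatic recipe can be sketched as follows (all numbers are hypothetical; the control-run trick is simply that chunks of a long unforced simulation give an estimate of what internal variability alone can produce in an epoch difference):

```python
# Sketch (synthetic numbers): is the spread of projected changes across
# models larger than internal variability estimated from a control run?
import numpy as np

rng = np.random.default_rng(1)

# Projected % precipitation change over a region for 12 hypothetical models
model_changes = rng.normal(5.0, 4.0, 12)

# Long pre-industrial control run: 500 years of regional anomalies (%)
control = rng.normal(0.0, 6.0, 500)

# Internal variability of a difference between two 20-year means:
# var of each 20-yr mean is sigma^2/20, and the difference doubles it
sigma = control.std(ddof=1)
internal_sd = sigma * np.sqrt(2.0 / 20.0)

model_spread = model_changes.std(ddof=1)
print(f"model spread: {model_spread:.2f}%   internal variability: {internal_sd:.2f}%")
print("spread exceeds internal variability"
      if model_spread > internal_sd
      else "spread consistent with internal variability alone")
```

If the model spread does not clearly exceed the internal-variability estimate, the inter-model disagreement may not be telling you anything about structural model error.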
In some cases, one can find a relationship between the model spread in future projections and the model spread in the representation of a given physical process in the current climate. If one has observational data for this process (with error bars accounting for internal variability and observational error), it is then possible to use them to constrain the model spread in future projections.
For instance, if you can show that the model spread in the future projections is strongly related to ENSO teleconnections in the current climate (and you can understand physically why this is the case), then you can use observations to obtain the range of realistic values for the teleconnection. You then have the choice of excluding the models with a biased teleconnection or giving them less weight in the multimodel combination you are going to use for your study.
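This "emergent constraint" idea can be sketched with synthetic numbers (the metric, its relationship to the projected change, and the observational error bar are all illustrative assumptions): regress the projected change on the present-day metric across models, then evaluate the regression at the observed value of the metric.

```python
# Sketch of an emergent constraint (all numbers hypothetical): across
# models, projected change is linearly related to a present-day metric
# (e.g. a teleconnection strength), so observing the metric narrows
# the plausible range of the projection.
import numpy as np

rng = np.random.default_rng(2)

n_models = 15
metric = rng.normal(1.0, 0.3, n_models)                      # present-day metric
change = 2.0 + 3.0 * metric + rng.normal(0, 0.5, n_models)   # projected change

slope, intercept = np.polyfit(metric, change, 1)

obs_metric, obs_err = 0.9, 0.1    # assumed observed value and its error bar

constrained = intercept + slope * obs_metric
print(f"raw multimodel range:         {change.min():.1f} to {change.max():.1f}")
print(f"constrained central estimate: {constrained:.1f} +/- {abs(slope) * obs_err:.1f}")
```

The physical-understanding requirement in the text is essential: a regression like this is only meaningful if you can explain mechanistically why the present-day metric should control the future response.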
There is a lot of literature out there on all these aspects. Here is one of our papers describing some of the above points with references that you might find useful.
I think that Laurent's answer is the best documented, but this technique is very difficult to apply for an impact-study researcher who is less familiar with GCM validation. Using at least 5 GCMs, and ideally around 10, is a reasonable working assumption.
At a practical level, you are interested in creating ensembles of GCMs to model a given region. In this case you have a finite number of climate simulations that perform well for the variables of interest in that region (such simulations can be identified through a literature review).
One way to create your ensemble is to compare the variables of the available climate simulations with observed data (records of the same variables for similar periods of time) and rank the simulations by their accuracy with respect to the records. You can then build candidate ensembles from the best simulations and, finally, select the best ensemble by comparing it against the records.
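The ranking step above can be sketched like this (the model names, biases, and observed record are all hypothetical; RMSE is used as one simple accuracy score, though other skill metrics work equally well):

```python
# Sketch of ranking simulations against an observed record (synthetic
# data): score each simulation by RMSE and keep the best performers.
import numpy as np

rng = np.random.default_rng(3)

obs = rng.normal(100.0, 20.0, 120)   # e.g. 120 months of observed rainfall

# Hypothetical simulations: observations plus model-specific bias and noise
sims = {
    "GCM-A": obs + rng.normal(2.0, 5.0, obs.size),
    "GCM-B": obs + rng.normal(-8.0, 15.0, obs.size),
    "GCM-C": obs + rng.normal(0.5, 3.0, obs.size),
}

def rmse(sim, ref):
    """Root-mean-square error of a simulation against the reference record."""
    return float(np.sqrt(np.mean((sim - ref) ** 2)))

ranked = sorted(sims, key=lambda name: rmse(sims[name], obs))
for name in ranked:
    print(f"{name}: RMSE = {rmse(sims[name], obs):.1f}")
```

The best-ranked simulations would then be combined into candidate ensembles and the ensembles themselves re-evaluated against the records, as described above.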
A paper by Maxino et al. (2008) explaining this technique is attached.
See also:
Corney, S., Katzfey, J., McGregor, J., Grose, M., Bennett, J., White, C.J., Holz, G. and Bindoff, N.L. (2010), "Climate Futures for Tasmania technical report: methods and results on climate modelling". Antarctic Climate and Ecosystems Cooperative Research Centre, Hobart, Tasmania.