I can't answer that specifically for climatology, but in the social and life sciences (like my discipline, psychology), the .05 significance level (roughly two sigma) is arbitrary but has become the accepted convention. Notably, the significance level in other fields, such as particle physics, is much stricter, again by convention (five sigma).
In biological research the level of significance was historically set at 0.05, largely because of influential work by Sir Ronald Fisher. He proposed that "deviations exceeding twice the standard deviation are (...) formally regarded as significant." Two standard deviations correspond roughly to a two-sided probability of 0.05 (see the sketch below). Although the threshold was formally proposed, it is treated less rigidly now. A correct interpretation of the p-value in relation to the significance level is also essential. I recommend further reading: http://www.jerrydallal.com/lhsp/p05.htm and http://www.nature.com/news/statisticians-issue-warning-over-misuse-of-p-values-1.19503
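If you want to check the two-sigma/0.05 correspondence numerically, here is a minimal sketch; the use of Python with scipy is my own assumption and not part of the original answer:

```python
# Minimal sketch (assumed setup: Python with scipy installed).
from scipy.stats import norm

# Two-sided tail probability P(|Z| > k) for a standard normal Z.
for k in (1.96, 2, 3, 5):
    p_two_sided = 2 * norm.sf(k)
    print(f"{k} sigma -> two-sided p ~ {p_two_sided:.2e}")

# 2 sigma gives ~0.0455, close to the conventional 0.05;
# the exact two-sided 0.05 cutoff is about 1.96 sigma.
# 5 sigma (the particle-physics convention) gives ~5.7e-7.
```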
The choice is arbitrary. The value 0.05 comes from Sir Ronald Fisher, who was not allowed to print tables of critical values of test statistics for a whole range of significance levels. Because of copyright restrictions he could only print an excerpt, and he decided to take 0.05 as something that should work in most cases IN HIS FIELD OF RESEARCH. He was thinking about designed experiments with relatively few replicates. It was not meant to be blindly carried over to other disciplines and other kinds of data (observational studies, large clinical trials, screenings, etc.). In fact, Fisher himself did not interpret his own results with respect to a fixed level of significance. Depending on the context (sample size, experimental difficulties, aim of the experiment) he sometimes considered much larger p-values as "significant" and sometimes much lower p-values as "non-significant".
Always testing at the same level of significance is actually rather irrational.
@Michael: significance is not related to the SD. At least for tests of the expected value, the test statistic t is the ratio of the estimate to the standard error (SE) of the estimate. For a non-zero estimate you can get arbitrarily extreme t-values (and hence p-values arbitrarily close to zero) just by increasing the sample size, because the SE (the denominator) decreases with sample size, whereas the SD does not depend on the sample size. But I would not even talk about the SE. Instead, I think it is more instructive and less misleading to understand a significance test as a likelihood-ratio test. You calculate the likelihood of the observed data under a full model and under a restricted model (the restriction is the "null hypothesis"). Given some assumptions about the conditional distribution of the response, one can derive the distribution of the likelihood ratio under the restricted model, and this can be used to state how probable a likelihood ratio at least as extreme as the observed one would be (i.e. the p-value). Fisher found that the general asymptotic distribution of (log) likelihood ratios is a chi²-distribution, and for a normal conditional distribution of the response it is an F-distribution (if the models differ by a single degree of freedom, it can be shown that F = t², with t having a t-distribution). There is nothing here related to the SD (or the SE). The SE is related to the curvature of the likelihood function at its maximum: the sharper the likelihood function peaks, the smaller the SE.
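To make the two points above concrete (t grows with sample size while the SD stays put, and F = t² when the models differ by one degree of freedom), here is a minimal simulation sketch; the use of Python with numpy and scipy is my own assumption, not part of the original comment:

```python
# Minimal sketch (assumed setup: Python with numpy and scipy installed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 1) For a fixed non-zero effect, t = mean / SE = mean / (SD / sqrt(n)),
#    so t tends to grow like sqrt(n) while the sample SD stays near 1.
for n in (10, 100, 1000, 10000):
    x = rng.normal(loc=0.1, scale=1.0, size=n)   # small true effect of 0.1
    t, p = stats.ttest_1samp(x, popmean=0.0)
    print(f"n={n:6d}  SD={x.std(ddof=1):.3f}  t={t:7.2f}  p={p:.2g}")

# 2) When the full and restricted models differ by one degree of freedom,
#    the F statistic of the ANOVA comparison equals the squared t statistic.
a = rng.normal(0.0, 1.0, size=50)
b = rng.normal(0.5, 1.0, size=50)
t_ab, _ = stats.ttest_ind(a, b)      # equal-variance two-sample t-test
f_ab, _ = stats.f_oneway(a, b)       # one-way ANOVA on the same two groups
print(f"t^2 = {t_ab**2:.4f}, F = {f_ab:.4f}")   # identical up to rounding
```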