As far as I know, the interpretation of r is not related to the field of study. It is a statistical concept which should be interpreted in the same way in all fields, based on the guidelines you decide to take into account (Cohen's guidelines or others).
What counts as a reasonably good r value depends in general on the noise in the process from which the data are generated. In many scientific disciplines, error margins and fluctuations of process variables about their target values are high, so there is a large noise contribution in the data; this is intrinsic to the experimental protocols of those disciplines. An r value good enough to indicate strong or sufficient correlation can therefore be much smaller in these cases than in experiments where the underlying process is relatively simple and the process variables are, or can be kept, at fixed levels or within a small neighbourhood of the target settings during the course of the experiment.
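As a small illustration of this point, here is a minimal Python sketch (with made-up numbers) showing how the same underlying linear relationship yields progressively smaller observed r values as measurement noise grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
y_true = 2.0 * x  # the same underlying linear relationship throughout

# Observed r shrinks as measurement noise grows, even though the
# underlying process has not changed.
for noise_sd in (0.5, 2.0, 5.0):
    y = y_true + rng.normal(scale=noise_sd, size=n)
    r = np.corrcoef(x, y)[0, 1]
    print(f"noise sd = {noise_sd}: r = {r:.2f}")
```

With these settings r drops from roughly 0.97 to roughly 0.37, purely because of noise, which is why the same threshold cannot mean the same thing in a noisy discipline and a tightly controlled one.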
I think you can go with the interpretation that an absolute r value of 0.5 or more is an indicator of reasonably good correlation (positive or negative).
Context matters in the interpretation of any statistic, so yes, I think that field would make a difference. Perhaps not as much as the specific variable set in question (and sample source).
For example, what would be a "strong" correlation between shoe size and intellectual function (however measured) among adult females? r = .20 would be pretty astonishing, I think (or, at least, far stronger than one would anticipate).
As another example, what would be a "weak" correlation between one's weight measured on a Tuesday and one's weight measured on a Wednesday, among adult males? r = .75 would be very low, relative to the day-to-day stability of weight one might anticipate (or, at least, weaker than would be expected).
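To make the weight example concrete, here is a short simulation sketch (the mean and spreads below are assumptions, not real anthropometric data) showing how high the day-to-day correlation should be:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
# Hypothetical adult-male weights (kg): large between-person spread,
# small day-to-day fluctuation.
weight_tue = rng.normal(loc=85, scale=15, size=n)
weight_wed = weight_tue + rng.normal(scale=1.0, size=n)
print(f"r = {np.corrcoef(weight_tue, weight_wed)[0, 1]:.3f}")  # ~0.998
```

Against a baseline like that, r = .75 really would look weak.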
However, if you're strictly characterizing correlation as the degree to which two sets of (matched) scores fit a straight line, then you would take the perspective that variables don't matter so much in your interpretation.
Correlation analysis is one of the tests you can perform, but it is not enough on its own to draw conclusions. Even though it shows the strength and direction of a relationship, it does not show causation. That is why you should not rely on r analysis alone.
Sepideh Korsavi, the contributions so far to your question are quite illuminating. However, the answer you requested needs to be in the context of statistical classification tasks. Obviously, factor analysis and discriminant analysis techniques become relevant in providing answers to your question.
The real issue is that although correlation is different from regression, which goes into cause-and-effect issues, correlation is a relative term among all the variables in any data set. So setting ranges for low and high correlation is arbitrary and depends on the discipline, especially in the social sciences, where factor analysis and discriminant analysis are commonly used.
As Debopam Gosh stated, it is easier to use a correlation value of 0.5 simply as the dividing line between appreciable and inappreciable correlation.
The difference now is that different disciplines set arbitrary figures for their high and low values.
None of this undermines the relevance of statistical correlation.
Pearson's correlation r measures the degree of linear association between two variables. The underlying relationship might be linear, curvilinear, or there might be no relationship at all. The "significance" of an r value also depends on the sample size: if the sample size is small, the r value has to be very large in order to be "significant". Cohen arbitrarily classified a continuously variable statistical parameter into three classes. It is best to understand the variables under consideration. A correlation between two traits X and Y might be very high, e.g. 0.98, and yet if Y does not vary much, then even though there is a strong linear relationship with X, the correlation is not consequential. So have a look at your data set: determine whether your sample is representative of some larger population you wish to make inferences about, and then look at the coefficient of variation (the ratio standard deviation/mean). Plot your data and inspect the graphical results. Fit a model (linear regression, in the case of Pearson's r) and plot the residuals … this will tell you a lot more about the relationship between the variables than an arbitrary classification of r-values.
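A minimal sketch of that workflow in Python (using scipy and matplotlib; the data here are placeholders for your own x and y):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Placeholder data; substitute your own x and y.
rng = np.random.default_rng(1)
x = rng.normal(loc=50, scale=10, size=30)
y = 0.8 * x + rng.normal(scale=5, size=30)

# r and its p-value (the p-value reflects the sample size).
r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, p = {p:.3g}")

# Coefficient of variation: does y actually vary appreciably?
print(f"CV of y = {np.std(y, ddof=1) / np.mean(y):.2f}")

# Fit a linear regression and inspect the residuals.
fit = stats.linregress(x, y)
residuals = y - (fit.intercept + fit.slope * x)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.scatter(x, y)
ax1.set_title("data")
ax2.scatter(x, residuals)
ax2.axhline(0, color="grey")
ax2.set_title("residuals")
plt.show()
```

Structure in the residual plot (a curve, a funnel shape) tells you the linear model, and hence r itself, is not capturing the relationship.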
For a binary system, a threshold of 0.9 is used in every area of study. That said, some companies relax it to 0.85; I have never seen it below 0.85 for a binary system. A binary system can be two variables or two groups of similar variables. For example, atmospheric CO2 rise is binary: the two groups are emissions of CO2 and loss of photosynthesis, and each group is multivariate. For a multivariate system, generally the most highly correlated variable is taken as the cause. For example, if we have 10 variables for emissions of CO2, we determine and rank the effect of each one, then perform correlation regressions and see which one has the highest correlation.
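As a sketch of that ranking step, assuming hypothetical data and column names, in Python with pandas:

```python
import numpy as np
import pandas as pd

# Hypothetical multivariate example: rank candidate drivers of a
# target variable by the magnitude of their correlation with it.
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(200, 3)),
                  columns=["var_a", "var_b", "var_c"])
df["target"] = 0.9 * df["var_a"] + 0.3 * df["var_b"] + rng.normal(size=200)

# Absolute correlation of each candidate with the target, highest first.
ranking = df.drop(columns="target").corrwith(df["target"]).abs()
print(ranking.sort_values(ascending=False))
```

Here var_a would come out on top, but note that a high ranking only flags a candidate; it does not by itself establish cause and effect.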