It would help to know the context of your question:
Are there theoretical reasons to expect that one of the variables is dependent on the other? Is it, for example, from an experiment which examines causal relationships between variables?
Yes, I want to investigate some economic variables and find some dependency, and there is theory that supports it. The correlation shows it, but I want to know, in general, what we can do to discover a dependency between two random variables.
From what I see, this is an observational study. In such studies the causality between a set of explanatory variables and the response variables cannot be determined empirically. But you may look for models that describe the observed relationship between these variables. If the model fits the data well, and the model is consistent with a theory that explains the relationship between the variables, the dependency may be supported. If instead the model implied by the theory does not fit the data well, you may question the accuracy of the theory, or whether characteristics of your data make it inappropriate for validation. You may then revise your model and refit it to your data, repeating until you are satisfied with a model.
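To make the "fit a model implied by theory, then check the fit" idea concrete, here is a minimal Python sketch. Everything in it is invented for illustration: the assumed "theory" is a linear relation y = 3x + 2, the data are simulated, and the fit is judged by R-squared and residuals.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data; the hypothetical theory says y depends linearly on x.
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 200)

# Fit the model the theory implies and check how well it describes the data.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
r_squared = 1 - residuals.var() / y.var()

# A high R^2 and structureless residuals are consistent with the theory,
# but (as noted above) they can never establish causality.
```

If the residuals show systematic structure, that is the signal to question the theory or the data and revise the model, as described above.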
I agree with the previous answers, especially because a plot will give you an idea of whether the relationship (as a conventional regression model implies) is linear or not, which also bears on choosing an adequate model. If a fitted linear model suits your problem well, then you don't need anything else to validate the linear relation. On the other hand, if it doesn't fit well, then after identifying the problems you can turn to other approaches, such as MI, PMI, Spearman, etc., or perhaps a non-linear model.
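As a small illustration of why one might reach for Spearman when a linear fit is poor: on simulated data with a monotone but non-linear relation (an exponential trend, chosen here purely as an example), the rank-based Spearman coefficient stays near 1 while the linear Pearson coefficient is noticeably lower.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(1)

# Monotone but strongly non-linear relation (illustrative choice).
x = rng.uniform(0, 5, 500)
y = np.exp(x) + rng.normal(0, 1, 500)

pearson_r = pearsonr(x, y)[0]    # measures linear association only
spearman_r = spearmanr(x, y)[0]  # measures monotone association via ranks

# Spearman is close to 1; Pearson is smaller because the relation
# is monotone but not linear.
```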
Plots can sometimes be misleading. There is the famous example of data lying on a circle, which yields a zero (linear) correlation coefficient. There are examples of the converse as well. You need more systematic methods that can be backed up with numbers, tests, etc. You can try a bivariate polynomial expansion, i.e., examining E[(x^a)*(y^b)] of increasing order n = a + b. A more general way is to perform an independent component analysis -- but beware of the many different variants around (particularly for complex variables) that make different starting assumptions about the data. Still, it is purely data-driven, so it does not require any model validation.
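The circle example can be reproduced in a few lines. Below, the Pearson coefficient on points sampled uniformly from a circle is essentially zero, while a simple histogram-based mutual information estimate (a rough, biased estimator, written out here only for illustration) clearly detects the dependence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Points on the unit circle: y is a deterministic function of x up to sign,
# yet the linear correlation coefficient is ~0.
theta = rng.uniform(0, 2 * np.pi, 1000)
x = np.cos(theta)
y = np.sin(theta)

pearson_r = np.corrcoef(x, y)[0, 1]

def mutual_information(x, y, bins=20):
    """Crude plug-in MI estimate (in nats) from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log(pxy[mask] / (px[:, None] * py[None, :])[mask]))

mi = mutual_information(x, y)
# pearson_r is near 0, but mi is clearly positive: the variables are
# strongly dependent even though they are linearly uncorrelated.
```

This is the sense in which a single summary number (or an unguided look at a plot) can miss structure; a dependence measure that is not restricted to linearity is needed.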
L. Arnaut, I don't understand your problem. It was not the plot that was misleading in the example you gave - it was the interpretation of a (linear) correlation coefficient that was misleading. The plot shows that the data lie on a circle. So you identified a pattern - exactly by looking at the plot!
Jochen, there may have been confusion here: the pertinent part was "There are examples of the converse as well", but this is difficult to illustrate in just a few lines of text. The bottom line is that simply relying on looking at a plot to decide whether or not there is nonlinear correlation is dangerous, because you do not know a priori what type of pattern you should be looking for in the data cloud. Even if you do think you have found 'something', it will still need to be hypothesis-tested. I would not agree that a plot is the "only way" to find correlation, not to mention the preferred way. Having said that, in many practical cases a plot will indeed be very helpful, but I would caution against blanket statements such as these.
Luk, I still think you are confusing *finding* and *testing* (not really, perhaps, but it is conflated in your post). A hypothesis test never ever will "find" anything. You have to have an idea about the relation a priori, and this you can test in order to get a hint of how likely the deviations of your data from your hypothesized model (which describes the "pattern" or the "kind of correlation") would be, given that the model was correct. At best, this might indicate that your supposed model is not good, but it will not identify a new, unsuspected, and possibly better model.
So, again, to *find* some pattern/correlation, you literally have to have a look at the thing. This is - by definition - an exploratory approach. And it is surely dangerous to see patterns/correlations that are not useful because they are not "reliable" (unlikely to occur in new data). So you are right that after the mere identification of some pattern/correlation in the present data one has to go a step further. For me, the next step is actually not testing but rather thinking: maybe the identified pattern gives us some idea about reasonable underlying relationships. Is there a good (simple, comprehensive, reasonable, non-conflicting, ...) explanation for such a pattern? Then, as a further step, one could go for testing, but this must be done on *new* data! As mentioned, I still find this kind of testing rather useless, UNLESS you have some competing model and you want to demonstrate that this competing model performs worse than your new one, because the deviations of your data from the old model are unlikely given that the old model was correct.