Analysing the correlation between two variables is always justified by your wish to analyse it ;)
It is advantageous to think about the functional relationship between the variables. This might be given or suggested by other information, or by reasonable, attractive models of their relationship. Only if nothing else is known does Occam's razor suggest using the simplest relationship we can think of, and this might be a linear relationship.
A correlation test will give you the probability of obtaining a correlation measure as large as or larger than the observed one, under the condition that the model is correct but the association between the variables is exactly zero. In the case of a simple linear relationship and Pearson's coefficient, the residuals must have a normal distribution. This is something you can only postulate. Some data may indicate that this postulate is not reasonable; at least in such a case you should be careful, possibly considering a different model (a non-linear relationship, ...).
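A minimal sketch of such a test, using simulated data (the variable names and the generated sample are illustrative assumptions, not from the question):

```python
# Sketch with simulated data: Pearson correlation test via SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)  # linear relationship, normal residuals

# p is the probability of a correlation at least this extreme
# under the null hypothesis of exactly zero association
# (assuming the linear model with normally distributed residuals).
r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.4f}")
```

Note that the p-value is only interpretable under the postulated model; it says nothing about whether that model is adequate.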
Others may suggest testing the normal distribution of the residuals. IMHO this won't get you anywhere, and the reasons are discussed elsewhere (also on RG).
However, when you want to test the correlation, you need to postulate a "correct" model and a "correct" distribution of the residuals independently of the data you are going to use for the test. If you use the data shown above to get an idea about the model (or the kind of correlation) and the distribution, then you must NOT use the SAME data for the test, because the tested model already depends on the data on which it is supposed to be tested. This gives a severely biased result (too low p-values).
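One practical way around this double use of the data is to split the sample: explore on one half, run the pre-specified test on the other. A hypothetical sketch (the split, sample size, and data are all assumptions for illustration):

```python
# Sketch: avoid testing a model on the same data that suggested it,
# by splitting the sample into an exploration and a confirmation part.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 1.0 + 0.3 * x + rng.normal(size=100)

# Exploration half: inspect scatter plots, residuals, candidate
# transformations here to settle on a model.
x_explore, y_explore = x[:50], y[:50]

# Confirmation half: run the now pre-specified test on untouched data,
# so the tested model does not depend on the tested data.
x_confirm, y_confirm = x[50:], y[50:]
r, p = stats.pearsonr(x_confirm, y_confirm)
print(f"confirmatory r = {r:.3f}, p = {p:.4f}")
```

This costs sample size, but the p-value from the confirmation half is not contaminated by the model-selection step.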
I know that this is frequently done, and it is often useless. In my opinion, an estimate of the effect size (together with its precision, best given as a confidence interval), based on the available data, is much more instructive than a formal test. For a linear relationship the interesting effect size would be the slope.
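A sketch of reporting the slope with a confidence interval rather than only a p-value (again with simulated data as a stand-in; the 95% level is an assumed convention):

```python
# Sketch with simulated data: slope estimate with a 95% confidence
# interval from a simple linear regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(size=60)
y = 2.0 + 0.4 * x + rng.normal(size=60)

res = stats.linregress(x, y)
# 95% CI for the slope: estimate +/- t-quantile * standard error,
# with n - 2 degrees of freedom for the residual variance.
t = stats.t.ppf(0.975, df=len(x) - 2)
lo, hi = res.slope - t * res.stderr, res.slope + t * res.stderr
print(f"slope = {res.slope:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The interval conveys both the size of the association and the precision of the estimate, which a lone p-value does not.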