In my opinion, only if we can perform a "Randomized Controlled Trial". In that case the two cohorts are completely symmetrical except for the value of the variable under study, so we can correctly assess its effect on the other variable. If we are passively collecting statistics (that is, using "observational data"), nothing can be guaranteed. It may be that it is not A that influences B but B that affects A, or that both A and B are simultaneously affected by some third factor not taken into account when collecting the statistics, etc. I covered this in more detail in my preprint: How Reliable Are Causal Inferences Produced by Statistical P...
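To see concretely why observational correlation can mislead, here is a minimal simulation sketch (Python; the coefficients and variable names are hypothetical): a hidden factor C drives both A and B, producing a strong correlation even though A has no effect on B, while randomizing A, as an RCT does, makes the correlation vanish.

```python
# Minimal sketch (hypothetical coefficients): a confounder C drives both
# A and B, so A and B correlate even though A has no effect on B.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational world: C -> A and C -> B, but no arrow A -> B.
C = rng.normal(size=n)
A = 2.0 * C + rng.normal(size=n)
B = 3.0 * C + rng.normal(size=n)
print("observational corr(A, B):", np.corrcoef(A, B)[0, 1])  # ~0.85

# Randomized controlled trial: we assign A at random, cutting the C -> A link.
A_rct = rng.normal(size=n)
B_rct = 3.0 * C + rng.normal(size=n)   # B still depends only on C
print("RCT corr(A, B):", np.corrcoef(A_rct, B_rct)[0, 1])    # ~0
```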
Hume made the argument that one cannot see or perceive causality. Therefore causality is established by the judgement of humans (i.e. our perception of it; whether causality exists in the world outside we do not know for sure - I personally believe that it does). Kant picked up this argument and elaborated it.
Quick and dirty: we make the fundamental assumption that causality exists when we confront experiences. Based on this assumption we start to infer relationships between the contents of those experiences, via logical elaborations built on further assumptions. In a next step we must develop rules that describe these relations, and ultimately we must reach laws. These laws we can then use as presuppositions in further confrontations with experience; this is what is called "a priori". The experiences are either in accordance with the established a priori models of the world or not. If not, the laws must be adapted.
Applied to your specific question: a correlation is a logical analysis of the appearance of two observations over the course of time, which reaches a judgement, based on the axioms of Kolmogorov, on the probability that both variables coincide (thus Kolmogorov, since he defines probability axiomatically). The laws that have been inferred by such a logical analysis might account for a coincidence in time or even for a causal relationship. If they do, then the correlation can be taken as suggestive evidence that the logical elaboration was correct. Therefore causality between two variables is established by human thinking and judgement.
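For reference, the Kolmogorov axioms mentioned above, in their standard textbook form (a probability measure P on a sample space Ω with event σ-algebra 𝓕):

```latex
% Kolmogorov's axioms for a probability measure P on (\Omega, \mathcal{F}):
\begin{align*}
&\text{(K1) Non-negativity:}       && P(E) \ge 0 \text{ for every event } E \in \mathcal{F},\\
&\text{(K2) Normalisation:}        && P(\Omega) = 1,\\
&\text{(K3) Countable additivity:} && P\Big(\bigcup_{i=1}^{\infty} E_i\Big)
    = \sum_{i=1}^{\infty} P(E_i) \text{ for pairwise disjoint } E_i.
\end{align*}
```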
However, to build more solid evidence for a causal relation, one needs a functional description that can be experimentally tested (through measurements or simulations).
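As one illustration of what "experimentally tested through simulation" could look like, here is a rough sketch (all functional forms and coefficients are hypothetical): a structural model is posited, and its interventional prediction is compared against the naive observational regression.

```python
# Sketch: a functional (structural) description makes interventional
# predictions that simulation can check. Coefficients are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def simulate(do_a=None):
    """Assumed structural model: C -> A and (A, C) -> B.
    If do_a is given, A is set by intervention rather than by C."""
    C = rng.normal(size=n)
    A = np.full(n, do_a) if do_a is not None else 2.0 * C + rng.normal(size=n)
    B = 1.0 * A + 3.0 * C + rng.normal(size=n)
    return A, B

# The observational regression slope of B on A is inflated by confounding...
A, B = simulate()
print("observational slope:", np.polyfit(A, B, 1)[0])        # ~2.2

# ...whereas the simulated intervention recovers the functional coefficient.
_, B0 = simulate(do_a=0.0)
_, B1 = simulate(do_a=1.0)
print("interventional effect:", B1.mean() - B0.mean())       # ~1.0
```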
It is worth mentioning that there is now a dedicated field for causal inference from observational data; on this topic, I recommend the following book:
Pearl, J., Glymour, M., & Jewell, N. P. (2016). Causal inference in statistics: A primer. John Wiley & Sons.
In summary, if you are dealing with observational data, you need assumptions in addition to the data if you want to do causal inference.
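One standard example of such an assumption, from Pearl's framework, is the back-door adjustment formula: if a covariate set Z is assumed to block every back-door path from X to Y (an assumption the data alone cannot certify), then the interventional distribution can be computed from observational quantities:

```latex
% Back-door adjustment (Pearl); valid only under the assumed condition
% that Z blocks all back-door paths from X to Y:
P\big(Y = y \mid \mathrm{do}(X = x)\big)
  \;=\; \sum_{z} P(Y = y \mid X = x, Z = z)\, P(Z = z)
```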
Correlation means a likeness or relationship, whereas causation is prior to what proceeds from it ... those emanating from the cause and thus a likeness, or correlation, in certain respects.
I think Ann is correct - if all you have is a correlation, then a formal causal explanation is not possible; you're doomed. Or are you?
Correlations don't just fall out of the sky, and you don't find them just lying around. They come from an analysis, and an analysis means that someone went out and collected data, and before that they made a decision about what to measure to get those data. The measurement instrument was probably based on some theory, and there is probably some theoretical reason why someone decided to measure and correlate these particular variables.
So what I'm trying to say is that the correlation is the end point of a series of decisions that were probably driven by theory, previous research, and hopefully some common sense. Using all this other information, you could consider each of the Bradford Hill criteria and make some judgement about the likelihood of the relationship being causal.
So, if you say to me "Mark, I've a correlation between two variables and it is r = .45; does this represent a causal relationship?" I'll say "I don't know, and can never know". If you say to me "Mark, I've a correlation between two variables and it is r = .45, and I know lots of other things about these variables, and there's other research; does this represent a causal relationship?" I'll say "I don't know, and we can never be sure, but we can certainly try to think about how plausible it is".
Nancy Cartwright has a saying, "causes in, causes out", which means that you need some assumptions about causal relations before the data may (or may not) help you reach others. There has been lots of work in the last few decades on reaching causal conclusions from non-experimental data (e.g., Pearl, Rubin, Rosenbaum), but these approaches require extra assumptions. Of course, even in experimental settings assumptions are needed.
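As a flavour of the Rubin/Rosenbaum line of work, here is a minimal inverse-propensity-weighting sketch (synthetic data, hypothetical names); note that its causal interpretation rests entirely on the assumption of no unmeasured confounding - exactly Cartwright's "causes in, causes out":

```python
# Sketch of inverse-propensity weighting (Rosenbaum & Rubin tradition).
# The causal reading needs an untestable assumption: no unmeasured
# confounders given Z ("causes in, causes out").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 50_000

Z = rng.normal(size=n)                                        # observed confounder
T = (rng.uniform(size=n) < 1 / (1 + np.exp(-Z))).astype(int)  # confounded treatment
Y = 1.0 * T + 2.0 * Z + rng.normal(size=n)                    # true effect = 1.0

print("naive difference:", Y[T == 1].mean() - Y[T == 0].mean())  # biased upward

# Model the treatment assignment, then reweight to mimic randomization.
e = LogisticRegression().fit(Z.reshape(-1, 1), T).predict_proba(Z.reshape(-1, 1))[:, 1]
ate = (np.average(Y[T == 1], weights=1 / e[T == 1])
       - np.average(Y[T == 0], weights=1 / (1 - e[T == 0])))
print("IPW estimate:", ate)  # ~1.0, but only if the assumptions hold
```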
Most of these logical systems are of course coherent. Yet each makes assumptions: under extreme realism the model is a "true" description of reality, while under anti-realism it is merely a description of "reality", where a model can be regarded as "epistemically adequate", meaning we believe the model bears some relation to "reality". Of course there are positions in between.
For example, Halpern starts with X=x, whereby X represents a random variable and x a realisation of it. But this means x is obtained completely at "random" from the target population (defining the actual population). Then u represents the context, M the model, and ψ is assumed to be the actual causal link: (M, u) ⊨ ψ. Hence, a priori we need to assume that ψ is the actual "true" causal link and that X=x. But I would not know whether X=x, and I know that with the data I work with this assumption is simply invalid. I also do not know whether ψ holds, but that is what I want to find out. Hence, we need to believe that (M, u) ⊨ ψ and X=x are "true" in order to make a causal claim; we cannot claim them to be "true" because we "found" them. But these seem to me to move very close to extreme realism (please correct me if I am wrong; I would be curious).
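For readers unfamiliar with the notation, here is my rough paraphrase of Halpern's setup (details are in his Actual Causality; treat this as a sketch, not his exact definitions):

```latex
% Halpern-style causal model (paraphrased sketch):
% signature S = (U, V, R): exogenous variables U, endogenous variables V,
% ranges R; model M = (S, F) with structural equations F; a context u
% fixes the values of the exogenous variables. A causal formula \psi is
% then evaluated as
(M, u) \models \psi
% i.e. \psi holds in model M under context u. The realism worry above is
% about what licenses treating M (and u) as "true" of the world.
```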