Your question is quite general, perhaps more philosophical than mathematical. I am referring to "self-existence". At the beginning of the twentieth century there was a controversy between constructivists (Brouwer, Weyl et al.) and formalists (Hilbert et al.). Formalists advocated the "self-existence" of mathematical theories, specifically formal logic systems, while constructivists denied such "self-existence" and claimed that all mathematical constructs were human constructs (let us say models of reality). This controversy culminated with Gödel's incompleteness results, which downsized the expectations of the formalists.
My personal opinion is that all of mathematics (including probability) consists of human models more or less adapted to our universe. We do not "discover" pre-existing "self-existent" theories, a term that mimics the idea of God, but we do propose models to better understand our reality. For instance, I do not think that the fruitful concept of energy is "self-existent", whatever its mathematical or physical properties.
With respect to some more detailed aspects of your question, about a relevant "group symmetry" in probability, I think that the answer is yes, although the adjective "symmetry" is perhaps not completely adequate. Let me propose a simplification: consider an array of probabilities assigned to a finite partition of a probability space (discrete probabilities, or if preferred a finite sigma-additive measure), \vec{p} = (p_1, p_2, ..., p_k) where \sum_i p_i = 1. Arrays of this kind live in the k-part simplex. J. Aitchison (1986, 2003) introduced a group operation in the simplex, called perturbation, and a distance between its elements. Pawlowsky-Glahn and myself (2001) and Billheimer et al. (2001) showed that the k-part simplex acquires the structure of a Euclidean space (of dimension k-1) when perturbation and the Aitchison geometry are adopted. For the stated question, the important point is that perturbation can be identified with the Bayes formula: for instance, identify \vec{p} with prior probabilities and \vec{q} with a discrete likelihood (up to a normalizing constant; likelihood principle). That is, the Bayes formula becomes a commutative group operation, and it introduces a duality-symmetry between probability vectors and likelihood vectors.
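This identification can be checked numerically. Below is a minimal sketch of my own (the numbers are made up): perturbation is componentwise multiplication followed by closure (normalization), and it coincides with the Bayes update of a discrete prior by a likelihood.

```python
import numpy as np

def closure(x):
    # Project a vector of positive components onto the simplex (sum to 1).
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb(p, q):
    # Aitchison perturbation: componentwise product, then closure.
    return closure(np.asarray(p) * np.asarray(q))

prior = np.array([0.5, 0.3, 0.2])
likelihood = np.array([0.2, 0.5, 0.3])   # any positive rescaling gives the same result
posterior = perturb(prior, likelihood)

# Direct Bayes formula: posterior_i proportional to prior_i * likelihood_i.
bayes = prior * likelihood / (prior * likelihood).sum()
# perturb(prior, likelihood) agrees with bayes componentwise, and
# perturb(prior, 10 * likelihood) gives the same posterior (likelihood principle).
```

The closure step is exactly the normalizing constant of the Bayes formula, which is why rescaling the likelihood does not change the result.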
Moreover, this duality between probabilities and likelihoods suggests that the probability axiom asserting P(\Omega) = 1 is irrelevant from the theoretical point of view, although it is useful for computation. It is replaced by accepting that vectors with proportional positive components are equivalent, as proposed in compositional data analysis. These questions can be translated to a more general setting for continuous probabilities (proper or improper), i.e. sigma-additive measures. For a discussion of such concepts see the following article (and references therein).
@ARTICLE{EPTOB2013,
  AUTHOR  = {J. J. Egozcue and V. Pawlowsky-Glahn and R. Tolosana-Delgado and M. I. Ortego and K. G. van den Boogaart},
  TITLE   = {Bayes spaces: use of improper distributions and exponential families},
  JOURNAL = {Revista de la Real Academia de Ciencias Exactas, F\'{\i}sicas y Naturales. Serie A. Matem\'aticas},
  YEAR    = {2013}
}
Concerning a group operation for probabilities, which are by construction constrained to the simplex, you might be interested in looking at Pawlowsky-Glahn, V. and J. J. Egozcue (2001), Geometric approach to statistical analysis on the simplex, Stochastic Environmental Research and Risk Assessment 15(5), 384-398.
How can probability be self-existent as a concept in the sense that it had always been there, and then one fine day mankind came to know about its existence?
From D. V. Lindley and the other statisticians who laid the modern foundations of the Bayesian theory of probability, we know that probability is a consistent measure of one's beliefs within a given reference frame. Other measures exist, but they have a one-to-one mapping with probability; odds are a good example of this.
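As a small illustration of that one-to-one mapping (mine, not from the thread), probabilities and odds convert back and forth without any loss of information:

```python
def to_odds(p):
    # Probability -> odds in favour; defined for 0 <= p < 1.
    return p / (1.0 - p)

def to_prob(o):
    # Odds -> probability: the inverse mapping.
    return o / (1.0 + o)

# The round trip recovers the original probability, so the mapping is 1:1:
# to_prob(to_odds(p)) == p for every p in [0, 1).
```

For instance, a probability of 0.5 corresponds to odds of 1 (even odds), and odds of 3 to a probability of 0.75.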
@Juan Jose, very good work, and I mean it. You should expand it to cover more classical statistical concepts, as you have already done with the exponential family.
The motivation for asking this question was the many debates in quantum mechanics that involve the concept of probability in an almost metaphysical manner. Perhaps another civilization would have explained the same problems in a different language. That is the chance: to escape from our traditional, almost 'scientifically based religion' and, keeping only the minimal requirements, find a new synthesis.
If we accept that probability is just one of OUR definitions, then we can alter our view.
Actually, my statement was not with respect to probability only. You may please search for 'Hemanta Baruah + God does play dice after all' on Google. You will then find out why and where I made this comment about three years ago.
Uncertainty with reference to probability as well as fuzziness governs Nature. Therefore I said so! First it was Einstein who said that God does not play dice. Later, it was Hawking who said that God does play dice, but the dice may be lost. I have however observed that God does play dice after all!
Incidentally, I have not said that the Universe is controlled by chaos!
Three types of uncertainty are currently being studied: those based on probability, fuzziness and chaos. My statement was with reference to probability and fuzziness; I did not mention chaos in my post. Of course, the geometry of fractals and the mathematics of self-similarity do explain certain botanical and zoological matters, and such things are part of chaos theory. However, I have my own reservations about calling non-linear dynamical systems uncertainties.
Even beyond the definition of fuzzy logic, there are other things associated with fuzziness, and I have found that a law of fuzziness can actually be explained with the help of two laws of randomness. In my earlier post, I mentioned certain keywords to search for; you may please do that once. You will then come to know why I included the word 'fuzziness' in that letter. Accepting my views is up to you, however! Thank you.
Scientists introduce fruitful concepts and, through the mathematical method of a self-confident axiomatic paradigm, create theories that are indeed beautiful. The problem is whether all those nice, simplified theories can explain the reality of our local universe or, better, the rules such universes have to obey for stability reasons.
But after a series of successful explanations of real phenomena, the next generation of scientists tends to believe that those previously defined concepts are almost self-existent. This is, I think, due to the extreme demand for specialization, which strictly defends the borders of the relevant discipline. So nobody tries to think about the fundamentals, although many paradoxes have already occurred...
It is better for those scientists to try altering the reality rather than changing their initial axioms.
The same, I think, is true for probability: there exist so many paradoxes from many different scientific fields that can be explained only by, at least, looking at the reasons for introducing the concept of probability.
After all, we are not such enthusiastic card-players anymore!
No, fuzziness has always been there, and there are situations which are explained by fuzziness only.
You are however partially correct from one standpoint! For an explanation of that again, I would like to request you to look into the matters with reference to my earlier letter.
Probabilities have a dual nature. They are partly subjective and partly objective (i.e. self-existent). The subjective and the objective come together to give us probabilities as we know them in mathematics. Please see my work "Are all Probabilities fundamentally quantum mechanical?". "What exists by itself" is different from "what we make of it by our inevitable interaction with it" (to know even that such a thing/event/process as in question exists at all). The thing-in-itself (Kant) appears differently to different observers. Sooner or later we have to come to this unavoidable conclusion. There are no separate compartments like physics, philosophy, biology, mathematics and the like, although it is convenient to restrict ourselves to a particular sphere of discourse; it is not absolutely necessary from the point of view of the investigation of Reality.
So probabilities, and for that matter anything that is perceived (phenomenal world) or conceived of (world of ideas) cannot be self-existent, but can only be a projection of the self-existent thing-in-itself onto the plane of the perception or conception of the conscious observer.
I like the projections, which is why I liked your answer.
I agree: everything we 'build' as human beings is a projection of something that probably exists 'out there', without the need for our existence.
So, as Plato argued, we project the Ideas onto a formal theory.
Or, as others argue, we invent rules whose only requirement is to be self-consistent, so that they repeatedly reproduce the same answer (experimental science).
So, the question is about the hierarchy of our Theories/Inventions.
Firstly, the assumption that something probably exists "out there" without the need for our existence (in here!) is very well accepted in objective science, but existence/non-existence is reflexive in character. If A exists w.r.t. B, then B also exists w.r.t. A. If A truly does not exist w.r.t. B (at least as a concept/idea, if not as a concrete object), then B also cannot be said to really exist w.r.t. A. We are part and parcel of that which exists "everywhere", since "out there" and "in here" are our human concepts again. The observer and the observed are inseparable twin facets of the same one whole that exists by itself.
Secondly, the hierarchy of theories shall continue to trickle nonstop through the intellects of scientists and philosophers alike as long as the idea of the separation between the observer and the observed remains. But this is how and why all our sciences progress day by day.
To return to your original query: the objective part of probabilities is in the frequencies of outcomes, while the subjective part is in the degrees of belief in the occurrence of an outcome, given the knowledge at the disposal of the assigner of the probability.
I will second Rajat's opinion. I emphasize the duality of probability in the first lecture of my class, where I contrast the double-slit experiment with a subjective belief.
I then go over classical results showing that if a subjective belief satisfies certain axioms, then it can be modelled with probabilities.
Subjectivity is associated with fuzziness, not with probability. The Bayesian definition is of course different from the usual definition of probability; in this sense there is subjectivity involved.
Just because Bayesian analysis 'can do a much better job', fuzzy mathematics cannot be said to be redundant. Anyway, that is your 'belief'. There cannot be any debate on personal beliefs!
Bayesian principles are based on defining the probability of an event that has already happened in the past. That is algebraically correct, but what about the physical significance of the probability of a past event?
Further, the fixation of the prior probability laws is subjective. Earlier, they used the uniform prior. Then fast computers came into the picture, and they started using more complicated probability laws. Indeed, it was shown that uniform priors are not truly non-informative. Why we should use a particular prior, and not another, is a point not mathematically explained. In that case, would it be proper to say that Bayesian analysis can do a 'much better' job?
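The point about uniform priors can be illustrated with a standard reparametrization argument (the sketch below is mine, not from the thread): a uniform prior on p is not uniform on a transformed parameter such as theta = p^2, so "flatness" alone does not single out a unique non-informative prior.

```python
import random

# If p is uniform on (0, 1), then theta = p**2 satisfies
#   P(theta < 0.25) = P(p < 0.5) = 0.5,
# whereas a uniform prior on theta itself would give 0.25.
random.seed(0)
n = 100_000
frac = sum(random.random() ** 2 < 0.25 for _ in range(n)) / n
# frac comes out close to 0.5, not 0.25: uniformity is not preserved
# under reparametrization, so the choice of scale is itself subjective.
```

This is one reason alternatives such as invariance-based priors were proposed; which one to adopt remains a judgement call.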
Bayes' theorem was about the probability of an event that had already happened in the past. That probability was called the a priori probability. From that came the idea of a prior probability law that is used in Bayesian matters.
Everything has an element of subjectivity, more so probability. When you said "I think, in probability there is nothing subjective!", just pause for a moment and look at the subjective content (the "I think") of your own statement, and at the amount of subjectivity that has so conspicuously come up in the various opinions expressed by others in reply to your post. This is proof positive that subjectivity is inbuilt in any concept or quantity, whether we admit it or not.
But yes, you do have a point when you say "no subjectivity", if by that you mean "inter-subjective agreement". Truly, what we call objective is what all subjects agree upon, and this common ground in perceptions/conceptions is called objectivity.
For example, given the unbiasedness of a coin, i.e. for one who knows about this characteristic of the coin from previous testing, P(heads) = 1/2. This probability is in the head of the knower, regarding the future result of a toss, although it is a common habit to ascribe it to the coin. However, for one who has no knowledge of the unbiasedness of the coin, P(heads) will not in general equal 1/2, depending on his/her degree of belief in the unbiasedness. Maybe we then have to decide upon a second-order probability, i.e. the probability that P(heads) will be assigned the value 1/2.
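One concrete way to handle such a second-order belief (a textbook-style sketch of mine, not something proposed in the thread) is to place a conjugate Beta prior on the coin's unknown heads-probability and update it with observed tosses:

```python
from fractions import Fraction

def beta_update(a, b, heads, tails):
    # Conjugate update: Beta(a, b) prior + observed tosses -> Beta(a+heads, b+tails).
    return a + heads, b + tails

def posterior_mean(a, b):
    # Mean of Beta(a, b): the updated degree of belief in "heads".
    return Fraction(a, a + b)

# Start from ignorance about the bias: the uniform Beta(1, 1) prior.
a, b = beta_update(1, 1, 7, 3)   # observe 7 heads and 3 tails
belief = posterior_mean(a, b)    # 2/3: the belief has moved away from 1/2
```

The Beta distribution here is exactly a probability over the value of P(heads), which is the second-order probability mentioned above.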
To ascribe properties to the perceived objects is so deeply ingrained in us that we can hardly think otherwise. This is a tremendous lacuna of the instrument called human intellect, which is programmed to operate in this manner, though with arduous practice it can be deprogrammed. What is perceived is always the result of an interaction between the perceiver and the perceived, though the qualities are always assigned to be residing in the perceived objects.
This came to the forefront of scientific investigation only after quantum mechanics developed from 1900 onwards. In fact, the subjectivity encountered in QM enters through the subjective character of probabilities.
The probability therefore can be said to exist in the coin objectively only as an approximation, where inter-subjective agreement is taken for granted. But it truly exists in the head of the knower, subjectively. How to quantify this subjectivity is of course a different matter altogether: it may be Bayesian, maybe non-Bayesian, or it may even go through negative or complex probabilities, or through the complex probability amplitudes of quantum theory.
Subjectivity is difficult to tackle, and thus we tend to shy away from any talk of subjectivity as if it were some taboo, and we have been trained to defend objectivity tooth and nail. But this is surely uncalled for and unjustifiable. Please see the works of Bruno de Finetti, Rudolf Carnap, Kyburg and Smokler on the issue of subjectivity in probabilities.
Should we banish something just because it is difficult to tackle, or because it leads to a multifariousness (almost an infinity) of judgements? Seen from a positivist angle, it adds new colour to our old conceptions and is a new challenge, a new adventure worth embarking upon.
Bayesian probabilities are not restricted to statements about past events. They are often interpreted as measures of belief, and the requirement that any rational degree of belief must follow Kolmogorov's axioms is based on Ramsey's Dutch book argument.
I personally find subjective degrees of belief more acceptable than hypothetical limiting relative frequencies in a hypothetically infinite replication of a chance setup. But like most statisticians, I'll use any philosophy of probability that allows me to solve the problem at hand.
My clients are mostly intuitively Bayesian, even when using frequentist methods.
Dear Mohammad, I think that starting from top theories and working downwards is a little dangerous, since the main concept of probability was defined at a very, very low level, i.e. for card-playing.