Can a long-run frequentist even use, let alone make sense of, Kolmogorov's generalization of the strong law of large numbers to independent but non-identically distributed events (where, under certain conditions, relative frequencies converge 'almost surely' to the running average of the individual probabilities in the collective), or at least have some analogue of it?
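For reference, the statement I have in mind is the standard textbook one (Kolmogorov's criterion for independent, not necessarily identically distributed random variables): if $X_1, X_2, \dots$ are independent with finite variances and

\[
\sum_{n=1}^{\infty} \frac{\operatorname{Var}(X_n)}{n^2} < \infty,
\]

then

\[
\frac{1}{n} \sum_{k=1}^{n} \bigl( X_k - \mathbb{E}[X_k] \bigr) \longrightarrow 0 \quad \text{almost surely.}
\]

For Bernoulli trials with success probabilities $p_k$ we have $\operatorname{Var}(X_k) = p_k(1-p_k) \le 1/4$, so the condition holds automatically, and the relative frequency $\tfrac{1}{n}\sum_{k \le n} X_k$ tracks the running average $\tfrac{1}{n}\sum_{k \le n} p_k$ almost surely.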

By a frequentist, I mean one who holds that relative frequencies converge necessarily to the probability value, whether in infinite collectives, in long finite collectives, or even on those accounts which allow (or somehow derive, in a non-circular and non-degenerate way, if that can be done) what Reichenbach calls a partial limit for shorter, albeit non-singleton, collectives.

For example, consider a sequence in which each trial is described by its one true reference class, a reference class that is maximally specific (if that can even be made sense of for a standard frequentist). Now consider an infinite ensemble of such events, each with a distinct but independent probability value (a hypothetical relative frequency: the value the trial would yield were it identically repeated). This is possible because there are uncountably many possible probability values while the ensemble is only countably infinite. In fact, I should expect it to be the norm, since it is quite unlikely (an unfortunate use of a probabilistic term, I admit) that any two outcomes will have the same true probability, or single-case hypothetical limiting frequency.
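As a minimal sketch of this scenario (assuming, purely for illustration, that the distinct single-case probabilities are drawn once from a continuous distribution, so that with probability one no two trials share a value):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
# Each trial gets its own probability value; a continuous distribution
# makes ties occur with probability zero, so each trial is the sole
# member of its maximally specific reference class.
p = rng.uniform(0.0, 1.0, size=n)

# One independent Bernoulli outcome per trial, with that trial's own parameter.
outcomes = rng.random(n) < p

idx = np.arange(1, n + 1)
rel_freq = np.cumsum(outcomes) / idx   # relative frequency up to trial n
mean_p = np.cumsum(p) / idx            # running average of the p values

# Kolmogorov's condition holds (variances are bounded by 1/4), so the
# gap between the two running averages vanishes almost surely.
print(f"after {n} trials: relative frequency = {rel_freq[-1]:.4f}, "
      f"average of the p's = {mean_p[-1]:.4f}")
```

On the mathematics alone the gap shrinks; the question below is whether a frequentist is entitled to this result, given that no probability value has a sub-ensemble of its own.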

So for each trial T in the collective, T is the only event in the collective with the probability value that T has. Given this, how can we say anything at all about the convergence of the relative frequencies in the entire collective? There is no infinite, large finite, or even small finite sub-ensemble for each probability value, so we cannot compute the infinite limit as a weighted average of the limits of those sub-ensembles. I know that, according to the frequentist, the probabilities will all trivially be equal relative to this collective (supposing there is a convergent relative frequency; otherwise they are all equally incomparable); nonetheless, one would normally expect some relation (at least on other accounts) between the relative frequency in this ensemble and the average of the 'probabilities' of the outcomes across its trials.

I presume the variance can be kept finite: suppose, for example, that the sample average of the probabilities is always 0.5 for every block of ten trials, taken from beginning to end (the first ten, the second ten, and so on), but in virtue of different values within any given block; a sketch of this construction is given after the list below. In some weird sense you may then get a quasi-violation of the law of total probability. I call it 'quasi' because it will not really be a violation of the law of total probability, since under frequentism events only have their probabilities defined relative to a collective. Even so, insisting on convergence gives odd results, such as an event's pre-outcome probability depending on its own outcome (like Rabinowicz's centred chances), and thus a failure of independence (which I guess would explain the non-convergence to the mean), but for what appears to be no decent reason at all. Otherwise, insisting on convergence to the mean also leads to odd conclusions and appears unjustified.

This can be generalized even to cases where the probabilities are identical but the trials are distinct (as is usually the case), since frequentists define identical probability by identical reference classes and not the other way around, and thus perhaps even to finite cases. This means that, in addition to the usual reference-class issues (a class too broad, so that its events do not share the probability of the event in question, or too narrow, so that there is not enough data), whereby the sample average may not reflect the probabilities, we may not have it even remotely reflecting them, not even the average of their probabilities, and not even in the infinite limit, on a frequentist account where convergence is supposed to be certain. This is an additional concern: it means the data tells us not only nothing much about any individual event, but not even a rough estimate, and nothing about the probability distribution of the collective as a whole! Has this been considered before, or have I made a mistake about it? Any appeal to the principle of indifference would appear to violate what frequentism is about, which is:

(1) absolute convergence,

(2) if there is an 'almost all', it should not be 'almost all' in the measure over infinite binary sequences, nor 'almost all' (combinatorially speaking) of the outcome distributions of the single sequence; and

(3) 'almost all' (in the relative-frequency sense) of infinite trialings of such identical (although numerically distinct) trialings should give rise to the correct outcome frequency.

The first two senses of 'almost all' amount to a back-to-front, classical outcome-proportion view, as opposed to a frequency 'almost all' over distinct trialings / distinct infinite sequence-trialings (within a collective of collectives, perhaps), which is the sense a frequentist especially needs.
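Returning to the blocks-of-ten construction mentioned above, here is a sketch (with purely hypothetical parameter choices) in which every block of ten probabilities averages exactly 0.5 while the individual values vary from block to block:

```python
import numpy as np

rng = np.random.default_rng(1)

blocks = 10_000
# Five values q in [0, 0.5) and their complements 1 - q: each block of
# ten probabilities averages exactly 0.5, yet the individual values
# differ from block to block.
q = rng.uniform(0.0, 0.5, size=(blocks, 5))
p = np.concatenate([q, 1.0 - q], axis=1).reshape(-1)

# Independent Bernoulli outcome for each trial.
outcomes = rng.random(p.size) < p
print(f"relative frequency over {p.size} trials: {outcomes.mean():.4f}")
```

With genuinely independent outcomes, Kolmogorov's condition applies and the overall relative frequency converges to 0.5 almost surely; an account on which it fails to do so is, in effect, denying the independence.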
