Are there any analogues of Villegas's atomlessness constraint that provide a unique probability representation for finite systems, by embedding infinitely many finite systems together (as if they were distinct complementary trials) which nonetheless express a total probability ordering? Or, in other words, analogues that do not require the system to be countably additive but merely finitely additive?
Apparently it entails that the system admits countably infinitely many almost uniform partitions. Are the cells of such a partition almost equal in probability across all disjoint, distinct propositions (the whole sample space), or only among those cells that are subsets of the same event A? This seems to presume that, given atomlessness, one can always find a subset of the more probable event which is equally probable with the less probable event. Doesn't this sneak in some kind of indifference/equiprobability, even if atomlessness is a derived constraint? The thought is: no matter how probable the more probable event ~A is, at some point one of its ever-smaller subsets (of subsets of subsets) is equally probable with A; or there comes a point, or a limit toward a point, at which the subset is so small that it can be counted as having probability zero. Presumably this is a consequence of monotone continuity, or an artefact of the mathematics in which there are no infinitesimal differences: if you judge some C less probable than B, where C is a subset of B, then C must be strictly less probable by a real-number amount, so a strictly decreasing chain of such subsets must limit toward probability zero.
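For what it's worth, here is the definition as I understand it (my paraphrase of Savage's almost uniform partitions, which Villegas-style atomlessness is meant to secure; the bound below is my own back-of-envelope derivation, so treat it as a sketch rather than the canonical constants):

$$\{A_1,\dots,A_n\}\ \text{is almost uniform iff}\quad \bigcup_{i\in I}A_i \preceq \bigcup_{j\in J}A_j \quad\text{whenever } |J| = |I| + 1.$$

In any agreeing finitely additive $P$, each single cell is then dominated by every disjoint pair of other cells; grouping the remaining $n-1$ cells into $\lfloor (n-1)/2\rfloor$ disjoint pairs gives

$$1 \;\ge\; P(A_i)\,\bigl(1+\lfloor (n-1)/2\rfloor\bigr), \qquad\text{so}\qquad P(A_i) \;\lesssim\; \frac{2}{n}\;\longrightarrow\;0.$$

So no indifference between cells need be assumed: the two-sided comparisons alone squeeze every cell's probability toward zero as $n$ grows, and counting how many cells fit under an arbitrary event $B$ pins $P(B)$ to within roughly $2/n$, which is where uniqueness comes from.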
To be properly rigorous, would allowing for the (possibility of) infinitesimal differences require that one explicitly find some subset of the more probable event A that is equally probable with ~A?
Or would deriving an Archimedean condition (monotone continuity), rather than assuming it, rule infinitesimals out (if one is initially open to infinitesimal differences)? I presume the justification of that condition, or something of its ilk, would then include the claim that one can always find subsets of an event A (more probable than ~A) which one judges explicitly equiprobable with ~A, or which become arbitrarily small in probability, limiting to zero.
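For concreteness, one standard formulation of monotone continuity for a qualitative ordering $\succeq$ (Villegas states it up to equivalent variants) is:

$$A_1 \subseteq A_2 \subseteq \cdots \ \ \text{and}\ \ B \succeq A_n \ \text{for every } n \quad\Longrightarrow\quad B \;\succeq\; \bigcup_{n} A_n.$$

Read contrapositively, an increasing chain of events cannot creep infinitesimally close to $B$ from below while its union jumps strictly above $B$. This is the clause that forces the representing measure to be countably additive and excludes infinitesimally separated events, rather than any explicit exhibition of an equiprobable subset.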
Or is the assumption that these partitions stretch across the entire space, differing marginally from each other but ultimately ranging between full probability and almost zero probability? Is this premised on the idea (given the probability calculus, or a strong representation making use of its additivity principles) that if there are two mutually exclusive and exhaustive events A and ~A with A ≻ ~A, and I can find some subset B of A that is equally probable with ~A, then the disjunction of the two (that is, B ∨ ~A) must also be ordered against A? I presume all these sub-events must be ordered. So if B ∨ ~A comes out more probable than A, we know that A cannot be, say, twice as probable as ~A, so A is bounded between 0.5 and 2/3; different orderings of these sub-events would give differing values for those assignments under the unique strong representation.
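Spelling out the arithmetic under that reading (assuming $B \subseteq A$, $B \sim \lnot A$, and a finitely additive agreeing $P$):

$$P(B \cup \lnot A) \;=\; P(B) + P(\lnot A) \;=\; 2\bigl(1 - P(A)\bigr),$$

so $B \cup \lnot A \succ A$ forces $2(1 - P(A)) > P(A)$, i.e. $P(A) < 2/3$ (being exactly twice as probable as $\lnot A$ would mean $P(A) = 2/3$), while $A \succ \lnot A$ gives $P(A) > 1/2$. Each further comparison of this kind narrows the interval again, which is how a rich enough total ordering of sub-events singles out a unique representation.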