Just following Wikipedia: "Randomness means lack of pattern or predictability in events. The mathematical theory of probability arose from attempts to formulate mathematical descriptions of chance events, originally in the context of gambling, but later in connection with physics."
However, even an event with probability 0 may happen.
So the two concepts are related, but slightly different.
I propose more reflection on the following definition: "Algorithmic information theory studies, among other topics, what constitutes a random sequence. The central idea is that a string of bits is random if and only if it is shorter than any computer program that can produce that string (Kolmogorov randomness)—this means that random strings are those that cannot be compressed."
This is because any compressibility implies a kind of order or rule present in the string. Random strings are those in whose generation no such rule acted.
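A minimal sketch of this idea in Python, using zlib's compressed size as a crude, computable stand-in for Kolmogorov complexity (which is itself uncomputable):

import random
import zlib

patterned = b"01" * 5000  # 10,000 bytes generated by an obvious rule
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(10000))  # 10,000 pseudo-random bytes

print(len(zlib.compress(patterned)))  # tiny: the rule "repeat 01" is easy to exploit
print(len(zlib.compress(noisy)))      # close to 10,000: no exploitable rule is found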
Another point to think about is the following:
"Perception of randomness is always wrong
If we perceive randomness to be a string of letters or numbers in no order whatsoever, it would be more random for it to be lots of o's, because it is unexpected. This is one of the ideas surrounding randomness, there is no correct definition of randomness, because the definition of randomness can be the exact opposite of whatever you think it is. That also means that randomness can be whatever you think it is. This is the problem, there is no truly correct way to define randomness, rather, there is a correct way to think about it, scientifically."
So, to finish here: a true proof of randomness is when you lose a game of chance that, according to probability theory, you were supposed to win!
Historically, randomness was first linked to (or better, opposed to) cause and effect.
Later it was linked to probability, but the conception of randomness depends on the view of probability (classical, frequentist, subjective, or axiomatic) that you follow.
Though they are seemingly independent, they are actually not! For example, take a coin toss: if the coin is fair, each outcome is equally probable, so the corresponding probability is 1/2 for each outcome. If the coin is biased, it obviously yields a different probability distribution. Similarly, take a die with 6 faces: if the die is tampered with, the "randomness" is biased towards something, which makes the probability distribution non-uniform. A very similar scenario occurs when we describe the microcanonical and other ensembles in Statistical Physics: the former is "unbiased" and the latter are "biased".
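A minimal simulation sketch of this point in Python (the 0.7 bias is an arbitrary illustrative choice):

import random

random.seed(42)
n = 100_000
fair = sum(random.random() < 0.5 for _ in range(n))    # fair coin: p(heads) = 1/2
biased = sum(random.random() < 0.7 for _ in range(n))  # tampered coin: p(heads) = 0.7

print(fair / n)    # ~0.5: the uniform, "unbiased" distribution
print(biased / n)  # ~0.7: tampering makes the distribution non-uniform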
Randomness is used when you cannot guess based on any evidence or formulas. It is a faster way, a shortcut; but remember that probability is close to reality, while randomness is not that close to probability.
The probability distribution of random sources cannot be uniform, else the growth of shustrings (concomitant with the growth of the sequence obtained from the random source) would be uniform; shustrings would tend to a single length, and this is observed not to happen with random sources.
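For readers who want to experiment with this claim, a naive Python sketch (assuming "shustring" means the shortest substring starting at a given position that occurs exactly once in the whole sequence, counted with overlaps; the length 500 and binary alphabet are illustrative choices):

import random

def count_overlapping(s, sub):
    # Count occurrences of sub in s, allowing overlaps.
    count = start = 0
    while True:
        start = s.find(sub, start) + 1
        if start == 0:
            return count
        count += 1

def shustring_lengths(s):
    # For each position, the length of the shortest substring starting there
    # that occurs exactly once in s (None if even the full suffix repeats).
    lengths = []
    for i in range(len(s)):
        for L in range(1, len(s) - i + 1):
            if count_overlapping(s, s[i:i + L]) == 1:
                lengths.append(L)
                break
        else:
            lengths.append(None)
    return lengths

random.seed(0)
s = "".join(random.choice("01") for _ in range(500))
found = [L for L in shustring_lengths(s) if L is not None]
print(min(found), max(found))  # the lengths spread over a range rather than one value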
The problem with Kolmogorov measures is that one can easily find a de Bruijn sequence of great length (say, 10^10 digits) in a short time, and yet this sequence is not compressible. So a scalable algorithm may produce an incompressible sequence, mimicking randomness for any number of symbols and any period of time one may like. That a sequence is incompressible does not mean that the sequence is random.
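For concreteness, here is a sketch of the standard FKM (Lyndon-word) construction in Python; it emits a de Bruijn sequence B(k, n) of length k^n from a few lines of code. Whether one then calls the output "incompressible" depends on whether one measures with a general-purpose compressor or with the length of the generating program:

def de_bruijn(k, n):
    # FKM algorithm: concatenate Lyndon words over {0..k-1} whose length divides n.
    a = [0] * (k * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return "".join(map(str, seq))

print(de_bruijn(2, 4))        # 0000100110101111: every 4-bit word appears exactly once
print(len(de_bruijn(2, 16)))  # 65536 symbols from roughly fifteen lines of code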
I agree with William Buckley, and that is why I have cited this fragment of the Wikipedia article. It is moreover the case that the digits of pi or of log 2 are "random" and normal, yet they are produced by algorithms, so they are deterministic and in no way random. Also, the digits of Chaitin's constant omega are perfectly well defined and are "the most random ever", although only finitely many of them can ever actually be computed. Moreover, computers have various programs to generate random sequences, but in fact all are deterministic algorithms, and what they generate is only pseudo-random.
All these facts are presented here only to stress fundamental difficulties in defining (and, moreover, generating) randomness. This is a very hard question.
Ramon, I enjoyed reading your thoughts. I appreciate your thesis regarding randomness, indeterminism and chance ...
My own thesis (you've heard it here first) is that randomness, indeterminism, and chance are necessary devices for creating worlds, and even structures in mathematics. We would not get very far if every digit, every moment in time had to be exquisitely considered by the Creator, as Leibniz's Principle of Sufficient Reason and Shiang-style determinism require.
This sounds reasonable to me, as this is how we can hope to explore the unexplored, the unforeseen, and the unknown, given the reality that our minds, whether we like it or not, cannot conceive anything in its 'fullness' or 'completeness'. One could of course argue about what these terms mean! Well, this could go in circles!!
My personal view on randomness is that there is nothing random in nature: nature does not act randomly. Of course I can't prove it, but, at the same time, we also cannot prove, beyond reasonable doubt, that nature acts randomly. The notion of randomness, I think, is our own creation, a possible explanation for what our minds are unable to fully understand and explain! And I see no problem with that. The challenge is to continue making efforts to delve deeper into finding useful patterns and to develop explanations for what apparently looks random.
As a statistician, I see value in using 'randomization' in designing valid experiments. However, after reading Shiang, I now feel more comfortable calling the 'unexplained' variation 'residual' variation, not 'random' variation.
How about this for the relationship between randomness and probability? An event E is random if nothing logically independent of E either raises or lowers its probability. In other words, E is random if, for all F and G that are logically independent of E:

Prob(E given F) = Prob(E given G)
So it would seem that we should understand "random" to mean just one thing: an inability to predict the next symbol emitted by a source, given exact knowledge of all previously emitted symbols. Prediction requires an algorithm, and algorithms (being deterministic) are not random.
I'd just like to note that 'chance' and 'random' are somewhat different. We could take it that an event is random if there is no effective procedure to predict it (this is not the same as incompressible, which raises a different issue I won't address here). In this sense a chaotic system is random in its regions of chaos (within the system phase space as a whole). However, we speak of "chance encounters", say, meeting someone we know in another part of the world when we could not have predicted it. Chance seems to me to be relative to particular local constraints, but chance events need not be random in an absolute sense: each trajectory leading to the encounter might be highly predictable; it is the meeting that is not predictable from either trajectory alone. I think that this is the most commonly used sense of chance. I have applied algorithmic information theory to talk of a lack of mutual information to make this notion of chance more mathematical. One could also call it "relative randomness", as the chance encounter is not compressible using the resources available from either trajectory alone. This is a resource-relative sense of randomness.
The terms "randomness", "probability" or "uncertainty" or similar words are used more and more often in all branches of science. But , unfortunately, these terms are not introduced scientifically analogously to other terms of physics, like "temperature", "length", "current", etc. As a consequence everybody develops an own interpretation of these terms. Without fixing these terms to a property of a real object, the confusion will not be stopped. As shown above, already Jakob Bernoulli has anwered all the related questions in a lasting way and I wonder why this simple and scientifically correct way is not adopted, at least, by statisticians.
Randomness has to do with giving equal opportunity to every element in a well-defined sample or population. However, probability is the chance that any of the events will occur. It can be said that probability depends on randomness.
@Kamoru: Not equal opportunity, for that implies maximal disorder, and maximal disorder is a characteristic of distributions that has already been rejected (see the previous discussion) as an expression of randomness.
Kamoru Olayiwola Usman's explanation is a good example of an unscientific but very common introduction of the terms randomness and probability. Randomness is explained by the term 'opportunity' and probability by the term 'chance'; thus, the undefined terms are only replaced by other undefined terms. As already mentioned, randomness must be looked upon as a property of processes, and everybody observes this property everywhere and at all times. Probability should be looked upon as a measure used to quantify randomness, much as the kilogram is used as a measure to quantify the inertia of matter.
2. What is the relationship between randomness and probability? Or: can we define randomness in terms of probability?
In my comment on page 2 above, I was trying to answer the second question, not the first. I think I was perhaps too brief in my answer.
Please allow me to try again: an event or occurrence is random if it is not made more or less probable by what went before. Take for example a coin being tossed and coming up heads; the probability of this event is one-half. Take as another example a die being rolled and coming up six; the probability of this event is one-sixth. Take as a final example an event consisting of three tosses of a coin, at least one of which comes up heads; the probability of this event is 7/8.
Notice: these events have different probabilities.
I am saying that each of these is random if their probability conditional upon other events remains the same. For example, the probability of the coin coming up heads at least once in three tosses stays the same even if you conditionalize this probability on other information, for instance, information about the temperature, or the winds, or the results of previous tosses, etc. (By the way, conditional probability is not the same as causation, but this is another matter.)
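To make the claim about conditionalizing concrete, here is a small enumeration sketch in Python (the "prior toss" is my own illustrative choice of independent information, in the spirit of the Prob(E given F) = Prob(E given G) criterion stated earlier):

from fractions import Fraction
from itertools import product

# Sample space: one prior toss followed by the three tosses making up the event.
space = list(product("HT", repeat=4))
E = [w for w in space if "H" in w[1:]]  # at least one head among tosses 2-4
F = [w for w in space if w[0] == "H"]   # the independent prior toss came up heads

p_E = Fraction(len(E), len(space))
p_E_given_F = Fraction(len(set(E) & set(F)), len(F))
print(p_E, p_E_given_F)  # both 7/8: conditioning on the prior toss changes nothing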
John Collier is right to say that randomness and chance should be distinguished. I take randomness not to be a simple assignment of chance or probability, but the idea that an event isn't influenced by other events.
I would like to point out that the thrust of this discussion, among other things, demonstrates a general belief among researchers that randomness is an example of a feature of nature that is not computable.
You need a model of randomness in order to compute probabilities. E.g., for a fair coin, the expected frequencies of heads and tails are equal for the random event of tossing a coin. Knowing that allows you to compute the probability of, say, having three heads in a row. The same applies to a normal distribution as a model of randomness.
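As a minimal sketch of that computation in Python (the fair-coin probability 1/2 is the model assumption doing all the work):

# Under the model "fair coin, independent tosses", it is the model,
# not the data, that licenses this number.
p_heads = 0.5
print(p_heads ** 3)  # 0.125: probability of three heads in a row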
I think the notion that any event in the universe can be totally without cause is ludicrous; all events influence and are influenced by other events. This is to say, I hold that no part of the universe is acausal.
I like Mohan Matthen's answer, but hopefully this is complementary:
Randomness is about the metaphysical nature of the world: is every event caused (shaped to happen in the way it does) by other events? Or, are some aspects of it (including its happening) arbitrary with respect to other events?
Probabilities are how we talk about *apparent* randomness. Sometimes, the probabilities we use accurately capture a real randomness in nature, and help us characterize, to the extent we can, the character of that randomness. (E.g., quantum events.) Even random events aren't totally arbitrary. They have constraints - photons in Young's double-slit experiment can go through only 2 holes, but 'randomly', meaning with 50% probability. (Ignore all the stuff about how photons can go through both holes, for this discussion.)
But some events are only apparently random, and we use probabilities to describe them, too. E.g., coin flips. If we actually knew the details of a coin flip, we could predict it quite confidently. But we don't, so all we can do is describe the 'average' behaviour of a large number of similar events ("fair coin flips").
The first kind of probability is called 'ontic', in that it is a probability that directly reflects the ontological nature of reality (as being random). The second kind is 'epistemic', in that it expresses the degree of ignorance we have about how a particular event will unfold. If we learn more about the second kind of system, we can hope to refine our probabilities or even eliminate them (i.e., make one possibility go to probability 1). With the first kind, there are no facts to learn that will do that; the probabilities are fundamental, not contingent upon our knowledge (or lack thereof).
Randomness means uncertainty in events, and wherever there is uncertainty in events, probability follows. So there is certainly a relationship between randomness and probability; what the exact relationship between them is, we should find out.
I would say that it is impossible for us to detect (if such should exist) events that are completely independent of others since they would have no regularities at all, by hypothesis. Even coin tosses are constrained; not just anything happens or is the cause of them. A completely uncaused event would be problematic, I think, but a completely irregular event is so much more problematic. This is part of the reason why I developed the idea of relative randomness that I mentioned above, connected to the idea of chance.
Who has heard of the old story of the donkey who was placed equidistant between two bales of hay and starved to death?
This might be a good illustration of random decisions. The reason the donkey starves is that he has a rule saying he must decide on the hay that is the preferred choice. He has another rule saying he must make a choice. So in a sense, these two rules contradict. (I'm not sure they do specifically.) He then has a third rule saying the first rule is stronger than the second. Hence he can never take the step that would save his life.
In the case of donkeys that survive, their third rule would be that the second rule is stronger than the first. In the case of these donkeys, they are forced to make a choice even though neither choice is preferred. All things being equal, his choice must be randomly decided.
And so I wonder if, in quantum measurement decisions, a decision is forced by some imperative process, even though no outcome is preferred.
For probability to exist, wouldn't it be dependent on a randomness framework? Wouldn't this possible property be tied to our current knowledge of quantum states? What I couldn't answer is if probability can create randomness, if some form of hierarchical relationship exists between the two whereby one precedes the other, though this would be in conflict with quantum states as I understand them. Plus one would need to know what is the nature of a state and if in fact such a thing exists.
Interesting question. I would say that randomness can be described by probability, but deterministic systems with a lack of information can also be described by probability.
Probability has a clear definition in mathematics: it is a measure on a set such that its value for the whole set is 1. It does not assume anything about randomness or a temporal ordering of events. We can have many different probabilities on the same set, but they must follow well-defined rules.
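Spelled out for reference (the standard Kolmogorov axioms, added here as a summary): a probability measure P on a set S assigns a number P(A) >= 0 to every event A, satisfies P(S) = 1, and is countably additive, so P(A1 u A2 u ...) = P(A1) + P(A2) + ... whenever the events Ai are pairwise disjoint. Nothing in these axioms mentions randomness or a temporal ordering of events, which is exactly the point made above.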
As for randomness, it is relative, estimated with respect to an order or regularity expected a priori. For instance, the height of people is random, but less so if you restrict to some subpopulations by sex, ethnicity, generation, ... This gives some curious results: if you pick up a number randomly, there is a great chance that it is an integer, and a certainty that it is a rational, even though the set of rational numbers has zero measure in the set of real numbers.
Probability is the mathematical theory of random phenomena. Random can also mean uniformly distributed or distributed according to some probability law.
Random numbers, for example, can be entirely ordered or form some less ordered pattern, but the probability of such patterns is relatively low. So a given pattern one finds might be random or not, but if highly ordered it is less likely that it is random. One has to discover how it was formed to know if it was random, or else be able to project it arbitrarily far. All sequences (patterns) generated by an effective procedure, assuming Church's Thesis, are at best quasi-random.
If random includes uniform distribution, then a maximally disordered sequence is random. We have already in an earlier part of this discussion found agreement that a maximally disordered sequence is not random.
After thinking about the question, I realized there is no such thing as probability and randomness, given a sufficient span of time. Over a sufficiently long period, an event with some probability that matches another event considered "random" is a given; at that point probability and randomness are confounded. We speak of probability/randomness because both are experienced within a small capsule of time; that is how their expression is observed by us. So, in fact, that becomes one answer: they are related because of time. Time differentiates probability from randomness and vice versa. Plus, the question itself is not worded correctly, but I forgot how it should be asked :)
@Ken: Your thesis suggests that for infinite time, probability and randomness become nothing more than limitations of experience. Can such experience (derived from limitations on time) be equated to the observation of a sub-sequence taken from a maximally disordered sequence?
The issue is not about randomness; it is about infinity. When mathematicians tried to build the theory of sets in an axiomatic fashion, they found that they needed an axiom saying that there is a set with an infinite number of elements. What that means is that mathematicians invent their own objects; this is their privilege, but these objects are not even an idealization of real objects. Yet we need them to build the efficient mathematics that we are used to. This is also the difference between a computer and the human mind: we can dream of things that will never exist.
Ramon, of course we know from Turing that Borel is incorrect, since we may encode in such a real number the question of whether some third program halts; we know that the proper answer is that such a determination, halting, cannot be made.
To Ramon: Borel's idea sounds analogous enough. I particularly agree with the block-universe "thing" (and incidentally I would like to know who Shiang is and what his book is, though I avoid clouding my own thinking). But this again points straight back to the quantum, or to another parallel from an artist's perspective, namely the internal visual rendering of the mind: take a blank piece of paper or canvas; how do you draw a circle, a square, a horse, or anything else? All lines and all connections are there to begin with, same as in the block-universe idea. In this case, one could say that randomness supports the creation of a particular drawing. Or does it? Where is randomness in this example? Is it the possible joining of points in space (the blank page or canvas) to create a drawing, or is it the "block of all permutations" available to create any possible drawing?
To Mohan Matthen, you wrote that "John Collier is right to say that randomness and chance should be distinguished. I take randomness not to be a simple assignment of chance or probability, but the idea that an event isn't influenced by other events."
I don't see how that is possible. However remotely, it seems all events are inter-related.
I'm assuming that this post pretty much says the same thing:
Prob (E given F) = Prob (E given G)
Correct me if I'm wrong.
Also, back to Ramon's writing, I will take this sentence as summary "Thus the probability of the next coin is not physically dependent on the prior coin toss, but it is logically dependent on the mathematical relation “1 of 2” sides."
How can both events really be separated? How do we know that the prior coin toss, having occurred in the same space, doesn't influence the next, or that the observers of both tosses don't affect the second or even the first toss (per quantum)? All the prior breakdown of probabilistic and deductive logic, abstract logic, and physical causation is indeed helpful to us humans in deconstructing and assigning everything to a little box, a necessary exercise for our minds to understand the world; but unless I've missed something, I for one cannot even separate an abstract idea from matter itself, and even abstract ideas become fully valid objects. To be sure, the danger with abstract systems is that one can build and build and self-justify most nooks and crannies.
So, to Jean-Claude: I understand, and was once told, that all mathematics is, bottom line, also just an abstraction. But if we can dream of things that will never exist, don't they exist by virtue of our having imagined them?
Full disclosure: I'm not as well read as most of you and am not an academic researcher. I merely think on my own and my particular passion is the interpretation of our perception of reality and consciousness using a cybernetic angle. And I wonder why cybernetics has not yet been elevated above even physics as the little I read on physics invariably describes the interaction of two or more systems, physical or otherwise. I'll venture to say the same is true in mathematics. I see cybernetics at play everywhere and even the question how are randomness and probability related, points to the interaction of two systems, if one can define them as such. Which always brings us back to duality and/or opposites and how one can never escape it - except in one instance, related to physics and the birth of a universe. I will if allowed continue to follow this discussion as it is fascinating. And it's a hard one and as Ramon said, we can never really answer the question!
One last thing I remembered. My approach is very similar to the statement Ramon made about God (or whatever one wants to call it; I for one am beginning to seriously consider the Universe as a living being, an idea with echoes in some of the new physics) having built most determinations (your statement redux: I was born and, per my DNA, cannot live 200 years) and then relying on randomness to take care of the details. That to me is fascinating "world-view modeling".
Those are important questions. I will take another example. With General Relativity we are able to model the Universe as a 4-dimensional manifold, including time, and many scientists use the model routinely. However, there is something awkward: does that mean that some events exist in our future, or that events in our past have not yet happened? This is not easy to grasp, less so than manipulating the equations lets it appear. The only way to keep our sanity is to acknowledge that what we can know in a scientific manner (that is, by measures) is specific to each of us. This is the true meaning of the well-known formulas in a change of gauge. Measures and facts open only a window, specific to each observer, on the universe; but meanwhile we can dream of, or imagine, the whole universe. So there is a fundamental difference between what we can feel / measure / scientifically check and what we are allowed to imagine. And the latter does not come, cannot come eventually, from experiments. This is something totally new. An infinite set will never exist, but we can use it. This is the limit of empiricism: if we stick to the facts, if we believe that all comes from experiments, we are doomed. We are right to dream; it is right to use the facts to keep dreams in check, but it is wrong to let them kill our dreams.
Most of these answers are excellent. However, the simple answer is this. Random means that every item in the sample has an equal chance of being included. Without randomness, an accurate probability cannot be determined.
Adding to what James has said ... Random means that every item in the POPULATION has an equal (or a pre-specified unequal) chance of being included in the sample.
My concern is that the definition (inclusion on the basis of equal probability) includes de Bruijn sequences; elsewhere we have already determined that de Bruijn sequences are not random.
Please read page 43 of the book: An Introduction to Probability and Statistics, 2nd Edition, authored by V. K. Rohatgi and A. K. Md. E. Saleh (John Wiley & Sons (Asia), Singapore, 2001).
The notion of probability does not enter into the definition of randomness. This statement is clearly written there.
Randomness and probability are two different concepts: probability is a measure (in the sense of measure theory) that measures randomness, and randomness is the object measured by probability. That is, probability is a mapping from random events to real numbers between 0 and 1. Similar examples: entropy measures uncertainty; the product of length and width measures the area of a rectangle; etc. Please see "A mathematical theory of ability measure" by N. Kong et al. for more examples that answer this question.
If you hold in your hand a fair die that you intend to toss into the air, your expectation of an outcome, say the number one, is 1/6. That is your instantaneous expectation for one throw. However, you have no way to know what the exact outcome is going to be for that given throw. In fact, in real-life dice-throwing experiments, your expected outcome of the number one appears to occur rather 'randomly', not in a regular sequence at every sixth throw. However, after a significant number of throws, say after 100 throws, the 'randomness is tamed' and you do get an average outcome of about 1/6. I have used this concept in 'limited trials' oil and gas exploration. Please refer to my paper, "Exploration Chance of Success Predictions – Statistical Concepts and Realities": http://www.publish.csiro.au/EX/pdf/ASEG2016ab150
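A quick Python sketch of the "taming" described above (the seed and throw counts are arbitrary illustrative choices):

import random

random.seed(1)
for n in (6, 60, 600, 6000, 60000):
    ones = sum(random.randint(1, 6) == 1 for _ in range(n))
    print(n, ones / n)  # the relative frequency of a "one" drifts toward 1/6 as n grows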