If I consult a source of True Random Numbers, and obtain a sequence of same, does the value (utility) decay with time? Or, should I put that sequence into storage, does it later have the same value as when first issued?
The concept of a "decay of the random value with time" isn't typically defined in the context of True Random Numbers. True Random Numbers are meant to be generated by processes that ensure their total unpredictability, independent of the effort that may be invested into predicting them. Once you obtain a sequence of these numbers, their randomness is inherently static.
In that situation you could take a large set of stored random values and replay them; to an observer this may well look like a random sequence (in practice this works fine as long as the observer is not too strict about it), even though it is, strictly speaking, incompatible with the definition of True Randomness because of its (at least partially) static character.
When stored, assuming we have a reliable and error-free storage system (thanks to technologies like ECC codes, which correct minor, and even larger but not too extensive, storage errors), the sequence retains its randomness. Storage does not alter the inherent properties of these numbers, making them effectively the same when accessed later.
Therefore, the utility of True Random Numbers does not decay over time as long as the storage remains intact. The values generated remain consistent with the original sequence, preserving their 'true randomness' upon retrieval.
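As a side note, here is a minimal sketch (in Java, with a hypothetical file name) of how one might check that a stored sequence really is bit-for-bit identical when retrieved later, by recording a digest at storage time and comparing it at retrieval:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.Arrays;

public class StoredSequenceCheck {

    // SHA-256 digest of the bytes currently held in the given file.
    static byte[] digestOf(Path file) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(file));
    }

    public static void main(String[] args) throws Exception {
        Path stored = Path.of("random-sequence.bin");     // hypothetical file name

        // Record the digest when the sequence is first archived ...
        byte[] digestAtStorageTime = digestOf(stored);

        // ... and recompute it when the sequence is retrieved later.
        byte[] digestAtRetrieval = digestOf(stored);

        System.out.println("Sequence unchanged since storage: "
                + Arrays.equals(digestAtStorageTime, digestAtRetrieval));
    }
}
```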
So, if I apply a selection criterion to members of a true random stream, there is some degree of decay in the randomness of the selected members; is this your view?
From a practical standpoint, true randomness is rarely a problem. In fact, in most cases, genuine randomness isn't even necessary - a sufficiently "random" approximation is often enough, as long as it ensures that predicting the sequence remains computationally infeasible in practice. However, because you truly require absolute randomness, things become more complicated.
The fundamental paradox of true randomness is that it exists if, and only if, it does not exist. If we assume that a stream is genuinely random - setting aside metaphysical debates in philosophy - then quantum computing seems to make this possible.
However, the moment you start extracting elements from the stream and applying filters, you force those elements into fixed states, thereby disrupting their quantum characteristics. In doing so, the numbers must exist somewhere, and your selection criteria inadvertently reveal information about which numbers cannot be generated.
True randomness is meant to emerge from processes that ensure total unpredictability, regardless of the computational effort invested. It's not enough for randomness to be merely "practically impossible" to determine; even with infinite computational power, the only feasible action should be guessing. When you introduce selection criteria, however, this unpredictability is compromised: what remains of it rests not on the impossibility of computation but on the fact that the "guesser" lacks knowledge of your criteria. This level of unpredictability is sufficient for practical applications, which is why the usual pseudo-random generators, such as java.util.Random, java.security.SecureRandom (a cryptographically stronger variant), or the newer generator streams, work well.
Nevertheless, they remain pseudo-random and do not achieve true randomness.
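To make that concrete, here is a minimal sketch of the generators mentioned above, assuming the "generator streams" refer to the Java 17+ RandomGenerator interface; all of them remain pseudo-random in the strict sense used here:

```java
import java.security.SecureRandom;
import java.util.Random;
import java.util.random.RandomGenerator;

public class PrngExamples {
    public static void main(String[] args) {
        // Deterministic PRNG: the same seed always reproduces the same sequence.
        Random seeded = new Random(42L);
        System.out.println("java.util.Random: " + seeded.nextInt());

        // Cryptographically strong PRNG: seeded from OS entropy and far harder
        // to predict, yet still pseudo-random in the strict sense used above.
        SecureRandom secure = new SecureRandom();
        System.out.println("SecureRandom:     " + secure.nextInt());

        // Java 17+ RandomGenerator interface (the "generator streams" mentioned above).
        RandomGenerator gen = RandomGenerator.of("L64X128MixRandom");
        gen.ints(5, 0, 100).forEach(i -> System.out.println("stream value: " + i));
    }
}
```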
To sum up, yes, in my opinion, applying a selection criterion to members of a truly random stream introduces some degree of degradation in their true randomness.
However, you could resolve this issue by generating the selection criteria themselves through true randomness, instead of supplying them.
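A small illustrative sketch of that idea: the criterion itself (here, a hypothetical residue class) is drawn from the random source rather than supplied by the user. SecureRandom stands in for a true-random source, so this only illustrates the structure, not true randomness itself:

```java
import java.security.SecureRandom;
import java.util.stream.IntStream;

public class RandomlyChosenCriterion {
    public static void main(String[] args) {
        // SecureRandom stands in for a true-random source here; a hardware TRNG
        // would be needed for the argument above to apply strictly.
        SecureRandom rng = new SecureRandom();

        // Instead of supplying a fixed criterion (e.g. "keep even numbers"),
        // draw the criterion itself from the random source: a random residue class.
        int modulus = 2 + rng.nextInt(9);    // hypothetical choice: modulus in [2, 10]
        int residue = rng.nextInt(modulus);  // keep values v with v % modulus == residue

        IntStream stream = rng.ints(20, 0, 1000);  // stand-in for the "true random stream"
        stream.filter(v -> v % modulus == residue)
              .forEach(v -> System.out.println("selected: " + v));
    }
}
```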
Means of correction are circumvented by nature, and so any 'static' representation is necessarily time dependent. Moreover, all point mutations of the 'static' representation of a True Random sequence owing to natural causes are necessarily also True Random sequences. Modern electronic digital computation systems are patently not deterministic, however much our human perceptions might suggest that they are.
To try to find decay rates with respect to True Randomness, I would propose a two-step approach:
Check and determine whether True Randomness is really present in the situation and at the point in time at which we want to obtain the rate:
Let P(eT) represent the total probability of errors which can occur in any electronic digital computation system, and P(eTr) denote the probability of errors resulting exclusively from True Randomness.
Since many of those errors are causal and not the result of True Randomness, we can see that in general P(eT) = P(eTr) does not hold; we can categorize errors into two groups:
- "Useful" errors: Errors that contribute to the True randomness we seek.
- "Useless" errors: Causal errors that are either correctable or non-correctable but in any case irrelevant regarding to True Randomness.
Such useless errors might arise, for example, from:
- Reading data from damaged disk sectors, optical-disc scratches, or temporary head crashes; such errors are not random but caused by physical defects or other harmful interference, e.g. from the user.
- Causal processor computation or storage errors, which may have different causes such as voltage fluctuations (themselves caused, for example, by defects or other problems in the device power supply or the public power grid) or material contamination (especially during the manufacturing process), and which can be misleading.
- A real-world example: Two technically identical processors can exhibit different behaviors due to variations in silicon purity during production. One may perform more reliably than the other, impacting error rates in ways unrelated to randomness.
Thus, a naive formula to determine the randomness-related error probability might be: \[P(e_{Tr}) \approx P(e_T) - P(\text{ecc}) - P(\text{predictable})\] (where P(ecc) accounts for ECC-correctable errors and P(predictable) for non-random error sources).
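A trivial sketch of that naive split; all probability values below are hypothetical placeholders, not measurements:

```java
public class NaiveErrorSplit {

    // Naive split of the total error probability, following the formula above.
    static double randomnessRelatedErrorProbability(double pTotal, double pEcc, double pPredictable) {
        return pTotal - pEcc - pPredictable;
    }

    public static void main(String[] args) {
        double pTotal = 1.0e-6;        // P(eT): total error probability (placeholder value)
        double pEcc = 7.0e-7;          // P(ecc): ECC-correctable share (placeholder value)
        double pPredictable = 2.5e-7;  // P(predictable): causal, non-random share (placeholder value)

        System.out.println("P(eTr) ~ "
                + randomnessRelatedErrorProbability(pTotal, pEcc, pPredictable));
    }
}
```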
To separate such causal errors from truly random errors, we can use histograms and statistical correlation techniques.
For such voltage-related influences, non-linear consumers like inverters, dimmers, frequency converters, etc. are generally very important. For further details on influences in the public power grid, the following in particular looks quite interesting to me:
- [Bundesnetzagentur System Stability Report](https://www.bundesnetzagentur.de/DE/Fachthemen/ElektrizitaetundGas/NEP/Strom/Systemstabilitaet/Systemstabilitaetsbericht.pdf?__blob=publicationFile&v=5)
Assuming we have a static representation (such as a predefined dataset), and we have filtered out the non-random errors from above, we must analyze how randomness persists.
Regarding that, we may notice two things:
Normally, we should not expect any regular decay pattern (e.g. logarithmic) in True Randomness.
So, usually, correlation tests of any kind should fail if True Randomness is present. If a correlation unexpectedly appears, it could indicate that an external non-random influence is altering the dataset.
Thus, to try to get a rate from our sequence, we have different options:
We can compare our current values against our initial values (preferably preserved on robust analog media like punched cards, printed records, or other media which normally do not suffer from the problems of digital storage). This comparison ensures that external influences or hidden patterns have not skewed the dataset.
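A minimal sketch of such a comparison, counting how many bit positions differ between the working copy and the archived original (both file names are hypothetical):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class ArchiveComparison {
    public static void main(String[] args) throws Exception {
        // Hypothetical file names: the working copy and the archived original
        // (e.g. re-digitised from punched cards or a printed record).
        byte[] current  = Files.readAllBytes(Path.of("sequence-current.bin"));
        byte[] original = Files.readAllBytes(Path.of("sequence-archived.bin"));

        int length = Math.min(current.length, original.length);
        long differingBits = 0;
        for (int i = 0; i < length; i++) {
            // Count the bit positions that changed since the original was archived.
            differingBits += Integer.bitCount((current[i] ^ original[i]) & 0xFF);
        }
        System.out.println("Bits changed since archiving: " + differingBits
                + " of " + (8L * length));
    }
}
```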
A better way to get a rate from our sequence may be an entropy measure, where higher entropy suggests greater randomness. For example, we can use Shannon entropy as a quantitative metric: \[H(X) = - \sum_{i} P(x_i) \log P(x_i)\], where P(x_i) is the probability of each possible outcome. A truly random dataset should have an entropy close to the theoretical maximum. But we should also mention that this is a very helpful rate, yet not a sufficient criterion: if we simply select the values in our sequence so that this metric fits, the sequence is obviously not random.
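For illustration, a small sketch that estimates the Shannon entropy of a byte sequence from its observed byte frequencies (the histogram mentioned earlier); the SecureRandom sample merely stands in for the stored sequence:

```java
import java.security.SecureRandom;

public class ShannonEntropy {

    // Shannon entropy H(X) = -sum p(x_i) * log2 p(x_i) of a byte sequence,
    // estimated from the observed byte frequencies (a 256-bin histogram).
    static double entropyBitsPerByte(byte[] data) {
        long[] counts = new long[256];
        for (byte b : data) {
            counts[b & 0xFF]++;
        }
        double entropy = 0.0;
        for (long c : counts) {
            if (c == 0) continue;
            double p = (double) c / data.length;
            entropy -= p * (Math.log(p) / Math.log(2.0));
        }
        return entropy; // theoretical maximum for bytes is 8 bits per byte
    }

    public static void main(String[] args) {
        byte[] sample = new byte[1 << 16];
        new SecureRandom().nextBytes(sample);  // stand-in for the stored sequence
        System.out.println("Estimated entropy: " + entropyBitsPerByte(sample) + " bits/byte");
    }
}
```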
Since true randomness lacks predictable structure, testing for statistical independence with tests like the Chi-Square test can help confirm whether observed data points are free from correlation.
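Here is a minimal sketch of a chi-square goodness-of-fit check against the uniform-byte hypothesis, one common way such a test is applied to raw random bytes; the quoted threshold is an assumption for illustration only:

```java
import java.security.SecureRandom;

public class ChiSquareUniformity {

    // Chi-square statistic against the uniform-byte hypothesis:
    // sum over all 256 byte values of (observed - expected)^2 / expected.
    static double chiSquare(byte[] data) {
        long[] observed = new long[256];
        for (byte b : data) {
            observed[b & 0xFF]++;
        }
        double expected = data.length / 256.0;
        double chi2 = 0.0;
        for (long o : observed) {
            double d = o - expected;
            chi2 += d * d / expected;
        }
        return chi2;
    }

    public static void main(String[] args) {
        byte[] sample = new byte[1 << 16];
        new SecureRandom().nextBytes(sample);  // stand-in for the dataset under test
        // With 255 degrees of freedom, values far above roughly 310 (about the
        // 99th percentile) would cast doubt on uniformity; the threshold is an
        // assumption quoted for illustration only.
        System.out.println("Chi-square statistic (255 d.o.f.): " + chiSquare(sample));
    }
}
```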
Furthermore, we may consider cryptographic test batteries like the NIST Statistical Test Suite or the FIPS 140-2 tests…
I think that we have a paper, one that addresses novel ideas. I propose that we take these ideas to various AIs and synthesise from their yield a proper paper suitable for a well-indexed mathematics journal.
Our ideas might best be presented in the form of a review, with strong suggestions for further work.
I am best contacted by email via bill dot buckley at gmail dot com.
Please send email to me, and I will reply with content.
I collected our discussion, together with some separately supplied queries to Perplexity. The documents should perhaps be organised a bit further, streamlined, and then presented anew to a number of AIs. Requesting references is perhaps key.