Statistical randomness
From Wikipedia, the free encyclopedia
A numeric sequence is said to be statistically random when it contains no recognizable patterns or regularities; sequences such as the results of an ideal die roll or the digits of π (as far as we can tell) exhibit statistical randomness.
Statistical randomness does not necessarily imply "true" randomness, i.e., objective unpredictability. Pseudorandomness is sufficient for many uses.
A distinction is sometimes made between global randomness and local randomness. Most philosophical conceptions of randomness are "global": they are based on the idea that "in the long run" a sequence would look truly random, even if certain subsequences within it do not look random (in a "truly" random sequence of near-infinite length, for example, it is probable that there would be long stretches of nothing but zeros, though on the whole the sequence might be "random"). "Local" randomness refers to the idea that there are minimum sequence lengths within which "random" distributions are approximated. Long stretches of the same digit, even when generated by "truly" random processes, would diminish the "local randomness" of a sample (a sample might only be locally random for blocks of 10,000 digits, for example, while blocks of fewer than 1,000 digits might not appear "random" at all).
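As a minimal illustration of this block-length idea (not a construction from the article), the following Python sketch splits a digit stream into fixed-size blocks and flags any block whose digit counts deviate badly from uniform; the block size and tolerance are illustrative assumptions.

```python
# Minimal sketch of the "local randomness" idea described above, assuming a
# stream of decimal digits.  The block size and tolerance are illustrative
# assumptions, not values from the article.
from collections import Counter
import random

def locally_balanced(digits, block_size=1000, tolerance=0.5):
    """Return True if every block's digit counts stay within +/- tolerance
    (as a fraction) of the expected count block_size / 10."""
    expected = block_size / 10
    for start in range(0, len(digits) - block_size + 1, block_size):
        counts = Counter(digits[start:start + block_size])
        if any(abs(counts.get(d, 0) - expected) > tolerance * expected
               for d in range(10)):
            return False
    return True

stream = [random.randrange(10) for _ in range(100_000)]
print(locally_balanced(stream))               # usually True for a good generator
print(locally_balanced(stream + [0] * 1000))  # a long run of zeros fails the check
```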
A sequence exhibiting a pattern is not thereby proved to be not statistically random. According to principles of Ramsey theory, sufficiently large objects must necessarily contain a given structure ("complete disorder is impossible").
Legislation concerning gambling imposes certain standards of statistical randomness on slot machines.
Contrast with algorithmic randomness.
Tests
The first tests for random numbers were published by M. G. Kendall and Bernard Babington Smith in the Journal of the Royal Statistical Society in 1938. They were built on statistical tools such as Pearson's chi-square test, which had been developed to distinguish whether or not experimental phenomena matched their theoretical probabilities (Pearson originally developed his test by showing that a number of dice experiments by W. F. R. Weldon did not display "random" behavior).
Kendall and Smith's original four tests were hypothesis tests, which took as their null hypothesis the idea that each number in a given random sequence had an equal chance of occurring, and that various other patterns in the data should also be distributed equiprobably.
- The frequency test was very basic: it checked that there were roughly the same number of 0s, 1s, 2s, 3s, etc. (a sketch of this test appears after the list).
- The serial test did the same thing, but for sequences of two digits at a time (00, 01, 02, etc.), comparing their observed frequencies with the frequencies expected if they were equally distributed.
- The poker test checked for certain sequences of five digits at a time (aaaaa, aaaab, aaabb, etc.) based on hands in the game of poker.
- The gap test looked at the distances between zeros (00 would be a distance of 0, 010 would be a distance of 1, 02250 would be a distance of 3, etc.).
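A minimal sketch of the frequency test follows, assuming a sequence of decimal digits tested at the 5% significance level mentioned below; the function name and the hard-coded critical value are illustrative choices, not Kendall and Smith's own implementation.

```python
# Frequency test sketch: Pearson chi-square check that each digit 0-9
# occurs about equally often in the sequence.
from collections import Counter
import random

def frequency_test(digits, critical_value=16.919):
    """Return True if the digit counts are consistent with equiprobability.

    16.919 is the upper 5% point of the chi-square distribution with
    9 degrees of freedom (10 digit categories minus 1).
    """
    n = len(digits)
    expected = n / 10.0                      # equiprobable null hypothesis
    counts = Counter(digits)
    chi_sq = sum((counts.get(d, 0) - expected) ** 2 / expected
                 for d in range(10))
    return chi_sq <= critical_value

# A sequence from a decent pseudorandom generator should usually pass.
sample = [random.randrange(10) for _ in range(100_000)]
print(frequency_test(sample))
```

The serial, poker, and gap tests follow the same pattern: count the occurrences of some class of patterns (digit pairs, five-digit "hands", gaps between zeros) and compare the observed counts with their expected frequencies using a chi-square statistic.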
If a given sequence was able to pass all of these tests at a given level of significance (generally 5%), then it was judged to be, in their words, "locally random". Kendall and Smith differentiated "local randomness" from "true randomness" in that many sequences generated by truly random methods might not display "local randomness" to a given degree: very large sequences might contain long runs of a single digit. This might be "random" on the scale of the entire sequence, but in a smaller block it would not be "random" (it would not pass their tests), and would be useless for a number of statistical applications.
As random number sets became more and more common, more tests of increasing sophistication were used. Some modern tests plot random digits as points in three-dimensional space, which can then be rotated to look for hidden patterns. In 1995, the statistician George Marsaglia created a set of tests known as the Diehard tests, which he distributed with a CD-ROM of 5 billion pseudorandom numbers.
Pseudorandom number generators require such tests as the sole verification of their "randomness", since they are decidedly not produced by "truly random" processes but rather by deterministic algorithms. Over the history of random number generation, many sources of numbers that appeared "random" under testing have later been discovered to be very non-random when subjected to other types of tests. The notion of quasi-random numbers was developed to circumvent some of these problems, though pseudorandom number generators are still extensively used in many applications (even ones known to be somewhat "non-random"), as they are "good enough" for most purposes.
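The article does not name a particular quasi-random construction; as one well-known illustration, the base-2 van der Corput sequence is a classic low-discrepancy ("quasi-random") sequence that deliberately fills the unit interval more evenly than pseudorandom draws of the same length would.

```python
# Illustrative quasi-random (low-discrepancy) sequence: the van der Corput
# sequence, chosen here as an example; the article does not specify one.
def van_der_corput(n, base=2):
    """Return the n-th term of the base-`base` van der Corput sequence in [0, 1)."""
    value, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        value += digit / denom
    return value

# The first few terms spread evenly across the unit interval.
print([van_der_corput(i) for i in range(1, 9)])
# [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```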
Other tests:
- Information entropy (a sketch follows this list)
- Autocorrelation test
- Kolmogorov–Smirnov test
- Maurer's universal statistical test
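As an illustration of the first item in the list, the following sketch computes the empirical Shannon entropy of a digit stream; the decimal-digit input, sample size, and comparison against log2(10) are assumptions made for the example.

```python
# Entropy sketch: the empirical Shannon entropy of a statistically random
# decimal-digit stream should be close to log2(10) ~ 3.32 bits per digit.
import math
import random
from collections import Counter

def digit_entropy(digits):
    """Shannon entropy, in bits per symbol, of the observed digit frequencies."""
    n = len(digits)
    counts = Counter(digits)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

sample = [random.randrange(10) for _ in range(100_000)]
print(f"{digit_entropy(sample):.4f} bits/digit (maximum is {math.log2(10):.4f})")
```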
References
- M.G. Kendall and B. Babington Smith, "Randomness and Random Sampling Numbers," Journal of the Royal Statistical Society 101:1 (1938), 147-166.