Talk:Statistical randomness


Global vs. local?

I'm not exactly sure of the distinction between local and global randomness. I haven't found many references to those in the theory (well, not with Google at least). It seems to me that a "globally random" sequence will have some subsequences that wouldn't be considered random, but can we say that such a sequence isn't "locally random"? Isn't it just a question of sample size? Or of sample vs. subsample?

Anyway, the current use seems a bit confusing. It seems that "local randomness" is sometimes used to mean "statistical randomness", or maybe something like "does not exhibit any small-scale patterns"?

Yeah, maybe it's just a distinction based on the size of the patterns being checked for. That makes sense. Flammifer 07:30, 28 August 2005 (UTC)

Local randomness, as originally defined by Kendall and Babington Smith, refers to the smallest block size at which equiprobability and whatever other properties you are looking for can still be found. In their own random number table of 100,000 digits, the table was locally random down to intervals of 1,000 digits (though a few intervals were less "random" than others, and they advised not to use those alone). The contrast would be a definition of "randomness" used by philosophers and many mathematicians which specifies that if all events are independently randomly determined, then no sequence is more probable than any other. But with this you end up with the problem of induction: there's no way to test for "randomness", and it requires complete faith that your method is generating independently random digits. This approach is often used by philosophers and psychologists when they want to make fun of the RAND statisticians who put together A Million Random Digits with 100,000 Normal Deviates; the fellows saw somewhat "patchy" distributions in their data (not patterns per se, but coming close to failing a chi-square test) and so they "re-randomized" it. The philosophers and psychologists say, "Ha, they don't even know that randomness means they could have entire stretches of zeroes and it would still be random if the method was random!" Of course, the philosophers and psychologists don't understand that by "random" the statisticians meant "locally random" in this sense, much less that such sequences would be worthless to them, or that a sequence of heterogeneous digits is more likely than one of homogeneous ones. But anyway, I hope this explains a bit. If you send me an e-mail, I can send you a whole paper on the history of this. :) --Fastfission 02:58, 8 September 2005 (UTC)
At least, that's what Kendall and Smith meant when they created the concept of "local randomness" in 1939. I'm not a statistician so I can't tell you if that's how it is currently used. --Fastfission 03:03, 8 September 2005 (UTC)
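For anyone who wants a concrete picture of what block-wise testing looks like, here is a rough Python sketch. It is not Kendall and Babington Smith's actual procedure, just an illustration in the spirit of the chi-square checks mentioned above: split a digit sequence into blocks of 1,000 and test each block for equiprobable digits. The block size, the 5% significance level, and the helper names are all just choices made for the example.

    import random

    def chi_square_digits(block):
        """Chi-square statistic for the ten digit counts in a block,
        against a uniform expectation of len(block)/10 per digit."""
        expected = len(block) / 10
        counts = [0] * 10
        for d in block:
            counts[d] += 1
        return sum((c - expected) ** 2 / expected for c in counts)

    def locally_random_blocks(digits, block_size=1000, critical=16.919):
        """Split a digit sequence into consecutive blocks and report which
        blocks pass a chi-square test for equiprobable digits.
        16.919 is the 5% critical value for 9 degrees of freedom."""
        results = []
        for i in range(0, len(digits) - block_size + 1, block_size):
            stat = chi_square_digits(digits[i:i + block_size])
            results.append((i, stat, stat < critical))
        return results

    # Example: 100,000 pseudo-random digits, checked in blocks of 1,000.
    digits = [random.randrange(10) for _ in range(100_000)]
    for start, stat, ok in locally_random_blocks(digits)[:5]:
        print(f"block at {start}: chi-square = {stat:.1f}, passes: {ok}")

Under this kind of check, a long stretch of zeroes would fail even if the process that produced it were perfectly random, which is exactly the distinction between "random process" and "locally random output" being discussed above.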