Talk:Global Consciousness Project/Archive 1

From Wikipedia, the free encyclopedia

This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.

Response

The criticism section on GCP methodology is apparently well-intended, but there are mistakes that should be corrected.

1) The formal analysis is canonical, and is not subject to selection bias. Before the data are examined, a hypothesis test is fully defined: The beginning and end of the data segment to be analysed, and the statistical test to be used are specified in the GCP Hypothesis Registry. All the pre-specified analyses are reported and all are included in composite statistics.

2) An assumption that there should be a correlation of effect size with the number of people engaged is presented as a criticism. Nobody knows without more research whether this is a sound assumption, and this is one of our current research questions. Preliminary results suggest there may be a small correlation, but that effect size depends on multiple factors. As of early 2007, these analyses have shown that events with more people engaged do have a larger effect size; the difference between large N and small N events is statistically significant.

3) The last criticism is that we have no satisfactory explanation or mechanism for random devices responding to states of consciousness. The absence of a theoretical explanation for an empirical effect is not a valid criticism of the experiment, only of the brilliance and acumen of the theoreticians.

A general comment: The GCP presents exploratory and contextual analyses to supplement the formal analysis, but makes its evidentiary claims only on the latter. We clearly label the explorations as such. We offer some attempts at interpretation, but these are labeled as speculative and tentative. For example, we offer three possibilities, including chance variation, to account for the "spike" beginning four hours before the 9/11 terror attack. We do not assert "backwards causality or subconscious mass precognition". The change in the device variance is unique in the database up to that time, but we report this as a correlation, not a causal link.


I tried to add the responses to the respective points and NPOVed both a bit. Since the section is about criticism and not about responses, I had to cut drastically. Feel free to improve on this. But no "we" and no POV please. --Hob Gadling 18:06, 7 December 2005 (UTC)

I feel the section as it stood clearly violated WP:AWW. The criticism section is for criticism, but I also deleted some of the more POV critical statements (I will defend the inclusion of "bizarrely," however ^_^). If there is any false statement in that section, please delete it and post here why you have done so. I will review. Thanks. Argyrios 01:22, 23 May 2006 (UTC)

I work with the GCP project as a skeptical analyst. The criticism section is a good idea, but it contains errors and misunderstandings of the project that would be good to clear up. Also, it might be stylistically clearer for the reader to put the criticisms in a numbered list rather than string them along with "also"s. I find the greatest misunderstanding is that the project somehow selects data. The GCP avoids data selection of any kind. If this is clear in one's mind, a large class of objections are seen to be invalid. It is very simple: the data are not examined before an event is identified. You cannot select something if you don't examine it.

Here are some comments on the text: 1. "Another criticism is that there is no objective criterion for determining whether an event is significant." --True but misleading. Not a tenable criticism. The GCP demonstrates that statistical tests on events yield small effect sizes. This is why many events need to be tested to achieve significance. The criticism is untenable because it argues that *all* research of a statistical nature is invalid. A *valid* criticism is that the project does not present a closed experiment with a simple hypothesis predicting a significance level to be obtained in order to reject the null hypothesis. In the jargon, the GCP is not performing a hypothesis test. The GCP has chosen not to do this. The project judges that it is not yet possible to formulate an adequate simple test hypothesis. But one could criticize this choice.

2."Events are seemingly arbitrarily selected post-hoc, and only the data from that time period are observed. " --POV in the use of "seemingly". Either events are selected arbitrarily or they are not. Also, as Roger says (quite clearly) in his point #1 above, there is no arbitrary selection possible because test data for an event is selected before examination of the data.

3. "Data from other time periods are ignored, whether or not they may display similar fluctuations. This allows opportunity for selection bias." --True and false and misleading. Not a tenable criticism. The GCP tests whether *stronger than average* fluctuations correlate with identified events. Other periods *must* be ignored; otherwise you *do* have data selection. A *valid* criticism might be that the GCP doesn't identify fluctuations to be tested against a database of events. This would be a different experiment; it would be the inverse of the GCP experiment.

4. "Also, there is no correlation between degree of significance and type or magnitude of fluctuations observed. Since the GCP has posited that individual..." --False. This has not been tested yet. A *valid* criticism is that the GCP hasn't looked at an obvious question. [Fyi, there is a reason for this: the small effect size requires many events to see an effect, and even more are required to see a modulation of the effect. There are not enough events yet.] A *valid* criticism is that the GCP has not figured out how to achieve enough statistical power to address many basic assumptions associated with its hypothesis of global consciousness.
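The statistical-power point above can be sketched numerically. This is an illustrative calculation only, not GCP code: the 0.3 sigma per-event effect size is assumed for illustration, and the pooling of independent events as a z-score growing with the square root of N is standard statistics, not a claim about the project's actual analysis pipeline.

```python
from math import sqrt

# Illustrative sketch (assumed numbers, not GCP code): with a small
# per-event effect size, the pooled z-score grows only as sqrt(N),
# so many events are needed before a composite result is significant.
effect_size = 0.3   # assumed mean per-event effect, in sigma units

def cumulative_z(n_events, effect=effect_size):
    """Expected composite z after pooling n_events independent events."""
    return effect * sqrt(n_events)

# Events needed for the pooled z to cross a one-sided 5% threshold (1.645):
threshold = 1.645
n_needed = (threshold / effect_size) ** 2   # ≈ 30 events
```

Detecting a *modulation* of the effect (point 4) means comparing two subsets of events, each of which must separately reach useful power, which is why far more events are needed for that question than for the composite result.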

Peter Bancel June 13, 2006 pabancel

As is usual on Wikipedia, you may change whatever you see fit. If others accept your changes, such as editing the strung-together criticisms into numbered points, you will not be re-edited. Please be bold. :) Procrastinating@talk2me 13:32, 14 June 2006 (UTC)

I am currently compiling sources for a complete rewrite of the section, one based completely on published articles. If anyone has any sources that may be useful, please list them below. So far, the best source I've been able to find is this paper, which reports a failure to verify the GCP's analysis of its results.

If you read Spottiswoode and May carefully, you will discover that their criticism is of post facto exploratory analyses, not of the formal analyses for 9/11 except to opine that the results are not as strong as they "should be" for such a momentous event. Roger Nelson 02:19, 3 October 2006 (UTC)

Other sources are less formal, e.g. Claus Larsen's interview with Dean Radin and the Skeptic's Dictionary essay. I would like to find more peer-reviewed journal articles. I have some questions for the "skeptic" above who "works with" the GCP: Have the results ever been replicated by anyone unaffiliated with the GCP? If so, where? If not, have there been failed attempts? If so, where are they? If not, why hasn't the scientific community taken this project seriously? Thanks for any help and sources you can provide. Argyrios 15:04, 14 June 2006 (UTC)


Answering Argyrios: Another critical paper (your reference is May & Spottiswoode) is by Scargle. See the GCP site under Independent Analyses for a link. These papers may not be peer-reviewed; check with the authors to be sure.
I don't know of any determined efforts to reproduce the GCP results. It would be a big job to set up an independent network. The project encourages researchers to freely use the GCP data for independent analyses. May & Spottiswoode, Scargle, Radin and a few others have done so for the 9/11 data, but there has not been much beyond that one event. I've done extensive analysis over the last 4 years and a paper will be out in 2007 (I hope!). Currently there is little on which you can base a wiki article, but that's how it is.
The scientific community has not taken this seriously because it's too early to do so. The idea is loopy, to say the least, and the general result, although highly significant, is too vague for researchers to consider spending precious time on it. However, there is a wait-and-see interest percolating in some quarters. The AAAS (American Association for the Advancement of Science) regional meeting in San Diego, June 20-23, 2006, has an extended session on retrocausation, and the GCP presented an invited contribution. Can't get more mainstream than that. Proceedings will be out later this year.
A comment on May & Spottiswoode and Scargle. These critiques deal with the 9/11 data. The GCP has published a paper (Foundations of Physics Letters, 2002; see ref on the GCP site) on the 9/11 data because of its huge historical importance. The critiques preceded that article and are thus not so germane. More importantly, focusing on 9/11 is somewhat of a red herring. The most important result of the project so far, aside from its cumulative significance of > 4 sigma (standard deviations), is that the mean effect size is about 0.3 sigma. This means that *NO* individual event is expected to show significance. This is a key point that is always ignored and always leads to misunderstandings. A wiki article which fails to point this out does not accurately portray the project. This is mentioned in the FoPL paper and is implicit on the site results page, where you find result = 4.5 sigma and #events = 204, so effect size = 4.5/Sqrt[204] = 0.32. Therefore, it is a stretch to expect significance even for the 9/11 event. If for argument's sake you assume the 9/11 effect size is a huge 10x greater than average, then the formal result of 1.9 sigma is within a 95% confidence interval of this. That is, you can say the smallish 9/11 result is consistent with a huge effect for 9/11. It is also nearly consistent with the null hypothesis, as May & Spottiswoode point out. Conclusion: single events are just too noisy to provide definitive tests.
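The arithmetic in the comment above can be checked in a few lines. This is an illustrative sketch using only the numbers quoted there (4.5 sigma over 204 events, a 1.9 sigma formal 9/11 result); the unit-variance assumption for a single event's z-score is standard but is my assumption, not a statement from the GCP.

```python
from math import sqrt

# Reproduce the quoted arithmetic (illustrative only, numbers from the
# discussion above): composite z of 4.5 sigma over 204 events implies a
# small per-event effect size.
cumulative_z = 4.5
n_events = 204
mean_effect = cumulative_z / sqrt(n_events)   # ≈ 0.32 sigma per event

# Even if the 9/11 effect were 10x the average (≈3.2 sigma), the observed
# formal result of 1.9 sigma still falls inside a 95% interval (half-width
# 1.96, assuming a unit-variance single-event z-score).
assumed_911_effect = 10 * mean_effect
observed_911 = 1.9
within_95 = abs(observed_911 - assumed_911_effect) < 1.96
```

The same check shows why single events are uninformative: an interval of half-width 1.96 around 1.9 also covers zero, so the one event cannot distinguish a large effect from the null.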
The project's main task is to refine its hypotheses in order to allow better testing. This is a valid critique that May & Spottiswoode make. Our experience is that it has been a long haul to do this. Why? Again, because of small effect sizes.
I hope this is useful. -Peter

Peter Bancel June 22, 2006 pabancel

An evening with Dean Radin: Testimony

The link http://www.skepticreport.com/psychics/radin2002.htm listed in the External links section is testimony.

Almost the entire article is filled with 'he said' and 'I said'.

The reader is not presented with verification of what Radin said (or showed) and what was said to him. The article refers to many graphs the reader doesn't even see in the article.

For a critique to be worth something, we must be able to verify what occurred. That way we know the critique has merit.

For example, is there a transcript or a video of the event? 69.140.78.101 04:08, 11 January 2006 (UTC)

The link clearly has merit and I confess I have difficulty understanding the thrust of your argument. Yes, it is testimony. Yes, it refers to what people said. Yes, there was reference to graphs. So?
The article is a published column taking a skeptical perspective based on the writer's personal experience and judgment, and it doesn't pretend to be anything else. To demand a transcript or video is a bit ridiculous. Argyrios 09:32, 10 January 2006 (UTC)

How can we judge the critique as accurate or not if we are not presented with a verifiable record of what is being critiqued? It would be as if, for example, I wrote a book review but you had no way of seeing the book, or a movie critique with no way of watching the movie. To ask (not "demand", as you say) for a transcript or video seems a reasonable way to provide verification.

The article is "a published column", true, but on the author's own website, or on websites of organizations that the author belongs to.

I agree that the page shows a "skeptical perspective", but of something that cannot be verified. So again, I ask, how is that useful? If the author chooses to not provide it, or cannot, it is something to think about. That doesn't mean I'm for removing the link, however, just questioning how useful it is. 69.140.78.101 04:08, 11 January 2006 (UTC)

You seem to believe that eyewitness testimony is inherently worthless. It simply isn't. I don't know what else to say.
Also, you can sign your comments by typing four tildes (~) in a row. Argyrios 03:50, 11 January 2006 (UTC)

Thanks for the tip re: signing the comments. Thanks for the discussion. 69.140.78.101 04:08, 11 January 2006 (UTC)