User:Hcberkowitz/Sandbox-FactsFromPOV
Again, let me preface this suggestion with the knowledge that it is not going to be viable if the best-intentioned editor does it alone. It may, however, be useful if someone who can write, get peer reviews, and web-publish properly could start a "feeder" effort.
Cui bono
This is an area where Wikipedia's WP:OR policy starts to clash with legitimate academic and intelligence analysis. If I were not limited by WP:OR, I would ask a question posed by many lawyers: cui bono, or "who benefits"? If the individual has shown remorse and there is confirmation, the report might be very good. You may, however, run into something like the Tokyo tribunal after WWII: there was an agreement among the defendants (and to some extent the Occupation) to say nothing that would incriminate Hirohito.
Using cui bono again, a POV source, even outside a court, may be useful in a restricted sense: if it is speaking of something fairly objective, such as who commanded a unit, it may be OK. An apparently self-incriminating statement makes sense if it is consistent with the side's overall policy. Beyond that, looking for trends is a valid research technique, but probably not for Wikipedia. Howard C. Berkowitz (talk) 13:38, 30 April 2008 (UTC)
Does "fruit of the poisonous tree" apply to Wikipedia?
I fully recognize that there is a delicate line between WP:OR and validating a questionable source. While others will disagree with my view, I do not subscribe, in Wikipedia, to what American lawyers call the "fruit of the poisonous tree" doctrine. That doctrine says that any material derived from an illegal search or other violation cannot be used as evidence, no matter how distant the relationship, and no matter what independent confirmation may exist for the derivation.
To take one experience of mine on Wikipedia, I was researching a matter that pertained to a group on one extreme wing of U.S. politics, and read an equally vitriolic attack on the group by someone on the other extreme wing. An in-passing reference to a source of funding of the group caught my eye, and I searched a bit further on that variant. The first relevant retrieved article was indeed POV, but it gave just enough more specifics that I was then able to retrieve the group's tax reports from what was generally agreed to be a neutral database. In this case, I stated several verifiable things from the tax report and from U.S. tax law, but synthesized no conclusion. To me, that was an appropriate line. In my personal opinion, there was a rather blatant tax violation: one status allows contributions to a group to be tax-deductible but forbids political comments, while another lets the group say anything political but makes contributions non-deductible.
So, here was a case where a clearly POV source simply gave me search strings to find something else. Other situations apply when a POV source says something about a matter appropriate to its POV. It would clearly not be appropriate to use that as sourcing on a position of the opposite POV, but it might well be accurate about someone whose views are compatible. The next questions to ask include:
- Was the source in a position that it reasonably could have gotten the information?
- Has it generally been accurate in reporting on its side?
- Does it have enough consistency to make comparisons among its reports, over time, meaningful in suggesting trends?
Compare these questions with the methodology in #Ratings by Intelligence Collection Managers below, and remember that it is worthwhile to separate evaluation of the reliability of the source from evaluation of the specific information being presented. Sources that are generally very accurate make mistakes, and, once in a while, a sensationalistic source gets a detail that everyone else missed. Whenever possible, when sourcing a piece of material, consider validating the source and the information separately.
In the movies, the equivalent of validation is continuity. The suspension of disbelief necessary in entertainment can fail when something jarring enters the picture, such as seeing the breath of the extras playing corpses.
Any analytic technique, academic or intelligence, can fail. As some of you know, when the Soviet Union still existed, the relative power of the senior leadership could be inferred from where they stood watching the May Day parade. The closer to the General Secretary, the more powerful; you would also see younger members clustered around their patron, and there could be some left-vs.-right information on status.
After one May Day, the U.S. intelligence community went somewhat crazy for several weeks, because it seemed as if all the relationships had changed. Finally, one analyst looked at the photograph under magnification and noticed that Khrushchev had a birthmark (IIRC) known to be on one side of his face, but the photograph showed it on the other. The negative had been printed backwards. As soon as a mirror image was made, the main pattern of power straightened out.
Example from Wikipedia:WikiProject Sri Lanka Reconciliation
The Sri Lanka project, IMHO, is the outstanding group on Wikipedia in managing to have civil and productive interaction among people of different POV. At Wikipedia:WikiProject Sri Lanka Reconciliation#Classification of sources, they directly address the problem that most sources will have a POV, although a few are neutral.
In discussions, people often confuse "reliable" with "unbiased". Although the two are related, a source does not need to be unbiased in order to meet WP:RS. To the contrary, WP:NPOV#Bias states that "All editors and all sources have biases - what matters, is how we combine them to create a neutral article."
They go on to describe their methodology for consensus-based ratings of sources as NPOV, POV for side 1 but accurate about it, POV for side 2 but accurate about it, or generally inaccurate for either side. As a simple example of how a POV-1 site can meet WP:RS, it could be valid when identifying who, at the moment, the official government spokesmen are; who commands various units; or what deals the government has made with other countries. Other sources will have only propaganda about the "government" side, but may be accurate about "rebel" policies and personnel in a specific local situation.
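To make that classification concrete, here is a minimal Python sketch; the class names and the usable_for rule are illustrative assumptions of mine, not the project's actual procedure.

```python
from enum import Enum

class SourceClass(Enum):
    """Consensus classes, loosely following the Sri Lanka project's scheme."""
    NPOV = "neutral"
    POV_1 = "POV for side 1, but accurate about side 1"
    POV_2 = "POV for side 2, but accurate about side 2"
    INACCURATE = "generally inaccurate for either side"

def usable_for(source: SourceClass, claim_about_side: int) -> bool:
    """A partisan source may still meet WP:RS for factual claims about
    its own side (spokesmen, unit commanders, deals with other countries)."""
    if source is SourceClass.NPOV:
        return True
    if source is SourceClass.POV_1:
        return claim_about_side == 1
    if source is SourceClass.POV_2:
        return claim_about_side == 2
    return False  # generally inaccurate sources fail for either side

print(usable_for(SourceClass.POV_1, 1))  # True: e.g., who commands a unit on side 1
print(usable_for(SourceClass.POV_1, 2))  # False: not reliable about the other side
```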
Ratings by Intelligence Collection Managers
In the intelligence community, many have used a metaphor from T.S. Eliot's poem Gerontion,[1] a "wilderness of mirrors", to describe the constantly shifting images and reflections of images that make up the raw material collected by intelligence services. In US practice,[2] a typical system, using the basic A-F and 1-6 conventions below, comes from FM 2-22.3, Appendix B (Source and Information Reliability Matrix). Raw reports are typically given a two-part rating by the collection department, which also removes all precise source identification before sending the report to the analysts.
Code | Source Rating | Explanation
---|---|---
A | Reliable | No doubt of authenticity, trustworthiness, or competency; has a history of complete reliability |
B | Usually Reliable | Minor doubt about authenticity, trustworthiness, or competency; has a history of valid information most of the time |
C | Fairly Reliable | Doubt of authenticity, trustworthiness, or competency but has provided valid information in the past |
D | Not Usually Reliable | Significant doubt about authenticity, trustworthiness, or competency but has provided valid information in the past |
E | Unreliable | Lacking in authenticity, trustworthiness, and competency; history of invalid information |
F | Cannot Be Judged | No basis exists |
Code | Information Rating | Explanation
---|---|---
1 | Confirmed | Confirmed by other independent sources; logical in itself; consistent with other information on the subject |
2 | Probably True | Not confirmed; logical in itself; consistent with other information on the subject |
3 | Possibly True | Not confirmed; reasonably logical in itself; agrees with some other information on the subject |
4 | Doubtfully True | Not confirmed; possible but not logical; no other information on the subject |
5 | Improbable | Not confirmed; not logical in itself; contradicted by other information on the subject |
6 | Cannot Be Judged | No basis exists |
An "A" rating, for example, might mean a thoroughly trusted source, such as your own communications intelligence operation. That source might be completely reliable, but, if it intercepted a message that other intelligence proved was sent for deceptive purposes, the report reliability might be rated 5, for "known false". The report, therefore, would be A-5. It may also be appropriate to reduce the reliability of a human source if the source is reporting on a technical subject, and the expertise of the subject is unknown.
Another source might be a habitual liar who gives just enough accurate information to be kept in use. Her trust rating would be "E", but if a report of hers was independently confirmed, it would be rated E-1.
Most intelligence reports are somewhere in the middle; a B-2 is taken seriously. Sometimes it is impossible to rate the reliability of a source, most commonly from lack of experience with him, so an F-3 could be a reasonably probable report from an unknown source. An extremely trusted source might submit a report that cannot be confirmed or denied, so it would get an A-6 rating.
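As a minimal sketch, assuming nothing beyond the two tables above, the independence of the two parts can be modeled in a few lines of Python; the describe function itself is illustrative, not part of FM 2-22.3.

```python
# The letter and number meanings paraphrase the FM 2-22.3 tables above;
# everything else in this sketch is illustrative.
SOURCE_RELIABILITY = {
    "A": "Reliable", "B": "Usually Reliable", "C": "Fairly Reliable",
    "D": "Not Usually Reliable", "E": "Unreliable", "F": "Cannot Be Judged",
}
INFORMATION_RATING = {
    "1": "Confirmed", "2": "Probably True", "3": "Possibly True",
    "4": "Doubtfully True", "5": "Improbable", "6": "Cannot Be Judged",
}

def describe(rating: str) -> str:
    """Expand a combined rating such as 'B-2' into its two parts.

    The parts are independent: a trusted source can relay a deceptive
    message (A-5), and a habitual liar can be independently confirmed (E-1).
    """
    source, info = rating.split("-")
    if source not in SOURCE_RELIABILITY or info not in INFORMATION_RATING:
        raise ValueError(f"not a valid two-part rating: {rating!r}")
    return f"{SOURCE_RELIABILITY[source]} source; {INFORMATION_RATING[info]} information"

for rating in ("A-5", "E-1", "B-2", "F-3", "A-6"):
    print(rating, "=", describe(rating))
```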
Evaluating Sources
In a report rating, the source part is a composite, reflecting experience with the source's historical reporting, the source's direct knowledge of what is being reported, and the source's understanding of the subject. In like manner, technical collection means can have uncertainty that applies to a specific report, such as noting partial cloud cover obscuring a photographic image.
When a source is completely untested, "then evaluation of the information must be done solely on its own merits, independent of its origin." A primary source passes direct knowledge of an event to the analyst. A secondary source provides information twice removed from the original event: one observer informs another, who then relays the account to the analyst. The more numerous the steps between the original event and the analyst, the greater the opportunity for error or distortion.
Another part of source rating is proximity. A human source that participated in a conversation has the best proximity, but lower proximity if the source recounts what a participant told him was said. Was the source a direct observer of the event, or, if a human source, is he or she reporting hearsay? Technical sensors may directly view an event, or only infer it. A geophysical infrasound sensor can record the pressure wave of an explosion, but it may not be able to tell if a given explosion was due to a natural event or an industrial explosion. It may be able to tell that the explosion was not nuclear, since nuclear explosions are more concentrated in time.
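A minimal sketch of proximity, assuming a halving of weight per relay step; the halving factor is my illustration, not doctrine:

```python
def proximity_weight(steps_removed: int) -> float:
    """Weight a report by its distance from the event: 1.0 for a direct
    observer, halved for each relay step (an assumed, illustrative decay)."""
    if steps_removed < 0:
        raise ValueError("steps_removed must be non-negative")
    return 0.5 ** steps_removed

print(proximity_weight(0))  # participant in the conversation: 1.0
print(proximity_weight(1))  # recounting what a participant said: 0.5
print(proximity_weight(2))  # secondhand retelling of the recounting: 0.25
```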
If, for example, a human source that has provided reliable political information sends in a report on technical details of a missile system, the source's reliability for political matters only generally supports the likelihood that the same source understands rocket engineering. If that political expert speaks of rocket details that make no more sense than a low-budget science fiction movie, it can be wise to discount the report. This component of the source rating is known as its appropriateness.
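As a sketch of appropriateness, one might downgrade the source letter when a report falls outside the source's known expertise; the one-step downgrade toward F ("cannot be judged") is an assumption for illustration, not a rule from the manual.

```python
# Letters run from most trusted ("A") to "cannot be judged" ("F").
LETTERS = "ABCDEF"

def adjust_for_domain(source_letter: str, known_domains: set[str],
                      report_domain: str) -> str:
    """Downgrade the source letter by one step when the report is outside
    the source's known areas of expertise (illustrative rule only)."""
    if report_domain in known_domains:
        return source_letter
    index = LETTERS.index(source_letter)
    return LETTERS[min(index + 1, len(LETTERS) - 1)]

# A source with a strong political track record reporting on rocketry:
print(adjust_for_domain("B", {"politics"}, "rocket engineering"))  # -> "C"
```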
Evaluating the information
Separate from the source evaluation is the evaluation of the substance of the report. The first factor is plausibility: is the information certain, uncertain, or impossible? Deception must always be considered for otherwise plausible information.
Based on the analyst's knowledge of the subject, is the information something that reasonably follows from other things known about the situation? This is the attribute of expectability. If traffic analysis put the headquarters of a tank unit at a given location, IMINT revealed that a tank unit at that location was doing maintenance typical of preparation for an attack, and a separate COMINT report indicated that a senior armor officer was flying to that location, an attack could be expected.
In the previous example, the COMINT report has the support of traffic analysis and IMINT.
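A sketch of that confirmation logic, assuming (my assumption, not doctrine) that a report is promoted from 2 ("probably true") to 1 ("confirmed") only when at least two collection disciplines other than the reporting one support it:

```python
def information_rating(supporting_disciplines: set[str],
                       reporting_discipline: str) -> str:
    """Rate a report '1' (confirmed) only when support is independent of
    the discipline that produced the report itself; otherwise '2'."""
    independent = supporting_disciplines - {reporting_discipline}
    return "1" if len(independent) >= 2 else "2"

# The COMINT report on the senior armor officer, supported by traffic
# analysis and IMINT:
print(information_rating({"traffic analysis", "IMINT"}, "COMINT"))  # -> "1"
```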
Confirming Reports
When it is difficult to evaluate a report, confirmation may be a responsibility of the analysts, the collectors, or both. In a large and complex intelligence community, this can be a tense matter. In the US, NSA is seen as a collection organization, with its reports to be analyzed by CIA and DIA. In a cooperative or small system, things can be less formal.
One classic example came from WWII, when the US Navy's cryptanalysts intercepted a message in the JN-25 Japanese naval cryptosystem, clearly related to an impending invasion of "AF". Analysts in Honolulu and Washington differed, however, on whether AF referred to a location in the Central Pacific or in the Aleutians. Midway Island was the likely Central Pacific target, but the US commanders needed to know where to concentrate their forces. Jasper Holmes, a member of the Honolulu station, knew that Midway had to make or import its fresh water, so he arranged for a message to be sent, via a secure undersea cable, to the Midway garrison. They were to radio a message, in a cryptosystem known to have been broken by the Japanese, that their desalination plant was broken. Soon afterwards, a message in JN-25 said that "AF" was short of fresh water, confirming that the target was Midway.[3]
Electronic media
Electronic media like CNN are excellent for finding out that something may have happened, but, outside a courtroom or the like, it is wise to remember that they are under time pressure and do not have much space to explore complexity. My general approach is to use them to get alerted, but to try to confirm the report. One area where CNN can be excellent: they sometimes go back and do in-depth interviews, especially for Cold War events. Time Magazine is owned by the same organization as CNN, and often has much more detailed confirmation.
Non-governmental and international organizations
NGOs are more of a challenge. Human Rights Watch and Amnesty International have lots of valid material, but, to some extent, they may tend to want to find atrocities. They are most accurate when they say something did not happen; they are useful otherwise, but their more extreme claims should probably be verified.
The UN General Assembly can be OK, but it also can get caught up in the politics of a certain bloc. Almost by definition, the UN Security Council is good, because it can often enforce its decisions.
Government documents
Let me mention one unusual source, which is very authoritative on primary and secondary government documents: the National Security Archive at George Washington University (http://www.gwu.edu/~nsarchiv/). Their commentary is highly reliable, but the government documents themselves have to be considered on their own merits, other than that you can trust they did come from the indicated source.
As far as things like self-incrimination are concerned, see my essay.
Sequential analysis of POV positions
Now, in what I am going to describe, I recognize it would be OR if it were done only for Wikipedia, and that it is less plausible for a specific event in history, such as Operation Storm, than for a continuing process. I am describing an academic research technique called content analysis. There is a reasonable set of links on it, although my 1967 textbook, North's Content Analysis, is probably quite obsolete; very few computers were available at the time.
Long ago (during the Vietnam War), I had a job in which I surveyed, each month, Nhan Dan, the North Vietnamese party journal. Looking at any one issue, it would seem to be all propaganda. When we started comparing such things as the number of mentions of an official, and how that number changed from month to month, a pattern emerged showing the status of that official. In other words, I was doing content analysis, in an academic research lab contracted to the U.S. Army. Sometimes there is still material to be gained from a biased source, though I recognize it would be OR if someone simply started doing such things in Wikipedia alone.
There are many variations. The one I described deals with comparisons across a sequence of written articles. Another technique is making quick ratings of many short pieces, as is typical of news media; using it, I was once surprised to find that an apparently biased news broadcast actually had a normal statistical distribution around a slightly POV mean. There are also techniques for checking whether externally known facts are described accurately in a given source and, if they are not, whether there is a predictable variation.
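For readers who want to see the mechanics, here is a minimal content-analysis sketch in Python; the sample issues, the official's name, and the crude below-the-mean test are all invented for illustration:

```python
from statistics import mean

# Toy stand-ins for the text of monthly issues (invented for illustration).
monthly_issues = {
    "1966-01": "... Comrade Tran opened the congress ... Tran spoke at length ...",
    "1966-02": "... Tran was mentioned once, in passing ...",
    "1966-03": "... the delegation arrived; no mention of him at all ...",
}

def mention_counts(issues: dict[str, str], name: str) -> list[int]:
    """Count mentions of a name in each issue, in chronological order."""
    return [issues[month].count(name) for month in sorted(issues)]

counts = mention_counts(monthly_issues, "Tran")
print(counts)  # [2, 1, 0]

# A count falling below the period average suggests declining status;
# drawing such a conclusion in an article would be original research.
if counts[-1] < mean(counts):
    print("mentions below the period average: possible loss of status")
```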
Anybody out there in an appropriate institution? I know little of the Foundation; can they respond to specific proposals in any reasonable time?
References
- ^ Eliot, T. S. (1920), "Gerontion", in Poems, <http://www.bartleby.com/199/13.html>
- ^ US Department of the Army (September 2006). FM 2-22.3 (FM 34-52), Human Intelligence Collector Operations. Retrieved 2007-10-31.
- ^ Layton, Edwin (1985). "And I Was There": Pearl Harbor and Midway - Breaking the Secrets. William Morrow & Co.