Statistical syllogism

A statistical syllogism (or proportional syllogism or direct inference) is a non-deductive syllogism. It argues, using inductive reasoning, from a generalization true for the most part to a particular case.

Introduction

Statistical syllogisms may use qualifying words like "most", "frequently", "almost never", "rarely", etc., or may have a statistical generalization as one or both of their premises.

For example:

  1. Almost all people are taller than 26 inches
  2. Gareth is a person
  3. Therefore, Gareth is almost certainly taller than 26 inches

Premise 1 (the major premise) is a generalization, and the argument attempts to draw a conclusion from that generalization. In contrast to a deductive syllogism, the premises logically support or confirm the conclusion rather than strictly implying it: it is possible for the premises to be true and the conclusion false, but it is not likely.

General form:

  1. X proportion of F are G
  2. I is an F
  3. Therefore, I is a G

In the abstract form above, F is called the "reference class" and G is the "attribute class" and I is the individual object. So, in the earlier example, "(things that are) taller than 26 inches" is the attribute class and "people" is the reference class.

Unlike many other forms of syllogism, a statistical syllogism is inductive, so when evaluating this kind of argument it is important to consider how strong or weak it is, along with the other rules of induction (as opposed to deduction). In the above example, if 99% of people are taller than 26 inches, then the probability that the conclusion is true, given only the premises, is 99%.
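
The reading of the proportion as a probability can be checked with a minimal simulation (the 99% figure and population size below are illustrative assumptions, not data from the text):

```python
import random

random.seed(0)

# Hypothetical population mirroring the major premise:
# 99% of its members have the attribute G ("taller than 26 inches").
population = [True] * 99_000 + [False] * 1_000

# Pick many individuals at random and check how often the
# syllogism's conclusion ("this individual is G") turns out true.
trials = 10_000
hits = sum(random.choice(population) for _ in range(trials))

rate = hits / trials
print(rate)  # close to 0.99, the proportion stated in the premise
```

The observed rate tracks the proportion in the major premise, which is the sense in which the premise confers a 99% probability on the conclusion.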

Two dicto simpliciter fallacies can occur in statistical syllogisms. They are "accident" and "converse accident". Faulty generalization fallacies can also affect any argument premise that uses a generalization. A problem with applying the statistical syllogism in real cases is the reference class problem: given that a particular case I is a member of very many reference classes F, in which the proportion of attribute G may differ widely, how should one decide which class to use in applying the statistical syllogism?

The importance of the statistical syllogism was urged by Henry E. Kyburg, Jr., who argued that all statements of probability could be traced to a direct inference. For example, when taking off in an airplane, our confidence (but not certainty) that we will land safely is based on our knowledge that the vast majority of flights do land safely.

The widespread use of confidence intervals in statistics is often justified using a statistical syllogism, in such words as "Were this procedure to be repeated on multiple samples, the calculated confidence interval (which would differ for each sample) would encompass the true population parameter 90% of the time."[1] The inference from what would mostly happen in multiple samples to the confidence we should have in the particular sample involves a statistical syllogism.[2]
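
The repeated-sampling claim quoted above can be sketched as a simulation. All the concrete numbers here (a normal population with mean 50, known standard deviation 10, sample size 100) are assumptions chosen only to make the coverage checkable:

```python
import random
import statistics

random.seed(1)

TRUE_MEAN = 50.0   # the "true population parameter"
SIGMA = 10.0       # known population standard deviation (assumed)
N = 100            # sample size per repetition
Z90 = 1.645        # two-sided 90% standard-normal quantile

runs = 2_000
covered = 0
for _ in range(runs):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    m = statistics.fmean(sample)
    half = Z90 * SIGMA / N ** 0.5        # half-width of the 90% interval
    if m - half <= TRUE_MEAN <= m + half:
        covered += 1

print(covered / runs)  # close to 0.90
```

Each repetition yields a different interval, yet about 90% of them contain the fixed true mean; the statistical syllogism then transfers that frequency to confidence in the one interval actually computed.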

History

Ancient writers on logic and rhetoric approved arguments from "what happens for the most part". For example, Aristotle writes "that which people know to happen or not to happen, or to be or not to be, mostly in a particular way, is likely, for example, that the envious are malevolent or that those who are loved are affectionate."[3]

The ancient Jewish law of the Talmud used a "follow the majority" rule to resolve cases of doubt.[4]

From the invention of insurance in the 14th century, insurance rates were based on estimates (often intuitive) of the frequencies of the events insured against, which involves an implicit use of a statistical syllogism. John Venn pointed out in 1876 that this leads to a reference class problem: deciding from which of the many classes containing the individual case the frequency should be taken. He writes, “It is obvious that every single thing or event has an indefinite number of properties or attributes observable in it, and might therefore be considered as belonging to an indefinite number of different classes of things”, leading to problems with how to assign probabilities to a single case, for example the probability that John Smith, a consumptive Englishman aged fifty, will live to sixty-one.[5]

In the 20th century, clinical trials were designed to find the proportion of cases of disease cured by a drug, so that the drug could be applied confidently to an individual patient with the disease.

Problem of induction

The statistical syllogism was used by Donald Cary Williams and David Stove in their attempt to give a logical solution to the problem of induction. They put forward the argument, which has the form of a statistical syllogism:

  1. The great majority of large samples of a population approximately match the population (in proportion)
  2. This is a large sample from a population
  3. Therefore, this sample approximately matches the population

If the population is, say, a large number of balls which are black or white but in an unknown proportion, and one takes a large sample and finds they are all white, then it is likely, using this statistical syllogism, that the population is all or nearly all white. That is an example of inductive reasoning.
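The major premise of this argument is itself checkable by simulation. Below is a sketch with a hypothetical urn of 10,000 balls, 70% white (the true proportion is unknown to the sampler in the argument; 0.7 is an assumption made here only so the match can be measured):

```python
import random

random.seed(2)

TRUE_WHITE = 0.7
population = [1] * 7_000 + [0] * 3_000   # 1 = white, 0 = black

# Take many large samples and see how often the sample proportion
# falls within 0.05 of the population proportion.
sample_size, runs = 500, 1_000
close = 0
for _ in range(runs):
    sample = random.sample(population, sample_size)
    if abs(sum(sample) / sample_size - TRUE_WHITE) <= 0.05:
        close += 1

print(close / runs)  # the "great majority" of the major premise
```

Nearly all large samples match the population to within the chosen tolerance, which is the frequency fact that Williams and Stove's statistical syllogism then applies to the one sample actually drawn.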

Legal examples

Statistical syllogisms may be used as legal evidence, but it is usually held that a legal decision should not be based solely on them. For example, in L. Jonathan Cohen's "gatecrasher paradox", 499 tickets to a rodeo have been sold and 1000 people are observed in the stands. The rodeo operator sues a random attendee for non-payment of the entrance fee. The statistical syllogism:

  1. 501 of the 1000 attendees have not paid
  2. The defendant is an attendee
  3. Therefore, on the balance of probabilities the defendant has not paid

is a strong one, but it is widely felt to be unjust to burden a defendant on the basis of mere membership in a class, without evidence that bears directly on that individual.[6]
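
The arithmetic behind the paradox, using the numbers given in the text, is simply:

```python
# Gatecrasher numbers from the text: 499 paid tickets,
# 1000 observed attendees, hence 501 attendees did not pay.
tickets_sold = 499
attendees = 1000
unpaid = attendees - tickets_sold

p_unpaid = unpaid / attendees
print(p_unpaid)        # 0.501
print(p_unpaid > 0.5)  # True: exceeds the balance-of-probabilities threshold
```

Every attendee individually satisfies the syllogism's conclusion with probability just over one half, which is what makes the purely statistical verdict feel paradoxical.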

References

  1. Cox, D. R. and Hinkley, D. V. (1974) Theoretical Statistics, Chapman & Hall, pp. 49, 209.
  2. Franklin, J. (2001) Resurrecting logical probability, Erkenntnis, 55, 277–305.
  3. Aristotle, Prior Analytics 70a4-7, other examples in J. Franklin, The Science of Conjecture: Evidence and Probability Before Pascal (Baltimore, 2001), 113, 116, 118, 200.
  4. Franklin, Science of Conjecture, 172–5.
  5. J. Venn, The Logic of Chance (2nd ed., 1876), 194.
  6. L. J. Cohen, (1981) Subjective probability and the paradox of the gatecrasher, Arizona State Law Journal, p. 627.