Dempster-Shafer theory

The Dempster-Shafer theory is a mathematical theory of evidence[1] based on belief functions and plausible reasoning, which is used to combine separate pieces of information (evidence) to calculate the probability of an event. The theory was developed by Arthur P. Dempster and Glenn Shafer.

Consider two possible gambles

The first gamble is that we bet on a head turning up when we toss a coin that is known to be fair. Now consider the second gamble, in which we bet on the outcome of a fight between the world's greatest boxer and the world's greatest wrestler. Assume we are fairly ignorant about martial arts and would have great difficulty choosing whom to bet on.

Many people would feel more unsure about taking the second gamble, in which the probabilities are unknown, than about the first, in which the probabilities are easily seen to be one half for each outcome. Dempster-Shafer theory allows one to consider the confidence one has in the probabilities assigned to the various outcomes.

Formalism

Let X be the universal set: the set of all states under consideration. The power set, \mathbb{P}(X), is the set of all possible subsets of X, including the empty set. For example, if:

X = \{ a, b \}

then

\mathbb{P}(X) = \{ \varnothing, \{a\}, \{b\}, X \}.

The elements of the power set can be taken to represent propositions that one might be interested in: a subset represents the proposition that the actual state lies in that subset, and it contains all and only the states in which that proposition is true.

The theory of evidence assigns a belief mass to each element of the power set. Formally, a function m \colon \mathbb{P}(X) \rightarrow [0,1] is called a basic belief assignment (BBA) if it satisfies two axioms. First, the mass of the empty set is zero:

m(\varnothing) = 0.

Second, the masses of the remaining members of the power set add up to a total of 1:

\sum_{A \in \mathbb{P}(X)} m(A) = 1.

The mass m(A) of a given member of the power set, A, expresses the proportion of all relevant and available evidence that supports the claim that the actual state belongs to A but to no particular subset of A. The value of m(A) pertains only to the set A and makes no additional claims about any subsets of A, each of which has, by definition, its own mass.
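
To make these axioms concrete, here is a minimal Python sketch; the dictionary-of-frozensets representation and the particular mass values are illustrative assumptions, not part of the theory:

    # A basic belief assignment over the frame X = {a, b}, represented as a
    # dict mapping frozensets (subsets of X) to masses. The values 0.6/0.1/0.3
    # are an arbitrary illustrative choice.
    m = {
        frozenset(): 0.0,            # axiom 1: the empty set carries no mass
        frozenset({'a'}): 0.6,
        frozenset({'b'}): 0.1,
        frozenset({'a', 'b'}): 0.3,
    }

    # axiom 2: the masses of the members of the power set sum to 1
    assert abs(sum(m.values()) - 1.0) < 1e-9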

From the mass assignments, the upper and lower bounds of a probability interval can be defined. This interval contains the precise probability of a set of interest (in the classical sense), and is bounded by two non-additive continuous measures called belief (or support) and plausibility:

\operatorname{bel}(A) \le P(A) \le \operatorname{pl}(A).

The belief bel(A) for a set A is defined as the sum of all the masses of (not necessarily proper) subsets of the set of interest:

\operatorname{bel}(A) = \sum_{B \mid B \subseteq A} m(B).

The plausibility pl(A) is the sum of all the masses of the sets B that intersect the set of interest A:

\operatorname{pl}(A) = \sum_{B \mid B \cap A \ne \varnothing} m(B).

The two measures are related to each other as follows:

\operatorname{pl}(A) = 1 - \operatorname{bel}(\overline{A}).

It follows from the above that knowing any one of the three (mass, belief, or plausibility) suffices to deduce the other two, though one may need to know the values for many sets in order to calculate one of the other values for a particular set.
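
As a sketch of how these definitions translate into code, the following Python functions compute belief and plausibility directly from a mass dictionary like the one above (again, the representation and the mass values are illustrative assumptions):

    def bel(m, a):
        """Belief in a: total mass of all (not necessarily proper) subsets of a."""
        return sum(v for b, v in m.items() if b <= a)    # b <= a tests subsethood

    def pl(m, a):
        """Plausibility of a: total mass of all sets that intersect a."""
        return sum(v for b, v in m.items() if b & a)     # b & a is the intersection

    # Same illustrative assignment as in the previous sketch.
    m = {frozenset({'a'}): 0.6, frozenset({'b'}): 0.1, frozenset({'a', 'b'}): 0.3}
    X = frozenset({'a', 'b'})
    A = frozenset({'a'})

    print(round(bel(m, A), 3), round(pl(m, A), 3))       # 0.6 0.9
    # pl(A) = 1 - bel(complement of A):
    assert abs(pl(m, A) - (1.0 - bel(m, X - A))) < 1e-9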

Dempster's rule of combination

The problem we now face is how to combine two independent sets of mass assignments. The original combination rule, known as Dempster's rule of combination, is a generalization of Bayes' rule. This rule strongly emphasises the agreement between multiple sources and ignores all the conflicting evidence through a normalization factor. Use of that rule has come under serious criticism when significant conflict in the information is encountered.

Specifically, the combination (called the joint mass) is calculated from the two sets of masses m_1 and m_2 in the following manner:

m_{1,2}(\varnothing) = 0
m_{1,2}(A) = (m_1 \oplus m_2)(A) = \frac{1}{1 - K} \sum_{B \cap C = A \ne \varnothing} m_1(B)\, m_2(C)

where

K = \sum_{B \cap C = \varnothing} m_1(B)\, m_2(C).

K is a measure of the amount of conflict between the two mass sets. The normalization factor, 1 - K, has the effect of ignoring conflict entirely: the mass that the two sources assign to contradictory (disjoint) sets is discarded, and the remaining joint masses are rescaled to sum to 1. Consequently, this operation yields counterintuitive results in the face of significant conflict in certain contexts.
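
A minimal Python sketch of the rule, reusing the mass-dictionary representation from the earlier sketches (the two sensor assignments below are invented for illustration):

    from itertools import product

    def combine(m1, m2):
        """Dempster's rule of combination for two mass dicts over frozensets."""
        joint = {}
        k = 0.0                                  # accumulated conflicting mass
        for (b, v1), (c, v2) in product(m1.items(), m2.items()):
            inter = b & c
            if inter:                            # agreement: mass goes to the intersection
                joint[inter] = joint.get(inter, 0.0) + v1 * v2
            else:                                # conflict: mass counts toward K
                k += v1 * v2
        if k >= 1.0:
            raise ValueError("total conflict: the combination is undefined")
        return {a: v / (1.0 - k) for a, v in joint.items()}   # normalize by 1 - K

    # Two hypothetical independent sources about a cat being alive or dead.
    m1 = {frozenset({'alive'}): 0.5, frozenset({'alive', 'dead'}): 0.5}
    m2 = {frozenset({'dead'}): 0.6, frozenset({'alive', 'dead'}): 0.4}
    m12 = combine(m1, m2)                        # K = 0.3 here
    print({tuple(sorted(s)): round(v, 3) for s, v in m12.items()})
    # {('alive',): 0.286, ('dead',): 0.429, ('alive', 'dead'): 0.286}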

Discussion

Dempster-Shafer theory is a generalization of the Bayesian theory of subjective probability; whereas the latter requires probabilities for each question of interest, belief functions base degrees of belief (or confidence, or trust) for one question on the probabilities for a related question. These degrees of belief may or may not have the mathematical properties of probabilities; how much they differ depends on how closely the two questions are related.[2] Put another way, it is a method of representing epistemic plausibilities, but one that can yield answers contradicting those arrived at using probability theory.

Often used as a method of sensor fusion, Dempster-Shafer theory is based on two ideas: obtaining degrees of belief for one question from subjective probabilities for a related question, and Dempster's rule[3] for combining such degrees of belief when they are based on independent items of evidence. In essence, the degree of belief in a proposition depends primarily upon the number of answers (to the related questions) containing the proposition, and the subjective probability of each answer. Also contributing are the rules of combination that reflect general assumptions about the data.

In this formalism a degree of belief (also referred to as a mass) is represented as a belief function rather than a Bayesian probability distribution. Probability values are assigned to sets of possibilities rather than to single events: their appeal rests on the fact that they naturally encode evidence in favor of propositions.

Dempster-Shafer theory assigns its masses to all of the subsets of the entities that comprise a system. Suppose for example that a system has five members, that is to say five independent states, exactly one of which is actual. If the original set is called S, with |S| = 5, then the set of all subsets (the power set) is called 2^S. Since each possible subset can be expressed as a binary vector (describing whether any particular member is present or not by writing a "1" or a "0" for that member's slot), there are 2^5 = 32 possible subsets (2^{|S|} in general), ranging from the empty subset (0, 0, 0, 0, 0) to the "everything" subset (1, 1, 1, 1, 1). The empty subset represents a contradiction, which is not true in any state, and is thus assigned a mass of zero; the remaining masses are normalised so that their total is 1. The "everything" subset is often labelled "unknown", as it represents the state where all elements are present, in the sense that one cannot tell which is actual.
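
The binary-vector view maps directly onto a bitmask enumeration, as in this small Python sketch (the member names are hypothetical placeholders):

    # Enumerate all 2**5 = 32 subsets of a five-member set via binary vectors.
    S = ['s1', 's2', 's3', 's4', 's5']               # hypothetical member names
    subsets = []
    for bits in range(2 ** len(S)):                  # 0b00000 .. 0b11111
        vector = [(bits >> i) & 1 for i in range(len(S))]
        subsets.append(frozenset(x for x, keep in zip(S, vector) if keep))
    assert len(subsets) == 32                        # from the empty set to "everything"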

Belief and plausibility

Shafer's framework allows for belief about propositions to be represented as intervals, bounded by two values, belief (or support) and plausibility:

\operatorname{belief} \le \operatorname{plausibility}.

Belief in a hypothesis is constituted by the sum of the masses of all sets enclosed by it (i.e. the sum of the masses of all subsets of the hypothesis). It is the amount of belief that directly supports the hypothesis at least in part, and it forms a lower bound. Plausibility is 1 minus the sum of the masses of all sets whose intersection with the hypothesis is empty. It is an upper bound on the probability of the hypothesis: the hypothesis "could possibly happen" up to that value, because there is only so much evidence that contradicts it.

For example, suppose we have a belief of 0.5 and a plausibility of 0.8 for a proposition, say "the cat in the box is dead." This means that we have evidence that allows us to state strongly that the proposition is true with a confidence of 0.5. However, the evidence contrary to that hypothesis (i.e. "the cat is alive") only has a confidence of 0.2. The remaining mass of 0.3 (the gap between the 0.5 supporting evidence on the one hand, and the 0.2 contrary evidence on the other) is "indeterminate," meaning that the cat could either be dead or alive. This interval represents the level of uncertainty based on the evidence in the system.

Hypothesis                      Mass    Belief  Plausibility
Null (neither alive nor dead)   0       0       0
Alive                           0.2     0.2     0.5
Dead                            0.5     0.5     0.8
Either (alive or dead)          0.3     1.0     1.0

The null hypothesis is set to zero by definition (it corresponds to "no solution"). The orthogonal hypotheses "Alive" and "Dead" have masses of 0.2 and 0.5, respectively. This could correspond to "Live/Dead Cat Detector" signals, which have respective reliabilities of 0.2 and 0.5. Finally, the all-encompassing "Either" hypothesis (which simply acknowledges there is a cat in the box) picks up the slack so that the sum of the masses is 1. The support for the "Alive" and "Dead" hypotheses matches their corresponding masses because they have no subsets; support for "Either" consists of the sum of all three masses (Either, Alive, and Dead) because "Alive" and "Dead" are each subsets of "Either". The "Alive" plausibility is 1 - m(Dead) and the "Dead" plausibility is 1 - m(Alive). Finally, the "Either" plausibility sums m(Alive) + m(Dead) + m(Either). The universal hypothesis ("Either") will always have 100% support and plausibility; it acts as a checksum of sorts.
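
The table can be reproduced with the belief and plausibility functions sketched earlier, repeated here so that the snippet stands alone (the masses are those of the example):

    ALIVE, DEAD = frozenset({'alive'}), frozenset({'dead'})
    EITHER = ALIVE | DEAD
    m_cat = {ALIVE: 0.2, DEAD: 0.5, EITHER: 0.3}

    def bel(m, a): return sum(v for b, v in m.items() if b <= a)
    def pl(m, a):  return sum(v for b, v in m.items() if b & a)

    for name, h in [('Alive', ALIVE), ('Dead', DEAD), ('Either', EITHER)]:
        print(name, round(bel(m_cat, h), 3), round(pl(m_cat, h), 3))
    # Alive 0.2 0.5
    # Dead 0.5 0.8
    # Either 1.0 1.0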

Here is a somewhat more elaborate example where the behaviour of support and plausibility begins to emerge. We are looking at a faraway object, which can be coloured in only one of three colours (red, white, or blue), through a variety of detector modes:

Hypothesis      Mass    Belief  Plausibility
Null            0       0       0
Red             0.35    0.35    0.56
White           0.25    0.25    0.45
Blue            0.15    0.15    0.34
Red or white    0.06    0.66    0.85
Red or blue     0.05    0.55    0.75
White or blue   0.04    0.44    0.65
Any             0.1     1.0     1.0

These are, however, rather unnatural examples from the standpoint of probability theory: events of this kind would not be modeled as disjoint sets in a probability space. Rather, the event "red or blue" would be considered as the union of the events "red" and "blue", so that (by the axioms of probability theory) p(red or white) ≥ p(white) = 0.25 and p(any) = 1; only the three disjoint events "red", "white", and "blue" would need to add up to 1. In fact, one could model a probability measure on the space proportional to the plausibilities, normalized so that p(red) + p(white) + p(blue) = 1, with the constraint that all probabilities remain ≤ 1.

Combining probability sets

Beliefs corresponding to independent pieces of information are combined using Dempster's rule of combination, which is a generalisation of the special case of Bayes' theorem in which events are independent. (There is as yet no method of combining non-independent pieces of information.) Note that the probability masses from propositions that contradict each other can also be used to obtain a measure of how much conflict there is in a system. This measure has been used as a criterion for clustering multiple pieces of seemingly conflicting evidence around competing hypotheses.

In addition, one of the computational advantages of the Dempster-Shafer framework is that priors and conditionals need not be specified, unlike in Bayesian methods, which often use a symmetry (minimax error) argument to assign prior probabilities to random variables (e.g. assigning 0.5 to binary values for which no information is available about which is more likely). However, any information contained in the missing priors and conditionals is not used in the Dempster-Shafer framework unless it can be obtained indirectly, and arguably it is then available for calculation using Bayes' equations.

Dempster-Shafer theory allows one to specify a degree of ignorance in this situation instead of being forced to supply prior probabilities which add to unity. This sort of situation, and whether there is a real distinction between risk and ignorance, has been extensively discussed by statisticians and economists. See, for example, the contrasting views of Daniel Ellsberg, Howard Raiffa, Kenneth Arrow and Frank Knight.

Criticism

Judea Pearl (1988a, chapter 9;[4] 1988b;[5] 1990[6]) has argued that it is misleading to interpret belief functions as representing either "probabilities of an event", "the confidence one has in the probabilities assigned to various outcomes", "degrees of belief (or confidence, or trust) in a proposition", or "degree of ignorance in a situation". Instead, belief functions represent the probability that a given proposition is provable from a set of other propositions, to which probabilities are assigned. Confusing probabilities of truth with probabilities of provability may lead to counterintuitive results in reasoning tasks such as (1) representing incomplete knowledge, (2) belief-updating, and (3) evidence pooling. He further demonstrated that, if partial knowledge is encoded and updated by belief-function methods, the resulting beliefs cannot serve as a basis for rational decisions.

Kłopotek and Wierzchoń[7] proposed to interpret Dempster-Shafer theory in terms of statistics of decision tables (of rough set theory), whereby the operator of combining evidence is seen as a relational join of decision tables. In another interpretation[8] they propose to view the theory as describing destructive material processing (under loss of properties), as in some semiconductor production processes. Under both interpretations, reasoning in DST gives correct results, in contrast to the earlier probabilistic interpretations criticized by Pearl in the cited papers and by other researchers.

References

  1. ^ Shafer, Glenn; A Mathematical Theory of Evidence, Princeton University Press, 1976.
  2. ^ Shafer, Glenn; Dempster-Shafer theory, 2002.
  3. ^ Dempster, Arthur P.; A generalization of Bayesian inference, Journal of the Royal Statistical Society, Series B, Vol. 30, pp. 205-247, 1968.
  4. ^ Pearl, J. (1988a), Probabilistic Reasoning in Intelligent Systems, (Revised Second Printing) San Mateo, CA: Morgan Kaufmann.
  5. ^ Pearl, J. (1988b), "On Probability Intervals", International Journal of Approximate Reasoning, 2(3):211-216.
  6. ^ Pearl, J. (1990), "Reasoning with Belief Functions: An Analysis of Compatibility", International Journal of Approximate Reasoning, 4(5/6):363-389.
  7. ^ M.A. Kłopotek, S.T. Wierzchoń: "A New Qualitative Rough-Set Approach to Modeling Belief Functions", in: L. Polkowski, A. Skowron (eds): Rough Sets and Current Trends in Computing, Proc. 1st International Conference RSCTC'98, Warsaw, June 22-26, 1998, Lecture Notes in Artificial Intelligence 1424, Springer-Verlag, pp. 346-353.
  8. ^ M.A. Kłopotek, S.T. Wierzchoń: "Empirical Models for the Dempster-Shafer Theory", in: Srivastava, R.P., Mock, T.J. (eds): Belief Functions in Business Decisions, Studies in Fuzziness and Soft Computing, Vol. 88, Springer-Verlag, March 2002, ISBN 3-7908-1451-2, pp. 62-112.