Uncertainty theory

Not to be confused with Uncertainty principle.

Uncertainty theory is a branch of mathematics based on normality, monotonicity, self-duality, countable subadditivity, and product measure axioms. It was founded by Baoding Liu[1] in 2007 and refined in 2009.[2]

Mathematical measures of the likelihood of an event include probability, capacity, fuzzy measure, possibility, and credibility, as well as the uncertain measure studied here.

Five axioms

Axiom 1. (Normality Axiom) \mathcal{M}\{\Gamma\}=1\text{ for the universal set }\Gamma.

Axiom 2. (Monotonicity Axiom) \mathcal{M}\{\Lambda_1\}\le\mathcal{M}\{\Lambda_2\}\text{ whenever }\Lambda_1\subset\Lambda_2.

Axiom 3. (Self-Duality Axiom) \mathcal{M}\{\Lambda\}+\mathcal{M}\{\Lambda^c\}=1\text{ for any event }\Lambda.

Axiom 4. (Countable Subadditivity Axiom) For every countable sequence of events Λ1, Λ2, ..., we have

\mathcal{M}\left\{\bigcup_{i=1}^\infty\Lambda_i\right\}\le\sum_{i=1}^\infty\mathcal{M}\{\Lambda_i\}.

Axiom 5. (Product Measure Axiom) Let (\Gamma_k,\mathcal{L}_k,\mathcal{M}_k) be uncertainty spaces for k=1,2,\cdots,n. Then the product uncertain measure \mathcal{M} is an uncertain measure on the product σ-algebra satisfying

\mathcal{M}\left\{\prod_{k=1}^n\Lambda_k\right\}=\underset{1\le k\le n}{\operatorname{min} }\mathcal{M}_k\{\Lambda_k\}.

Principle. (Maximum Uncertainty Principle) For any event, if there are multiple reasonable values that an uncertain measure may take, then the value as close to 0.5 as possible is assigned to the event.
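
On a finite space the first four axioms can be checked exhaustively. The following Python sketch is only an illustration, not part of the theory itself; the measure values are hypothetical but chosen so that all four axioms hold:

  # Toy three-point uncertainty space; the numbers below are hypothetical
  # but satisfy normality, monotonicity, self-duality and subadditivity.
  Gamma = frozenset({"g1", "g2", "g3"})
  M = {frozenset(): 0.0,
       frozenset({"g1"}): 0.6, frozenset({"g2"}): 0.3, frozenset({"g3"}): 0.2,
       frozenset({"g1", "g2"}): 0.8, frozenset({"g1", "g3"}): 0.7,
       frozenset({"g2", "g3"}): 0.4, Gamma: 1.0}

  events = list(M)
  assert M[Gamma] == 1.0                                                        # Axiom 1: normality
  assert all(M[A] <= M[B] for A in events for B in events if A <= B)            # Axiom 2: monotonicity
  assert all(abs(M[A] + M[Gamma - A] - 1.0) < 1e-9 for A in events)             # Axiom 3: self-duality
  assert all(M[A | B] <= M[A] + M[B] + 1e-9 for A in events for B in events)    # Axiom 4 (finite case)
  print("all four axioms hold on this toy space")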

Uncertain variables

An uncertain variable is a measurable function ξ from an uncertainty space (\Gamma,L,M) to the set of real numbers, i.e., for any Borel set B of real numbers, the set \{\xi\in B\}=\{\gamma \in \Gamma|\xi(\gamma)\in B\} is an event.

Uncertainty distribution

Uncertainty distribution is introduced to describe uncertain variables.

Definition: The uncertainty distribution \Phi(x):R \rightarrow [0,1] of an uncertain variable ξ is defined by \Phi(x)=M\{\xi\leq x\}.

Theorem (Peng and Iwamura, Sufficient and Necessary Condition for Uncertainty Distribution): A function \Phi(x):R \rightarrow [0,1] is an uncertainty distribution if and only if it is a monotone increasing function other than \Phi(x) \equiv 0 and \Phi(x)\equiv 1.
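
As a concrete example, the linear uncertainty distribution on an interval [a, b] satisfies the Peng-Iwamura condition: it is increasing and is neither identically 0 nor identically 1. A minimal Python sketch:

  def linear_distribution(a, b):
      """Linear uncertainty distribution: 0 below a, 1 above b, linear between."""
      def Phi(x):
          if x <= a:
              return 0.0
          if x >= b:
              return 1.0
          return (x - a) / (b - a)
      return Phi

  Phi = linear_distribution(1.0, 3.0)
  print(Phi(0.0), Phi(2.0), Phi(5.0))   # 0.0 0.5 1.0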

Independence

Definition: The uncertain variables \xi_1,\xi_2,\ldots,\xi_m are said to be independent if

M\{\cap_{i=1}^m(\xi_i \in B_i)\}=\mbox{min}_{1\leq i \leq m}M\{\xi_i \in B_i\}

for any Borel sets B_1,B_2,\ldots,B_m of real numbers.

Theorem 1: The uncertain variables \xi_1,\xi_2,\ldots,\xi_m are independent if

M\{\cup_{i=1}^m(\xi_i \in B_i)\}=\mbox{max}_{1\leq i \leq m}M\{\xi_i \in B_i\}

for any Borel sets B_1,B_2,\ldots,B_m of real numbers.

Theorem 2: Let \xi_1,\xi_2,\ldots,\xi_m be independent uncertain variables, and f_1,f_2,\ldots,f_m measurable functions. Then f_1(\xi_1),f_2(\xi_2),\ldots,f_m(\xi_m) are independent uncertain variables.

Theorem 3: Let \xi_1,\xi_2,\ldots,\xi_m be independent uncertain variables with uncertainty distributions \Phi_1,\Phi_2,\ldots,\Phi_m, respectively, and let \Phi be the joint uncertainty distribution of the uncertain vector (\xi_1,\xi_2,\ldots,\xi_m). Then we have

\Phi(x_1, x_2, \ldots, x_m)=\mbox{min}_{1\leq i \leq m}\Phi_i(x_i)

for any real numbers x_1, x_2, \ldots, x_m.
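
Theorem 3 translates directly into code: the joint distribution of an independent uncertain vector is the pointwise minimum of the marginals. A minimal Python sketch, using two hypothetical linear marginals:

  def joint_distribution(marginals):
      """Joint uncertainty distribution of independent variables (Theorem 3)."""
      return lambda *xs: min(Phi(x) for Phi, x in zip(marginals, xs))

  Phi1 = lambda x: max(0.0, min(1.0, x))        # linear distribution on [0, 1]
  Phi2 = lambda x: max(0.0, min(1.0, x / 2))    # linear distribution on [0, 2]
  Phi = joint_distribution([Phi1, Phi2])
  print(Phi(0.5, 0.5))                          # min(0.5, 0.25) = 0.25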

Operational law

Theorem: Let \xi_1,\xi_2,\ldots,\xi_n be independent uncertain variables, and f: R^n \rightarrow R a measurable function. Then \xi=f(\xi_1,\xi_2,\ldots,\xi_n) is an uncertain variable such that

\mathcal{M}\{\xi\in B\}=\begin{cases} \underset{f(B_1,B_2,\cdots,B_n)\subset B}{\operatorname{sup} }\;\underset{1\le k\le n}{\operatorname{min} }\mathcal{M}_k\{\xi_k\in B_k\}, & \text{if } \underset{f(B_1,B_2,\cdots,B_n)\subset B}{\operatorname{sup} }\;\underset{1\le k\le n}{\operatorname{min} }\mathcal{M}_k\{\xi_k\in B_k\} > 0.5 \\ 1-\underset{f(B_1,B_2,\cdots,B_n)\subset B^c}{\operatorname{sup} }\;\underset{1\le k\le n}{\operatorname{min} }\mathcal{M}_k\{\xi_k\in B_k\}, & \text{if } \underset{f(B_1,B_2,\cdots,B_n)\subset B^c}{\operatorname{sup} }\;\underset{1\le k\le n}{\operatorname{min} }\mathcal{M}_k\{\xi_k\in B_k\} > 0.5 \\ 0.5, & \text{otherwise} \end{cases}

where B, B_1, B_2, \ldots, B_n are Borel sets, and f(B_1, B_2, \ldots, B_n)\subset B means f(x_1, x_2, \ldots, x_n) \in B for any x_1 \in B_1, x_2 \in B_2, \ldots, x_n \in B_n.
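
For finite toy variables the case analysis above can be brute-forced by enumerating all "rectangles" B_1 × B_2. The Python sketch below uses hypothetical two-point marginals; on a two-point space, assigning weight w to one point and 1 − w to the other yields a valid uncertain measure:

  from itertools import combinations, product

  def powerset(xs):
      xs = list(xs)
      return [set(c) for r in range(1, len(xs) + 1) for c in combinations(xs, r)]

  # Hypothetical marginals: xi1 takes values {0, 1}, xi2 takes values {0, 2}.
  m1 = {frozenset({0}): 0.7, frozenset({1}): 0.3, frozenset({0, 1}): 1.0}
  m2 = {frozenset({0}): 0.6, frozenset({2}): 0.4, frozenset({0, 2}): 1.0}

  def sup_min(B, f):
      """sup of min(M1{xi1 in B1}, M2{xi2 in B2}) over f(B1, B2) contained in B."""
      best = 0.0
      for B1 in powerset({0, 1}):
          for B2 in powerset({0, 2}):
              if all(f(x1, x2) in B for x1, x2 in product(B1, B2)):
                  best = max(best, min(m1[frozenset(B1)], m2[frozenset(B2)]))
      return best

  def measure(B, f, universe):
      """The three-case operational law for M{f(xi1, xi2) in B}."""
      a = sup_min(B, f)
      if a > 0.5:
          return a
      b = sup_min(universe - B, f)   # same sup taken over the complement of B
      if b > 0.5:
          return 1 - b
      return 0.5

  f = lambda x1, x2: x1 + x2             # xi = xi1 + xi2, range {0, 1, 2, 3}
  print(measure({0}, f, {0, 1, 2, 3}))   # min(0.7, 0.6) = 0.6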

Expected value

Definition: Let \xi be an uncertain variable. Then the expected value of \xi is defined by

E[\xi]=\int_0^{+\infty}M\{\xi\geq r\}dr-\int_{-\infty}^0M\{\xi\leq r\}dr

provided that at least one of the two integrals is finite.

Theorem 1: Let \xi be an uncertain variable with uncertainty distribution \Phi. If the expected value exists, then

E[\xi]=\int_0^{+\infty}(1-\Phi(x))dx-\int_{-\infty}^0\Phi(x)dx.

Theorem 2: Let \xi be an uncertain variable with regular uncertainty distribution \Phi. If the expected value exists, then

E[\xi]=\int_0^1\Phi^{-1}(\alpha)d\alpha.
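
The two formulas agree numerically. A minimal Python check, assuming the linear distribution \Phi(x)=(x-a)/(b-a) on [a, b] with a > 0, so that the second integral in Theorem 1 vanishes:

  import numpy as np

  trap = lambda y, x: float(((y[1:] + y[:-1]) * np.diff(x)).sum() / 2)  # trapezoid rule

  a, b = 1.0, 3.0
  Phi = lambda x: np.clip((x - a) / (b - a), 0.0, 1.0)

  xs = np.linspace(0.0, b, 100001)
  E_theorem1 = trap(1 - Phi(xs), xs)               # integral of 1 - Phi over [0, +inf)

  alphas = np.linspace(0.0, 1.0, 100001)
  E_theorem2 = trap(a + (b - a) * alphas, alphas)  # integral of Phi^{-1}(alpha)

  print(E_theorem1, E_theorem2)                    # both approx (a + b) / 2 = 2.0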

Theorem 3: Let \xi and \eta be independent uncertain variables with finite expected values. Then for any real numbers a and b, we have

E[a\xi+b\eta]=aE[\xi]+bE[\eta].

Variance

Definition: Let \xi be an uncertain variable with finite expected value e. Then the variance of \xi is defined by

V[\xi]=E[(\xi-e)^2].

Theorem: If \xi is an uncertain variable with finite expected value, and a and b are real numbers, then

V[a\xi+b]=a^2V[\xi].

Critical value

Definition: Let \xi be an uncertain variable, and \alpha\in(0,1]. Then

\xi_{sup}(\alpha)=\mbox{sup}\{r|M\{\xi\geq r\}\geq\alpha\}

is called the α-optimistic value to \xi, and

\xi_{inf}(\alpha)=\mbox{inf}\{r|M\{\xi\leq r\}\geq\alpha\}

is called the α-pessimistic value to \xi.

Theorem 1: Let \xi be an uncertain variable with regular uncertainty distribution \Phi. Then its α-optimistic value and α-pessimistic value are

\xi_{sup}(\alpha)=\Phi^{-1}(1-\alpha),
\xi_{inf}(\alpha)=\Phi^{-1}(\alpha).
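
Theorem 1 can be verified against the sup/inf definitions on a grid. A minimal Python check, again assuming a linear distribution, for which M\{\xi\geq r\}=1-\Phi(r):

  import numpy as np

  a, b, alpha = 1.0, 3.0, 0.8
  Phi = lambda x: np.clip((x - a) / (b - a), 0.0, 1.0)
  rs = np.linspace(a - 1, b + 1, 400001)

  xi_sup = rs[1 - Phi(rs) >= alpha].max()   # sup{r | M{xi >= r} >= alpha}
  xi_inf = rs[Phi(rs) >= alpha].min()       # inf{r | M{xi <= r} >= alpha}

  print(xi_sup, a + (b - a) * (1 - alpha))  # both approx Phi^{-1}(1 - alpha) = 1.4
  print(xi_inf, a + (b - a) * alpha)        # both approx Phi^{-1}(alpha) = 2.6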

Theorem 2: Let \xi be an uncertain variable, and \alpha\in(0,1]. Then we have

\xi_{inf}(\alpha)\geq\xi_{sup}(\alpha)\mbox{ if }\alpha>0.5;\qquad \xi_{inf}(\alpha)\leq\xi_{sup}(\alpha)\mbox{ if }\alpha\leq 0.5.

Theorem 3: Suppose that \xi and \eta are independent uncertain variables, and \alpha\in(0,1]. Then we have

(\xi + \eta)_{sup}(\alpha)=\xi_{sup}(\alpha)+\eta_{sup}(\alpha),

(\xi + \eta)_{inf}(\alpha)=\xi_{inf}(\alpha)+\eta_{inf}(\alpha),

(\xi \vee \eta)_{sup}(\alpha)=\xi_{sup}(\alpha)\vee\eta_{sup}(\alpha),

(\xi \vee \eta)_{inf}(\alpha)=\xi_{inf}(\alpha)\vee\eta_{inf}(\alpha),

(\xi \wedge \eta)_{sup}(\alpha)=\xi_{sup}(\alpha)\wedge\eta_{sup}(\alpha),

(\xi \wedge \eta)_{inf}(\alpha)=\xi_{inf}(\alpha)\wedge\eta_{inf}(\alpha).

Entropy

Definition: Let \xi be an uncertain variable with uncertainty distribution \Phi. Then its entropy is defined by

H[\xi]=\int_{-\infty}^{+\infty}S(\Phi(x))dx

where S(t)=-t\mbox{ln}(t)-(1-t)\mbox{ln}(1-t).

Theorem 1(Dai and Chen): Let \xi be an uncertain variable with regular uncertainty distribution \Phi. Then

H[\xi]=\int_0^1\Phi^{-1}(\alpha)\mbox{ln}\frac{\alpha}{1-\alpha}d\alpha.
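
Both the definition and the Dai-Chen formula can be evaluated numerically. For the linear distribution on [a, b] both give (b − a)/2; a minimal Python check, with the endpoints trimmed to keep the logarithms finite:

  import numpy as np

  trap = lambda y, x: float(((y[1:] + y[:-1]) * np.diff(x)).sum() / 2)  # trapezoid rule

  a, b = 1.0, 3.0
  xs = np.linspace(a, b, 200001)[1:-1]
  t = (xs - a) / (b - a)                                     # Phi(x) on (a, b)
  H_def = trap(-t * np.log(t) - (1 - t) * np.log(1 - t), xs)

  alphas = np.linspace(0.0, 1.0, 200001)[1:-1]
  inv = a + (b - a) * alphas                                 # Phi^{-1}(alpha)
  H_dai_chen = trap(inv * np.log(alphas / (1 - alphas)), alphas)

  print(H_def, H_dai_chen)                                   # both approx (b - a) / 2 = 1.0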

Theorem 2: Let \xi and \eta be independent uncertain variables. Then for any real numbers a and b, we have

H[a\xi+b\eta]=|a|H[\xi]+|b|H[\eta].

Theorem 3: Let \xi be an uncertain variable whose uncertainty distribution is arbitrary but whose expected value is e and whose variance is \sigma^2. Then

H[\xi]\leq\frac{\pi\sigma}{\sqrt{3}}.

Inequalities

Theorem 1(Liu, Markov Inequality): Let \xi be an uncertain variable. Then for any given numbers t > 0 and p > 0, we have

M\{|\xi|\geq t\}\leq \frac{E[|\xi|^p]}{t^p}.
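
A numerical spot-check of the inequality for a linear variable on [1, 3]. This assumes, beyond what is stated above, that for a nonnegative variable and p > 0 the inverse distribution of \xi^p is (\Phi^{-1}(\alpha))^p, which follows from the operational law for strictly increasing functions:

  import numpy as np

  trap = lambda y, x: float(((y[1:] + y[:-1]) * np.diff(x)).sum() / 2)  # trapezoid rule

  a, b = 1.0, 3.0
  alphas = np.linspace(0.0, 1.0, 100001)
  inv = a + (b - a) * alphas                      # Phi^{-1}(alpha); xi is nonnegative
  for t, p in [(1.5, 1), (2.0, 2), (2.5, 3)]:
      lhs = 1 - (t - a) / (b - a)                 # M{|xi| >= t} = 1 - Phi(t) here
      rhs = trap(inv ** p, alphas) / t ** p       # E[|xi|^p] / t^p
      assert lhs <= rhs + 1e-9
  print("Markov inequality holds at the sampled points")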

Theorem 2 (Liu, Chebyshev Inequality) Let \xi be an uncertain variable whose variance V[\xi] exists. Then for any given number t > 0, we have

M\{|\xi-E[\xi]|\geq t\}\leq \frac{V[\xi]}{t^2}.

Theorem 3 (Liu, Holder’s Inequality) Let p and q be positive numbers with 1/p + 1/q = 1, and let \xi and \eta be independent uncertain variables with E[|\xi|^p]< \infty and E[|\eta|^q] < \infty. Then we have

E[|\xi\eta|]\leq \sqrt[p]{E[|\xi|^p]} \sqrt[q]{E[|\eta|^q]}.

Theorem 4 (Liu, Minkowski Inequality): Let p be a real number with p\geq 1, and let \xi and \eta be independent uncertain variables with E[|\xi|^p]< \infty and E[|\eta|^p] < \infty. Then we have

\sqrt[p]{E[|\xi+\eta|^p]}\leq \sqrt[p]{E[|\xi|^p]}+\sqrt[p]{E[|\eta|^p]}.

Convergence concept

Definition 1: Suppose that \xi,\xi_1,\xi_2,\ldots are uncertain variables defined on the uncertainty space (\Gamma,L,M). The sequence \{\xi_i\} is said to be convergent a.s. to \xi if there exists an event \Lambda with M\{\Lambda\} = 1 such that

\mbox{lim}_{i\rightarrow\infty}|\xi_i(\gamma)-\xi(\gamma)|=0

for every \gamma\in\Lambda. In that case we write \xi_i\rightarrow \xi, a.s.

Definition 2: Suppose that \xi,\xi_1,\xi_2,\ldots are uncertain variables. We say that the sequence \{\xi_i\} converges in measure to \xi if

\mbox{lim}_{i\rightarrow\infty}M\{|\xi_i-\xi|\geq \varepsilon \}=0

for every \varepsilon>0.

Definition 3: Suppose that \xi,\xi_1,\xi_2,\ldots are uncertain variables with finite expected values. We say that the sequence \{\xi_i\} converges in mean to \xi if

\mbox{lim}_{i\rightarrow\infty}E[|\xi_i-\xi|]=0.

Definition 4: Suppose that \Phi,\Phi_1,\Phi_2,\ldots are uncertainty distributions of uncertain variables \xi,\xi_1,\xi_2,\ldots, respectively. We say that the sequence \{\xi_i\} converges in distribution to \xi if \Phi_i\rightarrow\Phi at any continuity point of \Phi.

Theorem 1: Convergence in Mean \Rightarrow Convergence in Measure \Rightarrow Convergence in Distribution. However, Convergence in Mean and Convergence Almost Surely do not imply each other, and neither do Convergence Almost Surely and Convergence in Distribution.

Conditional uncertainty

Definition 1: Let (\Gamma,L,M) be an uncertainty space, and A,B\in L. Then the conditional uncertain measure of A given B is defined by

\mathcal{M}\{A\vert B\}=\begin{cases} \displaystyle\frac{\mathcal{M}\{A\cap B\} }{\mathcal{M}\{B\} }, &\displaystyle\text{if }\frac{\mathcal{M}\{A\cap B\} }{\mathcal{M}\{B\} }<0.5 \\ \displaystyle 1 - \frac{\mathcal{M}\{A^c\cap B\} }{\mathcal{M}\{B\} }, &\displaystyle\text{if } \frac{\mathcal{M}\{A^c\cap B\} }{\mathcal{M}\{B\} }<0.5 \\ 0.5, & \text{otherwise} \end{cases}
\text{provided that } \mathcal{M}\{B\}>0

Theorem 1: Let (\Gamma,L,M) be an uncertainty space, and B an event with M\{B\} > 0. Then M\{\mbox{·}|B\} defined by Definition 1 is an uncertain measure, and (\Gamma,L,M\{\mbox{·}|B\}) is an uncertainty space.
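
Definition 1 is a direct case analysis and transcribes straightforwardly. A minimal Python sketch, with hypothetical input values:

  def conditional_measure(m_AB, m_AcB, m_B):
      """M{A | B} from Definition 1; m_B = M{B} must be positive."""
      if m_AB / m_B < 0.5:
          return m_AB / m_B
      if m_AcB / m_B < 0.5:
          return 1 - m_AcB / m_B
      return 0.5

  # Hypothetical values M{A ∩ B} = 0.2, M{A^c ∩ B} = 0.1, M{B} = 0.6:
  print(conditional_measure(0.2, 0.1, 0.6))   # 0.2 / 0.6 < 0.5, so 1/3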

Definition 2: Let \xi be an uncertain variable on (\Gamma,L,M). A conditional uncertain variable of \xi given B is a measurable function \xi|_B from the conditional uncertainty space (\Gamma,L,M\{\mbox{·}|B\}) to the set of real numbers such that

\xi|_B(\gamma)=\xi(\gamma),\forall \gamma \in \Gamma.

Definition 3: The conditional uncertainty distribution \Phi(\mbox{·}|B):R\rightarrow[0, 1] of an uncertain variable \xi given B is defined by

\Phi(x|B)=M\{\xi\leq x|B\}

provided that M\{B\}>0.

Theorem 2: Let \xi be an uncertain variable with regular uncertainty distribution \Phi(x), and t a real number with \Phi(t) < 1. Then the conditional uncertainty distribution of \xi given \xi> t is

\Phi(x\vert(t,+\infty))=\begin{cases} 0, & \text{if }\Phi(x)\le\Phi(t)\\ \displaystyle\frac{\Phi(x)}{1-\Phi(t)}\wedge 0.5, & \text{if }\Phi(t)<\Phi(x)\le(1+\Phi(t))/2 \\ \displaystyle\frac{\Phi(x)-\Phi(t)}{1-\Phi(t)}, & \text{if }(1+\Phi(t))/2\le\Phi(x) \end{cases}
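
Theorem 2 likewise transcribes directly. A minimal Python sketch, applied to a hypothetical linear distribution on [0, 1] with conditioning threshold t = 0.2:

  def cond_dist_given_greater(Phi, t):
      """Phi(x | (t, +inf)) from Theorem 2; requires Phi(t) < 1."""
      pt = Phi(t)
      def psi(x):
          px = Phi(x)
          if px <= pt:
              return 0.0
          if px <= (1 + pt) / 2:
              return min(px / (1 - pt), 0.5)
          return (px - pt) / (1 - pt)
      return psi

  Phi = lambda x: max(0.0, min(1.0, x))   # linear distribution on [0, 1]
  psi = cond_dist_given_greater(Phi, 0.2)
  print(psi(0.1), psi(0.5), psi(0.9))     # 0.0 0.5 0.875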

Theorem 3: Let \xi be an uncertain variable with regular uncertainty distribution \Phi(x), and t a real number with \Phi(t)>0. Then the conditional uncertainty distribution of \xi given \xi\leq t is

\Phi(x\vert(-\infty,t])=\begin{cases} \displaystyle\frac{\Phi(x)}{\Phi(t)}, & \text{if }\Phi(x)\le\Phi(t)/2 \\ \displaystyle\frac{\Phi(x)+\Phi(t)-1}{\Phi(t)}\vee 0.5, & \text{if }\Phi(t)/2\le\Phi(x)<\Phi(t) \\ 1, & \text{if }\Phi(t)\le\Phi(x) \end{cases}

Definition 4: Let \xi be an uncertain variable. Then the conditional expected value of \xi given B is defined by

E[\xi|B]=\int_0^{+\infty}M\{\xi\geq r|B\}dr-\int_{-\infty}^0M\{\xi\leq r|B\}dr

provided that at least one of the two integrals is finite.

References

  1. Baoding Liu, Uncertainty Theory, 2nd ed., Springer-Verlag, Berlin, 2007.
  2. Baoding Liu, Uncertainty Theory, 4th ed., http://orsc.edu.cn/liu/ut.pdf.