Stopping time

Figure: Example of a stopping time: a hitting time of Brownian motion

In probability theory, in particular in the study of stochastic processes, a stopping time is a specific type of "random time": a random time whose occurrence can be decided, at any given moment, using only the information available up to that moment, without looking into the future.

The theory of stopping rules and stopping times is studied in probability and statistics, notably in connection with the optional stopping theorem. Stopping times are also frequently used in mathematical proofs to "tame the continuum of time", as Chung (1982) put it.

Definition

A stopping time with respect to a sequence of random variables X1, X2, ... is a random variable τ with the property that for each t, the occurrence or non-occurrence of the event τ = t depends only on the values of X1, X2, ..., Xt, and furthermore Pr(τ < ∞) = 1. Stopping times occur in decision theory, in which a stopping rule is characterized as a mechanism for deciding whether to continue or stop a process on the basis of the present position and past events, and which will almost surely lead to a decision to stop at some finite time.
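
As a concrete illustration (a minimal sketch; the function name and the simple-random-walk setting are assumptions made here for illustration, not part of the article), the first time a simple random walk reaches a given level is a stopping time, because the decision to stop at time t uses only X1, ..., Xt:

    import random

    def first_hitting_time(level, max_steps=10000, seed=None):
        # First time t at which the simple random walk S_t = X_1 + ... + X_t
        # reaches `level`; returns None if this does not happen within max_steps.
        # At each step the decision to stop uses only X_1, ..., X_t, so the
        # resulting random time is a stopping time.
        rng = random.Random(seed)
        s = 0
        for t in range(1, max_steps + 1):
            s += rng.choice([-1, 1])   # the t-th step X_t
            if s >= level:             # the event {tau = t} depends only on X_1, ..., X_t
                return t
        return None

    print(first_hitting_time(level=10, seed=1))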

Another, more general definition may be given in terms of a filtration: Let (I, \leq) be an ordered index set (often I=[0,\infty) or a compact subset thereof), and let (\Omega, \mathcal{F}, \mathcal{F}_t, \mathbb{P}) be a filtered probability space, i.e. a probability space equipped with a filtration. Then a random variable \tau : \Omega \to I is called a stopping time if \{ \tau \leq t \} \in \mathcal{F}_{t} for all t in I. Often, to avoid confusion, we call it an \mathcal{F}_t-stopping time and explicitly specify the filtration.

In other words, for τ to be a stopping time, it should be possible to decide whether or not the event \{ \tau \leq t \} has occurred on the basis of the knowledge of \mathcal{F}_t alone.

As noted above, it is frequently required that τ be almost surely finite, although some authors omit this requirement.
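
As a small worked consequence of this definition (a standard fact, included here only as an illustration): if τ and σ are both stopping times with respect to the same filtration, then so is their minimum τ ∧ σ, since for every t in I

 \{ \tau \wedge \sigma \leq t \} = \{ \tau \leq t \} \cup \{ \sigma \leq t \} \in \mathcal{F}_t,

both events on the right-hand side lying in \mathcal{F}_t by assumption.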

Examples

To illustrate some examples of random times that are stopping rules and some that are not, consider a gambler playing roulette with a typical house edge, starting with $100:

  • Playing one, and only one, game corresponds to the stopping time τ = 1, and is a stopping rule.
  • Playing until she either runs out of money or has played 500 games is a stopping rule.
  • Playing until she doubles her money (borrowing if necessary when she goes into debt) is not a stopping rule, as there is a positive probability that she will never double her money.
  • Playing until she either doubles her money or runs out of money is a stopping rule, even though there is potentially no limit to the number of games she plays, since the probability that she stops in a finite time is 1 (see the sketch after this list).
  • Playing until she is the maximum amount ahead she will ever be is not a stopping rule and does not provide a stopping time, as it requires information about the future as well as the present and past.
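
The "double her money or run out" rule above can be simulated directly. The following sketch is only an illustration: the even-money bet with win probability 18/37 is an assumed model of the roulette game, and the names used are invented here, not taken from the article. It samples the (unbounded but almost surely finite) number of games played under that rule:

    import random

    def double_or_bust(bankroll=100, bet=1, p_win=18/37, seed=None):
        # Bet `bet` dollars on an even-money outcome each game until the bankroll
        # either doubles or reaches zero; return the number of games played.
        # The decision to stop after game t depends only on games 1, ..., t,
        # so this rule defines a stopping time.
        rng = random.Random(seed)
        target, money, games = 2 * bankroll, bankroll, 0
        while 0 < money < target:
            games += 1
            money += bet if rng.random() < p_win else -bet
        return games

    # Unbounded, yet the gambler stops in finite time with probability 1:
    print(max(double_or_bust(seed=k) for k in range(100)))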

Localization

Stopping times are frequently used to generalize certain properties of stochastic processes to situations in which the required property is satisfied in only a local sense. First, if X is a process and τ is a stopping time, then X^τ is used to denote the process X stopped at time τ:

 X^\tau_t=X_{\min(t,\tau)}
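
In discrete time the stopped process is simply the original path frozen at τ. A minimal sketch (the function name is illustrative, not standard notation):

    def stopped_path(path, tau):
        # path[t] plays the role of X_t; the stopped path satisfies
        # X^tau_t = X_{min(t, tau)}, i.e. it is frozen from time tau onwards.
        return [path[min(t, tau)] for t in range(len(path))]

    print(stopped_path([0, 1, 0, -1, -2, -1], tau=3))   # -> [0, 1, 0, -1, -1, -1]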

Then X is said to locally satisfy some property P if there exists a sequence of stopping times τn, which increases to infinity and for which the processes 1_{\{\tau_n>0\}}X^{\tau_n} satisfy property P. Common examples, with time index set I = [0,∞), are as follows:

  • (Local martingale) A process X is a local martingale if it is càdlàg and there exists a sequence of stopping times τn increasing to infinity, such that 1_{\{\tau_n>0\}}X^{\tau_n} is a martingale for each n.
  • (Locally integrable) A non-negative and increasing process X is locally integrable if there exists a sequence of stopping times τn increasing to infinity, such that \mathbb{E}(1_{\{\tau_n>0\}}X^{\tau_n})<\infty for each n.

Types of stopping times

Stopping times, with time index set I = [0,∞), are often divided into one of several types, depending on whether it is possible to predict when they are about to occur.

A stopping time τ is predictable if it is equal to the limit of an increasing sequence of stopping times τn satisfying τn < τ whenever τ > 0. The sequence τn is said to announce τ, and predictable stopping times are sometimes known as announceable. Examples of predictable stopping times are hitting times of continuous and adapted processes. If τ is the first time at which a continuous, real-valued process X is equal to some value a, then it is announced by the sequence τn, where τn is the first time at which X is within a distance 1/n of a.
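
On a discretized path this announcing sequence can be computed directly. The sketch below is only an approximation on a time grid (the drifting random walk stands in for a continuous adapted process, and all names are illustrative): τ_n is taken to be the first grid time at which the path comes within 1/n of the level a, and these times are nondecreasing in n.

    import random

    def announcing_times(path, a, n_values=(1, 2, 4, 8, 16)):
        # tau_n = first grid time t with |X_t - a| <= 1/n; as n grows these
        # times increase towards the hitting time of the level a.
        return {n: next((t for t, x in enumerate(path) if abs(x - a) <= 1 / n), None)
                for n in n_values}

    # A slowly drifting random walk sampled on a fine grid, standing in for a
    # continuous process.
    rng = random.Random(0)
    path, x = [], 0.0
    for _ in range(10000):
        x += 0.001 + rng.uniform(-0.01, 0.01)
        path.append(x)
    print(announcing_times(path, a=0.5))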

Accessible stopping times are those that can be covered by a sequence of predictable times. That is, a stopping time τ is accessible if P(τ = τn for some n) = 1, where the τn are predictable times.

A stopping time τ is totally inaccessible if it can never be announced by an increasing sequence of stopping times. Equivalently, P(τ = σ < ∞) = 0 for every predictable time σ. Examples of totally inaccessible stopping times include the jump times of Poisson processes.

Every stopping time τ can be uniquely decomposed into an accessible and a totally inaccessible time. That is, there exists a unique accessible stopping time σ and a totally inaccessible time υ such that τ = σ whenever σ < ∞, τ = υ whenever υ < ∞, and τ = ∞ whenever σ = υ = ∞. Note that in the statement of this decomposition result, stopping times do not have to be almost surely finite, and can equal ∞.

References

  • Chung, Kai Lai (1982). Lectures from Markov processes to Brownian motion, Grundlehren der Mathematischen Wissenschaften No. 249. New York: Springer-Verlag. ISBN 0-387-90618-5. 
  • Revuz, Daniel and Yor, Marc (1999). Continuous martingales and Brownian motion, Third edition, Grundlehren der Mathematischen Wissenschaften No. 293, Berlin: Springer-Verlag. ISBN 3-540-64325-7. 
  • Protter, Philip E. (2005). Stochastic integration and differential equations, Second edition (version 2.1, corrected third printing), Stochastic Modelling and Applied Probability No. 21, Berlin: Springer-Verlag. ISBN 3-540-00313-4. 

Further reading

  • Ferguson, Thomas S. (1989). "Who solved the secretary problem?", Statistical Science, Vol. 4, 282-296.