Optimal stopping

In mathematics, the theory of optimal stopping is concerned with the problem of choosing a time to take a particular action, in order to maximise an expected reward or minimise an expected cost. Optimal stopping problems can be found in areas of statistics, economics, and mathematical finance (related to the pricing of American options). A key example of an optimal stopping problem is the secretary problem. Optimal stopping problems can often be written in the form of a Bellman equation, and are therefore often solved using dynamic programming.

Definition

Discrete time case

Stopping rule problems are associated with two objects:

  1. A sequence of random variables X_1, X_2, \ldots, whose joint distribution is assumed to be known
  2. A sequence of 'reward' functions (y_i)_{i\ge 1} which depend on the observed values of the random variables in 1.:
    y_i=y_i (x_1, \ldots ,x_i)

Given those objects, the problem is as follows: you observe the sequence of random variables, and at each step i you may choose either to stop observing or to continue. If you stop observing at step i, you receive the reward y_i. The problem is to choose a stopping rule that maximises the expected reward (or minimises the expected loss).

Continuous time case

Consider a gain process G=(G_t)_{t\ge 0} defined on a filtered probability space (\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge 0},\mathbb{P}) and assume that G is adapted to the filtration. The optimal stopping problem is to find the stopping time \tau^* which maximizes the expected gain

 V_t^T = \mathbb{E} G_{\tau^*} = \sup_{t\le \tau \le T} \mathbb{E} G_\tau

where V_t^T is called the value function. Here T can take value \infty.

A more specific formulation is as follows. We consider an adapted strong Markov process X = (X_t)_{t\ge 0} defined on a filtered probability space (\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge 0},\mathbb{P}_x) where \mathbb{P}_x denotes the probability measure where the stochastic process starts at x. Given continuous functions M,L, and K, the optimal stopping problem is

 V(x) = \sup_{0\le \tau \le T} \mathbb{E}_x \left( M(X_\tau) + \int_0^\tau L(X_t) dt + \sup_{0\le t\le\tau} K(X_t) \right).

This is sometimes called the MLS (which stand for Mayer, Lagrange, and supremum, respectively) formulation.[1]

Solution methods

There are generally two approaches to solving optimal stopping problems.[1] When the underlying process (or the gain process) is described by its unconditional finite-dimensional distributions, the appropriate solution technique is the martingale approach, so called because it uses martingale theory, the most important concept being the Snell envelope. In the discrete time case, if the planning horizon T is finite, the problem can also be solved easily by dynamic programming.
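The finite-horizon backward induction can be sketched in a few lines. The example below is illustrative rather than taken from any source cited here: it assumes i.i.d. Uniform(0, 1) rewards, for which the recursion V_i = E[max(X, V_{i+1})] has the closed form (1 + V_{i+1}^2)/2.

```python
def stopping_values(T):
    """Backward induction for a toy finite-horizon stopping problem.

    Observe i.i.d. Uniform(0, 1) rewards X_1, ..., X_T and stop once,
    receiving the current X_i.  Returns [V_1, ..., V_T], where V_i is the
    optimal expected reward with steps i..T remaining, evaluated before
    X_i is revealed.  (The Uniform(0, 1) assumption is for illustration.)
    """
    V = [0.0] * (T + 1)
    V[T] = 0.5                       # forced to stop at the horizon: E[X] = 1/2
    for i in range(T - 1, 0, -1):
        v = V[i + 1]
        # Optimal rule: stop iff X_i > V_{i+1}.  For Uniform(0, 1),
        # E[max(X, v)] = v*v + (1 - v*v)/2 = (1 + v*v) / 2.
        V[i] = (1 + v * v) / 2
    return V[1:T + 1]
```

The recursion makes the threshold structure of the optimal rule explicit: at each step one stops exactly when the current reward beats the value of continuing.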

When the underlying process is determined by a family of (conditional) transition functions leading to a Markovian family of transition probabilities, very powerful analytical tools provided by the theory of Markov processes can often be utilized and this approach is referred to as the Markovian method. The solution is usually obtained by solving the associated free-boundary problems (Stefan problems).

A jump diffusion result

Let Y_t be a Lévy diffusion in \mathbb{R}^k given by the SDE

 dY_t = b(Y_t) dt + \sigma (Y_t) dB_t + \int_{\mathbb{R}^k} \gamma (Y_{t-},z)\bar{N}(dt,dz),\quad Y_0 = y

where B is an m-dimensional Brownian motion, \bar{N} is an l-dimensional compensated Poisson random measure, b:\mathbb{R}^k \to \mathbb{R}^k, \sigma:\mathbb{R}^k \to \mathbb{R}^{k\times m}, and \gamma:\mathbb{R}^k \times \mathbb{R}^k \to \mathbb{R}^{k\times l} are given functions such that a unique solution (Y_t) exists. Let \mathcal{S}\subset \mathbb{R}^k be an open set (the solvency region) and

 \tau_\mathcal{S} = \inf\{ t>0: Y_t \notin \mathcal{S} \}

be the bankruptcy time. The optimal stopping problem is:

V(y) = \sup_{\tau \le \tau_\mathcal{S}} J^\tau (y) = \sup_{\tau \le \tau_\mathcal{S}} \mathbb{E}_y \left[ M(Y_\tau) + \int_0^\tau L(Y_t) dt \right].

It turns out that under some regularity conditions,[2] the following verification theorem holds:

If a function \phi:\bar{\mathcal{S}}\to \mathbb{R} satisfies

then  \phi(y) \ge V(y) for all  y\in \bar{\mathcal{S}} . Moreover, if

then \phi(y) = V(y) for all y\in \bar{\mathcal{S}} and \tau^* = \inf\{ t>0: Y_t\notin D\} is an optimal stopping time.

These conditions can also be written in a more compact form (the integro-variational inequality):

Examples

Coin tossing

(Example where \mathbb{E}(y_i) converges)

You have a fair coin and are repeatedly tossing it. Each time, before it is tossed, you can choose to stop tossing it and get paid (in dollars, say) the average number of heads observed.

You wish to maximise the amount you get paid by choosing a stopping rule. If X_i (for i ≥ 1) forms a sequence of independent, identically distributed random variables with Bernoulli distribution

\text{Bern}\left(\frac{1}{2}\right),

and if

y_i = \frac 1 i \sum_{k=1}^{i} X_k

then the sequences (X_i)_{i\geq 1}, and (y_i)_{i\geq 1} are the objects associated with this problem.
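Dynamic programming applies to this example too, with one caveat: the true problem has an infinite horizon, so the sketch below (illustrative, not from any source cited here) truncates the game at N tosses and runs backward induction over the state (tosses made, heads seen), which yields a lower bound on the value of the game.

```python
def coin_game_value(N):
    """Value of the coin-tossing stopping game truncated at N tosses.

    State after i tosses with h heads pays h/i if you stop.  At step N you
    must stop; earlier you compare stopping with the expected value of one
    more fair toss.  Truncation makes this a lower bound on the true
    (infinite-horizon) value.
    """
    V = [h / N for h in range(N + 1)]            # forced-stop values at step N
    for i in range(N - 1, 0, -1):
        # V[h] currently holds the step-(i+1) value with h heads; a toss
        # moves (i, h) to (i+1, h) on tails or (i+1, h+1) on heads.
        V = [max(h / i, 0.5 * (V[h] + V[h + 1])) for h in range(i + 1)]
    # Before the first toss nothing has been observed, so toss once.
    return 0.5 * (V[0] + V[1])
```

Notice that after a single head on the first toss the average is 1, which cannot be improved, so the rule stops there; the interesting decisions occur when the running average hovers near 1/2.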

House selling

(Example where \mathbb{E}(y_i) does not necessarily converge)

You have a house and wish to sell it. Each day you are offered X_n for your house and pay k to continue advertising it. If you sell your house on day n, you will earn y_n = X_n - nk.

You wish to maximise the amount you earn by choosing a stopping rule.

In this example, the sequence (X_i) is the sequence of offers for your house, and the sequence of reward functions is how much you will earn.
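As an illustration (under distributional assumptions not made above), suppose the offers are i.i.d. Uniform(0, 1). The optimal rule is then a reservation price: accept the first offer at or above x*, where the daily cost k equals the expected benefit of one more day of search; the expected net earnings under this rule equal x* itself.

```python
import math
import random

def reservation_price(k):
    # For i.i.d. Uniform(0, 1) offers (an illustrative assumption), the
    # optimal threshold x* balances the daily cost against the expected
    # gain from one more offer:
    #     k = E[(X - x*)^+] = (1 - x*)**2 / 2   =>   x* = 1 - sqrt(2k)
    return 1 - math.sqrt(2 * k)

def simulate_selling(k, trials=100_000, seed=1):
    """Monte Carlo estimate of E[X_n - n*k] under the reservation-price rule."""
    rng = random.Random(seed)
    x_star = reservation_price(k)
    total = 0.0
    for _ in range(trials):
        n = 0
        while True:
            n += 1
            x = rng.random()             # today's offer
            if x >= x_star:              # sell: earn the offer minus n days' ads
                total += x - n * k
                break
    return total / trials
```

With k = 0.02 the threshold is x* = 0.8, and the simulated average earnings come out close to 0.8, consistent with the value of search equalling the reservation price in this model.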

Secretary problem

Main article: Secretary problem

(Example where (X_i) is a finite sequence)

You are observing a sequence of objects which can be ranked from best to worst. You wish to choose a stopping rule which maximises your chance of picking the best object.

Here, if R_1, \ldots, R_n (n is some large number, perhaps) are the ranks of the objects, and y_i is the chance you pick the best object if you stop intentionally rejecting objects at step i, then (R_i) and (y_i) are the sequences associated with this problem. This problem was solved in the early 1960s by several people. An elegant solution to the secretary problem and several modifications of this problem is provided by the more recent odds algorithm of optimal stopping (Bruss algorithm).
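The classical cutoff rule can be checked by simulation. The sketch below (illustrative; the function name is hypothetical) rejects the first r candidates outright and then takes the first candidate better than everything seen so far; with r ≈ n/e the success probability approaches 1/e ≈ 0.368.

```python
import random

def best_pick_probability(n, r, trials=50_000, seed=0):
    """Estimate the probability that the cutoff rule picks the best of n.

    Rule: reject the first r candidates, then accept the first candidate
    better than all seen so far (if none appears, you end up with the last).
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))           # quality scores; n - 1 is the best
        rng.shuffle(ranks)               # candidates arrive in random order
        best_seen = max(ranks[:r]) if r else -1
        choice = next((x for x in ranks[r:] if x > best_seen), ranks[-1])
        wins += (choice == n - 1)
    return wins / trials
```

For n = 100 and r = 37 (roughly n/e) the estimate lands near 0.37, matching the well-known asymptotic answer.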

Search theory

Main article: Search theory

Economists have studied a number of optimal stopping problems similar to the 'secretary problem', and typically call this type of analysis 'search theory'. Search theory has especially focused on a worker's search for a high-wage job, or a consumer's search for a low-priced good.

Option trading

In the trading of options on financial markets, the holder of an American option is allowed to exercise the right to buy (or sell) the underlying asset at a predetermined price at any time before or at the expiry date. Therefore, the valuation of American options is essentially an optimal stopping problem. Consider a classical Black–Scholes set-up and let r be the risk-free interest rate and \delta and \sigma be the dividend rate and volatility of the stock. The stock price S follows geometric Brownian motion

 S_t = S_0 \exp\left\{ \left(r - \delta - \frac{\sigma^2}{2}\right) t + \sigma B_t \right\}

under the risk-neutral measure.

When the option is perpetual, the optimal stopping problem is

 V(x) = \sup_{\tau} \mathbb{E}_x \left[ e^{-r\tau} g(S_\tau) \right]

where the payoff function is  g(x) = (x-K)^+ for a call option and  g(x) = (K-x)^+ for a put option. The variational inequality is

 \max\left\{ \frac{1}{2} \sigma^2 x^2 V''(x) + (r-\delta) x V'(x) - rV(x), g(x) - V(x) \right\} = 0

for all x \in (0,\infty)\setminus \{b\} where  b is the exercise boundary. The solution is known to be[3]

On the other hand, when the expiry date is finite, the problem is associated with a two-dimensional free-boundary problem with no known closed-form solution. Various numerical methods can, however, be used. See Black–Scholes model § American options for various valuation methods, as well as Fugit for a discrete, tree-based calculation of the optimal time to exercise.
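As an illustration of such a numerical method, the following is a standard textbook sketch (not taken from this article) of Cox–Ross–Rubinstein binomial valuation of a finite-expiry American put: at every node the stopping decision is the maximum of immediate exercise and discounted risk-neutral continuation.

```python
import math

def american_put_crr(S0, K, r, sigma, T, steps, delta=0.0):
    """Binomial (CRR) value of an American put with strike K, expiry T.

    A standard textbook sketch: backward induction on a recombining tree,
    taking max(exercise, continuation) at each node.  delta is the
    continuous dividend yield.
    """
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))      # up factor
    d = 1 / u                                # down factor
    disc = math.exp(-r * dt)                 # one-period discount
    p = (math.exp((r - delta) * dt) - d) / (u - d)   # risk-neutral up-probability
    # Terminal payoffs: j up-moves out of `steps`.
    V = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):
        V = [max(K - S0 * u**j * d**(i - j),                    # exercise now
                 disc * (p * V[j + 1] + (1 - p) * V[j]))        # continue
             for j in range(i + 1)]
    return V[0]
```

By construction the value at the root is never below the immediate exercise payoff, reflecting the early-exercise feature that distinguishes the American put from its European counterpart.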

References

  1. Peskir, Goran; Shiryaev, Albert (2006). "Optimal Stopping and Free-Boundary Problems". Lectures in Mathematics. ETH Zürich. doi:10.1007/978-3-7643-7390-0. ISBN 978-3-7643-2419-3.
  2. Øksendal, B.; Sulem, A. S. (2007). "Applied Stochastic Control of Jump Diffusions". doi:10.1007/978-3-540-69826-5. ISBN 978-3-540-69825-8.
  3. Karatzas, Ioannis; Shreve, Steven E. (1998). "Methods of Mathematical Finance". Stochastic Modelling and Applied Probability 39. doi:10.1007/b98840. ISBN 978-0-387-94839-3.

This article is issued from Wikipedia - version of the Thursday, January 14, 2016. The text is available under the Creative Commons Attribution/Share Alike but additional terms may apply for the media files.