Ruin Probabilities

A Simple Random Walk

A simple random walk models the wealth of a gambler betting on fair coin flips. To start, let \{X_{i}\}_{i=1}^{\infty} be a sequence of independent random variables, each taking values 1 or -1 with equal probability \frac{1}{2}. Therefore, at each turn the gambler can either win one dollar or lose one dollar.

By construction, the sequence \{X_{i}\} is iid with mean \operatorname{E}(X_{i}) = 0.5 \times 1 + 0.5 \times (-1) = 0 and variance \operatorname{V}(X_{i}) = 0.5 \times (1 - 0)^{2} + 0.5 \times (-1 - 0)^{2} = 1.

Let S_{0} be the initial wealth of the gambler. For n \geq 1, define S_{n} = S_{0} + \sum_{i=1}^{n} X_{i}. We will assume that the wealth may become negative, which requires the gambler to honor any potential losses.
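
Before turning to the analysis, it can help to see a few realizations of S_{n}. The following is a minimal simulation sketch in Python (our own illustration; the function name simulate_walk and the use of numpy are choices made here, not part of the text).

```python
import numpy as np

def simulate_walk(n_steps, s0=0, seed=None):
    """Simulate one path of the simple random walk S_n = S_0 + X_1 + ... + X_n."""
    rng = np.random.default_rng(seed)
    # Each X_i takes the values +1 or -1 with probability 1/2 each.
    steps = rng.choice([1, -1], size=n_steps)
    # Prepend S_0 and accumulate the steps to obtain S_0, S_1, ..., S_n.
    return np.concatenate(([s0], s0 + np.cumsum(steps)))

print(simulate_walk(n_steps=10, s0=0, seed=42))  # the gambler's wealth after each flip
```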

Define \tau as the first time n > 0 when S_{n} reaches either A or -B: \tau = \min\{n > 0 : S_{n} = A \text{ or } S_{n} = -B\}. At the random time \tau, S_\tau is either A or -B, and we seek to compute the probability of the gambler winning A dollars before losing B dollars, that is, P(S_{\tau} = A \mid S_{0} = 0). It is also natural to ask how long we need to wait for the gambler to either win A dollars or lose B dollars. We will denote this expectation by \operatorname{E}(\tau \mid S_{0} = 0).
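
Both quantities can be estimated by Monte Carlo simulation, which also gives us a benchmark against which to check the closed-form answers derived below. Here is a minimal sketch (our own; the function names hit_time and monte_carlo are not from the text), assuming the gambler starts at S_{0} = 0.

```python
import numpy as np

def hit_time(A, B, s0=0, rng=None):
    """Play until the wealth reaches A or -B; return (tau, S_tau)."""
    if rng is None:
        rng = np.random.default_rng()
    s, n = s0, 0
    while -B < s < A:
        s += rng.choice([1, -1])  # one fair coin flip
        n += 1
    return n, s

def monte_carlo(A, B, n_paths=10_000, seed=0):
    """Estimate P(S_tau = A | S_0 = 0) and E(tau | S_0 = 0) from simulated paths."""
    rng = np.random.default_rng(seed)
    results = [hit_time(A, B, rng=rng) for _ in range(n_paths)]
    p_win = np.mean([s == A for _, s in results])
    mean_tau = np.mean([t for t, _ in results])
    return p_win, mean_tau

print(monte_carlo(A=10, B=5))  # compare with the closed forms B/(A+B) and AB derived below
```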

Behavior of \tau

To solve these problems, we first need to make sure that the questions have an answer! That is, how do we know for sure that the gambler’s net winnings will eventually reach A or -B?

Clearly, if the gambler wins A + B times in a row then she is guaranteed to reach a wealth level of A even if she starts at S_{0} = -B. Let’s denote by E_{k} the event that the gambler wins consecutively in the interval [k(A + B) + 1, (k + 1)(A + B)]. For example, if we take k = 0, A = 2 and B = 1, then A + B = 3. If you win at times n = 1, n = 2, and n = 3, you will have gained 3 dollars, which guarantees that you reach A = 2 even if you start at S_{0} = -B = -1.

Clearly, the events \{E_{k}\} are independent and p = \operatorname{P}(E_{k}) = \left(\frac{1}{2}\right)^{A + B} > 0. What is the probability that the gambler is still playing after the first n blocks of A + B flips? If \tau > n(A + B), then none of the events E_{0}, \ldots, E_{n - 1} occurred, so \operatorname{P}(\tau > n (A + B) \mid S_{0} = 0) \leq \operatorname{P}\left(\cap_{k = 0}^{n - 1} E_{k}^{C}\right) = (1 - p)^n \xrightarrow[n \rightarrow \infty]{} 0. Since we also have that \operatorname{P}(\tau = \infty \mid S_{0} = 0) \leq \operatorname{P}(\tau > n (A + B) \mid S_{0} = 0), for all n \geq 0, we see that \operatorname{P}(\tau = \infty \mid S_{0} = 0) = 0. In words, it must take a finite time for the wealth of the gambler to reach A or -B.

Furthermore, for d = 1, 2, \ldots we also have that for k \geq 1, \tau^{d} \mathbf{1}_{\{(k - 1) (A + B) < \tau \leq k (A + B)\}} \leq k^{d} (A + B)^{d} \mathbf{1}_{\{(k - 1) (A + B) < \tau\}}. Summing over k = 1, 2, \ldots we find that \tau^{d} \leq \sum_{k = 1}^{\infty} k^{d} (A + B)^{d} \mathbf{1}_{\{(k - 1) (A + B) < \tau\}}. Finally, taking expectations on both sides yields \operatorname{E}(\tau^{d}) \leq \sum_{k = 1}^{\infty} k^{d} (A + B)^{d} \operatorname{P}((k - 1) (A + B) < \tau) \leq \sum_{k = 1}^{\infty} k^{d} (A + B)^{d} (1 - p)^{k - 1}. The series on the right converges by the ratio test, since \lim_{k \rightarrow \infty} \frac{(k + 1)^{d} (A + B)^{d} (1 - p)^{k}}{k^{d} (A + B)^{d} (1 - p)^{k - 1}} = \lim_{k \rightarrow \infty} \left(1 + \frac{1}{k}\right)^{d} (1 - p) = 1 - p < 1. Therefore, we have shown that \operatorname{E}(\tau^{d}) < \infty for any integer d \geq 1. We summarize these findings in the following result.
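
The series bound derived above is easy to evaluate numerically. The short sketch below (our own illustration, not part of the argument) truncates the series for a small example with A = 2 and B = 1; the resulting bounds are very loose, but the only point that matters is that they are finite.

```python
def series_bound(A, B, d, n_terms=10_000):
    """Truncated evaluation of sum_k k^d (A+B)^d (1-p)^(k-1), an upper bound on E(tau^d)."""
    p = 0.5 ** (A + B)  # probability of A + B consecutive wins
    return sum(k ** d * (A + B) ** d * (1 - p) ** (k - 1) for k in range(1, n_terms + 1))

print(series_bound(A=2, B=1, d=1))  # finite upper bound on E(tau)
print(series_bound(A=2, B=1, d=2))  # finite upper bound on E(tau^2)
```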

Stopping Time of the Random Walk

Let \{X_{i}\}_{i=1}^{\infty} be a sequence of independent random variables, each taking values 1 or -1 with equal probability \frac{1}{2}. For n \geq 1, define S_{n} = S_{0} + \sum_{i=1}^{n} X_{i}, where -B < S_{0} < A, and A and B are two positive integers. Define \tau as the first time n > 0 when S_{n} reaches either A or -B: \tau = \min\{n > 0 : S_{n} = A \text{ or } S_{n} = -B\}. We then have that \operatorname{P}(\tau = \infty) = 0 and \operatorname{E}(\tau^{d}) < \infty for any integer d \geq 1.

Solving for the Probability

In this section we solve for P(S_{\tau} = A \mid S_{0} = 0). Interestingly, since the gambler can keep playing without any explicit time expiration, the problem does not depend on time itself but only on the gambler’s current wealth. In other words, the only relevant state variable is the wealth level.

Let’s write f(K) = P(S_{\tau} = A \mid S_{0} = K). In words, f(K) denotes the probability of reaching A before -B if we start with a wealth level -B \leq K \leq A. Clearly, f(-B) = 0 since if we start at S_{0} = - B the game is over. Also, f(A) = 1 since if we start at S_{0} = A we have reached our objective. These are our boundary conditions that will allow us to solve for f(K).

Now, we need to get an equation. We note that if we start with S_{0} = K and we win, then our wealth increases to K + 1 and we start all over as if time were reset to zero but our initial wealth is K + 1. Similarly, if we start with S_{0} = K and we lose, then our wealth decreases to K - 1 and the problem is the same with an initial wealth of K - 1. Thus, it must be the case that f(K) = \frac{1}{2} f(K - 1) + \frac{1}{2} f(K + 1). \tag{1}

Equation (1) is a second-order linear difference equation in the wealth level K. Equations like this abound in probability and finance theory, as they capture the dynamics of quantities like probabilities, expectations or values. Solutions to equations like (1) are typically of the form f(K) = z^{K}, where z is a real or complex number. If we try f(K) = z^{K} in equation (1) we find z^{K} = \frac{1}{2} z^{K - 1} + \frac{1}{2} z^{K + 1}, so that z^{2} - 2 z + 1 = (z - 1)^{2} = 0. The only root of this second-order polynomial is z = 1, so we know that a constant is a solution. But a second-order difference equation should have two linearly independent solutions, just as a second-order polynomial generally has two distinct roots. When there is a repeated root, we can try K z^{K}, which here equals K and which we can verify is also a solution. The general solution to equation (1) is then f(K) = \alpha + \beta K. \tag{2}

To solve for \alpha and \beta in equation (2) we just use the fact that f(-B) = 0 and f(A) = 1, so that \begin{aligned} \alpha - \beta B & = 0, \\ \alpha + \beta A & = 1. \end{aligned} This implies that \alpha = \frac{B}{A + B} and \beta = \frac{1}{A + B} so that f(K) = \frac{B}{A + B} + \frac{K}{A + B}.
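
As a sanity check, equation (1) together with the boundary conditions f(-B) = 0 and f(A) = 1 is a small linear system that can be solved numerically and compared to the closed form f(K) = \frac{B + K}{A + B}. The sketch below is our own illustration of that check using numpy.

```python
import numpy as np

def ruin_probability_numeric(A, B):
    """Solve f(K) = 0.5 f(K-1) + 0.5 f(K+1) with f(-B) = 0 and f(A) = 1."""
    levels = np.arange(-B + 1, A)       # interior wealth levels K
    n = len(levels)
    M = np.zeros((n, n))
    rhs = np.zeros(n)
    for i, K in enumerate(levels):
        M[i, i] = 1.0
        if K - 1 > -B:
            M[i, i - 1] = -0.5          # coefficient on f(K - 1); f(-B) = 0 needs no term
        if K + 1 < A:
            M[i, i + 1] = -0.5          # coefficient on f(K + 1)
        else:
            rhs[i] = 0.5                # f(A) = 1 moves to the right-hand side
    return dict(zip(levels, np.linalg.solve(M, rhs)))

A, B = 10, 5
f_numeric = ruin_probability_numeric(A, B)
errors = [abs(f_numeric[K] - (B + K) / (A + B)) for K in f_numeric]
print(max(errors))  # agreement up to floating-point error
```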

We can finally conclude that P(S_{\tau} = A \mid S_{0} = 0) = f(0) = \frac{B}{A + B}. Therefore, the probability of making $10 before losing $10 is just 10 / 20 = 1 / 2, and the probability of making $10 before losing $5 is 5 / 15 = 1 / 3.

Solving for the Expectation

We now want to determine how long it takes on average to either win $A or lose $B. That is, we want to compute \operatorname{E}(\tau \mid S_{0} = 0). As before, we define g(K) = \operatorname{E}(\tau \mid S_{0} = K), and note that g(-B) = g(A) = 0. We can then work the recursion as follows. If we are currently at S_{0} = K and we play once more, one unit of time has elapsed, and we face the same problem with either one more or one less unit of wealth, each happening with probability 1 / 2. Thus, g(K) = 1 + \frac{1}{2} g(K - 1) + \frac{1}{2} g(K + 1). \tag{3}

To solve (3), we already have the homogeneous solution computed earlier. We can find a particular solution by trying \gamma K^{2}, which is the next power of K. Therefore, \gamma K^{2} = 1 + \frac{\gamma}{2} (K - 1)^{2} + \frac{\gamma}{2} (K + 1)^{2}, or K^{2} = \frac{1}{\gamma} + \frac{1}{2} (K^{2} - 2 K + 1) + \frac{1}{2} (K^{2} + 2 K + 1). Thus, we find that 1 / \gamma + 1 = 0, or \gamma = - 1. The general solution to (3) is then g(K) = \alpha + \beta K - K^{2}. \tag{4} We solve for \alpha and \beta in (4) using the boundary conditions \begin{aligned} \alpha - \beta B - B^{2} & = 0, \\ \alpha + \beta A - A^{2} & = 0. \end{aligned} We find that \beta (A + B) = A^{2} - B^{2} = (A + B) (A - B), so \beta = A - B and \alpha = (A - B) B + B^{2} = AB.

Thus, we conclude that g(K) = AB + (A - B) K - K^{2}, and \operatorname{E}(\tau \mid S_{0} = 0) = g(0) = AB.
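
As with the probability, we can verify this formula directly: the closed form satisfies recursion (3) at every interior wealth level along with the boundary conditions g(-B) = g(A) = 0. A short check (our own sketch) for A = 10 and B = 5:

```python
def g(K, A, B):
    """Closed-form expected stopping time: g(K) = AB + (A - B)K - K^2."""
    return A * B + (A - B) * K - K ** 2

A, B = 10, 5
assert g(-B, A, B) == 0 and g(A, A, B) == 0            # boundary conditions
for K in range(-B + 1, A):                             # interior wealth levels
    assert g(K, A, B) == 1 + 0.5 * g(K - 1, A, B) + 0.5 * g(K + 1, A, B)
print(g(0, A, B))  # E(tau | S_0 = 0) = A * B = 50
```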