12. Modeling Stock Prices in Continuous-Time#

12.1. Stochastic Processes#

A stochastic process describes the evolution of a random variable over time. In finance we use stochastic processes to model the evolution of stock prices, interest rates, volatility, foreign exchange rates, commodity prices, etc. We distinguish between:

  • Discrete-time processes: The values of the process \(\left\{ S_{n} \right\}\) are allowed to change only at discrete time intervals, i.e. \(n \in \{ 0, 1, 2, \ldots, N \} \) or \(n \in \mathbb{N}.\)

  • Continuous-time processes: The stochastic process \(\left\{ S_{t} \right\}\) is defined for all \(t \in [0, T].\)

We will now consider several stochastic processes that are commonly found in the finance literature. We start with discrete-time processes and then extend the analysis to continuous-time processes. The analysis is mainly informal, as the theory of stochastic processes in continuous time requires advanced mathematical concepts that are beyond the scope of these notes.

12.1.1. Random Walks#

A one-dimensional random walk \(\left\{ X_{n} \right\}\) is a stochastic process defined as:

\[\begin{align*} X_{0} & = x_{0}, \\ X_{n + 1} & = X_{n} + e_{n + 1}, \\ \end{align*}\]

where \(\left\{ e_{n} \right\}\) are independent and identically distributed (i.i.d.) random variables such that \(\ev(e_{n}) = 0\) for all \(n \geq 1.\) Note that \(e_{n}\) need not be normally distributed. For example, \(e_{n}\) could be such that:

\[\begin{align*} \prob(e_{n} = 1) = \prob(e_{n} = -1) = 0.5. \end{align*}\]

Fig. 12.1 The figure plots simulated paths for the random walk defined as \(X_{0}= 0,\) \(X_{n + 1} = X_{n} + e_{n + 1},\) where \(\{e_{n}\}\) is an i.i.d. sequence taking the values \(1\) and \(-1\) with equal probability, and \(n \leq 5000.\)#

We can see that the different paths diverge as \(n\) grows larger. Indeed, we have that:

\[\begin{align*} X_{n} & = X_{n-1} + e_{n} \\ & = X_{n-2} + e_{n-1} + e_{n} \\ & \vdotswithin{=} \\ & = X_{0} + e_{1} + \cdots + e_{n-1} + e_{n} \\ & = X_{0} + \sum_{i=1}^{n} e_{i} \\ \end{align*}\]

Denoting \(\var(e_{n}) = \sigma^{2}\) and using the independence of \(\{e_{n}\}\), we have that:

\[\begin{equation*} \var(X_{n}) = n \sigma^{2}. \end{equation*}\]

Therefore, the variance of \(X_{n}\) increases linearly with \(n\) as \(n \rightarrow \infty.\)
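The linear growth of the variance is easy to verify by simulation. The sketch below (a minimal NumPy example; the path and step counts are illustrative choices) simulates the \(\pm 1\) random walk of Fig. 12.1 and compares the sample variance of \(X_{n}\) with \(n \sigma^{2} = n\):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 500

# i.i.d. steps taking +1 or -1 with equal probability, so sigma^2 = 1
steps = rng.choice([-1, 1], size=(n_paths, n_steps))
X = np.cumsum(steps, axis=1)  # X_n = X_0 + e_1 + ... + e_n, with X_0 = 0

# Var(X_n) should be close to n * sigma^2 = n
sample_var = X[:, -1].var()
print(sample_var / n_steps)  # close to 1
```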

12.1.2. Martingales#

A martingale is closely related to the random walk but slightly more general. A discrete-time martingale \(\left\{Z_{n}\right\}_{n \geq 0}\) is a stochastic process such that:

\[\begin{equation*} \ev\left(Z_{n+1} \;\middle|\; Z_{1}, Z_{2}, \ldots, Z_{n}\right) = Z_{n}. \end{equation*}\]

Intuitively, the history of the process \(\{Z_{n}\}\) is irrelevant to forecast \(Z_{n+1}\). The only thing that matters is the current value of \(Z_{n}\). A random walk is of course a martingale, but note that a martingale need not be a random walk. For example, consider the process \(\left\{ Z_{n} \right\}\):

\[\begin{equation*} Z_{n+1} = Z_{n} \varepsilon_{n+1}, \end{equation*}\]

where \(\left\{ \varepsilon_{n} \right\}\) is an i.i.d. sequence such that \(\ev\left(\varepsilon_{n}\right) = 1\) for all \(n \geq 1\). It is a martingale since:

\[\begin{align*} \ev\left(Z_{n+1} \;\middle|\; Z_{1}, Z_{2}, \ldots, Z_{n}\right) & = \ev\left(Z_{n} \varepsilon_{n+1} \;\middle|\; Z_{1}, Z_{2}, \ldots, Z_{n}\right) \\ & = Z_{n} \ev\left(\varepsilon_{n+1} \;\middle|\; Z_{1}, Z_{2}, \ldots, Z_{n}\right) \\ & = Z_{n} \ev\left(\varepsilon_{n+1}\right) \\ & = Z_{n}, \\ \end{align*}\]

where the third equality uses the independence of \(\varepsilon_{n+1}\) from the history of the process.
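A quick simulation illustrates the martingale property of this multiplicative process. In the sketch below (a minimal NumPy example) \(\varepsilon_{n}\) takes the values \(0.5\) and \(1.5\) with equal probability, an illustrative choice satisfying \(\ev(\varepsilon_{n}) = 1\):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 100_000, 10

# i.i.d. multiplicative shocks with E(eps) = 1
eps = rng.choice([0.5, 1.5], size=(n_paths, n_steps))
Z = np.cumprod(eps, axis=1)  # Z_n = Z_0 * eps_1 * ... * eps_n, with Z_0 = 1

# The martingale property implies E(Z_n) = Z_0 = 1 for every n,
# even though individual paths wander far from 1
print(Z.mean(axis=0))
```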

12.1.3. Wiener Processes#

A very useful random walk can be defined as follows:

\[\begin{equation*} W_{t + \Delta t} = W_{t} + \sqrt{\Delta t}\, e_{t + \Delta t}, \end{equation*}\]

where \(W_{0} = 0\) and \(\left\{ e_{t} \right\}\) are i.i.d. such that \(e_{t} \sim N(0, 1)\). Note that here time increases each step by \(\Delta t\). Letting \(\Delta t \rightarrow 0,\) the resulting process \(\left\{ W_{t} \right\}\) for \(t \in [0, T]\) is called a Wiener process or Brownian motion.

The Wiener process has the following properties:

  • The sample paths are continuous.

  • For \(s < t\), the increment \(W_{t} - W_{s} \sim N(0, t - s)\), i.e. is normally distributed with mean \(0\) and variance \(t - s.\)

  • Increments are independent of each other.

  • In particular, note that \(W_{t} \sim N(0, t)\) for \(0 < t \leq T\).


Fig. 12.2 The figure plots simulated paths for \(\left\{W_{t}\right\}\) where \(t \in [0, 10]\).#
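The construction above suggests a direct way to simulate approximate Brownian paths: accumulate i.i.d. normal increments of variance \(\Delta t\). A minimal sketch (NumPy; the discretization and path counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 10.0, 1_000, 50_000
dt = T / n_steps

# W_{t + dt} = W_t + sqrt(dt) * e, with e ~ N(0, 1) and W_0 = 0
dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
W = np.cumsum(dW, axis=1)

# W_T should be N(0, T): sample mean near 0 and sample variance near T
print(W[:, -1].mean(), W[:, -1].var())
```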

12.1.4. Geometric Brownian Motion#

Now we turn our attention to modeling stock prices \(\left\{ S_{t} \right\}\). We need to be careful, though, as stock prices cannot be negative. We would also like the model to display a given drift \(\mu\) and volatility \(\sigma\).

To achieve this, we model the percentage change of a stock price between \(t\) and \(t + \Delta t\) as:

\[\begin{equation*} \frac{\Delta S_{t}}{S_{t}} = \mu \Delta t + \sigma \Delta W_{t} \end{equation*}\]

Note that the percentage change in price over an interval \(\Delta t\) is normally distributed with mean \(\mu \Delta t\) and variance \(\sigma^{2} \Delta t\). Letting \(\Delta t \rightarrow 0\), the resulting process \(\left\{ S_{t} \right\}\) for \(t \in [0, T]\) is called a geometric Brownian motion (GBM).


Fig. 12.3 The figure plots simulated paths for a geometric Brownian motion \(\left\{S_{t}\right\}\) where \(t \in [0, 10]\), \(S_{0} = 100\), \(\mu = 0.20\), and \(\sigma = 0.20\). The dashed line denotes \(\ev\left( S_{t} \right) = S_{0} e^{\mu t}\).#
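Paths like those in Fig. 12.3 can be generated by iterating the percentage-change equation directly. The sketch below (a minimal NumPy example; step and path counts are illustrative) applies \(\Delta S_{t}/S_{t} = \mu \Delta t + \sigma \Delta W_{t}\) step by step and checks the sample mean against \(\ev(S_{T}) = S_{0} e^{\mu T}\):

```python
import numpy as np

rng = np.random.default_rng(0)
S0, mu, sigma = 100.0, 0.20, 0.20
T, n_steps, n_paths = 10.0, 1_000, 50_000
dt = T / n_steps

# Iterate S_{t+dt} = S_t * (1 + mu*dt + sigma*dW) along each path
S = np.full(n_paths, S0)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    S *= 1.0 + mu * dt + sigma * dW

# The sample mean should be close to E(S_T) = S0 * exp(mu * T)
print(S.mean(), S0 * np.exp(mu * T))
```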

12.2. Stochastic Calculus#

Once we have defined how \(S_{t}\) behaves over time, we turn our attention to modeling how a function of \(S_{t}\) behaves over time. We are interested in this because we want to price derivatives as a function of the relevant state variables. We will see later that when the stock price is driven by a single source of uncertainty, the value of a call or put option depends only on the stock price itself and the time-to-maturity, i.e. the price of the derivative when the stock price is \(S\) and the time-to-maturity is \(T\) is of the form \(F(S, T)\).

We will start by studying how \(X_{t} = F(S_{t})\) behaves over time and will later add the time dimension to the problem. In what follows we assume that \(F(\cdot)\) is a smooth function whose first and second derivatives exist.

12.2.1. Ito’s Lemma#

Remember that the Wiener process increment is defined as:

\[\begin{equation*} \Delta W_{t} = W_{t + \Delta t} - W_{t} = \sqrt{\Delta t} e_{t + \Delta t}. \end{equation*}\]

Consider a GBM process \(\left\{S_{t}\right\}\) and a smooth function \(F(\cdot)\). A second order Taylor approximation around \(S_{t}\) implies:

\[\begin{equation*} F(S_{t} + \Delta S_{t}) \approx F(S_{t}) + F'(S_{t}) (\Delta S_{t}) + \frac{1}{2} F''(S_{t}) (\Delta S_{t})^{2} \end{equation*}\]

Using the results derived in the Appendix, we have that:

\[\begin{align*} (\Delta S_{t})^{2} & = (\mu S_{t} \Delta t + \sigma S_{t} \Delta W_{t})^{2} \\ & = (\mu S_{t})^{2} \underbrace{(\Delta t)^{2}}_{\approx 0} + 2 \mu \sigma (S_{t})^{2} \underbrace{(\Delta t)(\Delta W_{t})}_{\approx 0} + (\sigma S_{t})^{2} \underbrace{(\Delta W_{t})^{2}}_{\approx \Delta t} \\ & \approx \sigma^{2} S_{t}^{2} \Delta t \end{align*}\]

We can finally conclude that:

\[\begin{equation*} \Delta F(S_{t}) \approx \left( \mu S_{t} F'(S_{t}) + \frac{1}{2} \sigma^{2} S_{t}^{2} F''(S_{t}) \right) \Delta t + \sigma S_{t} F'(S_{t}) \Delta W_{t} \end{equation*}\]

The continuous-time analog of the previous analysis is as follows.

Property 12.1 (Ito’s Lemma for GBM)

Consider a GBM process \(\left\{S_{t}\right\}\) given by:

(12.1)#\[\begin{equation} dS = \mu S dt + \sigma S dW \label{gbm} \end{equation}\]

and a twice-differentiable function \(F(S)\). Then we have that:

\[\begin{equation*} dF = \left( \mu S F'(S) + \frac{1}{2} \sigma^{2} S^{2} F''(S) \right) dt + \sigma S F'(S) dW \end{equation*}\]

It is usually more convenient to use the box calculus when working with stochastic processes defined through Brownian motions.

Property 12.2 (Box Calculus)

Consider the GBM process \(\left\{S_{t}\right\}\) defined in \(\eqref{gbm}\). The box calculus rules for Ito processes are:

\[\begin{align*} (dt)^{2} & = 0 \\ (dt)(dW) & = (dW)(dt) = 0 \\ (dW)^{2} & = dt \\ \end{align*}\]

Furthermore, denote \(F_{S} = F'(S)\) and \(F_{SS} = F''(S).\) Ito’s Lemma can then be restated as:

\[\begin{equation*} dF = F_{S} dS + \frac{1}{2} F_{SS} (dS)^{2} \end{equation*}\]

where

\[\begin{equation*} (dS)^{2} = (\mu S dt + \sigma S dW)^{2} = \sigma^{2} S^{2} dt \end{equation*}\]

12.2.2. Solving for GBM#

Define \(X = \ln(S)\), which implies \(S = e^{X}\). We have that \(F_{S} = 1/S\) and \(F_{SS} = -1/S^{2}\), which implies that:

\[\begin{align*} dX & = F_{S} dS + \frac{1}{2} F_{SS} (dS)^{2} \\ & = \frac{1}{S} \left( \mu S dt + \sigma S dW \right) + \frac{1}{2} \left( -\frac{1}{S^{2}} \right) \sigma^{2} S^{2} dt \\ & = \left( \mu dt + \sigma dW \right) - \frac{1}{2} \sigma^{2} dt \\ & = \left( \mu - \frac{1}{2} \sigma^{2} \right) dt + \sigma dW \\ \end{align*}\]

We can then solve for \(X_{T}\):

\[\begin{align*} X_{T} - X_{0} & = \int_{0}^{T} dX = \int_{0}^{T} \left( \mu - \frac{1}{2} \sigma^{2} \right) dt + \int_{0}^{T} \sigma dW \\ & = \left( \mu - \frac{1}{2} \sigma^{2} \right) T + \sigma W_{T} \\ \end{align*}\]

and conclude that:

(12.2)#\[\begin{equation} S_{T} = S_{0} \exp\left( \left( \mu - \frac{1}{2} \sigma^{2} \right) T + \sigma W_{T} \right) \label{gbm_solution} \end{equation}\]
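A practical consequence of \(\eqref{gbm_solution}\) is that \(S_{T}\) can be simulated exactly in one step, with no time discretization. A minimal sketch (NumPy; parameters follow Fig. 12.3):

```python
import numpy as np

rng = np.random.default_rng(0)
S0, mu, sigma, T = 100.0, 0.20, 0.20, 10.0
n_paths = 100_000

# W_T ~ N(0, T), so S_T follows directly from the closed-form solution
W_T = np.sqrt(T) * rng.standard_normal(n_paths)
S_T = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)

# ln(S_T) should have mean ln(S0) + (mu - sigma^2/2) T
print(np.log(S_T).mean(), np.log(S0) + (mu - 0.5 * sigma**2) * T)
```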

12.3. Properties of Stock Prices Following a GBM#

Equation \(\eqref{gbm_solution}\) can be rewritten as:

\[\begin{equation*} \ln(S_{T}) = \ln(S_{0}) + \left( \mu - \frac{1}{2} \sigma^{2} \right) T + \sigma W_{T} \end{equation*}\]

We can conclude that \(\ln(S_{T}) \sim N(m, s^{2})\), where:

\[\begin{align*} m & = \ln(S_{0}) + \left( \mu - \frac{1}{2} \sigma^{2} \right) T \\ s & = \sigma \sqrt{T} \end{align*}\]

In other words, \(S_{T}\) is lognormally distributed, i.e. \(\ln(S_{T})\) is normally distributed with mean \(m\) and variance \(s^{2}\).

Example 12.1

Consider a stock whose price at time \(t\) is given by \(S_{t}\) and that follows a GBM. The expected return is 12% per year and the volatility is 25% per year. The current spot price is $25. If we denote \(X_{T} = \ln(S_{T})\) and take \(T = 0.5\), we have that:

\[\begin{align*} \ev(X_{T}) & = \ln(25) + \left(0.12 - 0.5(0.25)^{2}\right)(0.5) = 3.2633 \\ \stdev(X_{T}) & = 0.25 \sqrt{0.5} = 0.1768 \\ \end{align*}\]

Hence, the 95% confidence interval for \(S_{T}\) is given by:

\[\begin{equation*} [e^{3.2633 - 1.96(0.1768)}, e^{3.2633 + 1.96(0.1768)}] = [18.48, 36.96] \end{equation*}\]

Therefore, there is a 95% probability that the stock price in 6 months will lie between $18.48 and $36.96.
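The calculations in Example 12.1 can be replicated in a few lines (a minimal NumPy sketch):

```python
import numpy as np

S0, mu, sigma, T = 25.0, 0.12, 0.25, 0.5

# Parameters of ln(S_T) ~ N(m, s^2)
m = np.log(S0) + (mu - 0.5 * sigma**2) * T
s = sigma * np.sqrt(T)

# 95% confidence interval for S_T
lo, hi = np.exp(m - 1.96 * s), np.exp(m + 1.96 * s)
print(round(m, 4), round(s, 4))    # 3.2633 0.1768
print(round(lo, 2), round(hi, 2))  # 18.48 36.96
```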

12.3.1. Moments of the Stock Price#

The fact that the stock price at time \(T\) is log-normally distributed allows us to compute the mean and standard deviation of \(S_{T}\).

Property 12.3 (Moments of the Stock Price)

The expectation and standard deviation of \(S_{T}\) are given by:

\[\begin{align*} \ev(S_{T}) & = S_{0} e^{\mu T} \\ \stdev(S_{T}) & = \ev(S_{T}) \sqrt{e^{\sigma^{2} T} - 1} \\ \end{align*}\]

Therefore, the expected stock price grows at a rate \(\mu\). The variance of \(S_{T}\), however, is large and increases exponentially with time.

Example 12.2

Consider a stock whose price at time \(t\) is given by \(S_{t}\) and that follows a GBM. The expected return is 12% per year and the volatility is 25% per year. The current spot price is $25. The expected price and standard deviation 6 months from now are:

\[\begin{align*} \ev(S_{T}) & = 25 e^{0.12 (0.5)} = \$26.55 \\ \stdev(S_{T}) & = 26.55 \sqrt{e^{0.25^{2} (0.5)} - 1} = \$4.73 \\ \end{align*}\]
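The numbers in Example 12.2 follow directly from Property 12.3 (a minimal sketch):

```python
import numpy as np

S0, mu, sigma, T = 25.0, 0.12, 0.25, 0.5

mean_ST = S0 * np.exp(mu * T)                         # E(S_T) = S0 e^{mu T}
std_ST = mean_ST * np.sqrt(np.exp(sigma**2 * T) - 1)  # stdev(S_T)
print(round(mean_ST, 2), round(std_ST, 2))  # 26.55 4.73
```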

12.3.2. Computing Partial Expectations#

Since \(\ln(S_{T}) \sim \Normal(m, s^{2})\), we can use Property 11.4 introduced in Statistics Preliminaries to show the following property.

Property 12.4 (Partial Expectations of the Stock Price)

Consider a non-dividend paying stock that follows a GBM as defined in \(\eqref{gbm}\). Then we have that:

\[\begin{align*} \ev\left(S_{T} \1{S_{T} > K}\right) & = S_{0} e^{\mu T} \cdf\left( \frac{\ln(S_{0}/K) +(\mu + \frac{1}{2} \sigma^{2})T}{\sigma \sqrt{T}} \right) \\ \ev\left(K \1{S_{T} > K}\right) & = K \cdf\left( \frac{\ln(S_{0}/K) +(\mu - \frac{1}{2} \sigma^{2})T}{\sigma \sqrt{T}} \right) \\ \end{align*}\]

It turns out that these results are everything we need in order to derive the Black-Scholes pricing formulas!
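Property 12.4 can be verified by Monte Carlo using the exact simulation of \(S_{T}\). The sketch below compares the closed-form partial expectation with a sample average (NumPy plus the standard normal CDF written via `math.erf`; the parameter values are illustrative):

```python
import numpy as np
from math import erf, exp, log, sqrt

def ncdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

S0, mu, sigma, T, K = 100.0, 0.10, 0.25, 2.0, 95.0

# Closed-form partial expectation E(S_T * 1{S_T > K})
d = (log(S0 / K) + (mu + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
closed_form = S0 * exp(mu * T) * ncdf(d)

# Monte Carlo check using the exact GBM solution for S_T
rng = np.random.default_rng(0)
W_T = np.sqrt(T) * rng.standard_normal(500_000)
S_T = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)
mc = (S_T * (S_T > K)).mean()
print(closed_form, mc)  # the two values should be close
```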

12.4. A Generalized Form of Ito’s Lemma#

Most derivatives not only depend on the underlying asset but also depend on time since they have fixed expiration dates. The analysis we did before for Ito’s Lemma generalizes easily to handle this case. Consider a non-dividend paying stock that follows a GBM:

\[\begin{align*} dS = \mu S dt + \sigma S dW \end{align*}\]

and a smooth function \(F(S, t)\). Ito’s Lemma in this case applies in the following form:

\[\begin{equation*} dF = F_{S} dS + \frac{1}{2} F_{SS} (dS)^{2} + F_{t} dt \end{equation*}\]

where \((dS)^{2} = \sigma^{2} S^{2} dt\).

12.5. Appendix#

Remember that we defined the Brownian motion or Wiener process as a random walk driven by normally distributed shocks:

\[\begin{equation*} W_{t + \Delta t} = W_{t} + \sqrt{\Delta t} e_{t + \Delta t}, \end{equation*}\]

where \(\{e_{t}\}\) is an i.i.d. sequence of random variables distributed \(\Normal(0, 1).\)

Let’s start by splitting the interval \([0, T]\) into \(n\) intervals of length \(\Delta t = t_{i+1} - t_{i}.\)


Note that \(t_{i} = i \Delta t\) and \(T = t_{n} = n \Delta t.\) The Brownian motion increments are then defined as \(\Delta W_{t_{i}} = W_{t_{i+1}} - W_{t_{i}}.\)

The first question one might have is why we use normally distributed increments. There are two answers. First, a sum of normally distributed random variables is also normal, and in this case we have:

\[\begin{equation*} W_{T} - W_{0} = \sum_{i=0}^{n-1} \Delta W_{t_{i}} = \sum_{i=0}^{n-1} \sqrt{\Delta t}\, e_{t_{i} + \Delta t}. \end{equation*}\]

The variance of \(\sum_{i=0}^{n-1} \sqrt{\Delta t}\, e_{t_{i} + \Delta t}\) is given by \(\sum_{i=0}^{n-1} \Delta t = n \Delta t = T\), which implies that \(W_{T} \sim \Normal(0, T)\). So by using normally distributed increments we guarantee that the resulting Brownian motion process is also normal.

Second, imagine that we use a different distribution for the i.i.d. increments while still requiring \(\ev(e_{t}) = 0\) and \(\var(e_{t}) = 1\). For example, \(e_{t}\) could take the values \(1\) and \(-1\) with equal probability. Nevertheless, the central limit theorem guarantees that:

\[\begin{equation*} \sqrt{n} \left( \frac{1}{n} \sum_{i=0}^{n-1} \sqrt{\Delta t}\, e_{t_{i} + \Delta t} \right) \xrightarrow[]{d} \Normal(0, \Delta t). \end{equation*}\]

In other words, even if we use a different distribution for the increments, as \(n \rightarrow \infty\) we have that \(W_{T} \sim \Normal(0, T).\) Therefore, there is no loss in generality in assuming normally distributed increments for the Brownian motion.

A second question that one might have, and one of the most puzzling facts in stochastic calculus in my opinion, is the fact that when we apply Ito’s lemma we use the fact that \((dW_{t})^{2} = dt.\) Clearly, \((\Delta W_{t})^{2} = \Delta t e_{t}^{2} \neq \Delta t\) where \(e_{t} \sim \Normal(0, 1).\) Indeed, if \(\Delta W_{t}\) is random, then \((\Delta W_{t})^{2}\) must also be random. However, we will see in a moment that it is fine to say that \((\Delta W_{t})^{2} \approx \Delta t\) as \(\Delta t \rightarrow 0.\)

Let’s start by computing the mean and variance of \((\Delta W_{t})^{2}\):

\[\begin{align*} \ev\left[(\Delta W_{t})^{2}\right] & = \Delta t \\ \var\left[(\Delta W_{t})^{2}\right] & = \ev\left[(\Delta W_{t})^{4}\right] - \left(\ev\left[(\Delta W_{t})^{2}\right]\right)^{2} \\ & = 3 (\Delta t)^{2} - (\Delta t)^{2} \\ & = 2 (\Delta t)^{2}. \\ \end{align*}\]

In computing the variance of \((\Delta W_{t})^{2}\) we used the fact that if \(X \sim \Normal(0, \sigma^{2}),\) then \(\ev(X^{4}) = 3\sigma^{4}.\) Since \(\Delta W_{t} \sim \Normal(0, \Delta t),\) we have that \(\ev\left[(\Delta W_{t})^{4}\right] = 3 (\Delta t)^{2}.\)

Consider now the following sum:

\[\begin{equation*} S_{n} = \sum_{i=0}^{n-1} (\Delta W_{t_{i}})^{2}. \end{equation*}\]

Clearly, \(S_{n}\) is a sum of \(n\) independent random variables, so its variance is the sum of the variances of the \((\Delta W_{t_{i}})^{2}:\)

\[\begin{align*} \ev(S_{n}) & = n \Delta t = T \\ \var(S_{n}) & = n (2 (\Delta t)^{2}) = \frac{2 T^{2}}{n}. \\ \end{align*}\]

Since \(\lim_{n \rightarrow \infty} \var(S_{n}) = 0,\) we have that \(S_{n} \rightarrow T\) in probability as \(n \rightarrow \infty.\) Intuitively, the previous result is really the weak law of large numbers, since we can rewrite it as \(\frac{S_{n}}{n} \rightarrow \Delta t\) in probability as \(n \rightarrow \infty.\) However, when you apply the weak law of large numbers to an arbitrary sequence of i.i.d. random variables, you cannot approximate each random variable by its mean just because their average converges to that mean. In our case, since the variance of \((\Delta W_{t})^{2}\) is so small compared to its mean, we can safely say that \((\Delta W_{t})^{2}\) behaves as if \((\Delta W_{t})^{2} = \Delta t\) as \(n \rightarrow \infty\). In other words, we have that \((\Delta W_{t})^{2} \approx \Delta t\) for small \(\Delta t.\)
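The concentration of \(S_{n}\) around \(T\) is easy to see numerically (a minimal sketch over \([0, 1]\) with illustrative values of \(n\)):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0

# The sum of squared Brownian increments concentrates around T as n grows,
# since Var(S_n) = 2 T^2 / n
for n in (100, 10_000, 1_000_000):
    dt = T / n
    dW = np.sqrt(dt) * rng.standard_normal(n)
    total = (dW**2).sum()
    print(n, total)  # approaches T = 1
```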

We can apply the same analysis to study the behavior of \((\Delta t)(\Delta W_{t})\) as \(\Delta t \rightarrow 0.\) Since:

\[\begin{align*} \ev\left[(\Delta t)(\Delta W_{t})\right] & = 0 \\ \var\left[(\Delta t)(\Delta W_{t})\right] & = \ev\left[((\Delta t)(\Delta W_{t}))^{2}\right] - (\ev[(\Delta t)(\Delta W_{t})])^{2} \\ & = (\Delta t)^{2} \ev[(\Delta W_{t})^{2}] - ((\Delta t) \ev[\Delta W_{t}])^{2} \\ & = (\Delta t)^{3}. \\ \end{align*}\]

Consider now the following sum:

\[\begin{equation*} C_{n} = \sum_{i=0}^{n-1} (\Delta t) (\Delta W_{t_{i}}). \end{equation*}\]

The mean and variance of \(C_{n}\) are given by:

\[\begin{align*} \ev(C_{n}) & = 0 \\ \var(C_{n}) & = \frac{T^{3}}{n^{2}}. \\ \end{align*}\]

Since \(\lim_{n \rightarrow \infty} \var(C_{n}) = 0,\) we have that \(C_{n} \rightarrow 0\) as \(n \rightarrow \infty\) in probability, implying that \((\Delta t)(\Delta W_{t}) \approx 0\) for small \(\Delta t.\)
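The same numerical experiment shows that the cross terms vanish even faster, consistent with \(\var(C_{n}) = T^{3}/n^{2}\) (a minimal sketch with illustrative values of \(n\)):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0

# The sum of (dt)(dW) cross terms has standard deviation T^{3/2} / n,
# so it collapses to 0 much faster than the sum of squared increments
for n in (100, 10_000, 1_000_000):
    dt = T / n
    dW = np.sqrt(dt) * rng.standard_normal(n)
    cross = (dt * dW).sum()
    print(n, cross)  # approaches 0
```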

12.6. Practice Problems#

Exercise 12.1

Consider a stock whose price at time \(t\) is given by \(S_{t}\) and that follows a geometric Brownian motion (GBM). The expected return is 18% per year and the volatility is 32% per year. The current spot price is $60.

  1. Compute the expected price 9 months from now.

  2. Compute the mean and standard deviation of the log-spot price 9 months from now.

  3. Compute the 95% confidence interval of \(\ln(S_{T})\) 9-months from now, and report the corresponding values for \(S_{T}\).

Exercise 12.2

Consider a stock whose price at time \(t\) is given by \(S_{t}\) and that follows a GBM. The expected return is 11% per year and the volatility is 27% per year. The current spot price is $60.

  1. Compute the expected price of \(S_{t}\) 1 year from now.

  2. Compute the expected price of \(X_{t}=1/S_{t}\) 1 year from now.

Exercise 12.3

Consider a stock whose price at time \(t\) is given by \(S_{t}\) and that follows a GBM. The expected return is 12% per year and the volatility is 35% per year. The current spot price is $55. Let \(T=18\) months.

  1. Compute \(\ev(S_{T})\).

  2. Compute the mean and standard deviation of the log-spot price at \(T\).

  3. Find \(C\) such that \(\prob(S_{T} \leq C)=0.01\).

Exercise 12.4

Consider a stock whose price at time \(T\) is given by \(S_{T}\) and that follows a GBM, i.e.,

\[\begin{equation*} \ln(S_{T}) \sim \Normal(\ln(S_{0})+(\mu-0.5\sigma^{2})T, \sigma^{2} T). \end{equation*}\]

The expected return is 12% per year and the volatility is 35% per year. The current spot price is $100.

  1. Compute the expected price in 2 years from now.

  2. Compute the mean and standard deviation of the log-spot price in 2 years from now.

  3. Compute the probability that the spot price is less than $100 in 2 years from now.

  4. Compute the probability that the spot price is greater than $120 in 2 years from now.

Exercise 12.5

Suppose that the stock price follows a geometric Brownian motion (GBM) with drift \(\mu\) and instantaneous volatility \(\sigma\), i.e.,

\[\begin{equation*} dS = \mu S dt + \sigma S dW. \end{equation*}\]

Show that \(Y = S^{\alpha}\) also follows a GBM and determine its drift and volatility as functions of \(\mu\), \(\sigma\), and \(\alpha\).

Exercise 12.6

Suppose that the stock price follows a geometric Brownian motion (GBM) with drift \(\mu\) and instantaneous volatility \(\sigma\). Show that \(Y = S e^{-\mu t}\) also follows a GBM and determine its drift and volatility as functions of \(\mu\) and \(\sigma\).

Exercise 12.7

Suppose that the stock price follows a geometric Brownian motion (GBM) with drift \(r\) and instantaneous volatility \(\sigma\), where \(r\) is the risk-free rate. Consider the futures price of \(S\) at time \(t\) and expiring at \(T\), given by \(f = S e^{r (T -t)}\). Show that \(f\) has zero drift and hence is a martingale.

Exercise 12.8

Suppose that the stock price follows a geometric Brownian motion (GBM) with drift \(\mu = 10\%\) and instantaneous volatility \(\sigma = 25\%\). Compute \(\ev(S_{T} \1{S_{T} > K})\) and \(\ev(\1{S_{T} > K}) = \prob(S_{T} > K)\) if \(S_{0} = 100\), \(K = 95\) and \(T = 2\).