The Lognormal Distribution

The Normal Distribution

We say that a real-valued random variable (RV) X is normally distributed with mean \mu and standard deviation \sigma if its probability density function (PDF) is: f(x) = \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(x - \mu)^{2}}{2 \sigma^{2}}} and we usually write X \sim \mathcal{N}(\mu, \sigma^{2}). The parameters \mu and \sigma are related to the first and second moments of X.
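As a quick illustration (our addition, alongside the notes' R examples), the density can be evaluated directly in Python; the function name normal_pdf is ours:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Density of a N(mu, sigma^2) random variable evaluated at x."""
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / sqrt(2 * pi * sigma ** 2)

# The density peaks at x = mu, where it equals 1 / sqrt(2*pi*sigma^2).
print(normal_pdf(0.0, 0.0, 1.0))  # 0.3989422804014327
```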

Figure 1: The figure shows the density function of a normally distributed random variable with mean \mu and standard deviation \sigma.
Moments of the Normal Distribution

The parameter \mu is the mean or expectation of X, while \sigma denotes its standard deviation. The variance of X is given by \sigma^{2}.

Proof

Let X = \mu + \sigma Z where Z \sim \mathcal{N}(0, 1). Start by defining f(z) = e^{-\frac{1}{2} z^{2}}, which implies that f^{\prime}(z) = -z e^{-\frac{1}{2} z^{2}} and f^{\prime \prime}(z) = z^{2} e^{-\frac{1}{2} z^{2}} - e^{-\frac{1}{2} z^{2}}. We can then write: \begin{aligned} z e^{-\frac{1}{2} z^{2}} & = -f^{\prime}(z) \\ z^{2} e^{-\frac{1}{2} z^{2}} & = f^{\prime \prime}(z) + f(z) \end{aligned}

Then, \begin{aligned} \operatorname{E}(Z) & = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi}} z e^{-\frac{1}{2} z^{2}} \, dz \\ & = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} -f^{\prime}(z) \, dz\\ & = \frac{1}{\sqrt{2 \pi}} \left( \left. -f(z) \vphantom{-e^{-\frac{1}{2} z^{2}}} \right|_{-\infty}^{\infty} \right) \\ & = 0, \\ \operatorname{E}(Z^{2}) & = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi}} z^{2} e^{-\frac{1}{2} z^{2}} \, dz \\ & = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \left( f^{\prime \prime}(z) + f(z) \right) dz \\ & = \frac{1}{\sqrt{2 \pi}} \left( \left. f^{\prime}(z) \vphantom{-z e^{-\frac{1}{2} z^{2}}} \right|_{-\infty}^{\infty} + \int_{-\infty}^{\infty} f(z) \, dz \right) \\ & = \frac{1}{\sqrt{2 \pi}} (0 + \sqrt{2 \pi}) \\ & = 1, \\ \operatorname{Var}(Z) & = \operatorname{E}(Z^{2}) - \operatorname{E}(Z)^{2} \\ & = 1. \end{aligned} Note that we used the fact that \int_{-\infty}^{\infty} f(z) \, dz = \sqrt{2 \pi}.

We can now compute \operatorname{E}(X) = \mu + \sigma \operatorname{E}(Z) = \mu and \operatorname{Var}(X) = \sigma^{2} \operatorname{Var}(Z) = \sigma^{2}.
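The moment calculations above can be sanity-checked by numerical integration. The following Python sketch (an illustrative addition using only the standard library) approximates \operatorname{E}(Z) and \operatorname{E}(Z^{2}) with a midpoint rule:

```python
from math import exp, pi, sqrt

def std_normal_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2 * pi)

def integrate(f, a=-10.0, b=10.0, n=100_000):
    # Midpoint rule; the tails beyond |z| = 10 contribute negligibly.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

mean = integrate(lambda z: z * std_normal_pdf(z))        # E(Z), approx 0
second = integrate(lambda z: z * z * std_normal_pdf(z))  # E(Z^2), approx 1
```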

As with any real-valued random variable X, in order to compute the probability that X \leq x we need to integrate the density function from -\infty to x \colon \operatorname{P}(X \leq x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(u - \mu)^{2}}{2 \sigma^{2}}} du.

The function F(x) = \operatorname{P}(X \leq x) is called the cumulative distribution function of X. The Leibniz integral rule implies that F^{\prime}(x) = f(x).

The Standard Normal Distribution

An important case of normally distributed random variables is when \mu = 0 and \sigma = 1. In this case we say that Z \sim \mathcal{N}(0, 1) has the standard normal distribution and its cumulative distribution function is usually denoted by the capital Greek letter \Phi (phi), and is defined by the integral: \mathop{\Phi}(z) = \operatorname{P}(Z \leq z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2 \pi}} e^{-\frac{x^{2}}{2}} \, dx.
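Besides R and Excel, \Phi can be evaluated through the closely related error function, which many standard libraries provide. A Python sketch (our addition):

```python
from math import erf, sqrt

def Phi(z):
    # Standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(round(Phi(-0.4), 7))  # 0.3445783, matching pnorm(-0.4) in R
```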

Figure 2: The blue shaded area represents \mathop{\Phi}(z).

Since the integral cannot be expressed in closed form, the probability must be obtained from a table or using a computer. For example, in R we can compute \mathop{\Phi}(-0.4) by typing the following:

pnorm(-0.4)
[1] 0.3445783

If you prefer to use Excel, you need to type in a cell =norm.s.dist(-0.4,TRUE), which yields the same answer.

Left-Tail Probability

Knowing how to compute or approximate \mathop{\Phi}(z) allows us to compute \operatorname{P}(X \leq x) when X \sim \mathcal{N}(\mu, \sigma^{2}), since Z = \frac{X - \mu}{\sigma} \sim \mathcal{N}(0, 1) \colon \begin{aligned} \operatorname{P}(X \leq x) & = \operatorname{P}\left( \frac{X - \mu}{\sigma} \leq \frac{x - \mu}{\sigma} \right) \\ & = \operatorname{P}\left( Z \leq \frac{x - \mu}{\sigma} \right) \\ & = \mathop{\Phi}\left(\frac{x - \mu}{\sigma}\right) \end{aligned} The standardized quantity \frac{x - \mu}{\sigma} is called a z-score.

Example 1 Suppose that X \sim \mathcal{N}(\mu, \sigma^{2}) with \mu = 10 and \sigma = 25. What is the probability that X \leq 0? \begin{aligned} \operatorname{P}(X \leq 0) & = \operatorname{P}\left( Z \leq \tfrac{0 - 10}{25} \right) \\ & = \mathop{\Phi}(-0.40) \\ & = 0.3446. \end{aligned}
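The example can be reproduced in Python (an illustrative sketch; Phi implements \Phi via the standard library's math.erf):

```python
from math import erf, sqrt

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 10.0, 25.0
z = (0.0 - mu) / sigma      # z-score of x = 0
print(z, round(Phi(z), 4))  # -0.4 0.3446
```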

Right-Tail Probability

For a random variable X, the right-tail probability is defined as \operatorname{P}(X > x). Since \operatorname{P}(X \leq x) + \operatorname{P}(X > x) = 1, we have that: \operatorname{P}(X > x) = 1 - \operatorname{P}(X \leq x).

Figure 3: The right-tail probability equals the total probability, which is one, minus the left-tail probability.

Example 2 Suppose that X \sim \mathcal{N}(\mu, \sigma^{2}) with \mu = 10 and \sigma = 25. What is the probability that X > 12? \begin{aligned} \operatorname{P}(X \leq 12) & = \operatorname{P}\left( Z \leq \tfrac{12 - 10}{25} \right) \\ & = \mathop{\Phi}(0.08) \\ & = 0.5319. \end{aligned} Therefore, \operatorname{P}(X > 12) = 1 - 0.5319 = 0.4681.
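The right-tail computation in Python (an illustrative sketch, with \Phi implemented via math.erf):

```python
from math import erf, sqrt

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 10.0, 25.0
p_right = 1.0 - Phi((12.0 - mu) / sigma)  # P(X > 12)
print(round(p_right, 4))  # 0.4681
```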

Interval Probability

The probability that a random variable X falls within an interval (x_{1}, x_{2}] is given by \operatorname{P}(x_{1} < X \leq x_{2}) = \operatorname{P}(X \leq x_{2}) - \operatorname{P}(X \leq x_{1}).

Figure 4: If you subtract the area to the left of x_{1} from the area to the left of x_{2}, you obtain the probability that x_{1} < X \leq x_{2}.

Example 3 Suppose that X \sim \mathcal{N}(\mu, \sigma^{2}) with \mu = 10 and \sigma = 25. What is the probability that 2 < X \leq 14? \begin{aligned} \operatorname{P}(X \leq 14) & = \operatorname{P}\left( Z \leq \tfrac{14 - 10}{25} \right) \\ & = \mathop{\Phi}(0.16) \\ & = 0.5636, \\ \operatorname{P}(X \leq 2) & = \operatorname{P}\left( Z \leq \tfrac{2 - 10}{25} \right) \\ & = \mathop{\Phi}(-0.32) \\ & = 0.3745. \end{aligned} Therefore, \operatorname{P}(2 < X \leq 14) = 0.5636 - 0.3745 = 0.1891.
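The interval probability in Python (an illustrative sketch):

```python
from math import erf, sqrt

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 10.0, 25.0
# P(2 < X <= 14) = P(X <= 14) - P(X <= 2)
p = Phi((14.0 - mu) / sigma) - Phi((2.0 - mu) / sigma)
print(round(p, 4))  # 0.1891
```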

The Lognormal Distribution

If X \sim \mathcal{N}(\mu, \sigma^{2}), then Y = e^{X} is said to be lognormally distributed with the same parameters. The PDF of a lognormally distributed random variable Y can be obtained from the PDF of X.

Figure 5: The figure shows the difference between a normal and a lognormal PDF with the same parameters.
Lognormal Density

If Y is lognormally distributed with parameters \mu and \sigma^{2}, the PDF of Y is given by: f(y) = \frac{1}{y \sqrt{2 \pi \sigma^{2}}} e^{-\frac{(\ln(y) - \mu)^{2}}{2 \sigma^{2}}}.

Proof

Let Y = e^{X} where X = \mu + \sigma Z and Z \sim \mathcal{N}(0, 1). Then, \begin{aligned} \operatorname{P}(Y \leq y) & = \operatorname{P}(X \leq \ln(y)) \\ & = \int_{-\infty}^{\ln(y)} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(x - \mu)^{2}}{2 \sigma^{2}}} \, dx. \end{aligned}

Let’s define z = e^{x}, which implies that x = \ln(z) and dx = (1 / z) \, dz. Note that as x ranges over (-\infty, \ln(y)), the new variable z = e^{x} ranges over (0, y). Therefore, \operatorname{P}(Y \leq y) = \int_{0}^{y} \frac{1}{z \sqrt{2 \pi \sigma^{2}}} e^{-\frac{(\ln(z) - \mu)^{2}}{2 \sigma^{2}}} \, dz.

Thus, the integrand in the previous expression is the probability density function of Y for y > 0; the density is zero for y \leq 0.

Unlike the normal density, the lognormal density function is not symmetric around its mean. Normally distributed variables can take values in (-\infty, \infty), whereas lognormally distributed variables are always positive.

Computing Probabilities

We can use the fact that the logarithm of a lognormal random variable is normally distributed to compute cumulative probabilities.

Example 4 Let Y = e^{4 + 1.5 Z} where Z \sim \mathcal{N}(0, 1). What is the probability that Y \leq 100? \begin{aligned} \operatorname{P}(Y \leq 100) & = \operatorname{P}(e^{X} \leq 100) \\ & = \operatorname{P}(X \leq \ln(100)) \\ & = \operatorname{P}\left(Z \leq \tfrac{\ln(100) - 4}{1.5}\right) \\ & = \mathop{\Phi}(0.4034) \\ & = 0.6567 \end{aligned} Therefore, there is a 65.67% chance that Y is less than or equal to 100.
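Example 4 in Python (an illustrative sketch; Phi implements \Phi via math.erf):

```python
from math import erf, log, sqrt

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 4.0, 1.5
p = Phi((log(100.0) - mu) / sigma)  # P(Y <= 100) for Y = e^(mu + sigma*Z)
print(round(p, 4))  # 0.6567
```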

Moments

Moments of a Lognormal Distribution

Let Y = e^{\mu + \sigma Z} where Z \sim \mathcal{N}(0, 1). We have that: \begin{aligned} \operatorname{E}(Y) & = e^{\mu + 0.5 \sigma^{2}} \\ \operatorname{Var}(Y) & = e^{2\mu + \sigma^{2}} (e^{\sigma^{2}} - 1) \\ \operatorname{SD}(Y) & = \operatorname{E}(Y) \sqrt{e^{\sigma^{2}} - 1} \end{aligned}

Proof

\begin{aligned} \operatorname{E}(Y) & = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(x - \mu)^{2}}{2 \sigma^{2}}} e^{x} \, dx \\ & = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(x - \mu)^{2}}{2 \sigma^{2}} + x} \, dx \\ & = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(x - (\mu + \sigma^{2}))^{2}}{2\sigma^{2}} + (\mu + 0.5 \sigma^{2})} \, dx \\ & = e^{\mu + 0.5 \sigma^{2}} \underbrace{\int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(x - (\mu + \sigma^{2}))^{2}}{2\sigma^{2}}} \, dx}_{= 1} \\ & = e^{\mu + 0.5 \sigma^{2}} \end{aligned} Using the fact that \alpha X \sim \mathcal{N}(\alpha \mu, (\alpha \sigma)^{2}), it is also possible to compute the expectation of powers of lognormally distributed variables: \operatorname{E}(Y^{\alpha}) = \operatorname{E}(e^{\alpha X}) = e^{\alpha \mu + 0.5 (\alpha \sigma)^{2}}.

This is useful to compute the variance and standard deviation of Y: \begin{aligned} \operatorname{Var}(Y) & = \operatorname{E}(Y^{2}) - \left(\operatorname{E}(Y)\right)^{2} \\ & = e^{2\mu + 2 \sigma^{2}} - e^{2\mu + \sigma^{2}} \\ & = e^{2\mu + \sigma^{2}} (e^{\sigma^{2}} - 1) \\ \operatorname{SD}(Y) & = \sqrt{\operatorname{Var}(Y)} \\ & = \operatorname{E}(Y) \sqrt{e^{\sigma^{2}} - 1} \end{aligned}

Example 5 Let Y = e^{4 + 1.5 Z} where Z \sim \mathcal{N}(0, 1). The expectation and standard deviation of Y are: \begin{aligned} \operatorname{E}(Y) & = e^{4 + 0.5(1.5^{2})} = 168.17 \\ \operatorname{SD}(Y) & = 168.17 \sqrt{e^{1.5^{2}} - 1} = 489.95 \end{aligned}
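The moment formulas translate directly into code. A Python sketch of Example 5 (our addition):

```python
from math import exp, sqrt

mu, sigma = 4.0, 1.5
mean = exp(mu + 0.5 * sigma ** 2)        # E(Y) = e^(mu + sigma^2/2)
sd = mean * sqrt(exp(sigma ** 2) - 1.0)  # SD(Y) = E(Y) * sqrt(e^(sigma^2) - 1)
print(round(mean, 2), round(sd, 2))  # 168.17 489.95
```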

Appendix

Percentiles

For a standard normal variable Z, the right-tail percentile z_{\alpha} is the value that leaves probability \alpha to its right. Mathematically, this means finding z_{\alpha} such that: \operatorname{P}(Z > z_{\alpha}) = \alpha \Leftrightarrow \operatorname{P}(Z \leq z_{\alpha}) = 1 - \alpha.

Figure 6: The right-tail percentile is the value z_{\alpha} that gives an area to the right equal to \alpha.

This implies that \mathop{\Phi}(z_{\alpha}) = 1 - \alpha, or z_{\alpha} = \mathop{\Phi}^{-1}(1 - \alpha), where \mathop{\Phi}^{-1}(\cdot) denotes the inverse function of \mathop{\Phi}(\cdot). Again, there is no closed-form expression for this function and we need a computer to obtain the values. For example, say that \alpha = 0.025. In R we could compute z_{\alpha} = \mathop{\Phi}^{-1}(0.975) by using the function qnorm as follows:

qnorm(0.975)
[1] 1.959964

In Excel the function =norm.s.inv(0.975) provides the same result.

The following table shows common values for z_{\alpha}:

\boldsymbol{\alpha} \boldsymbol{z_{\alpha}}
0.050 1.64
0.025 1.96
0.010 2.33
0.005 2.58
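The table can be reproduced by inverting \Phi numerically. Since \Phi^{-1} has no closed form either, the sketch below (our addition) uses simple bisection; production code would call qnorm or an equivalent library routine:

```python
from math import erf, sqrt

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    # Bisection works because Phi is continuous and strictly increasing.
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for alpha in (0.050, 0.025, 0.010, 0.005):
    print(alpha, round(Phi_inv(1.0 - alpha), 2))  # 1.64, 1.96, 2.33, 2.58
```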

A (1 - \alpha) two-sided confidence interval (CI) defines left and right percentiles such that the probability on each side is \alpha/2. For a standard normal variable Z, the symmetry of its PDF implies: \operatorname{P}(Z \leq -z_{\alpha/2}) = \operatorname{P}(Z > z_{\alpha/2}) = \alpha/2

Figure 7: The areas on each side are both equal to \alpha/2.

Example 6 Since z_{2.5\%} = 1.96, the 95% confidence interval of Z is [-1.96, 1.96]. This means that if we randomly sample this variable 100,000 times, approximately 95,000 observations will fall inside this interval.

If X \sim \mathcal{N}(\mu, \sigma^{2}), its confidence interval is determined by \xi and \zeta such that: \begin{aligned} & \operatorname{P}(X \leq \xi) = \alpha / 2 \\ & \hspace{0.3in} \Rightarrow \operatorname{P}(Z \leq \tfrac{\xi - \mu}{\sigma}) = \alpha/2, \\ & \operatorname{P}(X > \zeta) = \alpha / 2 \\ & \hspace{0.3in} \Rightarrow \operatorname{P}(Z > \tfrac{\zeta - \mu}{\sigma}) = \alpha/2, \end{aligned} which implies that -z_{\alpha/2} = \tfrac{\xi - \mu}{\sigma} and z_{\alpha/2} = \tfrac{\zeta - \mu}{\sigma}. The (1 - \alpha) confidence interval for X is then [\mu - z_{\alpha/2}\sigma, \mu + z_{\alpha/2}\sigma].

Example 7 Suppose that X \sim \mathcal{N}(\mu, \sigma^{2}) with \mu = 10 and \sigma = 25. Since z_{2.5\%} = 1.96, the 95% confidence interval of X is: [10-1.96(25), 10+1.96(25)] = [-39, 59].

We could also apply the same principle to a lognormal random variable. Let Y = e^{\mu + \sigma Z} where Z \sim \mathcal{N}(0, 1). We then have that \begin{aligned} & -z_{\alpha/2} < Z \leq z_{\alpha/2} \\ & \hspace{0.4in} \Rightarrow \mu - \sigma z_{\alpha/2} < \mu + \sigma Z \leq \mu + \sigma z_{\alpha/2} \\ & \hspace{0.4in} \Rightarrow e^{\mu - \sigma z_{\alpha/2}} < e^{\mu + \sigma Z} \leq e^{\mu + \sigma z_{\alpha/2}} \end{aligned} The (1 - \alpha) confidence interval for Y (centered around the mean of \ln(Y)) is [e^{\mu - \sigma z_{\alpha/2}}, e^{\mu + \sigma z_{\alpha/2}}].

Example 8 Let Y = e^{4 + 1.5 Z} where Z \sim \mathcal{N}(0, 1). The 95% confidence interval for Y is: [e^{4 - 1.96(1.5)}, e^{4 + 1.96(1.5)}] = [2.89, 1032.71].
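Example 8 in Python (an illustrative sketch; we use z = 1.959964, the more precise value returned by qnorm(0.975) earlier, which reproduces the endpoints above):

```python
from math import exp

mu, sigma, z = 4.0, 1.5, 1.959964   # z from qnorm(0.975)
ci = (exp(mu - z * sigma), exp(mu + z * sigma))
print(round(ci[0], 2), round(ci[1], 2))  # 2.89 1032.71
```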

Partial Expectations

When pricing a call option, the payoff is positive if the option is in-the-money and zero otherwise. We usually use an indicator function to quantify this behavior: \large\mathbb{1}_{\{Y > K\}} = \begin{cases} 0 & \text{if $Y \leq K$} \\ 1 & \text{if $Y > K$} \end{cases}

Partial Expectations

Let Y = e^{X} where X \sim \mathcal{N}(\mu, \sigma^{2}). Then we have that: \begin{aligned} \operatorname{E}\left(Y \large\mathbb{1}_{\{Y > K\}}\right) & = e^{\mu + \frac{1}{2}\sigma^{2}} \mathop{\Phi}\left(\frac{\mu + \sigma^{2} - \ln(K)}{\sigma}\right) \\ \operatorname{E}\left(K \large\mathbb{1}_{\{Y > K\}}\right) & = K \mathop{\Phi}\left(\frac{\mu - \ln(K)}{\sigma}\right) \end{aligned}

Proof

The first expectation can be computed using the change of variable y = -x \colon \begin{aligned} \operatorname{E}\left(Y \large\mathbb{1}_{\{Y > K\}}\right) & = \int_{\ln(K)}^{\infty} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(x - \mu)^{2}}{2 \sigma^{2}}} e^{x} \, dx \\ & = \int_{-\infty}^{-\ln(K)} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(y + \mu)^{2}}{2 \sigma^{2}}} e^{-y} \, dy \\ & = \int_{-\infty}^{-\ln(K)} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(y + \mu)^{2}}{2 \sigma^{2}} - y} \, dy \\ & = \int_{-\infty}^{-\ln(K)} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(y + (\mu + \sigma^{2}))^{2}}{2\sigma^{2}} + (\mu + 0.5 \sigma^{2})} \, dy \\ & = e^{\mu + 0.5 \sigma^{2}} \int_{-\infty}^{-\ln(K)} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(y + (\mu + \sigma^{2}))^{2}}{2\sigma^{2}}} \, dy \\ & = e^{\mu + 0.5 \sigma^{2}} \mathop{\Phi}\left(\tfrac{\mu + \sigma^{2} - \ln(K)}{\sigma}\right) \end{aligned}

With the change of variable y = -x, the second expectation yields: \begin{aligned} \operatorname{E}\left(K \large\mathbb{1}_{\{Y > K\}}\right) & = K \int_{\ln(K)}^{\infty} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(x - \mu)^{2}}{2 \sigma^{2}}} \, dx \\ & = K \int_{-\infty}^{-\ln(K)} \frac{1}{\sqrt{2 \pi \sigma^{2}}} e^{-\frac{(y + \mu)^{2}}{2 \sigma^{2}}} \, dy \\ & = K \mathop{\Phi}\left(\tfrac{\mu - \ln(K)}{\sigma}\right) \end{aligned}
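Both formulas are easy to implement. A Python sketch (our addition; the function name partial_expectations is illustrative). Note that the difference of the two expectations is \operatorname{E}(\max(Y - K, 0)), the expected payoff of the call option mentioned above:

```python
from math import erf, exp, log, sqrt

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def partial_expectations(mu, sigma, K):
    """Return E(Y*1{Y>K}) and E(K*1{Y>K}) for Y = e^X, X ~ N(mu, sigma^2)."""
    ey = exp(mu + 0.5 * sigma ** 2) * Phi((mu + sigma ** 2 - log(K)) / sigma)
    ek = K * Phi((mu - log(K)) / sigma)
    return ey, ek

ey, ek = partial_expectations(4.0, 1.5, 100.0)
expected_payoff = ey - ek   # E(max(Y - K, 0))
```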

Practice Problems

Problem 1 Suppose that X is a normally distributed random variable with mean \mu=12 and standard deviation \sigma=20.

  1. What is the probability that X \leq 0?
  2. What is the probability that X \leq -4?
  3. What is the probability that X > 8?
  4. What is the probability that 4 < X \leq 10?
Solution
  1. \operatorname{P}(X \leq 0) = \mathop{\Phi}(\frac{0-12}{20})
    \hphantom{\operatorname{P}(X \leq 0)} = \mathop{\Phi}(-0.60)
    \hphantom{\operatorname{P}(X \leq 0)} = 0.2743.

  2. \operatorname{P}(X \leq -4) = \mathop{\Phi}(\frac{-4-12}{20})
    \hphantom{\operatorname{P}(X \leq -4)} = \mathop{\Phi}(-0.80)
    \hphantom{\operatorname{P}(X \leq -4)} = 0.2119.

  3. \operatorname{P}(X > 8) = 1 - \operatorname{P}(X \leq 8)
    \hphantom{\operatorname{P}(X > 8)} = 1 - \mathop{\Phi}(\frac{8-12}{20})
    \hphantom{\operatorname{P}(X > 8)} = 1 - \mathop{\Phi}(-0.20)
    \hphantom{\operatorname{P}(X > 8)} = 0.5793.

  4. \operatorname{P}(4 < X \leq 10) = \operatorname{P}(X \leq 10) - \operatorname{P}(X \leq 4)
    \hphantom{\operatorname{P}(4 < X \leq 10)} = \mathop{\Phi}(\frac{10-12}{20}) - \mathop{\Phi}(\frac{4-12}{20})
    \hphantom{\operatorname{P}(4 < X \leq 10)} = \mathop{\Phi}(-0.10) - \mathop{\Phi}(-0.40)
    \hphantom{\operatorname{P}(4 < X \leq 10)} = 0.1156.

Problem 2 Suppose that X is a normally distributed random variable with mean \mu=10 and standard deviation \sigma=20. Compute the 90%, 95%, and 99% confidence interval for X.

Solution

The (1-\alpha) confidence interval (CI) for X is given by [\mu - z_{\alpha/2} \sigma, \mu + z_{\alpha/2} \sigma] where z_{\alpha/2} = \mathop{\Phi}^{-1}(1-\alpha/2). For example, if you want to compute the z-level corresponding to the 90% confidence interval, then \alpha = 0.10 and \alpha/2 = 0.05, so to compute z_{0.05} you need to type in Excel =norm.s.inv(0.95).

  1. z_{0.05} = \mathop{\Phi}^{-1}(0.95) = 1.64 so the 90% CI for X is [-22.90, 42.90].
  2. z_{0.025} = \mathop{\Phi}^{-1}(0.975) = 1.96 so the 95% CI for X is [-29.20, 49.20].
  3. z_{0.005} = \mathop{\Phi}^{-1}(0.995) = 2.58 so the 99% CI for X is [-41.52, 61.52].

Problem 3 Suppose that X=\ln(Y) is a normally distributed random variable with mean \mu=3.9 and standard deviation \sigma=15.

  1. What is the probability that Y \leq 6?
  2. What is the probability that Y > 4?
  3. What is the probability that 3 < Y \leq 12?
  4. What is the probability that Y \leq 0?
Solution
  1. \operatorname{P}(Y \leq 6) = \operatorname{P}(X \leq \ln(6))
    \hphantom{\operatorname{P}(Y \leq 6)} = \mathop{\Phi}(\frac{\ln(6)-3.9}{15})
    \hphantom{\operatorname{P}(Y \leq 6)} = \mathop{\Phi}(-0.1405)
    \hphantom{\operatorname{P}(Y \leq 6)} = 0.4441

  2. \operatorname{P}(Y > 4) = 1 - \operatorname{P}(Y \leq 4)
    \hphantom{\operatorname{P}(Y > 4)} = 1 - \operatorname{P}(X \leq \ln(4))
    \hphantom{\operatorname{P}(Y > 4)} = 1 - \mathop{\Phi}(\frac{\ln(4)-3.9}{15})
    \hphantom{\operatorname{P}(Y > 4)} = 1 - \mathop{\Phi}(-0.1676)
    \hphantom{\operatorname{P}(Y > 4)} = 0.5665

  3. \operatorname{P}(3 < Y \leq 12) = \operatorname{P}(Y \leq 12) - \operatorname{P}(Y \leq 3)
    \hphantom{\operatorname{P}(3 < Y \leq 12)} = \mathop{\Phi}(\frac{\ln(12)-3.9}{15}) - \mathop{\Phi}(\frac{\ln(3)-3.9}{15})
    \hphantom{\operatorname{P}(3 < Y \leq 12)} = \mathop{\Phi}(-0.0943) - \mathop{\Phi}(-0.1868)
    \hphantom{\operatorname{P}(3 < Y \leq 12)} = 0.4624 - 0.4259
    \hphantom{\operatorname{P}(3 < Y \leq 12)} = 0.0365

  4. \operatorname{P}(Y \leq 0) = 0, since Y = e^{X} is always positive.

Problem 4 Suppose that X is a normally distributed variable with mean \mu=3.70 and standard deviation \sigma=0.80. If Y=e^{X}, what is the probability that Y is greater than 45?

Solution \begin{aligned} \operatorname{P}(Y \leq 45) & = \operatorname{P}(\ln(Y) \leq \ln(45)) \\ & = \operatorname{P}\left( Z \leq \frac{\ln(45) - 3.70}{0.80} \right) \\ & = \Phi(0.1333) \\ & = 0.5530 \end{aligned} Therefore, \operatorname{P}(Y > 45) = 1 - 0.5530 = 0.4470.
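A quick check of this solution in Python (an illustrative sketch):

```python
from math import erf, log, sqrt

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

p = 1.0 - Phi((log(45.0) - 3.70) / 0.80)  # P(Y > 45)
print(round(p, 4))  # 0.447
```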

Optional Practice Problems

These problems are not required for the exam, but they provide good practice with the mathematical concepts discussed in the notes.

Problem 5 Suppose that X=\ln(Y) is a normally distributed random variable with mean \mu=2.7 and standard deviation \sigma=1. Compute the 90%, 95%, and 99% confidence interval for X and report the corresponding values for Y.

Solution

The (1 - \alpha) confidence interval (CI) for X is given by [\mu - z_{\alpha/2} \sigma, \mu + z_{\alpha/2} \sigma]. Remember that to compute z_{\alpha/2} in Excel we use =norm.s.inv(1-alpha/2). The corresponding interval for Y is then [e^{\mu - z_{\alpha/2} \sigma}, e^{\mu + z_{\alpha/2} \sigma}].

  1. z_{0.05} = 1.64 so the 90% CI for X is [1.06, 4.34], and the corresponding values for Y are [2.87, 77.08].
  2. z_{0.025} = 1.96 so the 95% CI for X is [0.74, 4.66], and the corresponding values for Y are [2.10, 105.63].
  3. z_{0.005} = 2.58 so the 99% CI for X is [0.12, 5.28], and the corresponding values for Y are [1.13, 195.55].

Problem 6 Let Y = e^{\mu + \sigma Z} where \mu = 1, \sigma = 2 and Z \sim \mathcal{N}(0, 1). Compute:

  1. \operatorname{E}(Y)
  2. \operatorname{SD}(Y) = \sqrt{\operatorname{E}(Y^{2}) - \operatorname{E}(Y)^{2}}
  3. \operatorname{E}(Y^{0.3})
  4. \operatorname{E}(Y^{-1})
Solution

In some of the questions we use the fact that if X \sim \mathcal{N}(\mu, \sigma^{2}), then \alpha X \sim \mathcal{N}(\alpha\mu, \alpha^{2}\sigma^{2}), which implies that \operatorname{E}(Y^{\alpha}) = \operatorname{E}(e^{\alpha X}) = e^{\alpha\mu+\frac{1}{2}\alpha^{2}\sigma^{2}}.

  1. \operatorname{E}(Y) = e^{1+\frac{1}{2}2^{2}} = 20.09
  2. \operatorname{E}(Y^{2}) = e^{(2)(1)+\frac{1}{2}(2)^{2}2^{2}}
    \hphantom{\operatorname{E}(Y^{2})} = 22026.47,
    \left(\operatorname{E}(Y)\right)^{2} = (20.09)^{2}
    \hphantom{\left(\operatorname{E}(Y)\right)^{2}} = 403.43,
    \operatorname{SD}(Y) = \sqrt{22026.47-403.43}
    \hphantom{\operatorname{SD}(Y)} = 147.05.
  3. \operatorname{E}(Y^{0.3}) = e^{(0.3)(1)+\frac{1}{2}(0.3)^{2}2^{2}} = 1.62
  4. \operatorname{E}(Y^{-1}) = e^{(-1)(1)+\frac{1}{2}(-1)^{2}2^{2}} = 2.72