Example 1 If \Omega = \{ \omega_{1}, \omega_{2}, \omega_{3} \}, then \begin{aligned} \mathcal{P}(\Omega) & = \{ \emptyset, \{\omega_{1}\}, \{\omega_{2}\}, \{\omega_{3}\}, \{\omega_{1}, \omega_{2}\}, \{\omega_{2}, \omega_{3}\}, \{\omega_{1}, \omega_{3}\}, \{\omega_{1}, \omega_{2}, \omega_{3}\}\} \end{aligned} defines the collection of all possible events that we can measure. Note that the cardinality of \mathcal{P}(\Omega) grows exponentially with the size of \Omega.
The function \operatorname{P} such that \operatorname{P}(\omega_{1}) = 1/2, \operatorname{P}(\omega_{2}) = 1/4, and \operatorname{P}(\omega_{3}) = 1/4 defines a probability measure on \Omega.
We have, for example, that \operatorname{P}(\{\omega_{1}, \omega_{3}\}) = 1/2 + 1/4 = 3/4.
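Below is a minimal Python sketch (the variable names and helper functions are illustrative, not part of the notes) reproducing Example 1: it enumerates the power set of \Omega and computes the probability of an event by summing the probabilities of its outcomes.

```python
from itertools import combinations

# Outcome probabilities from Example 1
P = {"w1": 0.5, "w2": 0.25, "w3": 0.25}

def power_set(omega):
    """All subsets of the sample space, i.e., the collection of events."""
    items = list(omega)
    return [set(c) for r in range(len(items) + 1) for c in combinations(items, r)]

def prob(event):
    """Probability of an event is the sum of its outcome probabilities."""
    return sum(P[w] for w in event)

print(len(power_set(P)))   # 8 = 2**3 events, growing exponentially with |Omega|
print(prob({"w1", "w3"}))  # 0.75, matching P({omega_1, omega_3}) = 3/4
```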
Example 2 Consider a sample space with four possible outcomes \Omega = \{ \omega_{1}, \omega_{2}, \omega_{3}, \omega_{4} \}. The table below describes the possible values of three random variables denoted by X, Y and Z.
Outcome | X | Y | Z |
---|---|---|---|
\omega_{1} | -10 | 20 | 15 |
\omega_{2} | -5 | 10 | -10 |
\omega_{3} | 5 | 0 | 15 |
\omega_{4} | 10 | 0 | -10 |
Note that the information sets generated by these random variables are different: X distinguishes all four outcomes, Y cannot separate \omega_{3} from \omega_{4}, and Z only distinguishes \{\omega_{1}, \omega_{3}\} from \{\omega_{2}, \omega_{4}\}.
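If we read the information set generated by a random variable as the partition of \Omega into the preimages of its values, a short Python sketch (the helper name `partition` is illustrative) makes the difference between X, Y, and Z explicit.

```python
# Values of X, Y, and Z on each outcome (Example 2)
X = {"w1": -10, "w2": -5, "w3": 5, "w4": 10}
Y = {"w1": 20, "w2": 10, "w3": 0, "w4": 0}
Z = {"w1": 15, "w2": -10, "w3": 15, "w4": -10}

def partition(rv):
    """Group outcomes by the value the random variable assigns to them."""
    blocks = {}
    for outcome, value in rv.items():
        blocks.setdefault(value, set()).add(outcome)
    return list(blocks.values())

print(partition(X))  # four singletons: X distinguishes every outcome
print(partition(Y))  # {w1}, {w2}, {w3, w4}: Y cannot separate w3 from w4
print(partition(Z))  # {w1, w3}, {w2, w4}: Z generates the coarsest partition
```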
Example 3 Consider the sample space \Omega = \{ \omega_{1}, \omega_{2}, \omega_{3} \} in which we define the probability measure \operatorname{P} such that \operatorname{P}(\omega_{1}) = 1/2, \operatorname{P}(\omega_{2}) = 1/4, and \operatorname{P}(\omega_{3}) = 1/4. There are two random variables X and Y that take values in \Omega according to the table below.
Outcome | Probability | X | Y |
---|---|---|---|
\omega_{1} | 1/2 | 10 | 2 |
\omega_{2} | 1/4 | 8 | 40 |
\omega_{3} | 1/4 | 4 | 20 |
Using this information, we can compute \operatorname{E}(X) = 8, \operatorname{E}(Y) = 16, \operatorname{V}(X) = 6, \operatorname{V}(Y) = 246. The standard deviations of X and Y are \sigma_{X} = \sqrt{6} \approx 2.45 and \sigma_{Y} = \sqrt{246} \approx 15.68, respectively.
Example 4 Continuing with Example 3, we have that \operatorname{Cov}(X, Y) = -18. Thus, \rho_{X, Y} \approx -0.47.
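The moments reported in Examples 3 and 4 can be checked with a few lines of Python; the helper `expectation` below is an illustrative name, not something defined in the notes.

```python
import math

# Probabilities and values from Example 3
p = [0.5, 0.25, 0.25]
x = [10, 8, 4]
y = [2, 40, 20]

def expectation(values, probs):
    return sum(v * q for v, q in zip(values, probs))

EX, EY = expectation(x, p), expectation(y, p)                  # 8, 16
VX = expectation([v**2 for v in x], p) - EX**2                 # 6
VY = expectation([v**2 for v in y], p) - EY**2                 # 246
cov = expectation([a * b for a, b in zip(x, y)], p) - EX * EY  # -18
rho = cov / math.sqrt(VX * VY)                                 # about -0.47
print(EX, EY, VX, VY, cov, round(rho, 2))
```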
Example 5 Suppose we assign a probability measure \operatorname{P} to the outcomes in Example 2 and consider the random variables X and Y, as described in the table below.
Outcome | \operatorname{P} | X | Y |
---|---|---|---|
\omega_{1} | 0.10 | -10 | 20 |
\omega_{2} | 0.30 | -5 | 10 |
\omega_{3} | 0.40 | 5 | 0 |
\omega_{4} | 0.20 | 10 | 0 |
We have that the probability mass function of X and Y are \begin{aligned} p_{X}(x) = \begin{cases} 0.10 & \text{if } x = -10, \\ 0.30 & \text{if } x = -5, \\ 0.40 & \text{if } x = 5, \\ 0.20 & \text{if } x = 10. \end{cases} \end{aligned} \qquad \begin{aligned} p_{Y}(y) = \begin{cases} 0.60 & \text{if } y = 0, \\ 0.30 & \text{if } y = 10, \\ 0.10 & \text{if } y = 20. \end{cases} \end{aligned}
If a random variable X takes m distinct values x_{1}, x_{2}, \ldots, x_{m}, we can rewrite its expectation as \operatorname{E}(X) = \sum_{i = 1}^{m} x_{i} p_X(x_{i}), \tag{1} which is the form commonly used in statistics.
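As a sketch of equation (1), the snippet below builds the marginal pmfs of Example 5 by accumulating outcome probabilities on each value and then evaluates \operatorname{E}(X); the function name `pmf` is illustrative.

```python
# Outcome probabilities and values of X and Y from Example 5
probs = {"w1": 0.10, "w2": 0.30, "w3": 0.40, "w4": 0.20}
X = {"w1": -10, "w2": -5, "w3": 5, "w4": 10}
Y = {"w1": 20, "w2": 10, "w3": 0, "w4": 0}

def pmf(rv):
    """Probability mass function: accumulate probability on each value."""
    p = {}
    for outcome, value in rv.items():
        p[value] = p.get(value, 0.0) + probs[outcome]
    return p

p_X, p_Y = pmf(X), pmf(Y)
print(p_Y)  # values 20, 10, 0 with probabilities 0.1, 0.3, 0.6 (up to float rounding)

# Expectation via equation (1): E(X) = sum_i x_i * p_X(x_i)
print(sum(xi * q for xi, q in p_X.items()))  # 1.5
```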
Example 6 The joint pmf of the random variables defined in Example 5 is given in the table below.
\small \begin{array}{c|ccc} X \setminus Y & 0 & 10 & 20 \\ \hline -10 & 0 & 0 & 0.1 \\ -5 & 0 & 0.3 & 0 \\ 5 & 0.4 & 0 & 0 \\ 10 & 0.2 & 0 & 0 \end{array} The function p_{X, Y}(x, y) has many zeros since in Example 5 there are only four outcomes. Any other pair of values (x, y) has probability zero of occurring.
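A short Python sketch (array layout and names are mine) recovers the marginal pmfs of Example 5 by summing the joint pmf over the other variable.

```python
# Joint pmf from Example 6: rows are values of X, columns are values of Y
x_vals = [-10, -5, 5, 10]
y_vals = [0, 10, 20]
joint = [
    [0.0, 0.0, 0.1],
    [0.0, 0.3, 0.0],
    [0.4, 0.0, 0.0],
    [0.2, 0.0, 0.0],
]

# Marginal pmfs: sum the joint pmf across columns (for X) or rows (for Y)
p_X = {x: sum(row) for x, row in zip(x_vals, joint)}
p_Y = {y: sum(joint[i][j] for i in range(len(x_vals))) for j, y in enumerate(y_vals)}
print(p_X)  # probabilities 0.1, 0.3, 0.4, 0.2 on -10, -5, 5, 10
print(p_Y)  # probabilities 0.6, 0.3, 0.1 on 0, 10, 20 (up to float rounding)
```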
Example 7 We can generate any joint pmf for two random variables as long as all probabilities are nonnegative and sum to one. The table below reports the joint probabilities of a random variable X taking values in \{-1, 0, 1\} and a random variable Y taking values in \{0, 1, 2, 3\}. \small \begin{array}{c|cccc} X \setminus Y & 0 & 1 & 2 & 3 \\ \hline -1 & 0.12500 & 0.09375 & 0.06250 & 0.03125 \\ 0 & 0.06250 & 0.12500 & 0.12500 & 0.06250 \\ 1 & 0.03125 & 0.06250 & 0.09375 & 0.12500 \end{array}
In this case the underlying probability space has at least 3 \times 4 = 12 possible outcomes.
Figure 2: The figure plots the joint probability mass function of X and Y in Example 7.
Weather | Sunny | Fair | Rainy |
---|---|---|---|
Probability | 0.3 | 0.5 | 0.2 |
Stock | Up | Down |
---|---|---|
Probability | 0.6 | 0.4 |
Stock\Weather | Sunny | Fair | Rainy |
---|---|---|---|
Up | 0.18 | 0.30 | 0.12 |
Down | 0.12 | 0.20 | 0.08 |
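In the weather and stock tables above, each joint probability equals the product of the corresponding marginal probabilities (for example, 0.6 \times 0.5 = 0.30), which is exactly what independence requires. A small Python sketch (variable names are mine) reproduces the joint table from the two marginal tables.

```python
# Marginal probabilities from the weather and stock tables above
weather = {"Sunny": 0.3, "Fair": 0.5, "Rainy": 0.2}
stock = {"Up": 0.6, "Down": 0.4}

# Under independence, every joint probability is the product of the marginals
joint = {(s, w): ps * pw for s, ps in stock.items() for w, pw in weather.items()}
print(round(joint[("Up", "Fair")], 4))     # 0.30, as in the joint table
print(round(joint[("Down", "Rainy")], 4))  # 0.08, as in the joint table
print(abs(sum(joint.values()) - 1.0) < 1e-12)  # True: probabilities sum to one
```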
Example 8 Consider two random variables X and Y defined in the table below.
Outcome | \operatorname{P} | X | Y |
---|---|---|---|
\omega_{1} | 0.40 | -1 | 0 |
\omega_{2} | 0.30 | 1 | 1 |
\omega_{3} | 0.30 | 1 | -1 |
We have that \operatorname{E}(X) = 0.2, \operatorname{E}(Y) = 0, and \operatorname{E}(XY) = 0. Therefore, \operatorname{Cov}(X, Y) = 0 - 0.2 \times 0 = 0, which shows that X and Y are uncorrelated.
However, the two random variables are not independent. If we know that X = -1 then we know that Y = 0. Similarly, learning that Y = 1 tells us that X = 1.
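A short sketch verifying Example 8: the covariance is zero, yet \operatorname{P}(X = -1, Y = 0) = 0.4 differs from the product of the marginals 0.4 \times 0.4 = 0.16, so X and Y are uncorrelated but not independent.

```python
# Probabilities and values from Example 8
p = [0.4, 0.3, 0.3]
x = [-1, 1, 1]
y = [0, 1, -1]

EX = sum(q * v for q, v in zip(p, x))             # E(X) = 0.2 (up to rounding)
EY = sum(q * v for q, v in zip(p, y))             # E(Y) = 0
EXY = sum(q * a * b for q, a, b in zip(p, x, y))  # E(XY) = 0
print(EXY - EX * EY)                              # covariance is 0.0

# Independence would require P(X=-1, Y=0) = P(X=-1) * P(Y=0)
print(0.4, round(0.4 * 0.4, 2))  # joint is 0.4, product of marginals is 0.16
```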
Example 9 Suppose that X_{1}, X_{2}, \ldots, X_{n} are independent random variables with the same variance denoted by \sigma^{2}. Define X to be the sum of these random variables so that X = X_{1} + X_{2} + \ldots + X_{n}. Then \operatorname{V}(X) = \sum_{i = 1}^{n} \operatorname{V}(X_{i}) = n \sigma^{2}. This is the result that we use in finance to annualize the variance computed from monthly or daily stock returns.
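A simulation sketch of Example 9 under assumed numbers (the monthly standard deviation of 0.05 is hypothetical, not from the notes): summing n = 12 independent monthly returns should produce a variance close to 12\sigma^{2}.

```python
import random
import statistics

# Monte Carlo check: if monthly returns are independent, each with variance
# sigma**2, the 12-month sum has variance n * sigma**2 = 12 * 0.05**2 = 0.03.
random.seed(0)
sigma = 0.05  # assumed monthly standard deviation
annual = [sum(random.gauss(0, sigma) for _ in range(12)) for _ in range(100_000)]
print(statistics.pvariance(annual))  # close to 0.03, the annualized variance
```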