The Geometry of the Payoff Space
Fall 2026
Introduction
- I describe the mathematical structure of the payoff space, which we will use to characterize the space of traded payoffs and stochastic discount factors.
- Although the results described in this note apply to infinite-dimensional Hilbert spaces, we restrict our attention to finite-dimensional Euclidean spaces.
Probability Structure
- Uncertainty is represented by a finite set \mathcal{S} = \{1, \ldots, S\} of states, defining a finite probability space (\mathcal{S}, q).
- The set of all random variables defined on \mathcal{S} is denoted by L and is called the payoff space.
- Thus, any x \in L is identified with the vector (x(1), x(2), \ldots, x(S)) \in \mathbb{R}^{S} listing its payoff in each state, and the payoff x(s) occurs with probability q(s) for all s \in \mathcal{S}.
The Payoff Space
- The payoff space is clearly a linear vector space since for any x, y \in L and \alpha, \beta \in \mathbb{R} we have that \alpha x + \beta y \in L.
- We endow the payoff space with an inner product \langle{\cdot, \cdot}\rangle: L \times L \rightarrow \mathbb{R} defined such that for any x, y \in L, we have that
\langle{x, y}\rangle = \operatorname{E}(xy) = \sum_{s = 1}^{S} q(s) x(s) y(s).
- We can use the inner product to define the norm \lVert{\cdot}\rVert: L \rightarrow \mathbb{R}^{+}, given for all x \in L by
\lVert{x}\rVert = \sqrt{\langle{x, x}\rangle}.
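The probability-weighted inner product and induced norm can be sketched numerically. The following is a minimal illustration, assuming a hypothetical three-state space with probabilities q chosen for the example (none of these numbers come from the text):

```python
import numpy as np

# Illustrative three-state probability space: q(s) > 0, summing to one.
q = np.array([0.2, 0.5, 0.3])

def inner(x, y):
    """Inner product <x, y> = E(xy) = sum_s q(s) x(s) y(s)."""
    return np.sum(q * x * y)

def norm(x):
    """Induced norm ||x|| = sqrt(<x, x>)."""
    return np.sqrt(inner(x, x))

# Two hypothetical payoff vectors, one entry per state.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 0.0, 1.0])

print(inner(x, y))  # 0.2*1*2 + 0.5*2*0 + 0.3*3*1 = 1.3
print(norm(x))      # sqrt(0.2*1 + 0.5*4 + 0.3*9) = sqrt(4.9)
```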
Projections
- Given x, y \in L, consider the vectors y_{x} = \alpha x and z = y - y_{x}.
- We say that y_{x} is the projection of y onto the subspace spanned by \{x\} if the norm of z is minimal.
- To obtain the projection, we need to compute the \alpha that minimizes \Vert z \Vert^{2} = \Vert y - \alpha x \Vert^{2} = \operatorname{E}\left((y - \alpha x)^{2}\right).
- The first-order condition of this problem is:
0 = \operatorname{E}((y - \alpha x) x) = \langle{y - \alpha x, x}\rangle = \langle z, x \rangle,
which implies that \alpha = \dfrac{\langle{x, y}\rangle}{\langle{x, x}\rangle} and \langle z, y_{x} \rangle = 0.
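As a sanity check on the first-order condition, the projection coefficient and the orthogonality of the residual can be verified numerically. A small sketch with illustrative probabilities and payoffs (not taken from the text):

```python
import numpy as np

q = np.array([0.2, 0.5, 0.3])      # illustrative state probabilities

def inner(x, y):
    return np.sum(q * x * y)        # <x, y> = E(xy)

x = np.array([1.0, 2.0, 3.0])       # illustrative payoffs
y = np.array([2.0, 0.0, 1.0])

alpha = inner(x, y) / inner(x, x)   # alpha = <x, y> / <x, x>
y_x = alpha * x                     # projection of y onto span{x}
z = y - y_x                         # residual

print(inner(z, x))                  # ~0: residual orthogonal to x
print(inner(z, y_x))                # ~0: hence also orthogonal to y_x
```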
Orthogonal Decomposition
- We say that two vectors x, y \in L are orthogonal if their inner product is equal to zero.
- Thus, we have that y_{x} \mathrel\bot z, implying that the vector y can be decomposed into two orthogonal components.
- Indeed, we have that
\lVert{y}\rVert^{2} = \lVert{z + y_{x}}\rVert^{2} = \lVert{z}\rVert^{2} + 2 \langle z, y_{x} \rangle + \lVert{y_{x}}\rVert^{2} = \lVert{z}\rVert^{2} + \lVert{y_{x}}\rVert^{2},
which is a generalization of the classical Pythagorean theorem.
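The Pythagorean identity above can be checked on the same kind of example (all numbers below are illustrative):

```python
import numpy as np

q = np.array([0.2, 0.5, 0.3])       # illustrative probabilities

def inner(x, y):
    return np.sum(q * x * y)

x = np.array([1.0, 2.0, 3.0])        # illustrative payoffs
y = np.array([2.0, 0.0, 1.0])

alpha = inner(x, y) / inner(x, x)
y_x, z = alpha * x, y - alpha * x    # orthogonal decomposition y = y_x + z

lhs = inner(y, y)                    # ||y||^2
rhs = inner(z, z) + inner(y_x, y_x)  # ||z||^2 + ||y_x||^2
print(lhs, rhs)                      # equal, since <z, y_x> = 0
```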
Cauchy-Schwarz Inequality
- The orthogonal decomposition implies that \Vert y \Vert^{2} \geq \Vert y_{x} \Vert^{2}, with equality occurring if and only if y is proportional to x.
- Therefore, we have that
\lVert{y}\rVert^{2} \geq \lVert{y_{x}}\rVert^{2} = \left\lVert \frac{\langle{x, y}\rangle}{\langle{x, x}\rangle} x \right\rVert^{2} = \frac{\langle{x, y}\rangle^{2}}{\lVert{x}\rVert^{2}}.
- Rearranging the previous expression yields the Cauchy-Schwarz inequality, which is fundamental in the study of Euclidean vector spaces.
- Given x, y \in L we have that
|\langle{x, y}\rangle| \leq \lVert{x}\rVert \lVert{y}\rVert. \tag{1}
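Inequality (1) can be stress-tested on random payoffs. A quick sketch (the probabilities and random seed are arbitrary choices for the example):

```python
import numpy as np

q = np.array([0.2, 0.5, 0.3])       # illustrative probabilities

def inner(x, y):
    return np.sum(q * x * y)

rng = np.random.default_rng(0)

# |<x, y>| <= ||x|| ||y|| should hold for every pair of payoffs.
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    assert abs(inner(x, y)) <= np.sqrt(inner(x, x) * inner(y, y)) + 1e-12
print("Cauchy-Schwarz holds on all draws")
```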
Linear Functionals
- Given x, y \in L and \alpha, \beta \in \mathbb{R}, a linear functional f: L \rightarrow \mathbb{R} satisfies
f(\alpha x + \beta y) = \alpha f(x) + \beta f(y).
- We say that the linear functional f : L \rightarrow \mathbb{R} is bounded if there exists a constant M > 0 such that
|f(x)| \leq M \lVert{x}\rVert
for all x \in L.
- A bounded linear functional is also called a continuous linear functional.
- The smallest M for which this inequality remains true is called the norm of f, i.e.,
\lVert{f}\rVert = \inf \{M: |f(x)| \leq M \lVert{x}\rVert, \text{ for all } x \in L\}.
Inner Product Is A Linear Functional
- For a given m \in L and any x \in L, the functional
f(x) = \langle{m, x}\rangle = \operatorname{E}(m x) = \sum_{s = 1}^{S} q(s) m(s) x(s)
is linear.
- Furthermore, the Cauchy-Schwarz inequality implies that
|f(x)| = |\langle{m, x}\rangle| \leq \lVert{m}\rVert \lVert{x}\rVert,
showing that the linear functional f is bounded and hence continuous.
- Since the previous inequality is an equality whenever x is proportional to m, we have that \lVert{m}\rVert is the smallest bound of f, showing that \lVert{f}\rVert = \lVert{m}\rVert.
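The claim that \lVert{f}\rVert = \lVert{m}\rVert can be illustrated numerically: the ratio |f(x)| / \lVert{x}\rVert never exceeds \lVert{m}\rVert and attains it at x = m. The vector m and the probabilities below are invented for the example:

```python
import numpy as np

q = np.array([0.2, 0.5, 0.3])       # illustrative probabilities

def inner(x, y):
    return np.sum(q * x * y)

m = np.array([0.9, 1.0, 1.2])        # illustrative representing vector
f = lambda x: inner(m, x)            # f(x) = <m, x> = E(mx)

norm_m = np.sqrt(inner(m, m))

rng = np.random.default_rng(1)
ratios = [abs(f(x)) / np.sqrt(inner(x, x))
          for x in rng.normal(size=(1000, 3))]

print(max(ratios) <= norm_m + 1e-12)        # bound ||m|| is never exceeded
print(abs(f(m) / norm_m - norm_m) < 1e-12)  # and is attained at x = m
```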
Hyperplanes
- Consider a linear functional f: L \rightarrow \mathbb{R}.
- Provided f is not identically zero, the set K = \{x \in L: f(x) = 0\} is a hyperplane through the origin and admits a normal vector z \neq 0.
- Thus, \langle{x, z}\rangle = 0 for all x \in K.
- Without loss of generality, assume that z has been appropriately scaled so that f(z) = 1.
A Linear Functional is an Inner Product
- Given any x \in L, we have that x - f(x) z \in K since f(x - f(x) z) = f(x) - f(x) f(z) = 0.
- Moreover, z \mathrel\bot K, implying that
0 = \langle{x - f(x) z, z}\rangle = \langle{x, z}\rangle - f(x) \langle{z, z}\rangle.
- The previous expression implies that
f(x) = \frac{\langle{x, z}\rangle}{\langle{z, z}\rangle} = \langle{x, m}\rangle,
where m = \dfrac{z}{\lVert{z}\rVert^{2}}.
The Riesz Representation Theorem
- If f: L \rightarrow \mathbb{R} is a bounded linear functional, there exists a unique vector m \in L such that for all x \in L, f(x) = \langle{m, x}\rangle.
- Furthermore, we have \lVert{f}\rVert = \lVert{m}\rVert, and conversely every m \in L determines a unique bounded linear functional in this way.
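In this finite-state setting the Riesz vector can be recovered explicitly: evaluating f on the state indicator payoffs e_{s} gives m(s) = f(e_{s}) / q(s), since \langle{m, e_{s}}\rangle = q(s) m(s). A sketch with invented numbers (the values p playing the role of f(e_{s}) are not from the text):

```python
import numpy as np

q = np.array([0.2, 0.5, 0.3])       # illustrative probabilities

def inner(x, y):
    return np.sum(q * x * y)

# A bounded linear functional given by its values on state indicators
# (invented numbers, e.g. state-by-state prices).
p = np.array([0.18, 0.55, 0.30])     # p(s) = f(e_s)
f = lambda x: np.sum(p * x)          # linearity: f(x) = sum_s x(s) f(e_s)

m = p / q                            # Riesz vector: m(s) = f(e_s) / q(s)

x = np.array([1.0, 2.0, 3.0])
print(f(x), inner(m, x))             # equal: f(x) = E(mx)
```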