Dynamic Asset Pricing Models

Asset Pricing Theory

Juan F. Imbet

Paris Dauphine University-PSL

Dynamic Asset Pricing Models

Hamilton-Jacobi-Bellman Equations

  • Hamilton: William Rowan Hamilton (1805-1865), Irish mathematician and physicist, formulated the Hamiltonian mechanics framework.

  • Jacobi: Carl Gustav Jacob Jacobi (1804-1851), German mathematician, contributed to the development of the Hamiltonian formalism and the Hamilton-Jacobi equation.

  • Bellman: Richard Bellman (1920-1984), American mathematician, developed dynamic programming and the Bellman equation. His book “Dynamic Programming” (1957) laid the foundation for modern optimal control theory.

  • Idea behind HJB: Break down a dynamic optimization problem into smaller subproblems, solve each subproblem optimally, and use these solutions to construct the overall optimal solution.

  • The Dynamic Programming Principle (DPP) states that the value of an optimal control problem at any given time depends only on the current state and the optimal strategy from that point onward.

Basic Setup and Assumptions

Assumptions:

  • Time horizon: Infinite horizon \(t \in [0, \infty)\)
  • Investment opportunity set: Single risky asset with return dynamics
  • Investor preferences: Time-separable utility
  • Markets: Frictionless, continuous trading
  • Information: Full information about state variables

State variables:

  • Wealth: \(W_t\) (investor’s financial wealth)
  • Time: \(t\) (for time-varying preferences/parameters)

Controls:

  • Consumption: \(C_t \geq 0\)
  • Portfolio allocation: \(\omega_t\) (fraction in risky asset)

The HJB Equation in Continuous-Time Optimization

  • The HJB equation arises from the dynamic programming principle. The solution to a continuous-time optimization problem can be characterized by a partial differential equation (PDE) known as the HJB equation.

General Case:

  • Assume a general dynamic optimization problem with state variable \(X_t\) and control variable \(u_t\).

\[ dX_t = \mu(X_t, u_t) dt + \sigma(X_t, u_t) dW_t \]

where \(W_t\) is a standard Brownian motion (\(dW_t \sim N(0, dt)\)).

  • Standard Discounted Utility:

\[ \max_{u_t} E_t \left[ \int_t^\infty e^{-\rho (s-t)} U(X_s, u_s) ds \right] \]

Ito’s Lemma

  • Ito’s lemma is a fundamental result in stochastic calculus that provides a way to compute the differential of a function of a stochastic process.
  • If \(X_t\) follows the stochastic differential equation (SDE): \[ dX_t = \mu(X_t, t) dt + \sigma(X_t, t) dW_t, \] and \(f(X_t, t)\) is a twice-differentiable function, then Ito’s lemma states that: \[ df(X_t, t) = \left( \frac{\partial f}{\partial t} + \mu(X_t, t) \frac{\partial f}{\partial X} + \frac{1}{2} \sigma^2(X_t, t) \frac{\partial^2 f}{\partial X^2} \right) dt + \sigma(X_t, t) \frac{\partial f}{\partial X} dW_t \]
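
As a quick numerical sanity check of the drift term above, the sketch below simulates one small Euler step of an SDE and compares the sample mean of \(df\) with the Itô drift \(\big(f_t + \mu f_X + \tfrac12\sigma^2 f_{XX}\big)dt\). The coefficients \(\mu(x,t)=0.05x\) and \(\sigma(x,t)=0.20x\), the test function \(f(x,t)=\ln x\), and all numerical settings are illustrative assumptions, not values from these notes.

```python
import numpy as np

# Illustrative coefficients and test function (hypothetical choices)
mu_fun = lambda x: 0.05 * x      # mu(x, t)
sig_fun = lambda x: 0.20 * x     # sigma(x, t)
f = np.log                       # f(x, t) = ln x, so f_t = 0, f_x = 1/x, f_xx = -1/x^2

rng = np.random.default_rng(0)
x0, dt, n_paths = 1.0, 1e-2, 2_000_000

# One Euler step of dX = mu dt + sigma dW
dW = rng.normal(0.0, np.sqrt(dt), n_paths)
x1 = x0 + mu_fun(x0) * dt + sig_fun(x0) * dW

# Sample mean of df versus the Ito drift (f_t + mu f_x + 0.5 sigma^2 f_xx) dt
sample_mean_df = np.mean(f(x1) - f(x0))
ito_drift_dt = (mu_fun(x0) / x0 - 0.5 * sig_fun(x0) ** 2 / x0 ** 2) * dt

print(f"sample mean of df : {sample_mean_df:.3e}")   # close up to Monte Carlo error
print(f"Ito drift * dt    : {ito_drift_dt:.3e}")
```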

Derivation of the HJB Equation (1D case)

  • Objective (infinite horizon, discounted):

    \[ V(X_t,t)=\max_{u_s,\,s\ge t}\; \mathbb E_t\!\left[\int_t^\infty e^{-\rho (s-t)}\,U(X_s,u_s)\,ds\right],\qquad \rho>0. \]

  • Dynamic Programming Principle (DPP): for small \(dt>0\),

    \[ V(X_t,t)=\max_{u_t}\left\{U(X_t,u_t)\,dt + e^{-\rho dt}\,\mathbb E_t\big[V(X_{t+dt},t+dt)\big]\right\}. \]

  • State dynamics (controlled Itô diffusion, 1D):

    \[ dX_t=\mu(X_t,u_t,t)\,dt+\sigma(X_t,u_t,t)\,dW_t. \]

First-order expansions

  • Discount factor: \(e^{-\rho dt}=1-\rho dt+o(dt)\).

  • Second-order Taylor–Itô expansion of \(V\) around \((X_t,t)\):

    \[ \begin{aligned} V(X_{t+dt},t+dt) &\approx V(X_t,t)+V_t\,dt+V_x\,dX_t+\tfrac12 V_{xx}\,(dX_t)^2+o(dt). \end{aligned} \]

  • Itô moments:

    \[ \mathbb E_t[dW_t]=0,\quad \mathbb E_t[(dW_t)^2]=dt,\quad \mathbb E_t[dX_t]=\mu\,dt,\quad \mathbb E_t[(dX_t)^2]=\sigma^2\,dt. \]

Expected next value

Plug \(dX_t\) into the expansion and take conditional expectation:

\[ \begin{aligned} \mathbb E_t\!\big[V(X_{t+dt},t+dt)\big] &= \mathbb E_t\!\Big[V + V_t\,dt + V_x\,dX_t + \tfrac12 V_{xx}(dX_t)^2\Big] + o(dt) \\ &= V + \Big(V_t + \mu V_x + \tfrac12 \sigma^2 V_{xx}\Big)\,dt + o(dt). \end{aligned} \]

Substitute into DPP

Start from \[ V = \max_{u_t}\left\{U\,dt + e^{-\rho dt}\,\mathbb E_t[V(X_{t+dt},t+dt)]\right\}. \]

Insert the two expansions: \[ \begin{aligned} V &=\max_{u_t}\Big\{U\,dt + (1-\rho dt)\Big[V + \big(V_t+\mu V_x+\tfrac12\sigma^2 V_{xx}\big)\,dt\Big] + o(dt)\Big\}\\ &=\max_{u_t}\Big\{U\,dt + V + \big(V_t+\mu V_x+\tfrac12\sigma^2 V_{xx}\big)\,dt - \rho V\,dt + o(dt)\Big\}. \end{aligned} \] Subtracting \(V\) from both sides and dividing by \(dt\): \[ 0 =\max_{u_t}\Big\{U + V_t+\mu V_x+\tfrac12\sigma^2 V_{xx} - \rho V + \frac{o(dt)}{dt}\Big\}. \]

Letting \(dt\to0\) and moving \(\rho V\) (which does not depend on the control) out of the maximization yields the HJB equation, a partial differential equation (PDE): \[ \rho V = \max_{u}\left\{U + V_t + \mu V_x + \tfrac12\sigma^2 V_{xx}\right\}. \]

Boundary / terminal conditions

  • The HJB equation pins down the shape of the value function in the interior of the domain. To obtain a unique solution, we also need to specify appropriate boundary/terminal conditions.

  • E.g. Finite horizon \([0,T]\) with terminal payoff \(G\):

    \[ V(x,T)=G(x),\quad 0=\max_u\left\{U(x,u) + V_t + \mu V_x + \tfrac12\sigma^2 V_{xx} - \rho V\right\}\ \text{on } [0,T). \]

  • Exit/reflecting boundaries: specify appropriate boundary conditions at the domain ends (e.g., Dirichlet \(V=\phi\) or Neumann \(V_x=0\)) depending on the economics.

Optimal control characterization (generic)

  • Hamiltonian (1D):

    \[ \mathcal H(x,t,u;V,V_x,V_{xx})=U(x,u)+\mu(x,u,t)\,V_x+\tfrac12\,\sigma^2(x,u,t)\,V_{xx}. \]

  • Optimizer:

    \[ u^*(x,t)\in \arg\max_{u\in\mathcal U}\ \mathcal H(x,t,u;V,V_x,V_{xx}). \]

  • Interior FOC (augmented with Lagrange multipliers when the control is constrained):

    \[ \frac{\partial}{\partial u}\Big[U(x,u)+\mu(x,u,t)\,V_x+\tfrac12\,\sigma^2(x,u,t)\,V_{xx}\Big]\Big|_{u=u^*}=0. \]
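
To make this characterization concrete, the sketch below maximizes a toy Hamiltonian over a grid of controls and compares the grid maximizer with the interior FOC solution. The toy ingredients (quadratic control cost \(U(x,u)=-u^2/2\), drift \(\mu(x,u)=u\), constant diffusion, and the candidate values for \(V_x\) and \(V_{xx}\)) are hypothetical placeholders, not the model used later in these notes.

```python
import numpy as np

# Hypothetical ingredients at a fixed (x, t): quadratic control cost,
# drift linear in the control, constant diffusion, candidate V derivatives.
Vx, Vxx, sigma0 = 0.7, -1.5, 0.3

def hamiltonian(u):
    U = -0.5 * u**2          # U(x, u)      (toy choice)
    mu = u                   # mu(x, u)     (toy choice)
    var = sigma0**2          # sigma^2(x, u) does not depend on u here
    return U + mu * Vx + 0.5 * var * Vxx

# Grid search over the admissible control set
grid = np.linspace(-3.0, 3.0, 100_001)
u_grid = grid[np.argmax(hamiltonian(grid))]

# Interior first-order condition: dH/du = -u + Vx = 0  =>  u* = Vx
u_foc = Vx

print(f"grid maximizer : {u_grid:.4f}")
print(f"FOC solution   : {u_foc:.4f}")
```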

The Hamilton-Jacobi-Bellman Equation in Dynamic Portfolio Choice (Merton, 1971)

  • A representative investor chooses how much to consume and how to allocate wealth between a risky asset and a risk-free asset to maximize expected utility over an infinite horizon. \(\omega_t\) is the fraction of wealth invested in the risky asset.

Value function definition:

\[ V(W, t) = \max_{C_s, \omega_s} E_t\left[ \int_t^\infty e^{-\rho(s-t)} U(C_s) ds \right] \]

HJB equation derivation:

\[ \rho V(W,t) = \max_{C,\omega} \left\{ U(C) + \frac{\partial V}{\partial t} + \frac{\partial V}{\partial W} \mu_W + \frac{1}{2} \frac{\partial^2 V}{\partial W^2} \sigma_W^2 \right\} \]

where:

  • \(\mu_W = \omega W \mu + (1-\omega) W r - C = \omega W (\mu - r) + W r - C\) (wealth drift)
  • \(\sigma_W^2 = W^2 \omega^2 \sigma^2\) (wealth variance)

Final HJB equation: \(\rho V(W,t) = \max_{C,\omega} \left\{ U(C) + \frac{\partial V}{\partial t} + \frac{\partial V}{\partial W} [r W + \omega W (\mu - r) - C] + \frac{1}{2} \frac{\partial^2 V}{\partial W^2} W^2 \omega^2 \sigma^2 \right\}\)
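
Before solving for the optimal policy, it can help to simulate the controlled wealth dynamics implied by the drift and variance above, \(dW_t = [rW_t + \omega W_t(\mu - r) - C_t]\,dt + \omega W_t \sigma\, dB_t\), under a fixed (not necessarily optimal) policy with a constant portfolio weight and consumption proportional to wealth. All parameter values in this sketch are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumptions, not calibrated values from these notes)
mu, r, sigma = 0.08, 0.02, 0.20      # risky-asset drift, risk-free rate, volatility
omega, kappa = 0.50, 0.04            # fixed portfolio weight and consumption/wealth ratio
W0, T, n_steps, n_paths = 1.0, 30.0, 3000, 10_000

dt = T / n_steps
rng = np.random.default_rng(1)
W = np.full(n_paths, W0)

for _ in range(n_steps):             # Euler-Maruyama steps for the wealth SDE
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    drift = (r * W + omega * W * (mu - r) - kappa * W) * dt
    diffusion = omega * W * sigma * dB
    W = np.maximum(W + drift + diffusion, 1e-12)   # guard: keep wealth positive

print(f"mean terminal wealth   : {W.mean():.3f}")
print(f"median terminal wealth : {np.median(W):.3f}")
```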

Power Utility Solution

Assumptions for Power Utility

Utility function:

\[ U(C) = \frac{C^{1-\gamma}}{1-\gamma}, \quad \gamma > 0, \gamma \neq 1 \] Key properties:

  • Constant relative risk aversion: \(RRA = -\frac{U''(C) C}{U'(C)} = \gamma\)
  • Elasticity of intertemporal substitution: \(EIS = \frac{1}{\gamma}\)
  • Solving an HJB equation typically requires a guess for the value function, i.e., how it depends on wealth and time.
  • A common simplification is to guess a value function that is separable in wealth and time.

Guess for value function:

\[ V(W,t) = \frac{1}{1-\gamma} W^{1-\gamma} v(t) \]

Motivation: Homothetic preferences (scaling wealth scales utility proportionally) allow separation of wealth and time effects.

Step-by-Step HJB Solution

Substitute guess into HJB:

\[ \rho \cdot \frac{1}{1-\gamma} W^{1-\gamma} v(t) = \max_{C,\omega} \left\{ \frac{C^{1-\gamma}}{1-\gamma} + \frac{\partial V}{\partial t} + V_W [r W + \omega W (\mu - r) - C] + \frac{1}{2} V_{WW} W^2 \omega^2 \sigma^2 \right\} \]

Compute partial derivatives:

\[ \frac{\partial V}{\partial t} = \frac{1}{1-\gamma} W^{1-\gamma} v'(t) \] \[ \frac{\partial V}{\partial W} = W^{-\gamma} v(t) \] \[ \frac{\partial^2 V}{\partial W^2} = -\gamma W^{-\gamma-1} v(t) \]
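
These derivatives are easy to double-check symbolically, e.g. with a short sympy sketch:

```python
import sympy as sp

W, t, gamma = sp.symbols('W t gamma', positive=True)
v = sp.Function('v')(t)

# Guess: V(W, t) = W**(1 - gamma) * v(t) / (1 - gamma)
V = W**(1 - gamma) * v / (1 - gamma)

print(sp.simplify(sp.diff(V, t)))      # matches W**(1-gamma) v'(t) / (1-gamma)
print(sp.simplify(sp.diff(V, W)))      # matches W**(-gamma) v(t)
print(sp.simplify(sp.diff(V, W, 2)))   # matches -gamma W**(-gamma-1) v(t)
```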

Substitute derivatives:

\[ \rho \cdot \frac{1}{1-\gamma} v(t) W^{1-\gamma} = \max_{C,\omega} \left\{ \frac{C^{1-\gamma}}{1-\gamma} + \frac{1}{1-\gamma} W^{1-\gamma} v'(t) + W^{-\gamma} v(t) [r W + \omega W (\mu - r) - C] + \frac{1}{2} (-\gamma W^{-\gamma-1} v(t)) W^2 \omega^2 \sigma^2 \right\} \]

Divide both sides by \(\dfrac{W^{1-\gamma}}{1-\gamma}\) (i.e., multiply by \(\tfrac{1-\gamma}{W^{1-\gamma}}\)):

\[ \rho v(t) = \max_{C,\omega} \left\{ C^{1-\gamma} W^{-(1-\gamma)} + v'(t) + (1-\gamma) v(t) \big[ r + \omega (\mu - r) - C/W \big] - \tfrac{1}{2} \gamma (1-\gamma) v(t) \omega^2 \sigma^2 \right\} \]

Continued HJB Solution

First-order condition for consumption:

\[ \frac{\partial}{\partial C} \left[ C^{1-\gamma} W^{-(1-\gamma)} - (1-\gamma) v(t) \frac{C}{W} \right] = 0 \] \[ (1-\gamma) C^{-\gamma} W^{-(1-\gamma)} - (1-\gamma) v(t) W^{-1} = 0 \] Dividing by \((1-\gamma)\): \[ C^{-\gamma} W^{-(1-\gamma)} = v(t) W^{-1} \quad\Longrightarrow\quad C^{-\gamma} = v(t) W^{-\gamma} \]

\[ \frac{C^*}{W} = v(t)^{-1/\gamma} \quad \Longrightarrow \quad C^* = W\, v(t)^{-1/\gamma} \]

First-order condition for portfolio:

\[ \frac{\partial}{\partial \omega} \left[ (1-\gamma) v(t)\, \omega (\mu - r) - \frac{1}{2} \gamma (1-\gamma) v(t)\, \omega^2 \sigma^2 \right] = 0 \] \[ (1-\gamma) v(t) (\mu - r) - \gamma (1-\gamma) v(t)\, \omega \sigma^2 = 0 \] \[ \omega^* = \frac{\mu - r}{\gamma \sigma^2} \]

Substitute optimal controls back into the normalized HJB and simplify:

\[ \rho v(t) = v'(t) + (1-\gamma) r \, v(t) + (1-\gamma) \, \frac{(\mu - r)^2}{2\,\gamma\,\sigma^2} \, v(t) + \gamma\, v(t)^{1 - 1/\gamma} \]
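
A quick numerical check of this substitution: evaluate the maximand of the normalized HJB at \(C^*/W=v^{-1/\gamma}\) and \(\omega^*=(\mu-r)/(\gamma\sigma^2)\) and compare with the simplified right-hand side above. The parameter values and the candidate pair \((v, v')\) are arbitrary illustrative choices.

```python
import numpy as np

# Arbitrary positive parameter values (assumptions for the check)
gamma, rho, r, mu, sigma = 3.0, 0.04, 0.02, 0.08, 0.20
v, vprime = 50.0, 0.1                # candidate v(t) and v'(t) at some date t

def maximand(c, w):
    """RHS of the normalized HJB as a function of c = C/W and w = omega."""
    return (c**(1 - gamma) + vprime
            + (1 - gamma) * v * (r + w * (mu - r) - c)
            - 0.5 * gamma * (1 - gamma) * v * w**2 * sigma**2)

c_star = v**(-1.0 / gamma)
w_star = (mu - r) / (gamma * sigma**2)

lhs = maximand(c_star, w_star)
rhs = (vprime + (1 - gamma) * r * v
       + (1 - gamma) * (mu - r)**2 / (2 * gamma * sigma**2) * v
       + gamma * v**(1 - 1.0 / gamma))

print(f"maximand at (C*/W, omega*) : {lhs:.10f}")
print(f"simplified RHS             : {rhs:.10f}")   # the two should coincide
```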

Continued HJB Solution

Simplify step by step:

\[ v'(t) = \Big[ \rho - (1-\gamma) r - (1-\gamma) \tfrac{(\mu - r)^2}{2 \gamma \sigma^2} \Big] v(t) - \gamma\, v(t)^{1 - 1/\gamma} \]

  • This is a nonlinear ODE. It becomes linear after the change of variables \[ y(t) := v(t)^{1/\gamma} \quad\Longrightarrow\quad v(t) = y(t)^\gamma,\ \ v'(t) = \gamma y(t)^{\gamma-1} y'(t). \] Derivation (holds for any \(t\)): starting from \[ v'(t) = a\,v(t) - \gamma\,v(t)^{1 - 1/\gamma},\quad a := \rho - (1-\gamma) r - (1-\gamma)\,\frac{(\mu-r)^2}{2\gamma\sigma^2}, \] use \(v=y^\gamma\) and \(v' = \gamma y^{\gamma-1} y'\) to get \[ \gamma\,y^{\gamma-1}\,y' = a\,y^\gamma - \gamma\,y^{\gamma-1}. \] Divide both sides by \(\gamma\,y^{\gamma-1}>0\) (since \(v>0\) under power utility) to obtain the linear ODE \[ y'(t) = \frac{a}{\gamma}\,y(t) - 1,\qquad a := \rho - (1-\gamma) r - (1-\gamma)\,\frac{(\mu-r)^2}{2\gamma\sigma^2}. \] Under stationarity, set \(y'(t)=0\) to recover \(\bar y = \gamma/a\).

Closed-form solution for v(t)

  • The linear ODE for \(y\) has the explicit solution (for \(a\ne0\)) \[ y(t) = \frac{\gamma}{a} + \Big(y(0) - \frac{\gamma}{a}\Big) e^{\frac{a}{\gamma} t},\qquad y(0)=v(0)^{1/\gamma}. \] Hence \[ v(t) = y(t)^{\gamma} = \left( \frac{\gamma}{a} + \Big(v(0)^{1/\gamma} - \frac{\gamma}{a}\Big) e^{\frac{a}{\gamma} t} \right)^{\!\gamma}. \] If \(a=0\), then \(y'(t)=-1\), so \(y(t)=y(0)-t\) and \(v(t)=y(t)^{\gamma}\) for as long as \(y\) remains positive.
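
The sketch below evaluates this closed form for a few starting values \(y(0)\) under hypothetical parameters: \(y(t)\) stays flat only when \(y(0)=\gamma/a\) (the stationary case discussed next); for \(a>0\), any other starting value drifts away from \(\gamma/a\) as \(t\) grows.

```python
import numpy as np

# Hypothetical parameters (assumptions)
gamma, rho, r, mu, sigma = 3.0, 0.05, 0.02, 0.08, 0.20
a = rho - (1 - gamma) * r - (1 - gamma) * (mu - r)**2 / (2 * gamma * sigma**2)
y_bar = gamma / a                      # stationary level gamma / a

def y(t, y0):
    """Closed-form solution of y'(t) = (a/gamma) y(t) - 1 (case a != 0)."""
    return gamma / a + (y0 - gamma / a) * np.exp(a / gamma * t)

for y0 in (0.9 * y_bar, y_bar, 1.1 * y_bar):
    traj = [y(t, y0) for t in (0.0, 10.0, 50.0)]
    print(f"y(0) = {y0:8.3f} -> y(0), y(10), y(50) = "
          + ", ".join(f"{val:10.3f}" for val in traj))
```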

Stationary consumption (fixed fraction of wealth)

  • Assume a stationary policy with consumption a constant fraction of wealth: \[ C_t = \kappa W_t,\qquad \kappa>0\ \text{constant}. \]

  • From the FOC obtained earlier, \(\dfrac{C_t^*}{W_t} = v(t)^{-1/\gamma} = y(t)^{-1}\). Stationarity therefore requires \(v(t)\equiv \bar v\) (equivalently \(y(t)\equiv \bar y\)) constant and \[ 0 = y'(t) = \frac{a}{\gamma}\,\bar y - 1 \quad\Longrightarrow\quad \bar y = \frac{\gamma}{a},\qquad \kappa = \frac{C_t^*}{W_t} = \bar y^{-1} = \frac{a}{\gamma}. \]

    Small derivation and link to the FOC:

    • Recall the linear ODE for \(y\): \(\;y'(t)=\dfrac{a}{\gamma}\,y(t)-1\). A stationary policy means \(y(t)\) is constant over time, so \(y'(t)=0\) and hence \[ 0 = \frac{a}{\gamma}\,\bar y - 1 \quad \Longrightarrow \quad \bar y = \frac{\gamma}{a}. \]

Stationary consumption (fixed fraction of wealth) continued

  • From the consumption FOC we already obtained \(\dfrac{C_t^*}{W_t}=v(t)^{-1/\gamma}=y(t)^{-1}\). Evaluated at stationarity, this gives \[ \kappa := \frac{C_t^*}{W_t}=\bar y^{-1}=\frac{a}{\gamma}. \] Interpretation: a larger \(a\) (effective impatience net of return and risk adjustments) raises the optimal consumption-wealth ratio \(\kappa\).
  • Using the definition of \(a\): \[ \kappa = \frac{1}{\gamma}\Big[\rho - (1-\gamma) r - (1-\gamma)\,\frac{(\mu - r)^2}{2\gamma\sigma^2}\Big]. \]
  • Feasibility/boundedness: \(\kappa>0\ \Leftrightarrow\ a>0\ \Leftrightarrow\ \rho > (1-\gamma) r + (1-\gamma)\,\dfrac{(\mu - r)^2}{2\gamma\sigma^2}\).

Stationary solution summary

  • Constant value-function coefficient: \[ v(t) \equiv \bar v = \Big(\tfrac{\gamma}{a}\Big)^{\!\gamma},\qquad a=\Big[\rho - (1-\gamma) r - (1-\gamma)\,\tfrac{(\mu - r)^2}{2\gamma\sigma^2}\Big]. \]
  • Optimal stationary policies: \[ \frac{C_t^*}{W_t} = \kappa = \frac{a}{\gamma},\qquad \omega_t^* = \frac{\mu - r}{\gamma\,\sigma^2}. \]
  • Stationarity and bounded value require \[ \kappa>0 \ \Leftrightarrow\ a>0 \ \Leftrightarrow\ \rho > (1-\gamma) r + (1-\gamma)\,\tfrac{(\mu - r)^2}{2\gamma\sigma^2}. \]
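
A minimal sketch computing the stationary objects for illustrative (uncalibrated) parameter values and checking the feasibility condition \(a>0\):

```python
# Illustrative parameters (assumptions, for illustration only)
gamma, rho, r, mu, sigma = 3.0, 0.05, 0.02, 0.08, 0.20

# a = rho - (1 - gamma) r - (1 - gamma) (mu - r)^2 / (2 gamma sigma^2)
a = rho - (1 - gamma) * r - (1 - gamma) * (mu - r)**2 / (2 * gamma * sigma**2)

kappa = a / gamma                            # stationary consumption-wealth ratio C*/W
omega_star = (mu - r) / (gamma * sigma**2)   # constant risky-asset weight
v_bar = (gamma / a)**gamma                   # constant value-function coefficient

print(f"a      = {a:.4f}  (feasible / bounded value iff a > 0)")
print(f"C*/W   = {kappa:.4f}")
print(f"omega* = {omega_star:.4f}")
print(f"v_bar  = {v_bar:.2f}")
```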

Solving for the stationary value function: Part 2

  • We assumed separability between \(W\) and \(t\) in the value function, e.g. \(V(W,t)=f(W)v(t)\), and argued that a good candidate for \(f(W)\) has the same shape as the utility function.
  • The reason is homogeneity: with CRRA utility, a permanent proportional increase in consumption scales lifetime utility, and hence the value function, by the same factor \(\Delta^{1-\gamma}\).

For \(\Delta>0\), \[ \begin{aligned} \mathbb{E}\Big[\int_t^\infty e^{-\rho(s-t)} U(C_s\times \Delta) ds \Big] &= \Delta^{1-\gamma} \mathbb{E}\Big[\int_t^\infty e^{-\rho(s-t)} U(C_s) ds \Big] \\ &= \Delta^{1-\gamma} V(W, t) \\ &= V(\Delta W,t) \end{aligned} \]

Continuation: retrieving the shape of the value function

  • Assume a stationary problem \(V=V(W)\)
  • The HJB becomes

\[ \rho V(W) = \max_{C,\omega} \left\{ U(C) + \frac{\partial V}{\partial W} (\omega W (\mu - r) + W r - C) + \frac{1}{2} \frac{\partial^2 V}{\partial W^2} W^2 \omega^2 \sigma^2 \right\} \]

First Order Conditions

\[ \begin{aligned} U'(C) &= \frac{\partial V}{\partial W}\\ C^{-\gamma} &= \frac{\partial V}{\partial W} \\ C &= \Big(\frac{\partial V}{\partial W}\Big)^{-\frac{1}{\gamma}} \end{aligned} \]

\[ \begin{aligned} \frac{\partial V}{\partial W} (W(\mu-r)) + \frac{\partial^2 V}{\partial W^2}(\omega W^2 \sigma^2) &= 0\\ \omega &= \frac{- \frac{\partial V}{\partial W} (W(\mu-r)) }{\frac{\partial^2 V}{\partial W^2} W^2 \sigma^2} \end{aligned} \]

Continuation

From the FOCs \[ C=(V_W)^{-1/\gamma},\qquad \omega^*=-\,\frac{V_W\,W(\mu-r)}{V_{WW}\,W^2\sigma^2}=-\,\frac{\mu-r}{\sigma^2}\,\frac{V_W}{W V_{WW}}, \] plug back into the HJB: \[ \rho V(W) = \frac{(V_W)^{-\frac{1-\gamma}{\gamma}}}{1-\gamma} + V_W\big(\omega^* W(\mu-r)+Wr-C\big) + \tfrac12 V_{WW} W^2 (\omega^*)^2 \sigma^2. \]

Compute the \(\omega\)-parts:

\[ \omega^* W(\mu-r)=-\,\frac{V_W(\mu-r)^2}{V_{WW}\sigma^2},\qquad \tfrac12 V_{WW}W^2(\omega^*)^2\sigma^2=\frac12\,\frac{V_W^2(\mu-r)^2}{V_{WW}\sigma^2}. \] Hence \[ V_W\big(\omega^* W(\mu-r)\big)+\tfrac12 V_{WW}W^2(\omega^*)^2\sigma^2 = -\,\frac12\,\frac{(\mu-r)^2}{\sigma^2}\,\frac{V_W^2}{V_{WW}}. \]

Also \(C=(V_W)^{-1/\gamma}\) gives

\[ U(C)=\frac{(V_W)^{-\frac{1-\gamma}{\gamma}}}{1-\gamma},\qquad -\,V_W C = -(V_W)^{1-\frac1\gamma}. \]

Continuation

Therefore, the HJB reduces to the following nonlinear second-order ODE in \(V\): \[ \begin{aligned} \rho V(W) &=\frac{(V_W)^{1-\frac{1}{\gamma}}}{1-\gamma} + r\,W\,V_W -(V_W)^{1-\frac1\gamma} -\frac{1}{2}\,\frac{(\mu-r)^2}{\sigma^2}\,\frac{V_W^{2}}{V_{WW}} \\ &=\Big[\frac{1}{1-\gamma}-1\Big](V_W)^{1-\frac{1}{\gamma}} + r\,W\,V_W -\frac{1}{2}\,\frac{(\mu-r)^2}{\sigma^2}\,\frac{V_W^{2}}{V_{WW}} \end{aligned} \]

What do we know about \(V\)?

\[ V(\Delta W) = \Delta^{1-\gamma} V(W) \]

Only a function of the form \(K\, W^{1-\gamma}\), with \(K\) a constant, has that property. Define the guess

\[ V(W) = \frac{K}{1-\gamma} W^{1-\gamma} \]

Solving the ODE (by homogeneity)

The problem is scale-invariant (CRRA utility, linear wealth dynamics, no other state), so \(V(\lambda W)=\lambda^{\,1-\gamma}V(W)\). Hence \[ V(W)=\frac{K}{1-\gamma}\,W^{1-\gamma},\qquad K>0. \] Then \[ V_W=K\,W^{-\gamma},\qquad V_{WW}=-\gamma K\,W^{-\gamma-1}. \]

Plug into the ODE:

  • \((V_W)^{-\frac{1-\gamma}{\gamma}}=K^{-\frac{1-\gamma}{\gamma}}W^{1-\gamma}\),
  • \((V_W)^{1-\frac1\gamma}=K^{\frac{\gamma-1}{\gamma}}W^{1-\gamma}\),
  • \(\dfrac{V_W^2}{V_{WW}}=-\dfrac{K}{\gamma}\,W^{1-\gamma}\),
  • \(r\,W\,V_W=rK\,W^{1-\gamma}\),
  • \(\rho V=\rho\,\dfrac{K}{1-\gamma}\,W^{1-\gamma}\).

Continuation

Divide by \(W^{1-\gamma}\) to obtain an algebraic equation for \(K\): \[ \rho\,\frac{K}{1-\gamma} =\frac{K^{-\frac{1-\gamma}{\gamma}}}{1-\gamma} +rK - K^{\frac{\gamma-1}{\gamma}} +\frac{1}{2}\frac{(\mu-r)^2}{\sigma^2}\,\frac{K}{\gamma}. \]

Let \(y:=K^{1/\gamma}\) (so \(K=y^\gamma\)). Using \(K^{-\frac{1-\gamma}{\gamma}}=y^{\gamma-1}\) and \(K^{\frac{\gamma-1}{\gamma}}=y^{\gamma-1}\), this collapses to \[ \rho\,y =\gamma+(1-\gamma)\,r\,y +\frac{1-\gamma}{2\gamma}\frac{(\mu-r)^2}{\sigma^2}\,y. \] Thus \[ \boxed{\; y=\frac{\gamma}{\rho-(1-\gamma)\!\left(r+\frac{(\mu-r)^2}{2\gamma\sigma^2}\right)} \;},\qquad K=y^\gamma. \]

Continuation

Therefore the value function and policies are \[ \boxed{\; V(W)=\frac{W^{1-\gamma}}{1-\gamma}\left[\frac{\gamma}{\rho-(1-\gamma)\!\left(r+\frac{(\mu-r)^2}{2\gamma\sigma^2}\right)}\right]^{\!\gamma} \;} \]

Optimal controls: \[ \omega^*=\frac{\mu-r}{\gamma\sigma^2},\qquad C^*=(V_W)^{-1/\gamma}=\frac{W}{y} \quad\Longrightarrow\quad \frac{C^*}{W}=y^{-1} =\frac{\rho-(1-\gamma)\!\left(r+\frac{(\mu-r)^2}{2\gamma\sigma^2}\right)}{\gamma}. \]
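
As a consistency check, the sketch below computes \(K=y^{\gamma}\) for illustrative parameters and verifies numerically that \(V(W)=\frac{K}{1-\gamma}W^{1-\gamma}\), together with the optimal controls, satisfies the stationary HJB at several wealth levels.

```python
import numpy as np

# Illustrative parameters (assumptions, not calibrated values)
gamma, rho, r, mu, sigma = 3.0, 0.05, 0.02, 0.08, 0.20

y = gamma / (rho - (1 - gamma) * (r + (mu - r)**2 / (2 * gamma * sigma**2)))
K = y**gamma
omega_star = (mu - r) / (gamma * sigma**2)

def V(w):    return K / (1 - gamma) * w**(1 - gamma)
def V_W(w):  return K * w**(-gamma)
def V_WW(w): return -gamma * K * w**(-gamma - 1)

for w in (0.5, 1.0, 2.0, 5.0):
    C_star = V_W(w)**(-1.0 / gamma)               # = w / y
    U = C_star**(1 - gamma) / (1 - gamma)         # power utility of consumption
    rhs = (U + V_W(w) * (omega_star * w * (mu - r) + w * r - C_star)
           + 0.5 * V_WW(w) * w**2 * omega_star**2 * sigma**2)
    print(f"W = {w:4.1f}: rho*V = {rho * V(w):12.6f}, HJB RHS = {rhs:12.6f}")
```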

Log Utility Solution

Log Utility Assumptions

Utility function:

\[ U(C) = \ln C \]

Properties:

  • Constant relative risk aversion: \(RRA = 1\)
  • Elasticity of intertemporal substitution: \(EIS = 1\)

Guess for value function:

\[ V(W,t) = A \ln W + u(t) \] where the constant \(A\) will be pinned down by matching the \(\ln W\) terms below.

Log Utility HJB Solution

Substitute into HJB:

\[ \rho (A \ln W + u(t)) = \max_{C,\omega} \left\{ \ln C + \frac{\partial V}{\partial t} + V_W [r W + \omega W (\mu - r) - C] + \frac{1}{2} V_{WW} W^2 \omega^2 \sigma^2 \right\} \]

Compute derivatives:

\[ \frac{\partial V}{\partial t} = u'(t), \qquad \frac{\partial V}{\partial W} = \frac{A}{W}, \qquad \frac{\partial^2 V}{\partial W^2} = -\frac{A}{W^2} \]

Continued Log Utility HJB Solution

Substitute:

\[ \rho (A \ln W + u(t)) = \max_{C,\omega} \left\{ \ln C + u'(t) + \frac{A}{W} [r W + \omega W (\mu - r) - C] - \frac{1}{2} \frac{A}{W^2} W^2 \omega^2 \sigma^2 \right\} \]

\[ \rho A \ln W + \rho u(t) = \max_{C,\omega} \left\{ \ln C + u'(t) + A \Big[ r + \omega (\mu - r) - \frac{C}{W} \Big] - \frac{1}{2} A \omega^2 \sigma^2 \right\} \]

First-order condition for consumption:

\[ \frac{\partial}{\partial C} \Big[ \ln C - A \frac{C}{W} \Big] = 0 \] \[ \frac{1}{C} - \frac{A}{W} = 0 \] \[ C^* = \frac{W}{A} \]

Final Log Utility HJB Solution

First-order condition for portfolio:

\[ \frac{\partial}{\partial \omega} \Big[ A \omega (\mu - r) - \frac{1}{2} A \omega^2 \sigma^2 \Big] = 0 \] \[ A (\mu - r) - A \omega \sigma^2 = 0 \] \[ \omega^* = \frac{\mu - r}{\sigma^2} \]

Substitute back:

\[ \rho A \ln W + \rho u(t) = \ln \frac{W}{A} + u'(t) + A \Big[ r + \frac{(\mu - r)^2}{\sigma^2} - \frac{1}{A} \Big] - \frac{1}{2} A \left( \frac{\mu - r}{\sigma^2} \right)^2 \sigma^2 \]

\[ \rho A \ln W + \rho u(t) = \ln W - \ln A + u'(t) + A r + A \frac{(\mu - r)^2}{\sigma^2} - 1 - A \frac{(\mu - r)^2}{2 \sigma^2} \]

\[ \rho A \ln W + \rho u(t) = \ln W - \ln A + u'(t) + A r + A \frac{(\mu - r)^2}{2 \sigma^2} - 1 \]

Final Log Utility HJB Solution

Match the \(\ln W\) terms: \(\rho A \ln W = \ln W\) for all \(W\) requires \(\rho A = 1\), i.e. \(A = 1/\rho\), and hence \(C^* = W/A = \rho W\). Then collect the constant terms (with \(A = 1/\rho\)):

\[ \rho u(t) = u'(t) + \ln \rho + \frac{r}{\rho} + \frac{(\mu - r)^2}{2 \rho \sigma^2} - 1 \]

ODE for \(u(t)\):

\[ u'(t) = \rho u(t) - \ln \rho - \frac{r}{\rho} - \frac{(\mu - r)^2}{2 \rho \sigma^2} + 1 \]

Solution:

This is a linear ODE of the form

\[ u'(t) = \rho u(t) + c, \qquad c := 1 - \ln \rho - \frac{r}{\rho} - \frac{(\mu - r)^2}{2 \rho \sigma^2}, \]

which has the general solution

\[ u(t) = K e^{\rho t} - \frac{c}{\rho}, \]

where \(K\) is a constant determined by boundary conditions.
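
A minimal sketch, for illustrative parameter values, computing the constant \(c\), the bounded solution \(u(t)=-c/\rho\) (i.e. \(K=0\), see the next slide), and the log-utility policies \(C^*=\rho W\) and \(\omega^*=(\mu-r)/\sigma^2\):

```python
import numpy as np

# Illustrative parameters (assumptions, for illustration only)
rho, r, mu, sigma = 0.05, 0.02, 0.08, 0.20

# Constant in the linear ODE u'(t) = rho u(t) + c  (from the derivation above)
c = 1.0 - np.log(rho) - r / rho - (mu - r)**2 / (2 * rho * sigma**2)

u_bar = -c / rho                       # bounded solution: set K = 0, so u(t) = -c/rho
omega_star = (mu - r) / sigma**2       # log-utility portfolio weight
consumption_ratio = rho                # C*/W = rho under log utility

print(f"c             = {c:.4f}")
print(f"u(t) = -c/rho = {u_bar:.4f}")
print(f"omega*        = {omega_star:.4f}")
print(f"C*/W          = {consumption_ratio:.4f}")
```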

Boundary Conditions and Long-Term Behavior

  • How to pin down any constants in the solution?
  • Look at behavior at time 0 and as time goes to infinity.
  • For example, make the value function bounded as time goes to infinity.
  • Conditions at time zero (or at boundaries of the state space, e.g. zero wealth) are normally set depending on the context of the problem.

Boundary conditions in portfolio choice

  • Initial wealth: \(W_0 > 0\) (given)
  • Terminal condition: as \(t \to \infty\), we typically want the value function to remain bounded.
  • This often implies that \(u(t)\) should not grow faster than exponentially with rate \(\rho\).
  • In the log utility case, we can set \(K = 0\) to ensure boundedness as \(t \to \infty\).
  • How do we pin down \(v(0)\) in the power utility case? Merton imposes stationarity, so \(v(t)\) is constant over time. In finite-horizon setups, \(v(T)\) would instead be determined by the terminal condition, as in the sketch below.
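
To illustrate the finite-horizon case, the sketch below integrates the linear ODE \(y'(t)=\tfrac{a}{\gamma}y(t)-1\) backward from a terminal condition. The choice \(v(T)=0\) (hence \(y(T)=0\), i.e. no bequest) and all parameter values are illustrative assumptions; the corresponding closed form is \(y(t)=\tfrac{\gamma}{a}\big(1-e^{-\frac{a}{\gamma}(T-t)}\big)\), which approaches the stationary level \(\gamma/a\) as the horizon grows.

```python
import numpy as np

# Illustrative parameters (assumptions)
gamma, rho, r, mu, sigma = 3.0, 0.05, 0.02, 0.08, 0.20
a = rho - (1 - gamma) * r - (1 - gamma) * (mu - r)**2 / (2 * gamma * sigma**2)

T, n_steps = 40.0, 40_000
dt = T / n_steps

# Terminal condition v(T) = 0 (no bequest), i.e. y(T) = 0
y_val = 0.0
for _ in range(n_steps):                      # Euler steps backward in time
    y_val = y_val - dt * (a / gamma * y_val - 1.0)

y0_closed = gamma / a * (1.0 - np.exp(-(a / gamma) * T))
print(f"y(0), backward Euler : {y_val:.4f}")
print(f"y(0), closed form    : {y0_closed:.4f}")
print(f"stationary gamma/a   : {gamma / a:.4f}")   # recovered as T -> infinity
```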