THE JOURNAL OF FINANCE • VOL. LXVIII, NO. 3 • JUNE 2013

Can Time-Varying Risk of Rare Disasters Explain Aggregate Stock Market Volatility?

JESSICA A. WACHTER∗
ABSTRACT

Why is the equity premium so high, and why are stocks so volatile? Why are stock returns in excess of government bill rates predictable? This paper proposes an answer to these questions based on a time-varying probability of a consumption disaster. In the model, aggregate consumption follows a normal distribution with low volatility most of the time, but with some probability of a consumption realization far out in the left tail. The possibility of this poor outcome substantially increases the equity premium, while time-variation in the probability of this outcome drives high stock market volatility and excess return predictability.
THE MAGNITUDE OF THE expected excess return on stocks relative to bonds (the equity premium) constitutes one of the major puzzles in financial economics. As Mehra and Prescott (1985) show, the fluctuations observed in the consumption growth rate over U.S. history predict an equity premium that is far too small, assuming reasonable levels of risk aversion.¹ One proposed explanation is that the return on equities is high to compensate investors for the risk of a rare disaster (Rietz (1988)). An open question has therefore been whether the risk is sufficiently high, and the rare disaster sufficiently severe, to quantitatively explain the equity premium. Recently, however, Barro (2006) shows that it is possible to explain the equity premium using such a model when the probability of a rare disaster is calibrated to international data on large economic declines.

While the models of Rietz (1988) and Barro (2006) advance our understanding of the equity premium, they fall short in other respects. Most importantly, these models predict that the volatility of stock market returns equals the volatility of dividends. Numerous studies show, however, that this is not the case. In fact, there is excess stock market volatility: the volatility of stock returns far
∗ Jessica A. Wachter is with the Department of Finance, The Wharton School. For helpful comments, I thank Robert Barro, John Campbell, Mikhail Chernov, Gregory Duffee, Xavier Gabaix, Paul Glasserman, Francois Gourio, Campbell Harvey, Dana Kiku, Bruce Lehmann, Christian Juillard, Monika Piazzesi, Nikolai Roussanov, Jerry Tsai, Pietro Veronesi, and seminar participants at the 2008 NBER Summer Institute, the 2008 SED Meetings, the 2011 AFA Meetings, Brown University, the Federal Reserve Bank of New York, MIT, University of Maryland, the University of Southern California, and The Wharton School. I am grateful for financial support from the Aronson+Johnson+Ortiz fellowship through the Rodney L. White Center for Financial Research. Thomas Plank and Leonid Spesivtsev provided excellent research assistance.

¹ Campbell (2003) extends this analysis to multiple countries.
DOI: 10.1111/jofi.12018
exceeds that of dividends (e.g., Shiller (1981), LeRoy and Porter (1981), Keim
and Stambaugh (1986), Campbell and Shiller (1988), Cochrane (1992), Hodrick
(1992)). While the models of Barro and Rietz address the equity premium
puzzle, they do not address this volatility puzzle.
In the original model of Barro (2006), agents have power utility and the endowment process is subject to large and relatively rare consumption declines (disasters). This paper proposes two modifications. First, rather than being
constant, the probability of a disaster is stochastic and varies over time. Sec-
ond, the representative agent, rather than having power utility preferences,
has recursive preferences. I show that such a model can generate volatility of
stock returns close to that in the data at reasonable values of the underlying
parameters. Moreover, the model implies reasonable values for the mean and
volatility of the government bill rate.
Both time-varying disaster probabilities and recursive preferences are necessary to fit the model to the data. The role of time-varying disaster probabilities
is clear; the role of recursive preferences perhaps less so. Recursive preferences,
introduced by Kreps and Porteus (1978) and Epstein and Zin (1989), retain the
appealing scale-invariance of power utility but allow for separation between
the willingness to take on risk and the willingness to substitute over time.
Power utility requires that these aspects of preferences are driven by the same
parameter, leading to the counterfactual prediction that a high price–dividend
ratio predicts a high excess return. Increasing the agent’s willingness to substi-
tute over time reduces the effect of the disaster probability on the risk-free rate.
With recursive preferences, this can be accomplished without simultaneously
reducing the agent’s risk aversion.
The model in this paper allows for time-varying disaster probabilities and
recursive utility with unit elasticity of intertemporal substitution (EIS). The
assumption that the EIS is equal to one allows the model to be solved in
closed form up to an indefinite integral. A time-varying disaster probability is
modeled by allowing the intensity for jumps to follow a square-root process (Cox,
Ingersoll, and Ross (1985)). The solution for the model reveals that allowing
the probability of a disaster to vary not only implies a time-varying equity
premium, but also increases the level of the equity premium. The dynamic
nature of the model therefore leads the equity premium to be higher than what
static considerations alone would predict.
This model can quantitatively match high equity volatility and the pre-
dictability of excess stock returns by the price–dividend ratio. Generating long-
run predictability of excess stock returns without generating counterfactual
long-run predictability in consumption or dividend growth is a central chal-
lenge for general equilibrium models of the stock market. This model meets
the challenge: while stock returns are predictable, consumption and dividend
growth are only predictable ex post if a disaster actually occurs. Because disas-
ters occur rarely, the amount of consumption predictability is quite low, just as
in the data. A second challenge for models of this type is to generate volatility
in stock returns without counterfactual volatility in the government bill rate.
This model meets this challenge as well. The model is capable of matching
the low volatility of the government bill rate because of two competing effects.
When the risk of a disaster is high, rates of return fall because of precaution-
ary savings. However, the probability of government default (either outright
or through inflation) rises. Investors therefore require greater compensation to
hold government bills.
As I describe above, adding dynamics to the rare disaster framework allows
for a number of new insights. Note, however, that the dynamics in this paper
are relatively simple. A single state variable (the probability of a rare disaster)
drives all of the results in the model. This is parsimonious, but also unrealistic:
it implies, for instance, that the price–dividend ratio and the risk-free rate are
perfectly negatively correlated. It also implies a degree of comovement among
assets that would not hold in the data. In Section I.D, I suggest ways in which
this weakness might be overcome while still maintaining tractability.
Several recent papers also address the potential of rare disasters to explain
the aggregate stock market. Gabaix (2012) assumes power utility for the representative agent, while also assuming the economy is driven by a linearity-generating process (see Gabaix (2008)) that combines time-variation in the probability of a rare disaster with time-variation in the degree to which dividends respond to a disaster. This set of assumptions allows him to derive
closed-form solutions for equity prices as well as for prices of other assets. In
Gabaix’s numerical calibration, only the degree to which dividends respond to
the disaster varies over time. Therefore, the economic mechanism driving stock
market volatility in Gabaix’s model is quite different from the one considered
here. Barro (2009) and Martin (2008) propose models with a constant disaster
probability and recursive utility. In contrast, the model considered here focuses
on the case of time-varying disaster probabilities. Longstaff and Piazzesi (2004)
propose a model in which consumption and the ratio between consumption and
the dividend are hit by contemporaneous downward jumps; the ratio between
consumption and dividends then reverts back to a long-run mean. They assume
a constant jump probability and power utility. In contemporaneous independent work, Gourio (2008b) specifies a model in which the probability of a disaster varies between two discrete values. He solves this model numerically assuming recursive preferences. A related approach is taken by Veronesi (2004), who
assumes that the drift rate of the dividend process follows a Markov switching
process, with a small probability of falling into a low state. While the physical
probability of a low state is constant, the representative investor’s subjective
probability is time-varying due to learning. Veronesi assumes exponential util-
ity; this allows for the inclusion of learning but makes it difficult to assess the
magnitude of the excess volatility generated through this mechanism.
In this paper, the conditional distribution of consumption growth becomes
highly nonnormal when a disaster is relatively likely. Thus, the paper also
relates to a literature that examines the effects of nonnormalities on risk
premia. Harvey and Siddique (2000) and Dittmar (2002) examine the role of higher-order moments in the cross-section; unlike the present paper, they take
the market return as given. Similarly to the present paper, Weitzman (2007)
constructs an endowment economy with nonnormal consumption growth.
His model differs from the present one in that he assumes independent and
identically distributed consumption growth (with a Bayesian agent learning
about the unknown variance), and he focuses on explaining the equity
premium.
Finally, this paper draws on a literature that derives asset pricing results
assuming endowment processes that include jumps, with a focus on option
pricing (an early reference is Naik and Lee (1990)). Liu, Pan, and Wang (2005)
consider an endowment process in which jumps occur with a constant inten-
sity; their focus is on uncertainty aversion but they also consider recursive
utility. My model departs from theirs in that the probability of a jump varies
over time. Drechsler and Yaron (2011) show that a model with jumps in the
volatility of the consumption growth process can explain the behavior of im-
plied volatility and its relation to excess returns. Eraker and Shaliastovich
(2008) also model jumps in the volatility of consumption growth; they focus on
fitting the implied volatility curve. Both papers assume an EIS greater than
one and derive approximate analytical and numerical solutions. Santa-Clara
and Yan (2006) consider time-varying jump intensities, but restrict attention to
a model with power utility and implications for options. In contrast, the model
considered here focuses on recursive utility and implications for the aggregate
market.
The outline of the paper is as follows. Section I describes and solves the model,
Section II discusses the calibration and simulation, and Section III concludes.
I. Model
A. Assumptions
I assume an endowment economy with an infinitely lived representative agent. This setup is standard, but I assume a novel process for the endowment.
Aggregate consumption (the endowment) follows the stochastic process

\[
dC_t = \mu C_{t-}\,dt + \sigma C_{t-}\,dB_t + \left(e^{Z_t} - 1\right) C_{t-}\,dN_t, \tag{1}
\]
where B_t is a standard Brownian motion and N_t is a Poisson process with time-varying intensity λ_t.² This intensity follows the process

\[
d\lambda_t = \kappa\left(\bar{\lambda} - \lambda_t\right)dt + \sigma_\lambda \sqrt{\lambda_t}\,dB_{\lambda,t}, \tag{2}
\]
where B_{λ,t} is also a standard Brownian motion, and B_t, B_{λ,t}, and N_t are assumed to be independent. I assume Z_t is a random variable whose time-invariant distribution ν is independent of N_t, B_t, and B_{λ,t}. I use the notation E_ν to denote expectations of functions of Z_t taken with respect to the ν-distribution. The t subscript on Z_t will be omitted when not essential for clarity.
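The dynamics in (1) and (2) can be simulated directly. The following is a hypothetical sketch, not the paper's method: it uses an Euler discretization with a Bernoulli approximation to the Poisson increment (at most one jump per step) and a fixed jump size z in log consumption. All parameter values are placeholders chosen only for illustration, not the paper's calibration.

```python
import numpy as np

# Illustrative Euler discretization of processes (1) and (2).
# Parameter values are placeholders, not the paper's calibration.
def simulate_disaster_economy(mu=0.02, sigma=0.02, kappa=0.08, lam_bar=0.0355,
                              sigma_lam=0.067, z=-0.3, T=100.0, dt=1/12, seed=0):
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    C = np.empty(n + 1)
    lam = np.empty(n + 1)
    C[0], lam[0] = 1.0, lam_bar
    for i in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))        # shock to consumption
        dB_lam = rng.normal(0.0, np.sqrt(dt))    # independent shock to intensity
        # Bernoulli approximation to the Poisson increment dN_t:
        # at most one jump per step, occurring with probability lam*dt
        dN = 1 if rng.random() < lam[i] * dt else 0
        # equation (1): diffusion in normal times plus a jump of size e^z - 1
        C[i + 1] = C[i] * (1.0 + mu * dt + sigma * dB + (np.exp(z) - 1.0) * dN)
        # equation (2), with full truncation so the square root stays defined
        lam_pos = max(lam[i], 0.0)
        lam[i + 1] = lam[i] + kappa * (lam_bar - lam_pos) * dt \
            + sigma_lam * np.sqrt(lam_pos) * dB_lam
    return C, lam

C, lam = simulate_disaster_economy()
```

A path simulated this way shows long stretches of smooth consumption growth punctuated by occasional large declines, with the frequency of declines governed by the current level of λ_t.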
Assumptions (1) and (2) define C_t as a mixed jump-diffusion process. The diffusion term μC_{t−} dt + σC_{t−} dB_t represents the behavior of consumption in
² In what follows, all processes will be right continuous with left limits. Given a process x_t, the notation x_{t−} will denote lim_{s↑t} x_s, while x_t denotes lim_{s↓t} x_s.
normal times, and implies that, when no disaster takes place, log consumption growth over an interval Δt is normally distributed with mean (μ − σ²/2)Δt and variance σ²Δt. Disasters are captured by the Poisson process N_t, which allows for large instantaneous changes (“jumps”) in C_t. Roughly speaking, λ_t can be thought of as the disaster probability over the course of the next year.³ In what follows, I refer to λ_t as either the disaster intensity or the disaster probability depending on the context; these terms should be understood to have the same meaning. The instantaneous change in log consumption, should a disaster occur, is given by Z_t. Because the focus of the paper is on disasters, Z_t is assumed to be negative throughout.
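As a short verification (not in the original text), the normal-times distribution of log consumption growth follows from Itô's lemma applied to log C_t between jumps:

```latex
% Between jumps, dC_t = \mu C_t\,dt + \sigma C_t\,dB_t, so Ito's lemma gives
\[
d\log C_t
  = \frac{dC_t}{C_t} - \frac{1}{2}\,\frac{(dC_t)^2}{C_t^2}
  = \left(\mu - \tfrac{1}{2}\sigma^2\right)dt + \sigma\,dB_t,
\]
% and integrating over an interval \Delta t yields
\[
\log\frac{C_{t+\Delta t}}{C_t}
  \sim \mathcal{N}\!\left(\left(\mu - \tfrac{1}{2}\sigma^2\right)\Delta t,\;
  \sigma^2 \Delta t\right),
\]
% matching the mean and variance stated in the text.
```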
In the model, a disaster is therefore a large negative shock to consumption.
The model is silent on the reason for such a decline in economic activity; ex-
amples include a fundamental change in government policy, a war, a financial
crisis, and a natural disaster. Given my focus on time-variation in the likeli-
hood of a disaster, it is probably most realistic to think of the disaster as caused
by human beings (that is, the first three examples given above, rather than a
natural disaster). The recent financial crisis in the United States illustrates
such time-variation: following the series of events in the fall of 2008, there was
much discussion of a second Great Depression, brought on by a freeze in the
financial system. The conditional probability of a disaster seemed higher, say,
than in 2006.
As Cox, Ingersoll, and Ross (1985) discuss, the solution to (2) has a stationary distribution provided that κ > 0 and λ̄ > 0. This stationary distribution is Gamma with shape parameter 2κλ̄/σ_λ² and scale parameter σ_λ²/(2κ). If 2κλ̄ > σ_λ², the Feller condition (from Feller (1951)) is satisfied, implying a finite density at zero. The top panel of Figure 1 shows the probability density function corresponding to the stationary distribution. The bottom panel shows the probability that λ_t exceeds x as a function of x (the y-axis uses a log scale). That is, the panel shows the difference between one and the cumulative distribution function for λ_t. As this figure shows, the stationary distribution of λ_t is highly skewed. The skewness arises from the square-root term multiplying the Brownian shock in (2): this square-root term implies that high realizations of λ_t make the process more volatile, and thus further high realizations more likely than they would be under a standard autoregressive process. The model therefore implies that there are times when “rare” disasters can occur with high probability, but that these times are themselves unusual.
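The stationary Gamma distribution described above is straightforward to evaluate numerically. The sketch below uses placeholder parameter values (κ, λ̄, σ_λ are chosen for illustration only, not taken from the paper's calibration) and checks that the shape-scale parameterization recovers the long-run mean λ̄ and integrates to one.

```python
import math

# Stationary distribution of (2): Gamma with shape 2*kappa*lam_bar/sigma_lam^2
# and scale sigma_lam^2/(2*kappa). Parameter values are illustrative placeholders.
kappa, lam_bar, sigma_lam = 0.08, 0.0355, 0.067

shape = 2.0 * kappa * lam_bar / sigma_lam**2
scale = sigma_lam**2 / (2.0 * kappa)

# Feller condition: 2*kappa*lam_bar > sigma_lam^2 implies a finite density at zero
feller_holds = 2.0 * kappa * lam_bar > sigma_lam**2

def gamma_pdf(x, a, s):
    """Density of a Gamma random variable with shape a and scale s."""
    return x**(a - 1.0) * math.exp(-x / s) / (math.gamma(a) * s**a)

# The mean shape*scale collapses to lam_bar: the intensity fluctuates
# around its long-run level.
stationary_mean = shape * scale

# Crude left-endpoint Riemann sum checking that the density integrates to one
# (mass beyond 0.5 is negligible for these parameters)
step = 0.5 / 20000
total_mass = sum(gamma_pdf(i * step, shape, scale) for i in range(1, 20000)) * step
```

For shape parameters between 1 and 2, as here, the density is hump-shaped near zero with a long right tail, which is the skewness the text emphasizes.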
I assume the continuous-time analogue of the utility function defined by Epstein and Zin (1989) and Weil (1990) that generalizes power utility to allow for preferences over the timing of the resolution of uncertainty. The continuous-time version is formulated by Duffie and Epstein (1992); I make use of a limiting
³ More precisely, the probability of k jumps over the course of a short interval Δt is approximately equal to e^{−λ_t Δt}(λ_t Δt)^k / k!, where t is measured in years. In the calibrations that follow, the average value of λ_t is 0.0355, implying a 0.0343 probability of a single jump over the course of a year, a 0.0006 probability of two jumps, and so forth.
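The probabilities in footnote 3 follow directly from the Poisson formula it states; a minimal check, evaluated at the average intensity the footnote reports:

```python
import math

# Footnote 3's approximation: probability of k jumps over an interval of length
# dt is e^{-lam*dt} (lam*dt)^k / k!, here with lam = 0.0355 and dt = 1 year.
def jump_probability(k, lam, dt=1.0):
    x = lam * dt
    return math.exp(-x) * x**k / math.factorial(k)

p0 = jump_probability(0, 0.0355)  # no disaster within the year: about 0.965
p1 = jump_probability(1, 0.0355)  # exactly one disaster: about 0.034
p2 = jump_probability(2, 0.0355)  # two disasters: about 0.0006
```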