# James D. Hamilton

Other affiliations: University of California, Los Angeles; University of Chicago; University of Virginia; and others

Bio: James D. Hamilton is an academic researcher at the University of California, San Diego. He has contributed to research on topics including monetary policy and interest rates, has an h-index of 75, and has co-authored 186 publications receiving 62,810 citations. His previous affiliations include the University of California, Los Angeles and the University of Chicago.

Topics: Monetary policy, Interest rate, Futures contract, Yield curve, Inference

##### Papers published on a yearly basis

##### Papers


TL;DR: An ordered sequence of events or observations with a time component is called a time series; good examples are daily opening and closing stock prices, daily humidity, temperature, and pressure, and a country's annual gross domestic product.

Abstract: Contents: Preface; 1. Difference Equations; 2. Lag Operators; 3. Stationary ARMA Processes; 4. Forecasting; 5. Maximum Likelihood Estimation; 6. Spectral Analysis; 7. Asymptotic Distribution Theory; 8. Linear Regression Models; 9. Linear Systems of Simultaneous Equations; 10. Covariance-Stationary Vector Processes; 11. Vector Autoregressions; 12. Bayesian Analysis; 13. The Kalman Filter; 14. Generalized Method of Moments; 15. Models of Nonstationary Time Series; 16. Processes with Deterministic Time Trends; 17. Univariate Processes with Unit Roots; 18. Unit Roots in Multivariate Time Series; 19. Cointegration; 20. Full-Information Maximum Likelihood Analysis of Cointegrated Systems; 21. Time Series Models of Heteroskedasticity; 22. Modeling Time Series with Changes in Regime; A. Mathematical Review; B. Statistical Tables; C. Answers to Selected Exercises; D. Greek Letters and Mathematical Symbols Used in the Text; Author Index; Subject Index.
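The book's early chapters center on difference equations, stationary ARMA processes, and forecasting. As a minimal sketch of those ideas (not taken from the text; all parameter values here are made up), a stationary AR(1) process can be simulated and its unconditional mean and one-step-ahead forecast computed:

```python
import random

# Hypothetical illustration: simulate a stationary AR(1),
#   y_t = c + phi * y_{t-1} + eps_t,  with |phi| < 1,
# and compare the sample mean with the theoretical mean c / (1 - phi).
random.seed(0)

c, phi, sigma = 1.0, 0.6, 1.0
mean_theory = c / (1.0 - phi)     # unconditional mean of the process

y = [mean_theory]                 # start the recursion at the unconditional mean
for _ in range(20000):
    y.append(c + phi * y[-1] + random.gauss(0.0, sigma))

mean_sample = sum(y) / len(y)     # should be close to mean_theory
forecast = c + phi * y[-1]        # one-step-ahead conditional forecast E[y_{T+1} | y_T]
```

Longer-horizon forecasts decay geometrically toward the unconditional mean, which is the sense in which a stationary series is mean-reverting.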

10,011 citations


TL;DR: In this article, the parameters of an autoregression are viewed as the outcome of a discrete-state Markov process, and an algorithm for drawing such probabilistic inference in the form of a nonlinear iterative filter is presented.

Abstract: This paper proposes a very tractable approach to modeling changes in regime. The parameters of an autoregression are viewed as the outcome of a discrete-state Markov process. For example, the mean growth rate of a nonstationary series may be subject to occasional, discrete shifts. The econometrician is presumed not to observe these shifts directly, but instead must draw probabilistic inference about whether and when they may have occurred based on the observed behavior of the series. The paper presents an algorithm for drawing such probabilistic inference in the form of a nonlinear iterative filter.
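The filter described in the abstract can be sketched for the simplest case: a two-state Gaussian model whose mean switches with an unobserved Markov chain. This is a hypothetical illustration, not the paper's code; the means, variance, and transition matrix below are invented.

```python
import math

mu    = [0.0, 3.0]                 # made-up state-dependent means
sigma = 1.0                        # made-up common standard deviation
P     = [[0.95, 0.05],             # P[i][j] = Prob(s_t = j | s_{t-1} = i)
         [0.10, 0.90]]

def normal_pdf(y, m, s):
    return math.exp(-0.5 * ((y - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def hamilton_filter(ys, xi0=(0.5, 0.5)):
    """Return filtered probabilities Prob(s_t = j | y_1..y_t) for each t."""
    xi, out = list(xi0), []
    for y in ys:
        # prediction step: propagate yesterday's filtered probs through P
        pred = [sum(P[i][j] * xi[i] for i in range(2)) for j in range(2)]
        # update step: reweight by the conditional density of today's y
        joint = [pred[j] * normal_pdf(y, mu[j], sigma) for j in range(2)]
        lik = sum(joint)           # one-step-ahead likelihood of y_t
        xi = [w / lik for w in joint]
        out.append(xi)
    return out

probs = hamilton_filter([0.1, -0.2, 0.3, 2.9, 3.2, 3.1])
# after several observations near 3, the filter should favor state 1
```

The same recursion also yields the sample log likelihood (the sum of `log(lik)` terms), which is what makes maximum likelihood estimation of such models tractable.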

9,189 citations


TL;DR: All but one of the U.S. recessions since World War II have been preceded, typically with a lag of around three-fourths of a year, by a dramatic increase in the price of crude petroleum.

Abstract: All but one of the U.S. recessions since World War II have been preceded, typically with a lag of around three-fourths of a year, by a dramatic increase in the price of crude petroleum. This does not mean that oil shocks caused these recessions. Evidence is presented, however, that even over the period 1948-72 this correlation is statistically significant and nonspurious, supporting the proposition that oil shocks were a contributing factor in at least some of the U.S. recessions prior to 1972. By extension, energy price increases may account for much of post-OPEC macroeconomic performance.

3,391 citations


TL;DR: An EM algorithm for obtaining maximum likelihood estimates of parameters for processes subject to discrete shifts in autoregressive parameters, with the shifts themselves modeled as the outcome of a discrete-valued Markov process is introduced.

Abstract: This paper introduces an EM algorithm for obtaining maximum likelihood estimates of parameters for processes subject to discrete shifts in autoregressive parameters, with the shifts themselves modeled as the outcome of a discrete-valued Markov process. The simplicity of the EM algorithm permits potential application of the approach to large vector systems.
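A stripped-down sketch of the EM idea, assuming a Gaussian switching mean rather than the paper's full switching autoregression (all starting values below are invented): the E-step computes smoothed state probabilities with a forward-backward pass, and the M-step re-estimates the state means by probability weighting.

```python
import math

def normal_pdf(y, m, s):
    return math.exp(-0.5 * ((y - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def em_step(ys, mu, sigma, P):
    """One EM iteration for a 2-state Gaussian switching-mean model."""
    n = len(ys)
    filt, pred = [], []
    xi = [0.5, 0.5]
    # forward pass: predicted and filtered state probabilities
    for y in ys:
        p = [sum(P[i][j] * xi[i] for i in range(2)) for j in range(2)]
        joint = [p[j] * normal_pdf(y, mu[j], sigma) for j in range(2)]
        lik = sum(joint)
        xi = [w / lik for w in joint]
        pred.append(p)
        filt.append(xi)
    # backward pass (smoother): Prob(s_t = i | all data)
    smooth = [None] * n
    smooth[-1] = filt[-1]
    for t in range(n - 2, -1, -1):
        smooth[t] = [
            filt[t][i] * sum(P[i][j] * smooth[t + 1][j] / pred[t + 1][j]
                             for j in range(2))
            for i in range(2)
        ]
    # M-step: re-estimate state means as probability-weighted averages
    new_mu = [
        sum(s[j] * y for s, y in zip(smooth, ys)) / sum(s[j] for s in smooth)
        for j in range(2)
    ]
    return new_mu, smooth

ys = [0.1, -0.3, 0.2, 4.1, 3.8, 4.2]
new_mu, smooth_probs = em_step(ys, [0.0, 3.0], 1.0, [[0.9, 0.1], [0.1, 0.9]])
# iterating em_step drives the means toward the two clusters in the data
```

Because each iteration only requires a filter pass, a smoother pass, and weighted least squares, the same scheme scales to vector systems, which is the point made in the abstract.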

2,013 citations

##### Cited by


01 Jan 2001

TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).

Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research: cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (particularly method of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not. The numerous exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations


08 Sep 2000

TL;DR: This book presents dozens of algorithms and implementation examples, all in pseudo-code and suitable for use in real-world, large-scale data mining projects, and provides a comprehensive, practical look at the concepts and techniques you need to get the most out of real business data.

Abstract: The increasing volume of data in modern business and science calls for more complex and sophisticated tools. Although advances in data mining technology have made extensive data collection much easier, the field is still evolving, and there is a constant need for new techniques and tools that can help us transform this data into useful information and knowledge. Since the previous edition's publication, great advances have been made in the field of data mining. Not only does the third edition of Data Mining: Concepts and Techniques continue the tradition of equipping you with an understanding and application of the theory and practice of discovering patterns hidden in large data sets, it also focuses on new, important topics in the field: data warehouses and data cube technology, stream mining, social network mining, and the mining of spatial, multimedia, and other complex data. Each chapter is a stand-alone guide to a critical topic, presenting proven algorithms and sound implementations ready to be used directly or with strategic modification against live data. This is the resource you need if you want to apply today's most powerful data mining techniques to meet real business challenges. * Presents dozens of algorithms and implementation examples, all in pseudo-code and suitable for use in real-world, large-scale data mining projects. * Addresses advanced topics such as mining object-relational databases, spatial databases, multimedia databases, time-series databases, text databases, the World Wide Web, and applications in several fields. * Provides a comprehensive, practical look at the concepts and techniques you need to get the most out of real business data.

23,600 citations


TL;DR: In this article, a unit root test for dynamic heterogeneous panels based on the mean of individual unit root statistics is proposed, which converges in probability to a standard normal variate sequentially with T (the time series dimension) →∞, followed by N (the cross sectional dimension)→∞.

Abstract: This paper proposes unit root tests for dynamic heterogeneous panels based on the mean of individual unit root statistics. In particular it proposes a standardized t-bar test statistic based on the (augmented) Dickey–Fuller statistics averaged across the groups. Under a general setting this statistic is shown to converge in probability to a standard normal variate sequentially with T (the time series dimension) →∞, followed by N (the cross sectional dimension) →∞. A diagonal convergence result with T and N→∞ while N/T→k, k being a finite non-negative constant, is also conjectured. In the special case where errors in individual Dickey–Fuller (DF) regressions are serially uncorrelated, a modified version of the standardized t-bar statistic is shown to be distributed as standard normal as N→∞ for a fixed T, so long as T>5 in the case of DF regressions with intercepts and T>6 in the case of DF regressions with intercepts and linear time trends. An exact fixed N and T test is also developed using the simple average of the DF statistics. Monte Carlo results show that if a large enough lag order is selected for the underlying ADF regressions, then the small sample performance of the t-bar test is reasonably satisfactory and generally better than the test proposed by Levin and Lin (Unpublished manuscript, University of California, San Diego, 1993).
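The t-bar construction can be sketched as follows. This is a hypothetical illustration: each panel unit gets its own Dickey–Fuller regression (intercept only, no augmentation lags), and the standardization moments `e_t` and `var_t` below are placeholders, not the values tabulated in the paper.

```python
import math
import random

def df_tstat(y):
    """t-statistic on rho in the OLS regression  dy_t = a + rho * y_{t-1} + e_t."""
    x  = y[:-1]
    dy = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    n  = len(x)
    mx, my = sum(x) / n, sum(dy) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (di - my) for xi, di in zip(x, dy))
    rho = sxy / sxx
    a   = my - rho * mx
    ssr = sum((di - a - rho * xi) ** 2 for xi, di in zip(x, dy))
    se  = math.sqrt(ssr / (n - 2) / sxx)   # standard error of rho
    return rho / se

def t_bar_stat(panel, e_t=-1.5, var_t=0.75):
    """Standardized t-bar; e_t and var_t are PLACEHOLDER moments, not the
    tabulated values from the paper."""
    n = len(panel)
    t_bar = sum(df_tstat(y) for y in panel) / n
    return math.sqrt(n) * (t_bar - e_t) / math.sqrt(var_t)

# toy panel: 10 units, each a pure random walk (the null of a unit root)
random.seed(1)
panel = []
for _ in range(10):
    y = [0.0]
    for _ in range(100):
        y.append(y[-1] + random.gauss(0.0, 1.0))
    panel.append(y)
z = t_bar_stat(panel)   # compared against the lower tail of N(0, 1)
```

Averaging across units is what lets the nonstandard Dickey–Fuller distribution of each individual t-statistic wash out into an approximately normal statistic as N grows.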

12,838 citations


TL;DR: This paper provides a concise overview of time series analysis in the time and frequency domains with lots of references for further reading.

Abstract: Any series of observations ordered along a single dimension, such as time, may be thought of as a time series. The emphasis in time series analysis is on studying the dependence among observations at different points in time. What distinguishes time series analysis from general multivariate analysis is precisely the temporal order imposed on the observations. Many economic variables, such as GNP and its components, price indices, sales, and stock returns are observed over time. In addition to being interested in the contemporaneous relationships among such variables, we are often concerned with relationships between their current and past values, that is, relationships over time.

9,919 citations
