Author

Barry E. Jones

Bio: Barry E. Jones is an academic researcher from Binghamton University. The author has contributed to research on topics including Divisia monetary aggregates and the Divisia index. The author has an h-index of 18 and has co-authored 51 publications receiving 963 citations.


Papers
Journal ArticleDOI
TL;DR: The monetary services index (MSI) project of the Federal Reserve Bank of St. Louis provides a set of statistical index numbers based on economic aggregation and statistical index number theory.
Abstract: This is the second of two articles that describe the monetary services index (MSI) project of the Federal Reserve Bank of St. Louis. The project’s MSI database, which contains the monetary services index (MSI), its dual user cost index, and other related indexes and data, is available on the Bank’s World Wide Web server. To facilitate comparison with the monetary aggregates published by the Board of Governors of the Federal Reserve System, all of the indexes in the MSI database are provided for the same groupings of monetary assets as the Board’s M1, M2, M3, and L aggregates. Indexes are provided at monthly, quarterly, and annual frequencies. The St. Louis MSI database also contains all non-confidential data and computer programs used to construct the indexes. Unlike the Board of Governors’ monetary aggregates, the monetary services indexes and their dual user cost indexes are statistical index numbers, based on economic aggregation and statistical index number theory. The previous article in this Review, “Monetary Aggregation Theory and Statistical Index Numbers,” surveys the literature on monetary aggregation theory and the use of statistical index number theory in monetary economics. Here, we discuss the construction of the monetary services index and related indexes. In the first section, we define notation and introduce some key concepts that are used throughout the article. We emphasize the distinction between real and nominal monetary asset stocks and their user costs, and we review the concepts of the real monetary services index and its nominal dual user cost index. In the second section, we define each of the indexes in the monetary services indexes database, including the following: total expenditure on monetary assets; the nominal monetary services index; the real dual user cost index; the currency equivalent index; the simple sum index; and a set of indexes based on Theil’s (1967) stochastic approach to index number theory. We emphasize that it is important to distinguish between real and nominal monetary index numbers: The aggregation theory underlying the monetary services indexes and related indexes is developed in terms of the real stocks of monetary assets, but actual monetary asset stock data are collected in nominal terms. We conclude that it is appropriate to construct a nominal monetary services index and thereafter to produce an approximation to the real monetary services index by deflating the nominal index. In the third section, we describe the monetary asset stock data. We discuss the issue of weak separability, and we define the groupings of monetary assets for which we construct indexes. These groupings correspond to the assets contained in M1, M2, M3, and L, as well as the assets contained in M1A and MZM. Because the aggregates are nested—each broader aggregate contains all the components of the previous, narrower aggregate—we refer to the groupings as levels of aggregation. M1A is the narrowest level of aggregation and L the broadest. In the fourth section, we discuss the own rate of return data.

Author note: Richard G. Anderson is an assistant vice president at the Federal Reserve Bank of St. Louis. Barry E. Jones and Travis D. Nesmith are Ph.D. candidates at Washington University in St. Louis and visiting scholars at the Federal Reserve Bank of St. Louis. Mary C. Lohmann, Kelly M. Morris, and Cindy A. Gleit provided research assistance.

82 citations
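As an illustration of the construction described in the abstract above, the sketch below chains a Törnqvist-Theil (Divisia) monetary services index from nominal asset stocks, own rates of return, a benchmark rate, and a price level, then deflates the nominal index to approximate the real index. It is a simplified sketch of the standard Divisia formulas under synthetic data, not the St. Louis MSI programs; all variable names and numbers are illustrative assumptions.

```python
import numpy as np

def divisia_index(nominal_stocks, own_rates, benchmark_rate, price_level):
    """Chain a Tornqvist-Theil (Divisia) monetary services index and deflate it.

    nominal_stocks: (T, n) nominal stocks of the n monetary assets
    own_rates:      (T, n) own rates of return on the assets
    benchmark_rate: (T,)   benchmark rate of return R_t
    price_level:    (T,)   price index P_t used to deflate the nominal index
    """
    # real user cost of asset i at time t: (R_t - r_it) / (1 + R_t)
    user_cost = (benchmark_rate[:, None] - own_rates) / (1.0 + benchmark_rate[:, None])
    # expenditure shares (the price level cancels within each period)
    expenditure = user_cost * nominal_stocks
    shares = expenditure / expenditure.sum(axis=1, keepdims=True)
    # Tornqvist growth: average shares times log growth of the nominal stocks
    avg_shares = 0.5 * (shares[1:] + shares[:-1])
    growth = (avg_shares * np.diff(np.log(nominal_stocks), axis=0)).sum(axis=1)
    nominal_msi = np.concatenate(([1.0], np.exp(np.cumsum(growth))))
    # approximate the real index by deflating the nominal index
    real_msi = nominal_msi / (price_level / price_level[0])
    return nominal_msi, real_msi

# synthetic example: 3 assets over 12 periods (illustrative numbers only)
rng = np.random.default_rng(0)
T, n = 12, 3
stocks = 100.0 * np.cumprod(1.0 + 0.01 * rng.random((T, n)), axis=0)
own = 0.02 * rng.random((T, n))
R = 0.05 + 0.01 * rng.random(T)
P = np.cumprod(np.full(T, 1.002))
nominal_msi, real_msi = divisia_index(stocks, own, R, P)
print(nominal_msi[-1], real_msi[-1])
```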

Posted Content
TL;DR: In this article, the role of sweep programs in properly measuring money is discussed and new monetary aggregates that adjust the conventional measures to account for the medium of exchange capability of funds in sweep programs are proposed.
Abstract: This paper focuses on the role of sweep programs in properly measuring money. We propose new monetary aggregates that adjust the conventional measures to account for the medium of exchange capability of funds in sweep programs. Using data on swept funds in retail and commercial demand deposit (DD) sweep programs, we provide time series of monthly data on the sweep-adjusted money measures. By the twenty-first century, DD sweeps had distorted reported MZM by approximately 3 percent, M2 by 5 percent, and M2M by 6 percent. Underreporting of M1 due to retail and DD sweep programs is almost 70 percent.

79 citations
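A minimal sketch of the adjustment described above, assuming the sweep-adjusted aggregate simply adds funds swept out of checkable deposits by retail and commercial DD sweep programs back to the reported measure; the function name and the dollar figures are hypothetical and are not the paper's data or definitions.

```python
def sweep_adjusted(reported, retail_sweeps, commercial_dd_sweeps):
    """Add swept funds back to a reported aggregate and report how much of
    the adjusted measure had been swept out of checkable deposits."""
    adjusted = reported + retail_sweeps + commercial_dd_sweeps
    swept_share = (retail_sweeps + commercial_dd_sweeps) / adjusted
    return adjusted, swept_share

# hypothetical magnitudes in billions of dollars (illustrative only)
adjusted_m1, swept_share = sweep_adjusted(reported=1200.0,
                                          retail_sweeps=750.0,
                                          commercial_dd_sweeps=550.0)
print(f"sweep-adjusted M1: {adjusted_m1:.0f}, swept share: {swept_share:.0%}")
```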

Posted Content
TL;DR: The first of two papers from the Monetary Services Indices (MSI) Project at the Federal Reserve Bank of St. Louis, this survey of the microeconomic theory of the aggregation of monetary assets brings together results not otherwise readily available in a single source.
Abstract: This paper is the first of two from the Monetary Services Indices (MSI) Project at the Federal Reserve Bank of St. Louis. The second paper, Working Paper 96-008B, summarizes the methodology, construction, and data sources for an extensive new database of monetary services indices, often referred to as Divisia monetary aggregates, for the United States. This paper surveys the microeconomic theory of the aggregation of monetary assets, bringing together results that are not otherwise readily available in a single source. In addition to indices of the flow of monetary services, the Project's database contains dual user cost indices, measures of potential aggregation error in the monetary services indices, and measures of the stock of monetary wealth. An overview of the Project and the concept of monetary aggregation is included here as a preface. ; Earlier title: An introduction to monetary aggregation theory and statistical index numbers

71 citations

Journal ArticleDOI
TL;DR: In this paper, the authors examined the admissibility of monetary aggregate groupings for the US over 1993-2001, based upon weak separability, and investigated the impact of retail and commercial demand deposit sweep programs on the separability of monetary asset groupings.
Abstract: This paper examines the admissibility of monetary aggregate groupings for the US over 1993–2001, based upon weak separability. We investigate the impact of retail and commercial demand deposit sweep programs on the separability of monetary asset groupings. Weak separability is tested using the Swofford–Whitney and Fleissig–Whitney tests. We use Varian's measurement error adjustment procedure to eliminate violations of the Generalized Axiom of Revealed Preference (GARP). When funds from both retail and commercial demand deposit sweep programs are placed within checkable deposits, all groupings, narrow and broad, pass GARP and weak separability. For groupings based on conventional money measures, tests tend to favor broad aggregates.

65 citations
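The weak separability tests cited in the abstract above require the data to satisfy the Generalized Axiom of Revealed Preference. The sketch below is only a generic GARP check on assumed synthetic data, built from a transitive closure of the revealed preference relation; it is not the Swofford–Whitney or Fleissig–Whitney separability tests, nor Varian's measurement error adjustment.

```python
import numpy as np

def garp_violations(prices, quantities):
    """Count GARP violations in observed price (user cost) and quantity data.

    prices, quantities: arrays of shape (T, n_goods)."""
    expenditure = prices @ quantities.T          # cost of bundle s at prices of observation t
    own = np.diag(expenditure)                   # cost of each bundle at its own prices
    # direct revealed preference: t R0 s if bundle t was chosen when s was affordable
    R = own[:, None] >= expenditure
    # transitive closure of R0 (Warshall's algorithm)
    for k in range(len(own)):
        R = R | (R[:, [k]] & R[[k], :])
    # strict direct revealed preference: s P0 t if own[s] > expenditure[s, t]
    P0 = own[:, None] > expenditure
    # violation: t revealed preferred to s, yet s strictly directly preferred to t
    return int(np.sum(R & P0.T))

# synthetic data: 24 observations on 4 assets (illustrative only)
rng = np.random.default_rng(1)
p = rng.uniform(0.5, 1.5, size=(24, 4))   # e.g., user costs of the assets
q = rng.uniform(1.0, 2.0, size=(24, 4))   # e.g., real asset quantities
print("GARP violations:", garp_violations(p, q))
```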


Cited by
Journal ArticleDOI
TL;DR: Using a Bayesian likelihood approach, the authors estimate a dynamic stochastic general equilibrium model for the US economy using seven macroeconomic time series, incorporating many types of real and nominal frictions and seven types of structural shocks.
Abstract: Using a Bayesian likelihood approach, we estimate a dynamic stochastic general equilibrium model for the US economy using seven macro-economic time series. The model incorporates many types of real and nominal frictions and seven types of structural shocks. We show that this model is able to compete with Bayesian Vector Autoregression models in out-of-sample prediction. We investigate the relative empirical importance of the various frictions. Finally, using the estimated model we address a number of key issues in business cycle analysis: What are the sources of business cycle fluctuations? Can the model explain the cross-correlation between output and inflation? What are the effects of productivity on hours worked? What are the sources of the "Great Moderation"?

3,155 citations
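The Bayesian likelihood approach described above is typically implemented by combining a prior with a Kalman-filter likelihood for the linearized model and exploring the resulting log posterior with a random-walk Metropolis sampler. The sketch below shows only that sampling step on a toy log posterior; it is not the paper's model, priors, or code.

```python
import numpy as np

def rw_metropolis(log_post, theta0, n_draws=5000, step=0.1, seed=0):
    """Random-walk Metropolis sampler for an arbitrary log posterior."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    current = log_post(theta)
    draws = np.empty((n_draws, theta.size))
    accepted = 0
    for i in range(n_draws):
        proposal = theta + step * rng.standard_normal(theta.size)
        candidate = log_post(proposal)
        if np.log(rng.uniform()) < candidate - current:   # accept/reject step
            theta, current = proposal, candidate
            accepted += 1
        draws[i] = theta
    return draws, accepted / n_draws

# toy posterior: standard bivariate normal, standing in for prior + likelihood
draws, acc_rate = rw_metropolis(lambda th: -0.5 * th @ th, np.zeros(2), step=0.5)
print("acceptance rate:", round(acc_rate, 2), "posterior mean:", draws[2500:].mean(axis=0))
```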

Journal ArticleDOI
01 May 1970

1,935 citations

01 Mar 1979
TL;DR: In this thesis, the author develops the theory behind Krishnaiah and Schuurmann's report, Approximations to the Distributions of the Traces of Complex Multivariate Beta and F Matrices, and applies it to testing whether two time series are realizations of the same process.
Abstract: One use of spectral analysis of time series is to determine if two different time series are realizations from the same process. This thesis develops the theory behind Krishnaiah and Schuurmann's theoretical work reported in their report Approximations to the Distributions of the Traces of Complex Multivariate Beta and F Matrices. We take the trace of a test statistic calculated from the spectral density matrices of the time series and test it. The thesis applies the theory to two small sample simulations.

683 citations
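As a rough illustration of comparing two series through their spectral density matrices, the sketch below estimates the matrices with Welch-type cross-spectra and averages trace(S2(f)^-1 S1(f)) across frequencies; values near one are consistent with a common spectrum. This is only an assumed, illustrative statistic on synthetic data, not the thesis's complex multivariate beta/F trace test or its distribution theory.

```python
import numpy as np
from scipy import signal

def spectral_matrix(x, fs=1.0, nperseg=256):
    """Estimate the spectral density matrix S(f) of a multivariate series.

    x: array of shape (n_obs, n_channels)."""
    n_ch = x.shape[1]
    f, _ = signal.csd(x[:, 0], x[:, 0], fs=fs, nperseg=nperseg)
    S = np.zeros((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, Sij = signal.csd(x[:, i], x[:, j], fs=fs, nperseg=nperseg)
            S[:, i, j] = Sij
    return f, S

def trace_statistic(S1, S2):
    """Average of trace(S2(f)^-1 S1(f)) / n_channels over frequencies."""
    vals = [np.trace(np.linalg.solve(S2[k], S1[k])).real / S1.shape[1]
            for k in range(S1.shape[0])]
    return float(np.mean(vals))

# two independent white-noise series: statistic should be close to one
rng = np.random.default_rng(0)
x1 = rng.standard_normal((2048, 2))
x2 = rng.standard_normal((2048, 2))
_, S1 = spectral_matrix(x1)
_, S2 = spectral_matrix(x2)
print("trace statistic:", trace_statistic(S1, S2))
```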

Journal ArticleDOI
TL;DR: This work proposes to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights, which regularizes (stabilizes) the optimization problem, encourages sparse portfolios, and allows accounting for transaction costs.
Abstract: We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only few active positions), and allows accounting for transaction costs. Our approach recovers as special cases the no-short-positions portfolios, but does allow for short positions in limited number. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naive evenly weighted portfolio.

532 citations
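A compact sketch of the penalized least-squares formulation described above, solved with cvxpy (an assumed dependency) under an L1 penalty and a full-investment constraint on synthetic returns. It sketches the general approach only, not the authors' exact specification, solver, or Fama–French data; the penalty strength and return panel are made-up inputs.

```python
import numpy as np
import cvxpy as cp

# synthetic return panel: 250 periods, 20 assets (illustrative only)
rng = np.random.default_rng(0)
T, n = 250, 20
R = 0.01 * rng.standard_normal((T, n)) + 0.0005
target = np.full(T, R.mean())           # target return in every period

tau = 0.05                              # strength of the L1 penalty
w = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(R @ w - target) + tau * cp.norm(w, 1))
constraints = [cp.sum(w) == 1]          # fully invested portfolio
cp.Problem(objective, constraints).solve()

weights = w.value
print("active positions:", int(np.sum(np.abs(weights) > 1e-6)))
print("largest weights:", np.round(np.sort(weights)[-3:], 3))
```

Larger values of tau shrink more weights to zero, trading some in-sample fit for sparser portfolios and lower turnover, which is the stabilizing effect the abstract describes.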