Author

Richard G. Anderson

Bio: Richard G. Anderson is an academic researcher from the Federal Reserve Bank of St. Louis. The author has contributed to research in topics: Monetary policy & Quantitative easing. The author has an h-index of 18 and has co-authored 86 publications receiving 1,458 citations. Previous affiliations of Richard G. Anderson include Lindenwood University & Ohio State University.


Papers
ReportDOI
TL;DR: In this article, the authors examined a large panel of U.S. banks and developed quantitative estimates of the impact of sweep software programs on the demand for bank reserves, and found that sweep activity has allowed these banks to become "economically nonbound" and has reduced to zero the economic burden (tax) due to statutory reserve requirements.
Abstract: Since January 1994, the Federal Reserve Board has permitted depository institutions in the United States to implement so-called “retail sweep programs.” The essence of these programs is computer software that dynamically reclassifies customer deposits from transaction accounts, which are subject to statutory reserve-requirement ratios as high as 10 percent, to money market deposit accounts, which have a zero ratio. Through the use of such software, hundreds of banks have sharply reduced the amount of their required reserves. In many cases, this new lower requirement places no constraint on the bank because it is less than the amount of reserves (vault cash and deposits at the Federal Reserve) that the bank requires for its ordinary day-to-day business. In the terminology introduced by Anderson and Rasche (1996b), such deposit-sweeping activity has allowed these banks to become “economically nonbound” and has reduced to zero the economic burden (“tax”) due to statutory reserve requirements. In this analysis, we examine a large panel of U.S. banks and develop quantitative estimates of the impact of sweep software programs on the demand for bank reserves.
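As a rough illustration of the mechanism this paper studies, the sketch below shows how reclassifying transaction-account balances into MMDAs lowers a bank's required reserves. The balances, sweep amount, and 10 percent ratio are hypothetical figures chosen for illustration; they are not estimates from the paper.

# Illustrative sketch of how a retail sweep program lowers required reserves.
# All balances and the sweep amount are hypothetical; the paper's estimates
# come from a panel of U.S. banks.

RESERVE_RATIO_TRANSACTION = 0.10  # statutory ratio on transaction accounts
RESERVE_RATIO_MMDA = 0.00         # money market deposit accounts carry a zero ratio

def required_reserves(transaction_deposits, mmda_deposits):
    """Required reserves implied by the statutory ratios."""
    return (RESERVE_RATIO_TRANSACTION * transaction_deposits
            + RESERVE_RATIO_MMDA * mmda_deposits)

# Before the sweep: $500 million sits in transaction accounts.
before = required_reserves(transaction_deposits=500e6, mmda_deposits=0)

# The sweep software reclassifies $400 million into MMDAs.
after = required_reserves(transaction_deposits=100e6, mmda_deposits=400e6)

print(f"Required reserves before sweep: ${before / 1e6:.0f} million")  # $50 million
print(f"Required reserves after sweep:  ${after / 1e6:.0f} million")   # $10 million

# If the bank already needs more than $10 million of vault cash and Federal
# Reserve deposits for its day-to-day business, the statutory requirement no
# longer binds ("economically nonbound") and the reserve-requirement tax is zero.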

143 citations

ReportDOI
TL;DR: This article examined how recent changes in the U.S. financial system have affected the appropriate definition, construction and interpretation of the St. Louis adjusted monetary base and adjusted reserves, and suggested that measures of the monetary source base should be broadened to include all Federal Reserve deposits held by domestic depository institutions rather than just those deposits available to satisfy statutory reserve requirements.
Abstract: This paper examines how recent changes in the U.S. financial system have affected the appropriate definition, construction and interpretation of the St. Louis adjusted monetary base and adjusted reserves. Since 1990, reductions in statutory reserve requirements have significantly reduced the importance of the requirements as a constraint on the deposit and lending behavior of banks and other depository institutions. During the same period, depositories' interbank payments activities have come to determine most, if not all, of their demand for Federal Reserve Bank deposits. Our analysis suggests that measures of the monetary source base should be broadened to include all Federal Reserve deposits held by domestic depository institutions rather than just those deposits available to satisfy statutory reserve requirements, and that adjustments for the effects of changes in reserve requirements must recognize that many depositories' behavior is not affected by such requirements.

123 citations

ReportDOI
TL;DR: The adjusted monetary base is the sum of the monetary base and a reserve adjustment magnitude (RAM) that maps changes in reserve requirements into equivalent changes in the (unadjusted) monetary base.
Abstract: The Federal Reserve Bank of St. Louis' adjusted monetary base combines in a single index Federal Reserve actions that affect the supply of base money -- open market operations, discount window lending and unsterilized foreign exchange market intervention -- with actions that affect depository institutions' demand for base money -- changes in statutory reserve requirements. The adjusted monetary base equals the sum of the monetary base and a reserve adjustment magnitude (RAM) that maps changes in reserve requirements into equivalent changes in the (unadjusted) monetary base. This paper presents a revised measure of the adjusted total reserves component of the monetary base and a new RAM. The revised measure of the adjusted reserves component differs from the current measure by including the aggregate amount of depository institutions' required clearing balance contracts with the Federal Reserve. The new RAM differs from the current RAM by recognizing that, since the Monetary Control Act of 1980, an increasing number of depository institutions have not significantly changed their demand for base money (vault cash and Federal Reserve deposits) relative to transactions deposits following changes in statutory reserve requirements. The new adjusted reserves data suggest that the stance of monetary policy, measured by the growth rate of adjusted reserves, has been more volatile since 1980 than suggested by the current measure.
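Stated as a single identity (the time subscript and symbol names here are ours, added only to summarize the definition in the abstract):

    AMB_t = MB_t + RAM_t

where MB_t is the unadjusted monetary (source) base and RAM_t is the reserve adjustment magnitude that translates changes in statutory reserve requirements into equivalent changes in depository institutions' demand for base money.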

108 citations

Journal ArticleDOI
TL;DR: In this paper, a reconstruction of the adjusted monetary base and adjusted bank reserves of the Federal Reserve Bank of St. Louis is presented, based on as much original source data as feasible, including changes to both the monetary (source) base and reserve adjustment magnitude (RAM).
Abstract: This article summarizes a reconstruction of the adjusted monetary base and adjusted bank reserves of the Federal Reserve Bank of St. Louis. The revised figures, based on as much original source data as feasible, include changes to both the monetary (source) base and reserve adjustment magnitude (RAM). The revised figures include the new measure of RAM developed by Anderson and Rasche (2001) that interprets the operation of retail-deposit sweep programs by U.S. banks, beginning in 1994, as economically equivalent to a reduction in statutory reserve requirements. We also present new seasonal adjustment factors that incorporate adjustments for the Y2K-related surge in the monetary base and reserves.

87 citations

Posted Content
TL;DR: In this paper, the authors examined a large panel of US banks and developed quantitative estimates of the impact of sweep software programs on the demand for bank reserves and showed that sweep software can significantly reduce the economic burden due to statutory reserve requirements.
Abstract: Since January 1994, the Federal Reserve Board has permitted depository institutions in the United States to implement so-called retail sweep programs. The essence of these programs is computer software that dynamically reclassifies customer deposits between transaction accounts, which are subject to statutory reserve requirement ratios as high as 10 percent, and money market deposit accounts, which have a zero ratio. Through the use of such software, hundreds of banks have sharply reduced the amount of their required reserves. In some cases, this new level of required reserves is less than the amount that the bank requires for its ordinary, day-to-day business. In the terminology introduced by Anderson and Rasche (1996b), such deposit-sweeping activity has allowed these banks to become "economically nonbound," and has reduced to zero the economic burden ("tax") due to statutory reserve requirements. In this analysis, we examine a large panel of US banks and develop quantitative estimates of the impact of sweep software programs on the demand for bank reserves.

85 citations


Cited by
BookDOI
TL;DR: King, Keohane, and Verba develop a unified approach to valid descriptive and causal inference in qualitative research, where numerical measurement is either impossible or undesirable.
Abstract: While heated arguments between practitioners of qualitative and quantitative research have begun to test the very integrity of the social sciences, Gary King, Robert Keohane, and Sidney Verba have produced a farsighted and timely book that promises to sharpen and strengthen a wide range of research performed in this field. These leading scholars, each representing diverse academic traditions, have developed a unified approach to valid descriptive and causal inference in qualitative research, where numerical measurement is either impossible or undesirable. Their book demonstrates that the same logic of inference underlies both good quantitative and good qualitative research designs, and their approach applies equally to each. Providing precepts intended to stimulate and discipline thought, the authors explore issues related to framing research questions, measuring the accuracy of data and uncertainty of empirical inferences, discovering causal effects, and generally improving qualitative research. Among the specific topics they address are interpretation and inference, comparative case studies, constructing causal theories, dependent and explanatory variables, the limits of random selection, selection bias, and errors in measurement. Mathematical notation is occasionally used to clarify concepts, but no prior knowledge of mathematics or statistics is assumed. The unified logic of inference that this book explicates will be enormously useful to qualitative researchers of all traditions and substantive fields.

6,233 citations

Journal ArticleDOI
01 May 1970

1,935 citations

Journal ArticleDOI
TL;DR: Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true.
Abstract: There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
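The sketch below works through the pre-study-odds arithmetic behind this claim: the probability that a flagged finding is true depends on the prior odds R that a probed relationship is real, statistical power, and the significance level. Bias and multiple competing teams, which the essay also models, are omitted here, and the example values of R, power, and alpha are ours, chosen only for illustration.

# Positive predictive value of a claimed (statistically significant) finding,
# given pre-study odds R that a probed relationship is real, power (1 - beta),
# and significance level alpha. Bias and multiple-team effects are omitted
# from this simplified sketch.

def positive_predictive_value(R, power, alpha):
    true_positives = power * R   # true relationships correctly declared significant
    false_positives = alpha      # null relationships declared significant by chance
    return true_positives / (true_positives + false_positives)

# Example (illustrative): an exploratory field probing 1 true relationship per
# 10 tested, with 30% power at alpha = 0.05.
print(positive_predictive_value(R=0.1, power=0.30, alpha=0.05))  # ~0.375, i.e. more likely false than true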

1,289 citations

Journal ArticleDOI
25 Mar 2016-Science
TL;DR: To contribute data about replicability in economics, the authors replicated 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014, finding a significant effect in the same direction as the original for 11 of the 18 studies (61%), with replicated effect sizes averaging 66% of the originals.
Abstract: The replicability of some scientific findings has recently been called into question. To contribute data about replicability in economics, we replicated 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014. All of these replications followed predefined analysis plans that were made publicly available beforehand, and they all have a statistical power of at least 90% to detect the original effect size at the 5% significance level. We found a significant effect in the same direction as in the original study for 11 replications (61%); on average, the replicated effect size is 66% of the original. The replicability rate varies between 67% and 78% for four additional replicability indicators, including a prediction market measure of peer beliefs.

811 citations