Author

William H. Greene

Bio: William H. Greene is an academic researcher. The author has an h-index of 1 and has co-authored 1 publication, which has received 8,216 citations.

Papers
Book
01 Jan 2009

8,216 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors evaluate the out-of-sample performance of the sample-based mean-variance model, and its extensions designed to reduce estimation error, relative to the naive 1/N portfolio.
Abstract: We evaluate the out-of-sample performance of the sample-based mean-variance model, and its extensions designed to reduce estimation error, relative to the naive 1/N portfolio. Of the 14 models we evaluate across seven empirical datasets, none is consistently better than the 1/N rule in terms of Sharpe ratio, certainty-equivalent return, or turnover, which indicates that, out of sample, the gain from optimal diversification is more than offset by estimation error. Based on parameters calibrated to the US equity market, our analytical results and simulations show that the estimation window needed for the sample-based mean-variance strategy and its extensions to outperform the 1/N benchmark is around 3000 months for a portfolio with 25 assets and about 6000 months for a portfolio with 50 assets. This suggests that there are still many "miles to go" before the gains promised by optimal portfolio choice can actually be realized out of sample.
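As rough intuition for the comparison the abstract describes, the sketch below (Python with NumPy, simulated returns only, not the authors' data or code) contrasts the out-of-sample Sharpe ratio of sample-based mean-variance weights with the naive 1/N rule; the asset count, window length, and return moments are arbitrary choices made for illustration.

```python
# Illustrative only: simulated returns, not the authors' data or code.
import numpy as np

rng = np.random.default_rng(42)
n_assets, n_obs = 25, 120                     # 25 assets, 120 months per window (arbitrary)
mu_true = rng.normal(0.008, 0.002, n_assets)  # "true" monthly mean returns
insample = rng.normal(mu_true, 0.05, size=(n_obs, n_assets))   # estimation window
outsample = rng.normal(mu_true, 0.05, size=(n_obs, n_assets))  # evaluation window

# Sample-based mean-variance weights from estimated moments.
mu_hat = insample.mean(axis=0)
sigma_hat = np.cov(insample, rowvar=False)
w_mv = np.linalg.solve(sigma_hat, mu_hat)
w_mv /= w_mv.sum()                            # rescaling does not change the Sharpe ratio

# The naive 1/N rule needs no estimation at all.
w_naive = np.full(n_assets, 1.0 / n_assets)

def sharpe(w, rets):
    p = rets @ w
    return p.mean() / p.std()

print("mean-variance, out of sample:", sharpe(w_mv, outsample))
print("1/N, out of sample:          ", sharpe(w_naive, outsample))
```

With short estimation windows, the estimated weights are noisy enough that the 1/N portfolio often matches or beats them out of sample, which is the pattern the paper documents.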

2,809 citations

Journal ArticleDOI
TL;DR: In this article, the effects of common method variance (CMV) on parameter estimates in bivariate linear, multivariate linear, quadratic, and interaction regression models are analyzed. The authors show that CMV can inflate or deflate bivariate relationships, that method bias generally decreases as additional CMV-affected independent variables enter a regression, and that quadratic and interaction effects cannot be artifacts of CMV.
Abstract: This research analyzes the effects of common method variance (CMV) on parameter estimates in bivariate linear, multivariate linear, quadratic, and interaction regression models. The authors demonstrate that CMV can either inflate or deflate bivariate linear relationships, depending on the degree of symmetry with which CMV affects the observed measures. With respect to multivariate linear relationships, they show that common method bias generally decreases when additional independent variables suffering from CMV are included in a regression equation. Finally, they demonstrate that quadratic and interaction effects cannot be artifacts of CMV. On the contrary, both quadratic and interaction terms can be severely deflated through CMV, making them more difficult to detect through statistical means.
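The following toy simulation (Python with NumPy, not the paper's analysis) shows the mechanism in the bivariate case: when a shared method factor loads symmetrically on both observed measures, the observed correlation is inflated relative to the true-score correlation. The loading lam and all distributional choices are assumptions made only for illustration.

```python
# Illustrative simulation, not the paper's analysis.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
method = rng.normal(size=n)              # shared method factor (e.g., same survey instrument)
t_x = rng.normal(size=n)                 # true score of the predictor
t_y = 0.3 * t_x + rng.normal(size=n)     # true score of the outcome

lam = 0.6                                # symmetric method loading on both observed measures
x_obs = t_x + lam * method
y_obs = t_y + lam * method

print("true-score correlation:", np.corrcoef(t_x, t_y)[0, 1])
print("observed correlation:  ", np.corrcoef(x_obs, y_obs)[0, 1])   # inflated by CMV
```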

2,094 citations

Journal ArticleDOI
TL;DR: In this article, the authors survey automated text analysis methods for political science, provide guidance on how to validate the output of the models, and clarify misconceptions and errors in the literature.
Abstract: Politics and political conflict often occur in the written and spoken word. Scholars have long recognized this, but the massive costs of analyzing even moderately sized collections of texts have hindered their use in political science research. Here lies the promise of automated text analysis: it substantially reduces the costs of analyzing large collections of text. We provide a guide to this exciting new area of research and show how, in many instances, the methods have already obtained part of their promise. But there are pitfalls to using automated methods: they are no substitute for careful thought and close reading and require extensive and problem-specific validation. We survey a wide range of new methods, provide guidance on how to validate the output of the models, and clarify misconceptions and errors in the literature. To conclude, we argue that for automated text methods to become a standard tool for political scientists, methodologists must contribute new methods and new methods of validation.

Language is the medium for politics and political conflict. Candidates debate and state policy positions during a campaign. Once elected, representatives write and debate legislation. After laws are passed, bureaucrats solicit comments before they issue regulations. Nations regularly negotiate and then sign peace treaties, with language that signals the motivations and relative power of the countries involved. News reports document the day-to-day affairs of international relations that provide a detailed picture of conflict and cooperation. Individual candidates and political parties articulate their views through party platforms and manifestos. Terrorist groups even reveal their preferences and goals through recruiting materials, magazines, and public statements. These examples, and many others throughout political science, show that to understand what politics is about we need to know what political actors are saying and writing. Recognizing that language is central to the study of politics is not new. To the contrary, scholars of politics have long recognized that much of politics is expressed in words. But scholars have struggled when using texts to make inferences about politics. The primary problem is volume: there are simply too many political texts. Rarely are scholars able to manually read all the texts in even moderately sized corpora. And hiring coders to manually read all documents is still very expensive.
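As a minimal, generic illustration of the kind of preprocessing most automated text methods start from (not any specific method surveyed in the article), the snippet below builds a document-term matrix with scikit-learn; the toy documents are invented.

```python
# Generic preprocessing step, not a method from the article; documents are invented.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "The candidate debated economic policy during the campaign.",
    "Representatives wrote and debated new legislation.",
    "Bureaucrats solicited comments before issuing regulations.",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)     # sparse documents-by-terms count matrix

print(vectorizer.get_feature_names_out())
print(dtm.toarray())
```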

2,044 citations

Book
30 Jul 2007
TL;DR: In this book, the authors develop the counterfactual (potential-outcome) model and causal graphs for observational social science research, showing how to estimate causal effects by conditioning on observed variables to block backdoor paths (matching, regression, and weighted regression estimators) and what to do when backdoor conditioning is ineffective (instrumental variables, mechanisms, and repeated observations), along with set identification and sensitivity analysis when effects are not point identified.
Abstract: Part I. Causality and Empirical Research in the Social Sciences: 1. Introduction
Part II. Counterfactuals, Potential Outcomes, and Causal Graphs: 2. Counterfactuals and the potential-outcome model 3. Causal graphs
Part III. Estimating Causal Effects by Conditioning on Observed Variables to Block Backdoor Paths: 4. Models of causal exposure and identification criteria for conditioning estimators 5. Matching estimators of causal effects 6. Regression estimators of causal effects 7. Weighted regression estimators of causal effects
Part IV. Estimating Causal Effects When Backdoor Conditioning Is Ineffective: 8. Self-selection, heterogeneity, and causal graphs 9. Instrumental-variable estimators of causal effects 10. Mechanisms and causal explanation 11. Repeated observations and the estimation of causal effects
Part V. Estimation When Causal Effects Are Not Point Identified by Observables: 12. Distributional assumptions, set identification, and sensitivity analysis
Part VI. Conclusions: 13. Counterfactuals and the future of empirical research in observational social science.
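A toy sketch of the backdoor-conditioning idea from Part III of the contents above, on simulated data rather than anything from the book: an observed confounder opens a backdoor path between treatment and outcome, and a regression that conditions on it recovers the true effect. All variable names and numbers are illustrative assumptions.

```python
# Toy simulation, not an example from the book; all numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
c = rng.normal(size=n)                         # observed confounder
p_treat = 1 / (1 + np.exp(-1.5 * c))           # treatment is more likely when c is high
d = rng.binomial(1, p_treat)                   # treatment indicator
y = 2.0 * d + 3.0 * c + rng.normal(size=n)     # true treatment effect is 2.0

naive = y[d == 1].mean() - y[d == 0].mean()    # ignores the backdoor path through c

X = np.column_stack([np.ones(n), d, c])        # condition on c to block the backdoor path
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print("naive difference in means:     ", naive)   # noticeably above 2.0
print("effect after conditioning on c:", beta[1])
```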

1,701 citations

Journal ArticleDOI
TL;DR: In a generic parametric framework, the authors show that two-stage residual inclusion (2SRI) is consistent while two-stage predictor substitution (2SPS) generally is not; the results can serve as a guide for future researchers in health economics who are confronted with endogeneity in their empirical work.
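A minimal sketch of the two-stage residual inclusion recipe on simulated data (Python with statsmodels), not the authors' implementation: the first stage regresses the endogenous regressor on the instrument, and the second stage includes the first-stage residual as an additional regressor in the nonlinear outcome model. The data-generating process and all parameter values below are assumptions made for illustration.

```python
# Toy 2SRI sketch on simulated data; the data-generating process is an assumption.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
z = rng.normal(size=n)                         # instrument
v = rng.normal(size=n)                         # first-stage error, also drives the outcome
x = 0.8 * z + v                                # endogenous regressor
y = (1.0 * x - 1.5 * v + rng.logistic(size=n) > 0).astype(int)   # binary outcome

# Stage 1: regress the endogenous regressor on the instrument, keep the residuals.
stage1 = sm.OLS(x, sm.add_constant(z)).fit()
resid = stage1.resid

# Stage 2 (2SRI): include the first-stage residual as an additional regressor.
X2 = sm.add_constant(np.column_stack([x, resid]))
two_sri = sm.Logit(y, X2).fit(disp=0)
print("2SRI coefficients: ", two_sri.params)   # coefficient on x is close to 1.0

# Naive logit that ignores endogeneity, for comparison.
naive = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print("naive coefficients:", naive.params)
```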

1,459 citations