Author

Adonis Yatchew

Other affiliations: Australian National University
Bio: Adonis Yatchew is an academic researcher from the University of Toronto. The author has contributed to research on topics including nonparametric statistics and semiparametric regression. The author has an h-index of 21 and has co-authored 40 publications receiving 2,983 citations. Previous affiliations of Adonis Yatchew include Australian National University.

Papers
Posted Content
TL;DR: In this article, the author gives a brief overview of the class of models under study and discusses central theoretical issues such as the curse of dimensionality, the bias-variance trade-off, and rates of convergence.
Abstract: This introduction to nonparametric regression emphasizes techniques that might be most accessible and useful to the applied economist. The paper begins with a brief overview of the class of models under study and central theoretical issues such as the curse of dimensionality, the bias-variance trade-off, and rates of convergence. The paper then focuses on kernel and nonparametric least squares estimation of the nonparametric regression model, and optimal differencing estimation of the partial linear model. Constrained estimation and hypothesis testing are also discussed. Empirical examples include returns to scale in electricity distribution and hedonic pricing of housing attributes.

458 citations
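The bias-variance trade-off and rates of convergence mentioned above are governed in practice by the kernel bandwidth. As a minimal sketch of the kernel estimation the paper surveys (simulated data; nothing here is the paper's own code), a Nadaraya-Watson estimator in Python:

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.
    h is the bandwidth: small h lowers bias but raises variance."""
    u = (x_grid[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * u**2)            # Gaussian kernel; constants cancel in the ratio
    return (w @ y) / w.sum(axis=1)     # locally weighted average of the y's

# Simulated example: y = f(x) + noise, with f unknown to the analyst
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)
grid = np.linspace(0.0, 1.0, 50)
fhat = nadaraya_watson(grid, x, y, h=0.05)
```

Varying h traces out the trade-off directly: h near zero interpolates the noise, while a very large h flattens the estimate toward the global mean.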

Posted Content
01 Jan 2003
TL;DR: In this article, a collection of techniques for analyzing nonparametric and semiparametric regression models is provided, including simple goodness-of-fit tests and residual regression tests, which can be used to test hypotheses such as parametric and semiparametric specifications, significance, monotonicity, and additive separability.
Abstract: This book provides an accessible collection of techniques for analyzing nonparametric and semiparametric regression models. Worked examples include estimation of Engel curves and equivalence scales, scale economies, semiparametric Cobb-Douglas, translog and CES cost functions, household gasoline consumption, hedonic housing prices, option prices and state price density estimation. The book should be of interest to a broad range of economists including those working in industrial organization, labor, development, urban, energy and financial economics. A variety of testing procedures are covered including simple goodness of fit tests and residual regression tests. These procedures can be used to test hypotheses such as parametric and semiparametric specifications, significance, monotonicity and additive separability. Other topics include endogeneity of parametric and nonparametric effects, as well as heteroskedasticity and autocorrelation in the residuals. Bootstrap procedures are provided.

283 citations
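One of the book's themes, testing a parametric specification against a smooth nonparametric alternative, can be sketched by comparing two variance estimates. The following is a rough illustration in the spirit of the simple differencing-based goodness-of-fit test; the exact statistic and its normalization should be checked against the book, and all data here are simulated:

```python
import numpy as np

def differencing_gof(x, y, fitted):
    """Sketch of a difference-based specification test.
    Compares the residual variance of a parametric fit with a
    differencing estimate of Var(e) that needs no functional form."""
    n = len(y)
    ys = y[np.argsort(x)]                             # sort so neighboring x's are close
    s2_diff = np.sum(np.diff(ys)**2) / (2 * (n - 1))  # first-order differencing estimate
    s2_res = np.mean((y - fitted)**2)                 # residual variance under the null
    return np.sqrt(n) * (s2_res - s2_diff) / s2_diff  # large positive values reject

# Example: a misspecified linear fit to a smooth nonlinear regression
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 400)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.5, size=400)
coef = np.polyfit(x, y, 1)
print(differencing_gof(x, y, np.polyval(coef, x)))
```

The intuition: if the parametric model is correct, both variance estimates converge to Var(e) and the statistic is small; if it is misspecified, the residual variance is inflated by the neglected curvature.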

Journal Article
TL;DR: In this article, an elementary and asymptotically efficient estimator of β in the partial linear model y = zβ + f(x) + ε is proposed: data are reordered so that the x's are close, and higher-order differencing is applied to remove f(x).

272 citations
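A first-order version of the idea is easy to sketch: after sorting on x, differencing annihilates the smooth f(x), and OLS on the differenced data estimates β. This is an illustrative simplification on simulated data (the paper uses higher-order, optimally weighted differences):

```python
import numpy as np

def differencing_estimator(x, z, y):
    """First-order differencing estimator of beta in y = z*beta + f(x) + e.
    Sorting on x makes consecutive f(x) values nearly equal, so first
    differences remove the unknown f; OLS on differenced data recovers beta."""
    order = np.argsort(x)
    dz = np.diff(z[order], axis=0)     # z_i - z_{i-1} after sorting on x
    dy = np.diff(y[order])             # y_i - y_{i-1}; the f(x) terms nearly cancel
    beta_hat, *_ = np.linalg.lstsq(dz, dy, rcond=None)
    return beta_hat

# Simulated partial linear model with z correlated with x
rng = np.random.default_rng(2)
n = 1000
x = rng.uniform(0.0, 1.0, n)
z = rng.normal(size=(n, 1)) + x[:, None]
y = z @ np.array([2.0]) + np.sin(2 * np.pi * x) + rng.normal(scale=0.5, size=n)
print(differencing_estimator(x, z, y))   # close to the true beta of 2.0
```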

Book
01 Jan 2003
TL;DR and abstract: identical to the 2003 record listed above.

246 citations


Cited by
Book Chapter
01 Jan 1982
TL;DR: In this chapter, the authors discuss the leading energy problems now confronting the world, propose some possible solutions, and conclude that coal, natural gas, and nuclear power must be actively developed.
Abstract: This chapter discusses leading problems linked to energy that the world is now confronting and proposes some ideas concerning possible solutions. Oil deserves special attention among all energy sources. Since the beginning of 1981, the market has merely continued and amplified the downward movement in consumption and prices caused by earlier excessive price rises, especially for light crudes such as those from Africa, and by the slowing of worldwide economic growth. Densely populated oil-producing countries need to produce to live, to pay for their food and their equipment. If the economic growth of the industrialized countries were to be 4%, even if investment in the rational use of energy were pushed to the limit and the development of nonpetroleum energy sources were also pursued actively, it would be extremely difficult to prevent a sharp rise in prices. It is evident that the development of coal, natural gas, and nuclear power must be pursued actively if a physical shortage of energy is not to block economic growth.

2,283 citations

Journal Article
TL;DR: Monte Carlo analysis demonstrates that, for the types of hazards one often sees in substantive research, the polynomial approximation always outperforms time dummies and generally performs as well as splines or even more flexible autosmoothing procedures.
Abstract: Since Beck, Katz, and Tucker (1998), the standard method for modeling time dependence in binary data has been to incorporate time dummies or splined time in logistic regressions. Although we agree with the need for modeling time dependence, we demonstrate that time dummies can induce estimation problems due to separation. Splines do not suffer from these problems. However, the complexity of splines has led substantive researchers (1) to use knot values that may be inappropriate for their data and (2) to ignore any substantive discussion concerning temporal dependence. We propose a relatively simple alternative: including t, t², and t³ in the regression. This cubic polynomial approximation is trivial to implement, and therefore to interpret, and it avoids problems such as quasi-complete separation. Monte Carlo analysis demonstrates that, for the types of hazards one often sees in substantive research, the polynomial approximation always outperforms time dummies and generally performs as well as splines or even more flexible autosmoothing procedures. Due to its simplicity, this method also accommodates nonproportional hazards in a straightforward way. We reanalyze Crowley and Skocpol (2001) using nonproportional hazards and find new empirical support for the historical-institutionalist perspective.

1,314 citations
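The proposal is simple enough to show in a few lines. A hedged sketch on simulated data (variable names and values are illustrative, not the authors'), using statsmodels:

```python
import numpy as np
import statsmodels.api as sm

# Simulated binary time-series-cross-section-style data: `event` is the
# outcome and `t` counts periods since the last event.
rng = np.random.default_rng(3)
n = 2000
t = rng.integers(1, 30, size=n).astype(float)
x = rng.normal(size=n)
index = -2.0 + 0.8 * x + 0.15 * t - 0.005 * t**2   # a smooth, hump-shaped hazard
event = rng.binomial(1, 1.0 / (1.0 + np.exp(-index)))

# The article's proposal: enter t, t^2, t^3 directly instead of time dummies
# (rescaling t, e.g. t/10, can help numerically when t is large)
X = sm.add_constant(np.column_stack([x, t, t**2, t**3]))
fit = sm.Logit(event, X).fit(disp=0)
print(fit.params)   # coefficients on t, t^2, t^3 trace the duration dependence
```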

Book Chapter
TL;DR: While standard methods will not eliminate the bias when measurement errors are not classical, one can often use them to obtain bounds on this bias; the authors argue that validation studies allow us to assess both the magnitude of measurement errors in survey data and the validity of the classical assumption.
Abstract: Economists have devoted increasing attention to the magnitude and consequences of measurement error in their data. Most discussions of measurement error are based on the “classical” assumption that errors in measuring a particular variable are uncorrelated with the true value of that variable, the true values of other variables in the model, and any errors in measuring those variables. In this survey, we focus on both the importance of measurement error in standard survey-based economic variables and on the validity of the classical assumption. We begin by summarizing the literature on biases due to measurement error, contrasting the classical assumption and the more general case. We then argue that, while standard methods will not eliminate the bias when measurement errors are not classical, one can often use them to obtain bounds on this bias. Validation studies allow us to assess the magnitude of measurement errors in survey data, and the validity of the classical assumption. In principle, they provide an alternative strategy for reducing or eliminating the bias due to measurement error. We then turn to the work of social psychologists and survey methodologists which identifies the conditions under which measurement error is likely to be important. While there are some important general findings on errors in measuring recall of discrete events, there is less direct guidance on continuous variables such as hourly wages or annual earnings. Finally, we attempt to summarize the validation literature on specific variables: annual earnings, hourly wages, transfer income, assets, hours worked, unemployment, job characteristics like industry, occupation, and union status, health status, health expenditures, and education. In addition to the magnitude of the errors, we also focus on the validity of the classical assumption. Quite often, we find evidence that errors are negatively correlated with true values. The usefulness of validation data in telling us about errors in survey measures can be enhanced if validation data is collected for a random portion of major surveys (rather than, as is usually the case, for a separate convenience sample for which validation data could be obtained relatively easily); if users are more actively involved in the design of validation studies; and if micro data from validation studies can be shared with researchers not involved in the original data collection.

1,224 citations
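The classical-assumption benchmark the survey starts from has a well-known consequence: OLS on a mismeasured regressor is attenuated toward zero by the reliability ratio λ = Var(x*) / (Var(x*) + Var(u)). A small simulation (illustrative only, not from the chapter) makes the point:

```python
import numpy as np

# Classical measurement error: x_obs = x* + u, with u independent of x* and y.
rng = np.random.default_rng(4)
n, beta = 100_000, 1.0
x_star = rng.normal(scale=1.0, size=n)         # true regressor, Var = 1.0
u = rng.normal(scale=0.5, size=n)              # classical error, Var = 0.25
x_obs = x_star + u                             # what the survey records
y = beta * x_star + rng.normal(scale=1.0, size=n)

b_ols = np.polyfit(x_obs, y, 1)[0]             # OLS slope on the mismeasured regressor
lam = 1.0 / (1.0 + 0.25)                       # reliability ratio Var(x*)/(Var(x*)+Var(u))
print(b_ols, lam * beta)                       # both approximately 0.8
```

When the error is instead negatively correlated with the true value, as the survey often finds, this simple attenuation formula no longer applies, which is the chapter's motivation for bounds and validation data.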

Journal Article
TL;DR: In this paper, a finite mixture approach to conditional logit models is developed in which latent classes are used to promote understanding of systematic heterogeneity in wilderness recreation, and a branded choice experiment involving choice of one park from a demand system was administered to a sample of recreationists.
Abstract: A finite mixture approach to conditional logit models is developed in which latent classes are used to promote understanding of systematic heterogeneity. The model is applied to wilderness recreation, in which a branded choice experiment involving choice of one park from a demand system was administered to a sample of recreationists. The basis of membership in the classes or segments in the sample involved attitudinal measures of motivations for taking a trip, as well as their stated preferences over wilderness park attributes. The econometric analysis suggested that four classes of people exist in the sample. Using the model to examine welfare measures of some hypothetical policy changes identified markedly different welfare effects than the standard single-segment model, and provided insight into the differential impact of alternative policies.

1,167 citations
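A minimal version of the latent-class (finite mixture) conditional logit can be written down directly as a mixture log-likelihood. The sketch below simulates two classes and recovers the class-specific parameters by maximum likelihood; everything here (class count, names, data) is illustrative, not the paper's specification:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Simulate two latent classes of choosers (all values illustrative)
rng = np.random.default_rng(5)
N, J, K, C = 800, 3, 2, 2                       # people, alternatives, attributes, classes
X = rng.normal(size=(N, J, K))                  # alternative attributes
beta_true = np.array([[1.5, -1.0], [-0.5, 1.0]])
cls = rng.choice(C, size=N, p=[0.6, 0.4])       # latent class membership
V = np.einsum('njk,nk->nj', X, beta_true[cls])  # class-specific utilities
probs = np.exp(V - logsumexp(V, axis=1, keepdims=True))
choice = np.array([rng.choice(J, p=p_i) for p_i in probs])

def negloglik(theta):
    """Mixture-of-conditional-logits log-likelihood (negated)."""
    betas = theta[:C * K].reshape(C, K)         # class-specific taste parameters
    a = np.append(theta[C * K:], 0.0)           # class-share logits; last class normalized
    log_pi = a - logsumexp(a)
    Vc = np.einsum('njk,ck->ncj', X, betas)     # utility of each alternative, per class
    ll_ic = Vc[np.arange(N), :, choice] - logsumexp(Vc, axis=2)  # log P(choice | class)
    return -np.sum(logsumexp(log_pi[None, :] + ll_ic, axis=1))   # log sum_c pi_c P(.|c)

theta0 = rng.normal(scale=0.1, size=C * K + C - 1)
fit = minimize(negloglik, theta0, method='BFGS')
print(fit.x[:C * K].reshape(C, K))              # recovers beta_true up to class relabeling
```

The paper additionally links class membership to attitudinal covariates; the sketch keeps class shares as free constants for brevity.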

Journal Article
TL;DR: The authors developed a structural model of the global market for crude oil that for the first time explicitly allows for shocks to the speculative demand for oil as well as shocks to flow demand and flow supply.
Abstract: We develop a structural model of the global market for crude oil that, for the first time, explicitly allows for shocks to the speculative demand for oil as well as shocks to flow demand and flow supply. The speculative component of the real price of oil is identified with the help of data on oil inventories. Our estimates rule out explanations of the 2003–2008 oil price surge based on unexpectedly diminishing oil supplies and based on speculative trading. Instead, this surge was caused by unexpected increases in world oil consumption driven by the global business cycle. There is evidence, however, that speculative demand shifts played an important role during earlier oil price shock episodes, including 1979, 1986 and 1990. Our analysis implies that additional regulation of oil markets would not have prevented the 2003–2008 oil price surge. We also show that, even after accounting for the role of inventories in smoothing oil consumption, our estimate of the short-run price elasticity of oil demand is much higher than traditional estimates from dynamic models that do not account for the endogeneity of the price of oil.

1,156 citations
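The paper's identification relies on sign restrictions and inventory data, which is beyond a short sketch. As a much cruder stand-in, the following fits a reduced-form VAR on simulated placeholder series and computes recursively orthogonalized (Cholesky) impulse responses; the column names are illustrative, not the paper's dataset:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulated stand-in series; the paper's actual data and identification differ.
rng = np.random.default_rng(6)
T = 300
A = np.array([[0.5, 0.1, 0.0, 0.0],
              [0.0, 0.6, 0.1, 0.0],
              [0.1, 0.2, 0.5, 0.1],
              [0.0, 0.0, 0.1, 0.4]])
y = np.zeros((T, 4))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(size=4)   # stationary VAR(1) data

data = pd.DataFrame(y, columns=['oil_prod_growth', 'real_activity',
                                'real_oil_price', 'inventories'])
res = VAR(data).fit(maxlags=12, ic='aic')   # reduced-form VAR, lag order by AIC
irf = res.irf(24)                           # impulse responses, horizons up to 24
# irf.plot(orth=True) draws responses to recursively orthogonalized (Cholesky) shocks
print(res.k_ar)                             # selected lag order
```

A recursive ordering imposes far stronger timing assumptions than the paper's sign-restriction scheme; it is shown here only to make the reduced-form-to-structural step concrete.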