
Showing papers on "Parametric statistics published in 2001"


Journal ArticleDOI
TL;DR: Data Analysis by Resampling is a useful and clear introduction to resampling that would make an ambitious second course in statistics or a good third or later course and is quite well suited for self-study by an individual with just a few previous statistics courses.
Abstract: described and related to one another and to the different resampling methods is also notable. This is especially useful for the book’s target audience, for whom such concepts may not yet have taken root. On the computational side, the book may be a little less satisfying. Step-by-step computational algorithms are at some times inefficient and at other times cryptic so that an individual with little programming experience might have difficulty applying them. This problem is substantially offset by the presence of numerous detailed examples solved using existing software, providing readers roughly equal exposure to S-PLUS, SC, and Resampling Stats. Unfortunately, these examples often require large, complex programs, demonstrating as much as anything a need for better resampling software. On the whole, Data Analysis by Resampling is a useful and clear introduction to resampling. It would make an ambitious second course in statistics or a good third or later course. It is quite well suited for self-study by an individual with just a few previous statistics courses. Although it would be miscast as a graduate-level textbook or as a research reference—for one thing it lacks a thorough bibliography to make up for its surface treatment of many of the topics it covers—it is a very nice book for any reader seeking an introductory book on resampling.

1,840 citations


Journal ArticleDOI
TL;DR: Finite-time control problems for linear systems subject to time-varying parametric uncertainties and to exogenous constant disturbances are considered and a sufficient condition for robust finite-time stabilization via state feedback is provided.

839 citations


Journal ArticleDOI
TL;DR: The present paper’s principal topic is the estimation of the variability of fitted parameters and derived quantities, such as thresholds and slopes; it also introduces bias-corrected and accelerated confidence intervals that improve on the parametric and percentile-based bootstrap confidence intervals previously used.
Abstract: The psychometric function relates an observer's performance to an independent variable, usually a physical quantity of an experimental stimulus. Even if a model is successfully fit to the data and its goodness of fit is acceptable, experimenters require an estimate of the variability of the parameters to assess whether differences across conditions are significant. Accurate estimates of variability are difficult to obtain, however, given the typically small size of psychophysical data sets: Traditional statistical techniques are only asymptotically correct and can be shown to be unreliable in some common situations. Here and in our companion paper (Wichmann & Hill, 2001), we suggest alternative statistical techniques based on Monte Carlo resampling methods. The present paper's principal topic is the estimation of the variability of fitted parameters and derived quantities, such as thresholds and slopes. First, we outline the basic bootstrap procedure and argue in favor of the parametric, as opposed to the nonparametric, bootstrap. Second, we describe how the bootstrap bridging assumption, on which the validity of the procedure depends, can be tested. Third, we show how one's choice of sampling scheme (the placement of sample points on the stimulus axis) strongly affects the reliability of bootstrap confidence intervals, and we make recommendations on how to sample the psychometric function efficiently. Fourth, we show that, under certain circumstances, the (arbitrary) choice of the distribution function can exert an unwanted influence on the size of the bootstrap confidence intervals obtained, and we make recommendations on how to avoid this influence. Finally, we introduce improved confidence intervals (bias corrected and accelerated) that improve on the parametric and percentile-based bootstrap confidence intervals previously used. Software implementing our methods is available.
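To make the procedure concrete, here is a minimal sketch of the parametric bootstrap the abstract describes, assuming a logistic two-alternative forced-choice psychometric function fit by maximum likelihood; the stimulus levels, trial counts, parameter bounds, and the simple percentile interval are illustrative (the paper itself recommends bias-corrected and accelerated intervals):

```python
# Parametric bootstrap for a psychometric function: fit the curve, resample
# binomial data from the *fitted* curve, refit, and read off the spread of
# the refitted parameters. All numbers below are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def psychometric(x, alpha, beta, lam, gamma=0.5):
    """2AFC psychometric function: guessing rate gamma, lapse rate lam."""
    return gamma + (1 - gamma - lam) / (1 + np.exp(-(x - alpha) / beta))

def fit(x, k, n):
    """Maximum-likelihood fit of (alpha, beta, lambda) to binomial counts k/n."""
    def nll(theta):
        p = np.clip(psychometric(x, *theta), 1e-9, 1 - 1e-9)
        return -np.sum(binom.logpmf(k, n, p))
    return minimize(nll, x0=[np.median(x), 1.0, 0.02],
                    bounds=[(None, None), (1e-3, None), (0.0, 0.06)]).x

# Observed data: stimulus levels, correct responses, trials per level.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
k = np.array([27, 30, 38, 45, 49, 50])
n = np.full(6, 50)

theta_hat = fit(x, k, n)

# Parametric bootstrap: simulate from the fitted curve and refit each time.
rng = np.random.default_rng(0)
boot = np.array([fit(x, rng.binomial(n, psychometric(x, *theta_hat)), n)
                 for _ in range(999)])
thresholds = boot[:, 0]  # alpha is the threshold in this parameterization
lo, hi = np.percentile(thresholds, [2.5, 97.5])
print(f"threshold = {theta_hat[0]:.2f}, 95% percentile CI = [{lo:.2f}, {hi:.2f}]")
```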

838 citations


BookDOI
01 Sep 2001
TL;DR: This book presents methods for event history analysis, including nonparametric descriptive methods, exponential and piecewise constant exponential transition rate models, parametric models of time-dependence, methods to check parametric assumptions, and semi-parametric transition rate models, with implementations based on the TDA software.
Abstract: Contents: Preface. Introduction. Event History Data Structures. Nonparametric Descriptive Methods. Exponential Transition Rate Models. Piecewise Constant Exponential Models. Exponential Models With Time-Dependent Covariates. Parametric Models of Time-Dependence. Methods to Check Parametric Assumptions. Semi-Parametric Transition Rate Models. Problems of Model Specification. Appendix: Basic Information About TDA.

785 citations


Patent
31 Aug 2001
TL;DR: In this paper, the authors present a system for measuring wireless device and wireless network usage and performance metrics, which includes at least one wireless device and data gathering software installed on the wireless device for collecting device parametric data.
Abstract: Systems and methods for measuring wireless device and wireless network usage and performance metrics are set forth. The system includes at least one wireless device and data gathering software installed on the wireless device for collecting device parametric data, network parametric data, and event data. A control center may receive, store, and process said device parametric data, network parametric data, and event data.

784 citations


Journal ArticleDOI
TL;DR: A simulation-based approximate dynamic programming method for pricing complex American-style options, with a possibly high-dimensional underlying state space, and a related method which uses a single (parameterized) value function, which is a function of the time-state pair.
Abstract: We introduce and analyze a simulation-based approximate dynamic programming method for pricing complex American-style options, with a possibly high-dimensional underlying state space. We work within a finitely parameterized family of approximate value functions, and introduce a variant of value iteration, adapted to this parametric setting. We also introduce a related method which uses a single (parameterized) value function, which is a function of the time-state pair, as opposed to using a separate (independently parameterized) value function for each time. Our methods involve the evaluation of value functions at a finite set, consisting of "representative" elements of the state space. We show that with an arbitrary choice of this set, the approximation error can grow exponentially with the time horizon (time to expiration). On the other hand, if representative states are chosen by simulating the state process using the underlying risk-neutral probability distribution, then the approximation error remains bounded.
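A sketch of the flavor of method the abstract describes: backward value iteration within a finitely parameterized (here, polynomial) family of value functions, with representative states obtained by simulating the risk-neutral price process. The Bermudan put, the cubic basis, and all parameters are illustrative assumptions, not the authors' exact algorithm:

```python
# Parametric value iteration for a Bermudan put. Representative states at
# each exercise date come from simulated risk-neutral GBM paths, as the
# abstract recommends for keeping the approximation error bounded.
import numpy as np

rng = np.random.default_rng(1)
S0, K, r, sigma, T, steps, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20_000
dt = T / steps
disc = np.exp(-r * dt)

# Simulate risk-neutral GBM paths; each time slice supplies representative states.
z = rng.standard_normal((paths, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((paths, 1), S0), S])

def basis(s):
    """Finitely parameterized value-function family: cubic polynomial in s/K."""
    u = s / K
    return np.column_stack([np.ones_like(u), u, u**2, u**3])

payoff = lambda s: np.maximum(K - s, 0.0)

# Backward value iteration: regress the discounted next-step value onto the
# basis to get a continuation value, then take max(exercise, continue).
V = payoff(S[:, -1])
for t in range(steps - 1, 0, -1):
    X = basis(S[:, t])
    theta, *_ = np.linalg.lstsq(X, disc * V, rcond=None)
    cont = X @ theta
    V = np.maximum(payoff(S[:, t]), cont)

price = disc * V.mean()
print(f"approximate Bermudan put value: {price:.3f}")
```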

695 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to provide a general approach to model selection via penalization for Gaussian regression and to develop the authors’ point of view on this subject.
Abstract: Our purpose in this paper is to provide a general approach to model selection via penalization for Gaussian regression and to develop our point of view about this subject. The advantage and importance of model selection come from the fact that it provides a suitable approach to many different types of problems, starting from model selection per se (among a family of parametric models, which one is more suitable for the data at hand), which includes for instance variable selection in regression models, to nonparametric estimation, for which it provides a very powerful tool that allows adaptation under quite general circumstances. Our approach to model selection also provides a natural connection between the parametric and nonparametric points of view and copes naturally with the fact that a model is not necessarily true. The method is based on the penalization of a least squares criterion which can be viewed as a generalization of Mallows’ Cp. A large part of our efforts will be put on choosing properly the list of models and the penalty function for various estimation problems like classical variable selection or adaptive estimation for various types of ℓp-bodies.
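In the notation commonly used for this approach (assumed here for illustration), the criterion is a penalized residual sum of squares, with Mallows' Cp as a special case:

```latex
% Penalized least-squares model selection: among models {S_m}, choose the
% one minimizing a penalized residual sum of squares.
\[
  \hat{m} \;=\; \arg\min_{m \in \mathcal{M}}
  \left\{ \,\lVert Y - \hat{s}_m \rVert_n^2 \;+\; \mathrm{pen}(m) \right\},
\]
% where \hat{s}_m is the least-squares estimator on model S_m of dimension D_m.
% Mallows' C_p corresponds to the particular penalty
\[
  \mathrm{pen}(m) \;=\; \frac{2\sigma^2 D_m}{n},
\]
% and the paper's generalizations choose pen(m) to handle large model lists
% and to achieve adaptivity in nonparametric settings.
```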

560 citations


Journal ArticleDOI
01 Sep 2001 - Geoderma
TL;DR: Two new criteria (exceedence probability plot and narrowness of probability intervals that include the true values) are presented to assess the accuracy and precision of local uncertainty models using cross-validation.

514 citations


Journal ArticleDOI
TL;DR: The effectiveness of the proposed controller design methodology is finally demonstrated through numerical simulations on the chaotic Lorenz system, which has complex nonlinearity.
Abstract: Addresses the robust fuzzy control problem for nonlinear systems in the presence of parametric uncertainties. The Takagi-Sugeno (T-S) fuzzy model is adopted for fuzzy modeling of the nonlinear system. Both the continuous-time and the discrete-time cases of the T-S fuzzy system with parametric uncertainties are considered. In both cases, sufficient conditions are derived for robust stabilization in the sense of Lyapunov asymptotic stability for the T-S fuzzy system with parametric uncertainties. The sufficient conditions are formulated in the format of linear matrix inequalities. The T-S fuzzy model of the chaotic Lorenz system, which has complex nonlinearity, is developed as a test bed. The effectiveness of the proposed controller design methodology is finally demonstrated through numerical simulations on the chaotic Lorenz system.
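A minimal sketch of the kind of LMI feasibility problem involved, assuming a common quadratic Lyapunov function and per-rule stabilization conditions only; the paper's uncertainty bounds and cross-rule conditions are omitted for brevity, the two local models are illustrative (not the Lorenz test bed), and cvxpy is used as the SDP front end:

```python
# Per-rule stabilization LMIs with a common quadratic Lyapunov function
# V(x) = x' X^{-1} x and gains K_i = M_i X^{-1}. Requires cvxpy (with its
# bundled SCS solver). All matrices below are illustrative.
import numpy as np
import cvxpy as cp

A = [np.array([[0.0, 1.0], [-1.0, -0.5]]),
     np.array([[0.0, 1.0], [2.0, -0.3]])]    # local linear models (fuzzy rules)
B = [np.array([[0.0], [1.0]])] * 2

n, m = 2, 1
X = cp.Variable((n, n), symmetric=True)
M = [cp.Variable((m, n)) for _ in A]

constraints = [X >> 1e-6 * np.eye(n)]
for Ai, Bi, Mi in zip(A, B, M):
    # Rule-wise LMI: Ai X + X Ai' - Bi Mi - (Bi Mi)' < 0. The expression is
    # symmetric in exact arithmetic; cvxpy's << constrains its symmetric part.
    lhs = Ai @ X + X @ Ai.T - Bi @ Mi - Mi.T @ Bi.T
    constraints.append(lhs << -1e-6 * np.eye(n))

cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)
K = [Mi.value @ np.linalg.inv(X.value) for Mi in M]
print("rule-wise state-feedback gains:", K)
```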

510 citations


Book ChapterDOI
TL;DR: In this article, the authors provide a review of linear panel data models with predetermined variables and compare the identification from moment conditions in each case, and the implications of alternative feedback schemes for the time series properties of the errors.
Abstract: This chapter focuses on two of the developments in panel data econometrics since the Handbook chapter by Chamberlain (1984). The first objective of this chapter is to provide a review of linear panel data models with predetermined variables. We discuss the implications of assuming that explanatory variables are predetermined as opposed to strictly exogenous in dynamic structural equations with unobserved heterogeneity. We compare the identification from moment conditions in each case, and the implications of alternative feedback schemes for the time series properties of the errors. We next consider autoregressive error component models under various auxiliary assumptions. There is a trade-off between robustness and efficiency since assumptions of stationary initial conditions or time series homoskedasticity can be very informative, but estimators are not robust to their violation. We also discuss the identification problems that arise in models with predetermined variables and multiple effects. Concerning inference in linear models with predetermined variables, we discuss the form of optimal instruments, and the sampling properties of GMM and LIML-analogue estimators drawing on Monte Carlo results and asymptotic approximations. A number of identification results for limited dependent variable models with fixed effects and strictly exogenous variables are available in the literature, as well as some results on consistent and asymptotically normal estimation of such models. There are also some results available for models of this type including lags of the dependent variable, although even less is known for nonlinear dynamic models. Reviewing the recent work on discrete choice and selectivity models with fixed effects is the second objective of this chapter. A feature of parametric limited dependent variable models is their fragility to auxiliary distributional assumptions. This situation prompted the development of a large literature dealing with semiparametric alternatives (reviewed in Powell’s 1994 chapter). The work that we review in the second part of the chapter is thus at the intersection of the panel data literature and that on cross-sectional semiparametric limited dependent variable models.

509 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a new test of a parametric model of a conditional mean function against a nonparametric alternative, which adapts to the unknown smoothness of the alternative model and is uniformly consistent against alternatives whose distance from the parametric models converges to zero at the fastest possible rate.
Abstract: We develop a new test of a parametric model of a conditional mean function against a nonparametric alternative. The test adapts to the unknown smoothness of the alternative model and is uniformly consistent against alternatives whose distance from the parametric model converges to zero at the fastest possible rate. This rate is slower than n^{-1/2}. Some existing tests have nontrivial power against restricted classes of alternatives whose distance from the parametric model decreases at the rate n^{-1/2}. There are, however, sequences of alternatives against which these tests are inconsistent and ours is consistent. As a consequence, there are alternative models for which the finite-sample power of our test greatly exceeds that of existing tests. This conclusion is illustrated by the results of some Monte Carlo experiments.
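The following sketch illustrates the adaptive idea in a simplified form: smooth the residuals from the parametric fit at several bandwidths and take the largest standardized statistic, with critical values from a wild bootstrap. The kernel, bandwidth grid, studentization, and bootstrap are assumptions for illustration, not the authors' exact construction:

```python
# Max-over-bandwidths kernel specification test of a linear null model.
# Because the statistic maximizes over a bandwidth grid, the test can detect
# both smooth and wiggly departures without choosing a single bandwidth.
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = 1.0 + 2.0 * x + 0.3 * np.sin(6 * np.pi * x) + 0.2 * rng.standard_normal(n)

# Parametric null model: linear regression.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

def stat(res, h):
    """Kernel-weighted quadratic form of residuals at bandwidth h, standardized."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(K, 0.0)            # drop the diagonal to reduce bias
    t = res @ K @ res
    return t / np.sqrt(2.0 * np.sum(K**2) * np.var(res) ** 2)

bandwidths = [0.02, 0.05, 0.1, 0.2]
T_obs = max(stat(resid, h) for h in bandwidths)

# Wild bootstrap under the null to get the critical value of the max statistic.
T_boot = []
for _ in range(499):
    e = resid * rng.choice([-1.0, 1.0], n)
    yb = X @ beta + e
    rb = yb - X @ np.linalg.lstsq(X, yb, rcond=None)[0]
    T_boot.append(max(stat(rb, h) for h in bandwidths))
print(f"T = {T_obs:.2f}, bootstrap 95% critical value = {np.quantile(T_boot, 0.95):.2f}")
```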

Journal ArticleDOI
TL;DR: This paper concludes with a discussion of the inherent advantages of kernel estimation techniques and systematic errors associated with the estimation of parent distributions.

Journal ArticleDOI
TL;DR: In this article, two globally stable control algorithms for robust stabilization of spacecraft in the presence of control input saturation, parametric uncertainty, and external disturbances are proposed and a detailed stability analysis for the resulting closed-loop system is provided.
Abstract: In this paper we propose two globally stable control algorithms for robust stabilization of spacecraft in the presence of control input saturation, parametric uncertainty, and external disturbances. The control algorithms are based on variable structure control design and have the following properties: 1) fast and accurate response in the presence of bounded external disturbances and parametric uncertainty; 2) explicit accounting for control input saturation; 3) computational simplicity and straightforward tuning. We include a detailed stability analysis for the resulting closed-loop system. The stability proof is based on a Lyapunov-like analysis and the properties of the quaternion representation of spacecraft dynamics. It is also shown that an adaptive version of the proposed controller results in substantially simpler stability analysis and improved overall response. We also include numerical simulations to illustrate the spacecraft performance obtained using the proposed controllers.

Book
28 Nov 2001
TL;DR: This book develops accelerated life and degradation models, covering parametric and semi-parametric estimation for failure time regression (FTR) and accelerated life testing (ALT) data under the AFT, PH, and generalized proportional hazards (GPH) models, together with goodness-of-fit procedures.
Abstract: Failure Time Distributions Introduction Parametric Classes of Failure Time Distributions Accelerated Life Models Introduction Generalized Sedyakin's Model Accelerated Failure Time Model Proportional Hazards Model Generalized Proportional Hazards Models Generalized Additive and Additive-Multiplicative Hazards Models Changing Shape and Scale Models Generalizations Models Including Switch-Up and Cycling Effects Heredity Hypothesis Summary Accelerated Degradation Models Introduction Degradation Models Modeling the Influence of Explanatory Variables on Degradation Modeling the Traumatic Event Process Maximum Likelihood Estimation for FTR Data Censored Failure Time Data Parametric Likelihood Function for Right Censored FTR Data Score Function Asymptotic Properties of the Maximum Likelihood Estimators Approximate Confidence Intervals Some Remarks on Semi-Parametric Estimation AFT Model: Parametric FTR and ALT Data Analysis Parametrization of the AFT Model Interpretation of the Regression Coefficients FTR Data Analysis: Scale-Shape Families of Distributions FTR Data Analysis: Generalized Weibull Distribution FTR Data Analysis: Exponential Distribution Plans of Experiments in Accelerated Life Testing Parametric Estimation in ALT Under the AFT Model AFT Models: Semi-Parametric FTR and AFT Data Analysis FTR Data Analysis Semi-Parametric Estimation in ALT PH Model: Semi-Parametric FTR Data Analysis Introduction Parametrization of the PH Model Interpretation of the Regression Coefficients Semi-Parametric FTR Data Analysis for the PH Model GPH Models: FTR Analysis Introduction Semi-Parametric FTR Data Analysis for the GPH1 Models Semi-Parametric FTR Data Analysis: Intersecting Hazards Changing Scale and Shape Model Parametric FTR Data Analysis Semi-Parametric FTR Data Analysis Semi-Parametric Estimation in ALT GAH and GAMH Model: Semi-Parametric FTR and ALT Data Analysis GAH Model GAMH Model AAR Model PPAR Model Estimation When a Process of Production Is Unstable Application of the AFT Model Application of the GPH1 Model Goodness-of-Fit for Accelerated Life Models Goodness-of-Fit for the GS Model Goodness-of-Fit for the Model with Absence of Memory Goodness-of-Fit for the AFT Model Goodness-of-Fit for the PH Model Goodness-of-Fit for the GPH Models Goodness-of-Fit for the Parametric Regression Models Estimation in Degradation Models with Explanatory Variables Introduction Linear Path Models Gamma and Shock Processes Some Results from Stochastic Process Theory Stochastic Process. Filtration Counting Process Stochastic Integral Conditional Expectation Martingale Predictable Process and Doob-Meyer Decomposition Predictable Variation and Predictable Covariation Stochastic Integrals with Respect to Martingales Localization Stochastic Integrals with Respect to Martingales (continuation) Weak Convergence Central Limit Theorem for Martingales Non-Parametric Estimators of the Cumulative Hazard and the Survival Function Product-Integral Delta Method References

Journal ArticleDOI
Li Xu, Bin Yao
TL;DR: In this article, a discontinuous projection based adaptive robust controller (ARC) is proposed to guarantee a prescribed transient performance and final tracking accuracy in general, while achieving asymptotic tracking in the presence of parametric uncertainties.
Abstract: This paper studies the high performance robust motion control of an epoxy core linear motor, which has negligible electrical dynamics due to the fast response of the electrical subsystem. A discontinuous projection based adaptive robust controller (ARC) is first constructed. The controller theoretically guarantees a prescribed transient performance and final tracking accuracy in general, while achieving asymptotic tracking in the presence of parametric uncertainties. A desired compensation ARC scheme is then presented, in which the regressor is calculated using the reference trajectory information only. The resulting controller has several implementation advantages such as less online computation time, reduced effect of measurement noise, a separation of robust control design from parameter adaptation, and a faster adaptation rate. Both schemes are implemented and compared on an epoxy core linear motor. Extensive comparative experimental results are presented to illustrate the effectiveness and the achievable control performance of the two ARC designs.

Journal ArticleDOI
TL;DR: The main result is that a minimal confidence ellipsoid for the state, consistent with the measured output and the uncertainty description, may be recursively computed in polynomial time, using interior-point methods for convex optimization.
Abstract: This note presents a new approach to finite-horizon guaranteed state prediction for discrete-time systems affected by bounded noise and unknown-but-bounded parameter uncertainty. Our framework handles possibly nonlinear dependence of the state-space matrices on the uncertain parameters. The main result is that a minimal confidence ellipsoid for the state, consistent with the measured output and the uncertainty description, may be recursively computed in polynomial time, using interior-point methods for convex optimization. With n states and l uncertain parameters appearing linearly in the state-space matrices, with rank-one matrix coefficients, the worst-case complexity grows as O(l(n + l)^3.5). With unstructured uncertainty in all system matrices, the worst-case complexity reduces to O(n^3.5).

Journal ArticleDOI
TL;DR: The authors make the case that nonparametric techniques need not be limited to use by econometricians; they discuss both density estimation and nonparametric regression, which concerns estimation of regression functions without the straightjacket of a specific functional form.
Abstract: Even a cursory look at the empirical literature in most fields of economics reveals that a majority of applications use simple parametric approaches such as ordinary least squares regression or two-stage least squares accompanied by simple descriptive statistics. The use of such methods has persisted despite the development of more general nonparametric techniques in the recent (and perhaps not-so-recent) statistics and econometrics literatures. At least two possible explanations for this come to mind. First, given the challenges (or lack thereof) provided by economic theories with empirical content, the parametric toolkit is more than sufficient. Where serious first-order problems in nonexperimental inference exist, they are in the inadequacy of the research design and data, not in the limitations of the parametric approach. Second, the predominant use of parametric approaches may reflect the lack of sufficient computational power or the difficulty of computation with off-the-shelf statistical software. Given the recent advances in computing power and software (as well as the development of the necessary theoretical foundation), only the first point remains an open question. The purpose of this article is to make the case that nonparametric techniques need not be limited to use by econometricians. Our discussion is divided into two parts. In the first part, we focus on “density estimation”—estimation of the entire distribution of a variable or set of variables. In the second part, we discuss nonparametric regression, which concerns estimation of regression functions without the straightjacket of a specific functional form.
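For readers who want the two tools in executable form, here is a minimal sketch of a Gaussian kernel density estimate and a Nadaraya-Watson kernel regression; the data and the rule-of-thumb bandwidth are illustrative:

```python
# Kernel density estimation and kernel regression, the two nonparametric
# tools the survey discusses, each in a few lines of numpy.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 500)                             # sample whose density we estimate
y = np.sin(2 * x) + 0.3 * rng.standard_normal(500)    # response for the regression

def kde(grid, data, h):
    """Gaussian kernel density estimate on `grid` with bandwidth h."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def nw(grid, xdata, ydata, h):
    """Nadaraya-Watson kernel regression: locally weighted average of y."""
    w = np.exp(-0.5 * ((grid[:, None] - xdata[None, :]) / h) ** 2)
    return (w * ydata).sum(axis=1) / w.sum(axis=1)

h = 1.06 * x.std() * len(x) ** (-1 / 5)               # Silverman's rule of thumb
print("density at 0:", kde(np.array([0.0]), x, h))
print("E[y|x=1] estimate:", nw(np.array([1.0]), x, y, h))
```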

Journal ArticleDOI
TL;DR: In this paper, the authors review applications of stochastic matrix models to problems in conservation and use simulation studies to compare the performance of different analytic methods currently in use, finding that model conclusions are likely to be robust to the choice of parametric distribution used to model vital rate fluctuations over time.
Abstract: Stochastic matrix models are frequently used by conservation biologists to measure the viability of species and to explore various management actions. Models are typically parameterized using two or more sets of estimated transition rates between age/size/stage classes. While standard methods exist for analyzing a single set of transition rates, a variety of methods have been employed to analyze multiple sets of transition rates. We review applications of stochastic matrix models to problems in conservation and use simulation studies to compare the performance of different analytic methods currently in use. We find that model conclusions are likely to be robust to the choice of parametric distribution used to model vital rate fluctuations over time. However, conclusions can be highly sensitive to the within-year correlation structure among vital rates, and therefore we suggest using analytical methods that provide a means of conducting a sensitivity analysis with respect to correlation parameters. Our simulation results also suggest that the precision of population viability estimates can be improved by using matrix models that incorporate environmental covariates in conjunction with experiments to estimate transition rates under a range of environmental conditions.
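A minimal sketch of a stochastic matrix projection with within-year correlated vital rates, the sensitivity the review emphasizes; the two-stage life cycle, logit-normal rates, fecundity, and quasi-extinction threshold are illustrative assumptions, not taken from any study in the review:

```python
# Stochastic stage-structured projection: draw correlated vital rates each
# year, project the population, and estimate a quasi-extinction probability.
import numpy as np

rng = np.random.default_rng(4)
years, reps = 100, 1000
rho = 0.8  # within-year correlation between juvenile and adult survival

extinct = 0
for _ in range(reps):
    n = np.array([50.0, 50.0])  # juveniles, adults
    for _ in range(years):
        # Correlated normal deviates mapped through a logit to survival rates.
        z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]])
        s_juv = 1 / (1 + np.exp(-(0.2 + 0.5 * z[0])))   # logit-normal survival
        s_ad = 1 / (1 + np.exp(-(1.0 + 0.5 * z[1])))
        fec = 0.5                                        # adult fecundity
        A = np.array([[0.0, fec],
                      [s_juv, s_ad]])                    # stage-structured matrix
        n = A @ n
        if n.sum() < 2:                                  # quasi-extinction
            extinct += 1
            break

print(f"quasi-extinction probability over {years} yr: {extinct / reps:.3f}")
```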

Proceedings ArticleDOI
18 Sep 2001
TL;DR: It is shown that the input-output decoupling problem is not solvable for this model by means of a static state feedback control law and a dynamic feedback controller is developed which renders the closed-loop system linear, controllable and noninteractive after a change of coordinates in the state-space.
Abstract: Presents a nonlinear dynamic model for a four rotors helicopter in a form suited for control design. We show that the input-output decoupling problem is not solvable for this model by means of a static state feedback control law. Then, a dynamic feedback controller is developed which renders the closed-loop system linear, controllable and noninteractive after a change of coordinates in the state-space. Finally, the stability and the robustness of the proposed control law in the presence of wind, turbulences and parametric uncertainties is analyzed through a simulated case study.

Journal ArticleDOI
TL;DR: In this paper, the Akaike Information Criterion (AIC) was used to identify the optimum distribution, i.e., the distribution and trend function, which enabled an identification of the optimum non-stationary FFM in a class of 56 competing models.

Journal ArticleDOI
TL;DR: In this article, a modified LRT for homogeneity in finite mixture models with a general parametric kernel distribution family is proposed, which has a χ2-type null limiting distribution and is asymptotically most powerful under local alternatives.
Abstract: Summary. Testing for homogeneity in finite mixture models has been investigated by many researchers. The asymptotic null distribution of the likelihood ratio test (LRT) is very complex and difficult to use in practice. We propose a modified LRT for homogeneity in finite mixture models with a general parametric kernel distribution family. The modified LRT has a χ2-type null limiting distribution and is asymptotically most powerful under local alternatives. Simulations show that it performs better than competing tests. They also reveal that the limiting distribution with some adjustment can satisfactorily approximate the quantiles of the test statistic, even for moderate sample sizes.
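A sketch of a modified LRT of this kind for a two-component normal location mixture with known unit variance; the penalty C·log(4λ(1−λ)), the constant C = log(n), and the multi-start optimization are illustrative choices, not necessarily the authors' recommended settings:

```python
# Modified LRT for homogeneity: penalize the mixture log-likelihood so the
# mixing proportion lam stays away from 0 and 1, fit the one- and two-
# component models, and compare twice the penalized log-likelihood gap.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 300)          # data generated under homogeneity
n = len(x)
C = np.log(n)                          # illustrative penalty constant

def pen_loglik(params):
    lam, m1, m2 = params
    mix = lam * norm.pdf(x, m1, 1.0) + (1 - lam) * norm.pdf(x, m2, 1.0)
    return np.log(np.maximum(mix, 1e-300)).sum() + C * np.log(4 * lam * (1 - lam))

# Alternative fit: maximize over (lam, m1, m2) with a small multi-start.
best = -np.inf
for lam0 in (0.3, 0.5, 0.7):
    for d in (-0.5, 0.5):
        res = minimize(lambda p: -pen_loglik(p),
                       x0=[lam0, x.mean() + d, x.mean() - d],
                       bounds=[(0.01, 0.99), (None, None), (None, None)])
        best = max(best, -res.fun)

# Null fit: one component; at lam = 1/2 the penalty is log(1) = 0.
l0 = norm.logpdf(x, x.mean(), 1.0).sum()
M = 2 * (best - l0)
print(f"modified LRT statistic: {M:.3f}")  # compare with the chi2-type limit
```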

Journal ArticleDOI
David Scott
TL;DR: This article investigates the use of integrated square error, or L2 distance, as a theoretical and practical estimation tool for a variety of parametric statistical models and demonstrates by example the well-known result that minimum distance estimators, including L2E, are inherently robust.
Abstract: The likelihood function plays a central role in parametric and Bayesian estimation, as well as in nonparametric function estimation via local polynomial modeling. However, integrated square error has enjoyed a long tradition as the goodness-of-fit criterion of choice in nonparametric density estimation. In this article, I investigate the use of integrated square error, or L2 distance, as a theoretical and practical estimation tool for a variety of parametric statistical models. I show that the asymptotic inefficiency of the parameters estimated by minimizing the integrated square error or L2 estimation (L2E) criterion versus the maximum likelihood estimator is roughly that of the median versus the mean. I demonstrate by example the well-known result that minimum distance estimators, including L2E, are inherently robust; however, L2E does not require specification of any tuning factors found in robust likelihood algorithms. L2E is particularly appropriate for analyzing massive datasets in which data cleanin...
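For a normal model the L2E criterion has a closed form, since the integrated squared density is 1/(2σ√π); here is a minimal sketch, with a contaminated sample to illustrate the robustness the article demonstrates:

```python
# L2E for a normal model: minimize the integrated squared error criterion
#   1/(2*sigma*sqrt(pi)) - (2/n) * sum(phi(x_i; mu, sigma)).
# The 10%-contaminated sample is illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 450), rng.normal(8, 1, 50)])  # 10% outliers

def l2e(theta):
    mu, log_s = theta
    s = np.exp(log_s)                       # keep sigma positive
    return 1 / (2 * s * np.sqrt(np.pi)) - 2 * norm.pdf(x, mu, s).mean()

res = minimize(l2e, x0=[np.median(x), 0.0])
mu_hat, s_hat = res.x[0], np.exp(res.x[1])
print(f"L2E:  mu = {mu_hat:.2f}, sigma = {s_hat:.2f}")      # near (0, 1)
print(f"MLE:  mu = {x.mean():.2f}, sigma = {x.std():.2f}")  # pulled by outliers
```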

Journal ArticleDOI
TL;DR: This work addresses the problem of stability of a vehicle string in the presence of parametric uncertainty and presents a Lyapunov-based decentralized adaptive control algorithm to compensate for such parametric variations.
Abstract: An important aspect of an automated highway system design is the synthesis of an automatic vehicle following system. Associated with automatic vehicle following systems is the problem of the stability of a string of vehicles, i.e., the problem of spacing error propagation, and in some cases, amplification upstream from one vehicle to another, due to some disturbance at the head of the string. Realistic vehicle following designs must also address parametric uncertainties such as mass of the vehicle, aerodynamic drag, and tire drag. The mass of the vehicle varies with the number of passengers. At small intervehicular separations, aerodynamic drag force changes significantly with the distance to be maintained. We address the problem of stability of a vehicle string in the presence of parametric uncertainty and present a Lyapunov-based decentralized adaptive control algorithm to compensate for such parametric variations. We examine this direct adaptive control algorithm for platoon performance and parameter convergence. We present the simulation results to demonstrate the effectiveness of the adaptive controller.

Journal ArticleDOI
TL;DR: In this paper, the authors fit three-level random-intercept models to actual data for two binary outcomes, to assess whether refined approximation procedures, namely penalized quasi-likelihood and second-order improvements to marginal and penalized quasi-likelihood, also underestimate the underlying parameters.
Abstract: Summary. During recent years, analysts have been relying on approximate methods of inference to estimate multilevel models for binary or count data. In an earlier study of random-intercept models for binary outcomes we used simulated data to demonstrate that one such approximation, known as marginal quasi-likelihood, leads to a substantial attenuation bias in the estimates of both fixed and random effects whenever the random effects are non-trivial. In this paper, we fit three-level random-intercept models to actual data for two binary outcomes, to assess whether refined approximation procedures, namely penalized quasi-likelihood and second-order improvements to marginal and penalized quasi-likelihood, also underestimate the underlying parameters. The extent of the bias is assessed by two standards of comparison: exact maximum likelihood estimates, based on a Gauss-Hermite numerical quadrature procedure, and a set of Bayesian estimates, obtained from Gibbs sampling with diffuse priors. We also examine the effectiveness of a parametric bootstrap procedure for reducing the bias. The results indicate that second-order penalized quasi-likelihood estimates provide a considerable improvement over the other approximations, but all the methods of approximate inference result in a substantial underestimation of the fixed and random effects when the random effects are sizable. We also find that the parametric bootstrap method can eliminate the bias but is computationally very intensive.
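A generic sketch of the parametric bootstrap bias correction the paper evaluates, shown on the downward-biased maximum likelihood estimator of a normal variance rather than the (much longer) three-level logistic model:

```python
# Parametric bootstrap bias correction: simulate from the fitted model,
# re-estimate, and subtract the estimated bias from the original estimate.
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(0, 2, 20)
theta_hat = x.var()                     # biased MLE of sigma^2 (divides by n)

B = 2000
boot = np.array([rng.normal(0, np.sqrt(theta_hat), x.size).var()
                 for _ in range(B)])
bias = boot.mean() - theta_hat          # estimated bias of the estimator
theta_corrected = theta_hat - bias      # i.e., 2*theta_hat - mean(boot)

print(f"MLE = {theta_hat:.3f}, bias-corrected = {theta_corrected:.3f} (true 4)")
```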

Posted Content
TL;DR: In this article, the authors considered the problem of estimating a partially linear semiparametric fixed effects panel data model with possible endogeneity and established the root N normality result for the estimator of the parametric component.
Abstract: This paper considers the problem of estimating a partially linear semiparametric fixed effects panel data model with possible endogeneity. Using the series method, we establish the root N normality result for the estimator of the parametric component, and we show that the unknown function can be consistently estimated at the standard nonparametric rate.

Journal ArticleDOI
Jorma Rissanen
TL;DR: It is shown that the normalized maximum-likelihood distribution as a universal code for a parametric class of models is closest to the negative logarithm of the maximized likelihood in the mean code length distance, where the mean is taken with respect to the worst case model inside or outside theparametric class.
Abstract: We show that the normalized maximum-likelihood (NML) distribution as a universal code for a parametric class of models is closest to the negative logarithm of the maximized likelihood in the mean code length distance, where the mean is taken with respect to the worst case model inside or outside the parametric class. We strengthen this result by showing that, when the data generating models are restricted to be the most "benevolent" ones in that they incorporate all the constraints in the data and no more, the bound cannot be beaten in essence by any code except when the mean is taken with respect to the data generating models in a set of vanishing size. These results allow us to decompose the code of the data into two parts, the first having all the useful information in the data that can be extracted with the family in question and the rest which has none, and we obtain a measure for the (useful) information in data.
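In the usual notation (assumed here for illustration), the NML distribution for a parametric class {f(·; θ)} and the resulting code length are:

```latex
% The normalized maximum-likelihood (NML) distribution for data x^n:
\[
  \hat{f}(x^n) \;=\; \frac{f\!\left(x^n;\, \hat{\theta}(x^n)\right)}
                          {\int f\!\left(y^n;\, \hat{\theta}(y^n)\right) dy^n},
\]
% so its code length exceeds the negative maximized log-likelihood by a
% constant, the log of the normalizing integral (the parametric complexity):
\[
  -\log \hat{f}(x^n) \;=\; -\log f\!\left(x^n;\, \hat{\theta}(x^n)\right)
  \;+\; \log \int f\!\left(y^n;\, \hat{\theta}(y^n)\right) dy^n .
\]
```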

Journal ArticleDOI
TL;DR: In this article, a new stationary random field m(.) is introduced, which generalizes finite-differenced Brownian motion to a vector field and whose realizations could represent a broad class of possible forms for µ(.).

Abstract: This paper proposes a new framework for determining whether a given relationship is nonlinear, what the nonlinearity looks like, and whether it is adequately described by a particular parametric model. The paper studies a regression or forecasting model of the form y_t = µ(x_t) + e_t where the functional form of µ(.) is unknown. We propose viewing µ(.) itself as the outcome of a random process. The paper introduces a new stationary random field m(.) that generalizes finite-differenced Brownian motion to a vector field and whose realizations could represent a broad class of possible forms for µ(.). We view the parameters that characterize the relation between a given realization of m(.) and the particular value of µ(.) for a given sample as population parameters to be estimated by maximum likelihood or Bayesian methods. We show that the resulting inference about the functional relation also yields consistent estimates for a broad class of deterministic functions µ(.). The paper further develops a new test of the null hypothesis of linearity based on the Lagrange multiplier principle and small-sample confidence intervals based on numerical Bayesian methods. An empirical application suggests that properly accounting for the nonlinearity of the inflation-unemployment tradeoff may explain the previously reported uneven empirical success of the Phillips Curve.

Journal ArticleDOI
TL;DR: In this article, the first-order Markov chain method was used for estimating the wind speed in the northwestern region of Turkey and the transition matrix approach was used to generate the synthetic wind speed time series.
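A minimal sketch of the first-order Markov chain approach: discretize speeds into states, estimate the transition matrix from transition counts, and sample a synthetic series; the 1 m/s bins and the toy "observed" series are illustrative:

```python
# First-order Markov chain synthesis of a wind speed time series.
import numpy as np

rng = np.random.default_rng(8)
# Toy "observed" hourly wind speeds (m/s); replace with real measurements.
obs = np.abs(rng.normal(6, 2, 5000) + np.sin(np.arange(5000) / 24) * 2)

bins = np.arange(0, obs.max() + 1, 1.0)       # 1 m/s states
states = np.digitize(obs, bins) - 1
k = states.max() + 1

# Estimate the transition matrix from first-order transition counts.
P = np.zeros((k, k))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
for i in range(k):
    if P[i].sum() == 0:
        P[i, i] = 1.0                          # fallback for unvisited states
P /= P.sum(axis=1, keepdims=True)

# Generate a synthetic series: sample states, then a speed within each bin.
s = states[0]
synth = []
for _ in range(5000):
    s = rng.choice(k, p=P[s])
    synth.append(bins[s] + rng.uniform(0, 1))  # uniform within the 1 m/s bin
print(f"observed mean {obs.mean():.2f} m/s, synthetic mean {np.mean(synth):.2f} m/s")
```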

Journal ArticleDOI
TL;DR: A robust stabilizing controller that can be constructed easily by solving convex problems in terms of linear matrix inequalities is presented and is less conservative and easier to obtain than the existing controllers.

Journal ArticleDOI
TL;DR: In this paper, the adaptive Neyman test is used to check whether the bias vector of residuals from parametric fits is negligible against large nonparametric alternatives; the power of the proposed tests is comparable to the F-test statistic even in situations where the F test is known to be suitable and can be far more powerful than the F-test statistic in other situations.
Abstract: Several new tests are proposed for examining the adequacy of a family of parametric models against large nonparametric alternatives. These tests formally check if the bias vector of residuals from parametric fits is negligible by using the adaptive Neyman test and other methods. The testing procedures formalize the traditional model diagnostic tools based on residual plots. We examine the rates of contiguous alternatives that can be detected consistently by the adaptive Neyman test. Applications of the procedures to the partially linear models are thoroughly discussed. Our simulation studies show that the new testing procedures are indeed powerful and omnibus. The power of the proposed tests is comparable to the F-test statistic even in the situations where the F test is known to be suitable and can be far more powerful than the F-test statistic in other situations. An application to testing linear models versus additive models is also discussed.
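A simplified sketch of the adaptive Neyman idea: Fourier-transform the residuals so a smooth departure concentrates its energy in the first coefficients, then maximize the standardized partial sums of squared coefficients over the truncation point. The linear null model, the scaling, and the simulated critical value are illustrative, not the paper's refined normalization:

```python
# Adaptive Neyman-style check of a parametric (linear) null model.
import numpy as np

def adaptive_neyman(z):
    """max over m of sum_{i<=m}(z_i^2 - 1) / sqrt(2m), for standardized z."""
    m = np.arange(1, len(z) + 1)
    return np.max(np.cumsum(z**2 - 1) / np.sqrt(2 * m))

rng = np.random.default_rng(9)
n = 256
x = np.linspace(0, 1, n)
y = 1.0 + 0.5 * x + 0.4 * np.sin(4 * np.pi * x) + 0.5 * rng.standard_normal(n)

# Residuals from the parametric (linear) null fit.
X = np.column_stack([np.ones(n), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Fourier coefficients of the residuals, standardized to roughly unit
# variance under the null; low frequencies come first so the test adapts
# to the smoothness of the departure.
c = np.fft.rfft(resid)[1:n // 2]
scale = resid.std() * np.sqrt(n / 2)
z = np.empty(2 * len(c))
z[0::2], z[1::2] = c.real / scale, c.imag / scale

T_obs = adaptive_neyman(z)
# Null critical value by simulation (z is approximately iid N(0,1) under H0).
T_null = [adaptive_neyman(rng.standard_normal(len(z))) for _ in range(999)]
print(f"T = {T_obs:.2f}, 95% null quantile = {np.quantile(T_null, 0.95):.2f}")
```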