
Showing papers on "Parametric model published in 2007"


Journal ArticleDOI
TL;DR: A unified approach is proposed that makes it possible for researchers to preprocess data with matching and then to apply the best parametric techniques they would have used anyway; this procedure makes parametric models produce more accurate and considerably less model-dependent causal inferences.
Abstract: Although published works rarely include causal estimates from more than a few model specifications, authors usually choose the presented estimates from numerous trial runs readers never see. Given the often large variation in estimates across choices of control variables, functional forms, and other modeling assumptions, how can researchers ensure that the few estimates presented are accurate or representative? How do readers know that publications are not merely demonstrations that it is possible to find a specification that fits the author's favorite hypothesis? And how do we evaluate or even define statistical properties like unbiasedness or mean squared error when no unique model or estimator even exists? Matching methods, which offer the promise of causal inference with fewer assumptions, constitute one possible way forward, but crucial results in this fast-growing methodological literature are often grossly misinterpreted. We explain how to avoid these misinterpretations and propose a unified approach that makes it possible for researchers to preprocess data with matching (such as with the easy-to-use software we offer) and then to apply the best parametric techniques they would have used anyway. This procedure makes parametric models produce more accurate and considerably less model-dependent causal inferences.
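A toy sketch of this preprocessing pipeline (plain one-to-one nearest-neighbor matching plus an OLS adjustment on synthetic data; the function and data below are illustrative assumptions, not the authors' software):

```python
import numpy as np

def match_then_adjust(X, t, y):
    """Preprocess with one-to-one nearest-neighbor matching (with
    replacement), then apply a parametric model (OLS) to the matched
    sample and return the coefficient on treatment."""
    treated = np.where(t == 1)[0]
    control = np.where(t == 0)[0]
    matched = [control[np.argmin(np.linalg.norm(X[control] - X[i], axis=1))]
               for i in treated]
    keep = np.concatenate([treated, np.array(matched)])
    # parametric step on the matched sample: intercept, treatment, covariates
    Z = np.column_stack([np.ones(len(keep)), t[keep], X[keep]])
    beta, *_ = np.linalg.lstsq(Z, y[keep], rcond=None)
    return beta[1]  # treatment-effect estimate
```

When treated and control covariate distributions already overlap well, the estimate simply reproduces the parametric one; the point of the preprocessing is to reduce model dependence when they do not.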

3,601 citations


Journal ArticleDOI
TL;DR: It is demonstrated how adaptive switching supervisory control can be combined with a nonlinear Lyapunov-based tracking control law to achieve global boundedness and convergence of the position tracking error to a neighborhood of the origin that can be made arbitrarily small.
Abstract: We address the problem of position trajectory-tracking and path-following control design for underactuated autonomous vehicles in the presence of possibly large modeling parametric uncertainty. For a general class of vehicles moving in either 2- or 3-D space, we demonstrate how adaptive switching supervisory control can be combined with a nonlinear Lyapunov-based tracking control law to solve the problem of global boundedness and convergence of the position tracking error to a neighborhood of the origin that can be made arbitrarily small. The desired trajectory does not need to be of a particular type (e.g., trimming trajectories) and can be any sufficiently smooth bounded curve parameterized by time. We also show how these results can be applied to solve the path-following problem, in which the vehicle is required to converge to and follow a path, without a specific temporal specification. We illustrate our design procedures through two vehicle control applications: a hovercraft (moving on a planar surface) and an underwater vehicle (moving in 3-D space). Simulations results are presented and discussed.

848 citations


Posted Content
Xiaohong Chen1
TL;DR: The method of sieves as discussed by the authors can be used to estimate semi-nonparametric econometric models with various constraints, such as monotonicity, convexity, additivity, multiplicity, exclusion and nonnegativity.
Abstract: Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other complications such as introducing infinite-dimensional parameter spaces that may not be compact and the optimization problem may no longer be well-posed. The method of sieves provides one way to tackle such difficulties by optimizing an empirical criterion over a sequence of approximating parameter spaces (i.e., sieves); the sieves are less complex but are dense in the original space and the resulting optimization problem becomes well-posed. With different choices of criteria and sieves, the method of sieves is very flexible in estimating complicated semi-nonparametric models with (or without) endogeneity and latent heterogeneity. It can easily incorporate prior information and constraints, often derived from economic theory, such as monotonicity, convexity, additivity, multiplicity, exclusion and nonnegativity. It can simultaneously estimate the parametric and nonparametric parts in semi-nonparametric models, typically with optimal convergence rates for both parts. This chapter describes estimation of semi-nonparametric econometric models via the method of sieves. We present some general results on the large sample properties of the sieve estimates, including consistency of the sieve extremum estimates, convergence rates of the sieve M-estimates, pointwise normality of series estimates of regression functions, root-n asymptotic normality and efficiency of sieve estimates of smooth functionals of infinite-dimensional parameters. Examples are used to illustrate the general results.
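A minimal illustration of the sieve idea, assuming a simple polynomial sieve and a least-squares criterion (the chapter covers far more general criteria and sieve spaces):

```python
import numpy as np

def sieve_regression(x, y, degree):
    """Sieve least squares: optimize the empirical (squared-error)
    criterion over a finite-dimensional approximating space -- here a
    polynomial sieve of the given degree."""
    B = np.vander(x, degree + 1, increasing=True)  # basis 1, x, ..., x^degree
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return lambda xn: np.vander(xn, degree + 1, increasing=True) @ coef
```

Letting the degree grow with the sample size makes the approximating spaces dense in the original function space while keeping each optimization well-posed.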

654 citations


Book
28 May 2007
TL;DR: This book provides a systematic treatment of system identification, covering estimation of spectra and frequency response functions, output-error and prediction-error parametric model estimation, and subspace model identification.
Abstract: Preface 1. Introduction 2. Linear algebra 3. Discrete-time signals and systems 4. Random variables and signals 5. Kalman filtering 6. Estimation of spectra and frequency response functions 7. Output-error parametric model estimation 8. Prediction-error parametric model estimation 9. Subspace model identification 10. The system identification cycle Notation and symbols List of abbreviations References Index.

643 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a test for comparing the out-of-sample accuracy of competing density forecasts of a variable, which is valid under general conditions: the data can be heterogeneous and the forecasts can be based on (nested or non-nested) parametric models or produced by semiparametric, nonparametric, or Bayesian estimation techniques.
Abstract: We propose a test for comparing the out-of-sample accuracy of competing density forecasts of a variable. The test is valid under general conditions: The data can be heterogeneous and the forecasts can be based on (nested or nonnested) parametric models or produced by semiparametric, nonparametric, or Bayesian estimation techniques. The evaluation is based on scoring rules, which are loss functions defined over the density forecast and the realizations of the variable. We restrict attention to the logarithmic scoring rule and propose an out-of-sample “weighted likelihood ratio” test that compares weighted averages of the scores for the competing forecasts. The user-defined weights are a way to focus attention on different regions of the distribution of the variable. For a uniform weight function, the test can be interpreted as an extension of Vuong's likelihood ratio test to time series data and to an out-of-sample testing framework. We apply the tests to evaluate density forecasts of U.S. inflation produc...
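The statistic reduces to a t-type statistic on weighted log-score differences; a sketch under the assumption that the two predictive log densities have already been evaluated out of sample:

```python
import numpy as np

def weighted_lr_stat(logf, logg, w):
    """Out-of-sample weighted likelihood-ratio statistic: a t-type
    statistic on the weighted differences of logarithmic scores of two
    competing density forecasts (uniform w recovers the unweighted case)."""
    d = w * (logf - logg)
    return np.sqrt(len(d)) * d.mean() / d.std(ddof=1)
```

Choosing a non-uniform weight function focuses the comparison on a region of the variable's distribution, e.g. the tails.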

535 citations


Book
02 Feb 2007
TL;DR: This book introduces event history analysis, covering nonparametric descriptive methods, exponential transition rate models and their extensions, parametric models of time-dependence, methods to check parametric assumptions, and semiparametric transition rate models.
Abstract: Contents: Preface. Introduction. Event History Data Structures. Nonparametric Descriptive Methods. Exponential Transition Rate Models. Piecewise Constant Exponential Models. Exponential Models With Time-Dependent Covariates. Parametric Models of Time-Dependence. Methods to Check Parametric Assumptions. Semiparametric Transition Rate Models. Problems of Model Specification.

381 citations


Journal ArticleDOI
TL;DR: A taxonomy of the hazard functions of the GG family is presented, which includes various special distributions and allows depiction of effects of exposures on hazard functions; the taxonomy was applied to study survival after a diagnosis of clinical AIDS during different eras of HIV therapy.
Abstract: The widely used Cox proportional hazards regression model for the analysis of censored survival data has limited utility when either hazard functions themselves are of primary interest, or when relative times instead of relative hazards are the relevant measures of association. Parametric regression models are an attractive option in situations such as this, although the choice of a particular model from the available families of distributions can be problematic. The generalized gamma (GG) distribution is an extensive family that contains nearly all of the most commonly used distributions, including the exponential, Weibull, log normal and gamma. More importantly, the GG family includes all four of the most common types of hazard function: monotonically increasing and decreasing, as well as bathtub and arc-shaped hazards. We present here a taxonomy of the hazard functions of the GG family, which includes various special distributions and allows depiction of effects of exposures on hazard functions. We applied the proposed taxonomy to study survival after a diagnosis of clinical AIDS during different eras of HIV therapy, where proportionality of hazard functions was clearly not fulfilled and flexibility in estimating hazards with very different shapes was needed. Comparisons of survival after AIDS in different eras of therapy are presented in terms of both relative times and relative hazards. Standard errors for these and other derived quantities are computed using the delta method and checked using the bootstrap. Description of standard statistical software (Stata, SAS and S-Plus) for the computations is included and available at http://statepi.jhsph.edu/software.
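The nested special cases can be illustrated with SciPy's `gengamma` (a sketch of the hazard shapes; the parameter values are illustrative, not the paper's taxonomy code):

```python
import numpy as np
from scipy.stats import gengamma

def gg_hazard(t, a, c):
    """Hazard h(t) = f(t) / S(t) of the generalized gamma distribution."""
    return gengamma.pdf(t, a, c) / gengamma.sf(t, a, c)

t = np.linspace(0.1, 3.0, 50)
h_exp = gg_hazard(t, 1.0, 1.0)      # a=1, c=1: exponential, constant hazard
h_weibull = gg_hazard(t, 1.0, 2.0)  # a=1, c=2: Weibull, increasing hazard
```

Other choices of (a, c) give decreasing, bathtub, and arc-shaped hazards, which is what makes the family useful when proportionality of hazards fails.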

305 citations


Book
16 Jul 2007
TL;DR: This book discusses parametric models of observations and their distributions, precision and accuracy of estimation, and numerical methods for parameter estimation.
Abstract: Preface. 1 Introduction. 2 Parametric Models of Observations. 3 Distributions of Observations. 4 Precision and Accuracy. 5 Precise and Accurate Estimation. 6 Numerical Methods for Parameter Estimation. 7 Solutions or Partial Solutions to Problems. Appendix A: Statistical Results. Appendix B: Vectors and Matrices. Appendix C: Positive Semidefinite and Positive Definite Matrices. Appendix D: Vector and Matrix Differentiation. References. Topic Index.

262 citations


Journal ArticleDOI
TL;DR: In this paper, a nonparametric stochastic frontier (SF) model is proposed based on local maximum likelihood (LML) for estimating the efficiency of a production process.

Journal ArticleDOI
TL;DR: An approximate spatial correlation model for clustered multiple-input multiple-output (MIMO) channels is proposed and used to show that the proposed model is a good fit to the existing parametric models for low angle spreads (i.e., smaller than 10 degrees).
Abstract: An approximate spatial correlation model for clustered multiple-input multiple-output (MIMO) channels is proposed in this paper. The two ingredients for the model are an approximation for uniform linear and circular arrays to avoid numerical integrals and a closed-form expression for the correlation coefficients that is derived for the Laplacian azimuth angle distribution. A new performance metric to compare parametric and nonparametric channel models is proposed and used to show that the proposed model is a good fit to the existing parametric models for low angle spreads (i.e., smaller than 10 degrees). A computational-complexity analysis shows that the proposed method is a numerically efficient way of generating the spatially correlated MIMO channels.
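Given correlation matrices from any such model, spatially correlated channel realizations are typically generated Kronecker-style; a minimal sketch (the 2x2 correlation matrices below are assumptions, not the paper's closed-form Laplacian expressions):

```python
import numpy as np

def correlated_mimo_channel(Rr, Rt, rng):
    """Draw one spatially correlated MIMO channel realization via the
    Kronecker model H = Rr^(1/2) G Rt^(1/2), with G i.i.d. complex Gaussian."""
    def sqrtm_psd(R):  # Hermitian PSD matrix square root via eigendecomposition
        w, V = np.linalg.eigh(R)
        return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
    nr, nt = Rr.shape[0], Rt.shape[0]
    G = (rng.standard_normal((nr, nt))
         + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2.0)
    return sqrtm_psd(Rr) @ G @ sqrtm_psd(Rt)
```

Averaging H H^H over many draws recovers the receive correlation matrix scaled by the trace of the transmit correlation, a quick sanity check on the generator.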

Journal ArticleDOI
TL;DR: A Bayesian non-parametric approach is taken, adopting a hierarchical model with a suitable non-parametric prior obtained from a generalized gamma process, to solve the problem of determining the number of components in a mixture model.
Abstract: Summary. The paper deals with the problem of determining the number of components in a mixture model. We take a Bayesian non-parametric approach and adopt a hierarchical model with a suitable non-parametric prior for the latent structure. A commonly used model for such a problem is the mixture of Dirichlet process model. Here, we replace the Dirichlet process with a more general non-parametric prior obtained from a generalized gamma process. The basic feature of this model is that it yields a partition structure for the latent variables which is of Gibbs type. This relates to the well-known (exchangeable) product partition models. If compared with the usual mixture of Dirichlet process model the advantage of the generalization that we are examining relies on the availability of an additional parameter σ belonging to the interval (0,1): it is shown that such a parameter greatly influences the clustering behaviour of the model. A value of σ that is close to 1 generates a large number of clusters, most of which are of small size. Then, a reinforcement mechanism which is driven by σ acts on the mass allocation by penalizing clusters of small size and favouring those few groups containing a large number of elements. These features turn out to be very useful in the context of mixture modelling. Since it is difficult to specify a priori the reinforcement rate, it is reasonable to specify a prior for σ. Hence, the strength of the reinforcement mechanism is controlled by the data.

01 Jan 2007
TL;DR: This paper presents an approach to finding the conditional distribution p(c|x) using a parametric model, determining the parameters from a training set consisting of pairs of input vectors along with their corresponding target output vectors.
Abstract: For many applications of machine learning the goal is to predict the value of a vector c given the value of a vector x of input features. In a classification problem c represents a discrete class label, whereas in a regression problem it corresponds to one or more continuous variables. From a probabilistic perspective, the goal is to find the conditional distribution p(c|x). The most common approach to this problem is to represent the conditional distribution using a parametric model, and then to determine the parameters using a training set consisting of pairs {xn, cn} of input vectors along with their corresponding target output vectors. The resulting conditional distribution can be used to make predictions of c for new values of x. This is known as a discriminative approach, since the conditional distribution discriminates directly between the different values of c.
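A minimal sketch of this discriminative approach for a binary class label, assuming a logistic model for p(c|x) fitted by gradient ascent on the log-likelihood (the model choice and training loop are illustrative):

```python
import numpy as np

def fit_logistic(X, c, lr=0.1, steps=2000):
    """Parametric model for p(c|x): logistic regression, with parameters
    determined from (x_n, c_n) training pairs by gradient ascent on the
    log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # current p(c=1|x)
        w += lr * X.T @ (c - p) / len(c)   # average log-likelihood gradient
    return w

def predict_proba(X, w):
    """Conditional distribution p(c=1|x) for new inputs x."""
    return 1.0 / (1.0 + np.exp(-X @ w))
```

The fitted conditional distribution discriminates directly between class values for new x, without modeling the inputs themselves.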

Journal ArticleDOI
Abstract: The paper examines various tests for assessing whether a time series model requires a slope component. We first consider the simple t-test on the mean of first differences and show that it achieves high power against the alternative hypothesis of a stochastic nonstationary slope as well as against a purely deterministic slope. The test may be modified, parametrically or nonparametrically, to deal with serial correlation. Using both local limiting power arguments and finite sample Monte Carlo results, we compare the t-test with the nonparametric tests of Vogelsang (1998) and with a modified stationarity test. Overall the t-test seems a good choice, particularly if it is implemented by fitting a parametric model to the data. When standardized by the square root of the sample size, the simple t-statistic, with no correction for serial correlation, has a limiting distribution if the slope is stochastic. We investigate whether it is a viable test for the null hypothesis of a stochastic slope and conclude that its value may be limited by an inability to reject a small deterministic slope. Empirical illustrations are provided using series of relative prices in the euro-area and data on global temperature.

Journal ArticleDOI
TL;DR: An exact, flexible, and computationally efficient algorithm for joint component separation and CMB power spectrum estimation, building on a Gibbs sampling framework, and outlines a future generalization to multiresolution observations.
Abstract: We describe and implement an exact, flexible, and computationally efficient algorithm for joint component separation and CMB power spectrum estimation, building on a Gibbs sampling framework. Two essential new features are 1) conditional sampling of foreground spectral parameters, and 2) joint sampling of all amplitude-type degrees of freedom (e.g., CMB, foreground pixel amplitudes, and global template amplitudes) given spectral parameters. Given a parametric model of the foreground signals, we estimate efficiently and accurately the exact joint foreground-CMB posterior distribution, and therefore all marginal distributions such as the CMB power spectrum or foreground spectral index posteriors. The main limitation of the current implementation is the requirement of identical beam responses at all frequencies, which restricts the analysis to the lowest resolution of a given experiment. We outline a future generalization to multi-resolution observations. To verify the method, we analyse simple models and compare the results to analytical predictions. We then analyze a realistic simulation with properties similar to the 3-yr WMAP data, downgraded to a common resolution of 3 degree FWHM. The results from the actual 3-yr WMAP temperature analysis are presented in a companion Letter.

Journal ArticleDOI
TL;DR: TIMP is an R package for modeling multiway spectroscopic measurements that fits separable nonlinear models using partitioned variable projection, a variant of the variable projection algorithm that is described here for the first time.
Abstract: TIMP is an R package for modeling multiway spectroscopic measurements. The package allows for the simultaneous analysis of datasets collected under different experimental conditions in terms of a wide variety of parametric models. Models arising in spectroscopy data analysis often have some parameters that are intrinsically nonlinear, and some parameters that are conditionally linear on estimates of the nonlinear parameters. TIMP fits such separable nonlinear models using partitioned variable projection, a variant of the variable projection algorithm that is described here for the first time. The partitioned variable projection algorithm allows many models for spectroscopy datasets to be fitted using much less memory than under the standard variable projection algorithm implemented in nonlinear optimization routines (e.g., the plinear option of the R function nls), as is shown here. An overview of modeling with TIMP is also given that includes several case studies in the application of the package.

Journal ArticleDOI
TL;DR: This paper presents a new formulation for recovering the fiber tract geometry within a voxel from diffusion weighted magnetic resonance imaging (MRI) data, in the presence of single or multiple neuronal fibers, and defines a discrete set of diffusion basis functions.
Abstract: In this paper, we present a new formulation for recovering the fiber tract geometry within a voxel from diffusion weighted magnetic resonance imaging (MRI) data, in the presence of single or multiple neuronal fibers. To this end, we define a discrete set of diffusion basis functions. The intravoxel information is recovered at voxels containing fiber crossings or bifurcations via the use of a linear combination of the above mentioned basis functions. Then, the parametric representation of the intravoxel fiber geometry is a discrete mixture of Gaussians. Our synthetic experiments depict several advantages by using this discrete schema: the approach uses a small number of diffusion weighted images (23) and relatively small b values (1250 s/mm2 ), i.e., the intravoxel information can be inferred at a fraction of the acquisition time required for datasets involving a large number of diffusion gradient orientations. Moreover our method is robust in the presence of more than two fibers within a voxel, improving the state-of-the-art of such parametric models. We present two algorithmic solutions to our formulation: by solving a linear program or by minimizing a quadratic cost function (both with non-negativity constraints). Such minimizations are efficiently achieved with standard iterative deterministic algorithms. Finally, we present results of applying the algorithms to synthetic as well as real data.

Journal ArticleDOI
TL;DR: The results demonstrate that the likelihood-based parametric analyses for the cumulative incidence function are a practically useful alternative to the semiparametric analyses.
Abstract: We propose parametric regression analysis of cumulative incidence function with competing risks data. A simple form of Gompertz distribution is used for the improper baseline subdistribution of the event of interest. Maximum likelihood inferences on regression parameters and associated cumulative incidence function are developed for parametric models, including a flexible generalized odds rate model. Estimation of the long-term proportion of patients with cause-specific events is straightforward in the parametric setting. Simple goodness-of-fit tests are discussed for evaluating a fixed odds rate assumption. The parametric regression methods are compared with an existing semiparametric regression analysis on a breast cancer data set where the cumulative incidence of recurrence is of interest. The results demonstrate that the likelihood-based parametric analyses for the cumulative incidence function are a practically useful alternative to the semiparametric analyses.
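The improper Gompertz subdistribution at the heart of this approach can be sketched directly (the parameter names a, b are illustrative; the paper's regression structure on covariates is omitted):

```python
import numpy as np

def gompertz_cif(t, a, b):
    """Gompertz cumulative (sub)distribution F(t) = 1 - exp(-(b/a)(e^(a t) - 1)).
    With a < 0 it is improper: F(infinity) = 1 - exp(b/a) < 1, interpretable
    as the long-term proportion experiencing the event of interest."""
    return 1.0 - np.exp(-(b / a) * np.expm1(a * t))
```

The plateau below 1 is exactly what makes the subdistribution suitable for a competing-risks cumulative incidence function, where not everyone experiences the event of interest.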

Proceedings ArticleDOI
10 Dec 2007
TL;DR: The GP-UKF algorithm has a number of advantages over standard (parametric) UKFs, including the ability to estimate the state of arbitrary nonlinear systems, improved tracking quality compared to a parametric UKF, and graceful degradation with increased model uncertainty.
Abstract: This paper considers the use of non-parametric system models for sequential state estimation. In particular, motion and observation models are learned from training examples using Gaussian process (GP) regression. The state estimator is an unscented Kalman filter (UKF). The resulting GP-UKF algorithm has a number of advantages over standard (parametric) UKFs. These include the ability to estimate the state of arbitrary nonlinear systems, improved tracking quality compared to a parametric UKF, and graceful degradation with increased model uncertainty. These advantages stem from the fact that GPs consider both the noise in the system and the uncertainty in the model. If an approximate parametric model is available, it can be incorporated into the GP, resulting in further performance improvements. In experiments, we show how the GP-UKF algorithm can be applied to the problem of tracking an autonomous micro-blimp.
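The GP ingredient can be sketched in isolation: a toy RBF-kernel GP regression returning predictive mean and variance, the model-uncertainty signal a GP-based filter exploits (hyperparameters and data below are illustrative assumptions, not the paper's learned models):

```python
import numpy as np

def gp_predict(Xtr, ytr, Xte, ell=1.0, sf=1.0, noise=0.1):
    """GP regression with an RBF kernel: predictive mean and variance.
    The predictive variance grows away from the training data, capturing
    model uncertainty in addition to observation noise."""
    def k(A, B):
        return sf**2 * np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell) ** 2)
    K = k(Xtr, Xtr) + noise**2 * np.eye(len(Xtr))
    Ks = k(Xte, Xtr)
    mean = Ks @ np.linalg.solve(K, ytr)
    var = sf**2 + noise**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```

Feeding this predictive variance into the filter's process/observation noise is what yields the graceful degradation with increased model uncertainty described above.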

Journal ArticleDOI
TL;DR: This paper provides a complete survey of stochastic models for sea state and wind time series, beginning with methods based on Gaussian processes; non-parametric resampling methods for time series are then introduced, followed by various parametric models.

Patent
31 Jul 2007
TL;DR: In this article, a method and apparatus for adaptive geometric partitioning for video encoding and decoding is described, which includes an encoder (900) for encoding image data corresponding to pictures by adaptively partitioning at least portions of the pictures responsive to at least one parametric model.
Abstract: There are provided methods and apparatus for adaptive geometric partitioning for video encoding and decoding. An apparatus includes an encoder (900) for encoding image data corresponding to pictures by adaptively partitioning at least portions of the pictures responsive to at least one parametric model. The at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.

Journal ArticleDOI
TL;DR: In this paper, a nonparametric marginal structural model (NPMSM) approach to causal inference is proposed, which does not require correct specification of a parametric model but instead relies on a working model that can be willingly misspecified.

Journal ArticleDOI
TL;DR: A new class of models is proposed for making inference about the mean of a vector of repeated outcomes when the outcome vector is incompletely observed in some study units and missingness is nonmonotone; the models are ideal for conducting sensitivity analyses aimed at evaluating the impact that different degrees of departure from sequential explainability have on inference about the marginal means of interest.
Abstract: We propose a new class of models for making inference about the mean of a vector of repeated outcomes when the outcome vector is incompletely observed in some study units and missingness is nonmonotone. Each model in our class is indexed by a set of unidentified selection-bias functions which quantify the residual association of the outcome at each occasion t and the probability that this outcome is missing after adjusting for variables observed prior to time t and for the past nonresponse pattern. In particular, selection-bias functions equal to zero encode the investigator's a priori belief that nonresponse of the next outcome does not depend on that outcome after adjusting for the observed past. We call this assumption sequential explainability. Since each model in our class is nonparametric, it fits the data perfectly well. As such, our models are ideal for conducting sensitivity analyses aimed at evaluating the impact that different degrees of departure from sequential explainability have on inference about the marginal means of interest. Although the marginal means are identified under each of our models, their estimation is not feasible in practice because it requires the auxiliary estimation of conditional expectations and probabilities given high-dimensional variables. We henceforth discuss the estimation of the marginal means under each model in our class assuming, additionally, that at each occasion either one of the following two models holds: a parametric model for the conditional probability of nonresponse given current outcomes and past recorded data or a parametric model for the conditional mean of the outcome on the nonrespondents given the past recorded data. We call the resulting procedure 2T-multiply robust as it protects at each of the T time points against misspecification of one of these two working models, although not against simultaneous misspecification of both.
We extend our proposed class of models and estimators to incorporate data configurations which include baseline covariates and a parametric model for the conditional mean of the vector of repeated outcomes given the baseline covariates.

Journal ArticleDOI
TL;DR: A generalization of the EM algorithm to semiparametric mixture models is proposed, and the behavior of the proposed EM type estimators is studied numerically not only through several Monte-Carlo experiments but also through comparison with alternative methods existing in the literature.

Journal Article
TL;DR: Although it seems that there may not be a single model that is substantially better than others, in univariate analysis the data strongly supported log-normal regression among the parametric models, which can lead to more precise results as an alternative to Cox.
Abstract: Background Researchers in medical sciences often tend to prefer Cox semi-parametric models over parametric models for survival analysis because of their fewer assumptions, but under certain circumstances parametric models give more precise estimates. The objective of this study was to compare two survival regression methods - Cox regression and parametric models - in patients with gastric adenocarcinomas who registered at Taleghani hospital, Tehran. Methods We retrospectively studied 746 cases from February 2003 through January 2007. Gender, age at diagnosis, family history of cancer, tumor size and pathologic distant metastasis were selected as potential prognostic factors and entered into the parametric and semi-parametric models. Weibull, exponential and log-normal regressions were performed as parametric models, with the Akaike Information Criterion (AIC) and standardized parameter estimates used to compare the efficiency of the models. Results The survival results from both Cox and parametric models showed that patients who were older than 45 years at diagnosis had an increased risk for death, followed by greater tumor size and presence of pathologic distant metastasis. Conclusion In multivariate analysis, the Cox and exponential models are similar. Although it seems that there may not be a single model that is substantially better than the others, in univariate analysis the data strongly supported log-normal regression among the parametric models, and it can lead to more precise results as an alternative to Cox.
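The AIC comparison of candidate parametric models can be sketched as follows (uncensored data and SciPy's generic fitters are simplifying assumptions; the study itself handled censored survival times):

```python
import numpy as np
from scipy import stats

def aic_compare(times):
    """Fit candidate parametric survival models by maximum likelihood
    (uncensored data, for illustration) and return AIC = 2k - 2 log L.
    The parameter count k includes the location fixed at 0, which
    penalizes all candidates equally here."""
    candidates = {
        'exponential': stats.expon,
        'weibull': stats.weibull_min,
        'lognormal': stats.lognorm,
    }
    aic = {}
    for name, dist in candidates.items():
        params = dist.fit(times, floc=0)
        loglik = np.sum(dist.logpdf(times, *params))
        aic[name] = 2 * len(params) - 2 * loglik
    return aic
```

The model with the lowest AIC is preferred; on data that is genuinely log-normal, the log-normal fit wins by a wide margin.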

Journal ArticleDOI
TL;DR: In this article, a stochastic process based on the difference between the empirical processes that are obtained from the standardized non-parametric residuals under the null hypothesis (of a specific parametric form of the variance function) and the alternative is introduced and its weak convergence established.
Abstract: In the common non-parametric regression model the problem of testing for the parametric form of the conditional variance is considered. A stochastic process based on the difference between the empirical processes that are obtained from the standardized non-parametric residuals under the null hypothesis (of a specific parametric form of the variance function) and the alternative is introduced and its weak convergence established. This result is used for the construction of a Kolmogorov-Smirnov and a Cramér-von Mises type of statistic for testing the parametric form of the conditional variance. The consistency of a bootstrap approximation is established, and the finite sample properties of this approximation are investigated by means of a simulation study. In particular the new procedure is compared with some of the currently available methods for this problem.
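A drastically simplified sketch of the underlying idea: standardize the residuals once under the parametric null variance and once with a nonparametric variance estimate, then take a Kolmogorov-Smirnov-type sup-distance between the two empirical distributions. The nearest-neighbour estimator and the toy data below are our illustrative assumptions, not the paper's construction (which works with the full empirical processes and a bootstrap approximation):

```python
import math

def ks_variance_stat(x, resid, sigma2_param, k=5):
    """KS-type statistic for a parametric variance-function hypothesis.

    Contrasts residuals standardized under the parametric null variance
    sigma2_param(x) with residuals standardized by a crude k-nearest-
    neighbour local variance estimate; a large sup-distance between the
    two empirical CDFs speaks against the parametric form.
    """
    n = len(x)

    def local_sigma2(i):
        # local variance from the k nearest x-neighbours of x[i]
        idx = sorted(range(n), key=lambda j: abs(x[j] - x[i]))[:k]
        m = sum(resid[j] for j in idx) / k
        return sum((resid[j] - m) ** 2 for j in idx) / k

    s_null = [resid[i] / math.sqrt(sigma2_param(x[i])) for i in range(n)]
    s_np = [resid[i] / math.sqrt(local_sigma2(i)) for i in range(n)]
    grid = sorted(s_null + s_np)
    ecdf = lambda s, g: sum(v <= g for v in s) / n
    return max(abs(ecdf(s_null, g) - ecdf(s_np, g)) for g in grid)

x = [i / 10 for i in range(20)]
resid = [(-1) ** i * (0.5 + 0.1 * i) for i in range(20)]   # spread grows with x
stat = ks_variance_stat(x, resid, lambda t: 1.0)           # null: constant variance
print(f"KS-type statistic: {stat:.3f}")
```

In the paper, critical values for such a statistic are obtained from the bootstrap approximation whose consistency is established.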

Proceedings ArticleDOI
26 Aug 2007
TL;DR: The proposed algorithm operates (virtually) in a reproducing kernel Hilbert space through the use of an arbitrary kernel mapping and provides a computationally efficient and versatile tool for segmenting complex signals whose structure is not appropriately captured by standard parametric models.
Abstract: This contribution proposes an extension of the classic dynamic programming algorithm for detecting jumps in noisily observed piecewise-constant signals. The proposed algorithm operates (virtually) in a reproducing kernel Hilbert space through the use of an arbitrary kernel mapping. The resulting approach provides a computationally efficient and versatile tool for segmenting complex signals whose structure is not appropriately captured by standard parametric models.
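The kernelised dynamic program can be sketched compactly: with a kernel, the cost of a segment is its scatter in the induced feature space, and the DP minimises total cost over all splits into a fixed number of segments. The Gaussian kernel, bandwidth, and toy signal below are our illustrative choices, not the paper's settings:

```python
import math

def kernel_segmentation(signal, n_segments, bandwidth=1.0):
    """Dynamic-programming segmentation with a Gaussian (RBF) kernel.

    Segment cost is the within-segment scatter in the RKHS:
        sum_i K(x_i, x_i) - (1/m) * sum_{i,j} K(x_i, x_j)
    and the DP finds the split into n_segments pieces minimising the
    total cost. Returns the interior change-point indices.
    """
    n = len(signal)
    K = [[math.exp(-((a - b) ** 2) / (2 * bandwidth ** 2)) for b in signal]
         for a in signal]

    def seg_cost(i, j):                 # cost of segment signal[i:j]
        m = j - i
        gram = sum(K[a][b] for a in range(i, j) for b in range(i, j))
        diag = sum(K[a][a] for a in range(i, j))
        return diag - gram / m

    INF = float("inf")
    # dp[k][j]: best cost of splitting signal[:j] into k segments
    dp = [[INF] * (n + 1) for _ in range(n_segments + 1)]
    back = [[0] * (n + 1) for _ in range(n_segments + 1)]
    dp[0][0] = 0.0
    for k in range(1, n_segments + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = dp[k - 1][i] + seg_cost(i, j)
                if c < dp[k][j]:
                    dp[k][j], back[k][j] = c, i
    # backtrack to recover the change points
    cps, j = [], n
    for k in range(n_segments, 0, -1):
        cps.append(j)
        j = back[k][j]
    return sorted(cps)[:-1]             # drop the trailing n, keep interior breaks

signal = [0.0, 0.1, -0.1, 0.05, 5.0, 5.1, 4.9, 5.05]
print(kernel_segmentation(signal, 2))   # → [4]
```

With a linear kernel the segment cost reduces to the classical within-segment variance, so the sketch recovers the standard least-squares change-point DP as a special case.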

Journal ArticleDOI
TL;DR: Compared with the recently introduced parametric adaptive matched filter (PAMF) and parametric Rao detectors, the parametric GLRT achieves higher data efficiency, offering improved detection performance in general.
Abstract: This paper considers the problem of detecting a multichannel signal in the presence of spatially and temporally colored disturbance. A parametric generalized likelihood ratio test (GLRT) is developed by modeling the disturbance as a multichannel autoregressive (AR) process. Maximum likelihood (ML) parameter estimation underlying the parametric GLRT is examined. It is shown that the ML estimator for the alternative hypothesis is nonlinear and there exists no closed-form expression. To address this issue, an asymptotic ML (AML) estimator is presented, which yields asymptotically optimum parameter estimates at reduced complexity. The performance of the parametric GLRT is studied by considering challenging cases with limited or no training signals for parameter estimation. Such cases (especially when training is unavailable) are of great interest in detecting signals in heterogeneous, fast-changing, or dense-target environments, but generally cannot be handled by most existing multichannel detectors, which rely more heavily on training at an adequate level. Compared with the recently introduced parametric adaptive matched filter (PAMF) and parametric Rao detectors, the parametric GLRT achieves higher data efficiency, offering improved detection performance in general.
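A scalar AR(1) caricature of this family of detectors: estimate the AR coefficient from the received data itself (the asymptotic-ML idea), whiten both the data and the known signal template with it, and form a normalised matched-filter statistic on the whitened sequences. The names and the toy setup below are ours, not the paper's multichannel formulation:

```python
import random

def parametric_detector(x, s):
    """Sketch of a parametric (AR-model-based) detector, single channel.

    Disturbance model: AR(1). Estimate the coefficient by least squares,
    whiten x and the template s with it, and return the squared
    normalised correlation of the whitened sequences (in [0, 1]).
    """
    n = len(x)
    # least-squares AR(1) coefficient: x[t] ~ a * x[t-1]
    a = (sum(x[t] * x[t - 1] for t in range(1, n)) /
         sum(x[t - 1] ** 2 for t in range(1, n)))

    def whiten(v):
        return [v[t] - a * v[t - 1] for t in range(1, n)]

    xw, sw = whiten(x), whiten(s)
    num = sum(xi * si for xi, si in zip(xw, sw)) ** 2
    den = sum(si ** 2 for si in sw) * sum(xi ** 2 for xi in xw)
    return num / den        # large values favour "signal present"

rng = random.Random(1)
noise = [0.0]
for _ in range(99):                       # AR(1) disturbance, coefficient 0.8
    noise.append(0.8 * noise[-1] + rng.gauss(0, 1))
template = [1.0 if 40 <= t < 60 else 0.0 for t in range(100)]

h0 = parametric_detector(noise, template)                               # noise only
h1 = parametric_detector([v + 3 * s for v, s in zip(noise, template)],  # signal present
                         template)
print(h0, h1)   # the statistic should be much larger under h1
```

The multichannel GLRT of the paper generalises this whitening-then-correlating structure to vector AR models, where the ML estimate under the signal-present hypothesis no longer has a closed form.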

Journal ArticleDOI
TL;DR: The aim of this paper is to propose a SAS macro to estimate parametric and semiparametric mixture cure models with covariates and an example in the field of cancer clinical trials is shown.
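A parametric mixture cure model combines a cured fraction pi (subjects who never experience the event) with a latency distribution for the uncured. A didactic Python sketch with an exponential latency and a crude grid-search fit; the macro itself performs proper ML/EM fitting in SAS with covariates, so everything below (data, grid, parameterisation) is illustrative only:

```python
import math

def cure_model_loglik(times, events, pi, lam):
    """Log-likelihood of a mixture cure model: survival function
    S(t) = pi + (1 - pi) * exp(-lam * t)."""
    ll = 0.0
    for t, d in zip(times, events):
        surv_u = math.exp(-lam * t)
        if d:                                    # event observed: must be uncured
            ll += math.log((1 - pi) * lam * surv_u)
        else:                                    # censored: cured, or a late failure
            ll += math.log(pi + (1 - pi) * surv_u)
    return ll

def fit_cure_model(times, events, grid=50):
    """Crude grid-search MLE over (pi, lam); a stand-in for the
    EM/Newton fitting a real implementation would use."""
    best = (-float("inf"), None, None)
    for i in range(1, grid):
        pi = i / grid
        for j in range(1, grid):
            lam = j / 10.0
            ll = cure_model_loglik(times, events, pi, lam)
            if ll > best[0]:
                best = (ll, pi, lam)
    return best

times  = [1, 2, 2, 3, 5, 10, 10, 10, 10, 10]
events = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]        # heavy censored tail: cured subjects?
ll, pi_hat, lam_hat = fit_cure_model(times, events)
print(f"cure fraction ~ {pi_hat:.2f}, rate ~ {lam_hat:.2f}")
```

The long plateau of censored observations is what identifies the cure fraction: an ordinary exponential model would have to explain those subjects with an implausibly small rate, whereas the mixture attributes them to the cured component.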