
Showing papers on "Semiparametric model published in 2020"


Journal ArticleDOI
TL;DR: A semi-parametric approach is developed that relaxes the parametric assumption implicit in BSL while maintaining its computational advantages without any additional tuning, and can be significantly more robust than BSL and a competing approach in the literature.
Abstract: Bayesian synthetic likelihood (BSL) is now a well-established method for performing approximate Bayesian parameter estimation for simulation-based models that do not possess a tractable likelihood function. BSL approximates an intractable likelihood function of a carefully chosen summary statistic at a parameter value with a multivariate normal distribution. The mean and covariance matrix of this normal distribution are estimated from independent simulations of the model. Due to the parametric assumption implicit in BSL, it can be preferred to its nonparametric competitor, approximate Bayesian computation, in certain applications where a high-dimensional summary statistic is of interest. However, despite several successful applications of BSL, its widespread use in scientific fields may be hindered by the strong normality assumption. In this paper, we develop a semi-parametric approach to relax this assumption to an extent and maintain the computational advantages of BSL without any additional tuning. We test our new method, semiBSL, on several challenging examples involving simulated and real data and demonstrate that semiBSL can be significantly more robust than BSL and another approach in the literature.
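The core BSL computation described above — fit a normal distribution to simulated summary statistics at a parameter value, then evaluate the observed summary under it — can be sketched for a scalar summary statistic. The simulator and values below are toy assumptions, not the paper's examples; semiBSL further replaces the normal marginals with kernel density estimates tied together by a Gaussian copula.

```python
import math
import random

def synthetic_loglik(s_obs, simulate, theta, n_sim=200, rng=random):
    """Gaussian synthetic log-likelihood of a scalar summary statistic:
    fit N(mu, sigma^2) to model simulations at theta, then evaluate s_obs."""
    sims = [simulate(theta, rng) for _ in range(n_sim)]
    mu = sum(sims) / n_sim
    var = sum((s - mu) ** 2 for s in sims) / (n_sim - 1)
    return -0.5 * math.log(2 * math.pi * var) - (s_obs - mu) ** 2 / (2 * var)

# Toy simulator (an assumption for illustration): theta is the mean of a
# Gaussian; the summary statistic is the mean of 10 draws.
def simulate(theta, rng):
    return sum(rng.gauss(theta, 1.0) for _ in range(10)) / 10

rng = random.Random(1)
ll_good = synthetic_loglik(0.0, simulate, theta=0.0, rng=rng)
ll_bad = synthetic_loglik(0.0, simulate, theta=3.0, rng=rng)  # far from truth
```

Parameter values closer to the truth receive higher synthetic log-likelihood, which is what makes the quantity usable inside an MCMC sampler.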

41 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose an approach to estimating a mixture cure model when covariates are present and the lifetime is subject to random right censoring. The approach is based on an inversion that allows the survival function to be written as a function of the distribution of the observable variables.
Abstract: In survival analysis it often happens that some subjects under study do not experience the event of interest; they are considered to be “cured.” The population is thus a mixture of two subpopulations, one of cured subjects and one of “susceptible” subjects. We propose a novel approach to estimate a mixture cure model when covariates are present and the lifetime is subject to random right censoring. We work with a parametric model for the cure proportion, while the conditional survival function of the uncured subjects is unspecified. The approach is based on an inversion which allows us to write the survival function as a function of the distribution of the observable variables. This leads to a very general class of models which allows a flexible and rich modeling of the conditional survival function. We show the identifiability of the proposed model as well as the consistency and the asymptotic normality of the model parameters. We also consider in more detail the case where kernel estimators are used for the nonparametric part of the model. The new estimators are compared with the estimators from a Cox mixture cure model via simulations. Finally, we apply the new model on a medical data set.
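The mixture structure described above can be written as S(t|x) = 1 − p(x) + p(x)·S_u(t|x), where p(x) is the parametric incidence model (a logistic form is assumed here for illustration) and S_u is the latency survival function of the uncured, which the paper leaves unspecified. A minimal sketch with a hypothetical exponential latency:

```python
import math

def cure_probability(x, beta0, beta1):
    """Parametric (logistic) incidence model: p(x) = P(subject is uncured | x)."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

def population_survival(t, x, beta0, beta1, latency_survival):
    """Mixture cure survival: S(t|x) = 1 - p(x) + p(x) * S_u(t|x).
    Cured subjects never experience the event, so the curve plateaus
    at 1 - p(x) instead of dropping to zero."""
    p = cure_probability(x, beta0, beta1)
    return 1.0 - p + p * latency_survival(t, x)

# Hypothetical latency S_u (unspecified in the paper's model):
S_u = lambda t, x: math.exp(-0.5 * t)

start = population_survival(0.0, 0.0, 0.0, 1.0, S_u)    # S(0|x) = 1
plateau = population_survival(1e9, 0.0, 0.0, 1.0, S_u)  # plateau at 1 - p(x)
```

The plateau of the survival curve at 1 − p(x) is exactly the "cured fraction" the model is designed to capture.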

28 citations


Journal ArticleDOI
TL;DR: In this paper, the authors extended the joint Value-at-Risk (VaR) and expected shortfall (ES) quantile regression model of Taylor (2019), by incorporating a realized measure to drive the tail risk dynamics, as a potentially more efficient driver than daily returns.

22 citations


Book ChapterDOI
26 May 2020
TL;DR: In this paper, a hedonic price function built through a semiparametric additive model was applied for the real estate market analysis of the central area of Reggio Calabria.
Abstract: In this paper, a hedonic price function built through a semiparametric additive model was applied to the real estate market analysis of the central area of Reggio Calabria. Based on penalized spline functions, the semiparametric model aimed to detect and identify the existence of a market premium, in terms of higher real estate values, arising from the choice of sustainable interventions.

16 citations


Journal ArticleDOI
TL;DR: In this paper, a semiparametric model averaging prediction (SMAP) method for a dichotomous response is proposed to approximate the unknown score function by a linear combination of one-dimensional marginal score functions.

16 citations


Journal ArticleDOI
TL;DR: Most earlier work on model averaging for model-based prediction focused on parametric models and continuous responses; this article studies a more flexible setting.
Abstract: Model average techniques are very useful for model-based prediction. However, most earlier works in this field focused on parametric models and continuous responses. In this article, we study varyi...

13 citations


Journal ArticleDOI
TL;DR: In this article, a new model framework called Realized Conditional Autoregressive Expectile is proposed, whereby a measurement equation is added to the conventional Conditional Autoregressive Expectile model. A realized measure captures the contemporaneous dependence between itself and the latent conditional expectile and also drives the expectile dynamics.
Abstract: A new model framework called Realized Conditional Autoregressive Expectile is proposed, whereby a measurement equation is added to the conventional Conditional Autoregressive Expectile model. A realized measure acts as the dependent variable in the measurement equation, capturing the contemporaneous dependence between it and the latent conditional expectile; it also drives the expectile dynamics. The usual grid search and asymmetric least squares optimization, to estimate the expectile level and parameters, suffers from convergence issues leading to inefficient estimation. This article develops an alternative random walk Metropolis stochastic target search method, incorporating an adaptive Markov Chain Monte Carlo sampler, which leads to improved accuracy in estimation of the expectile level and model parameters. The sampling properties of this method are assessed via a simulation study. In a forecast study applied to several market indices and asset return series, one-day-ahead Value-at-Risk and Expected Shortfall forecasting results favor the proposed model class.
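The asymmetric least squares criterion underlying expectile models such as the one above can be illustrated on a plain sample. This is a sketch of the classical fixed-point iteration for a sample tau-expectile, not the paper's adaptive MCMC target-search procedure, and the data are toy values:

```python
def expectile(xs, tau, iters=200):
    """Sample tau-expectile by asymmetric least squares: minimize
    sum_i |tau - 1{x_i <= m}| * (x_i - m)^2 over m, via the standard
    fixed-point iteration (a weighted mean with weights tau / (1 - tau))."""
    m = sum(xs) / len(xs)
    for _ in range(iters):
        w = [tau if x > m else 1.0 - tau for x in xs]
        m = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return m

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]   # toy return series
mid = expectile(xs, 0.5)           # the 0.5-expectile is the sample mean
low = expectile(xs, 0.05)          # deep left-tail level, as used for tail risk
```

Low expectile levels sit far into the left tail, which is why expectiles serve as inputs to Value-at-Risk and Expected Shortfall style risk measures.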

13 citations


Journal ArticleDOI
TL;DR: Monte Carlo simulations and an empirical analysis of regional unemployment in Italy show that the proposed semiparametric P-Spline model represents a valid alternative to parametric methods aimed at disentangling strong and weak cross-sectional dependence when both spatial and temporal heterogeneity are smoothly distributed.
Abstract: We propose a semiparametric P-Spline model to deal with spatial panel data. This model includes a non-parametric spatio-temporal trend, a spatial lag of the dependent variable, and a time series autoregressive noise. Specifically, we consider a spatio-temporal ANOVA model, disaggregating the trend into spatial and temporal main effects, as well as second- and third-order interactions between them. Algorithms based on spatial anisotropic penalties are used to estimate all the parameters in a closed form without the need for multidimensional optimization. Monte Carlo simulations and an empirical analysis of regional unemployment in Italy show that our model represents a valid alternative to parametric methods aimed at disentangling strong and weak cross-sectional dependence when both spatial and temporal heterogeneity are smoothly distributed.

12 citations


Journal ArticleDOI
TL;DR: A semiparametric Bayesian approach to missing outcome data in longitudinal studies in the presence of auxiliary covariates is developed, motivated by data from a clinical trial on treatments for schizophrenia.
Abstract: We develop a semiparametric Bayesian approach to missing outcome data in longitudinal studies in the presence of auxiliary covariates. We consider a joint model for the full data response, missingn...

12 citations


Journal ArticleDOI
10 Nov 2020
TL;DR: In this paper, the authors study the relationship between environmental abatement and real GDP and find evidence that this relationship is characterized by an increasing curve which confirms the existence of a J curve, a finding that agrees with the predictions from recent theoretical models.
Abstract: This paper is the first to study a comparatively new Environmental Kuznets Curve which traces empirically the relationship between environmental abatement and real GDP. Our model is a partial linear semiparametric model that allows for two-way fixed effects to eliminate the bias arising from two sources. We use data for recycling and real GDP, for fifty states of the United States for the years between 1988 and 2017. We find evidence that this relationship is characterized by an increasing curve which confirms the existence of a J curve, a finding that agrees with the predictions from recent theoretical models.

12 citations


Posted Content
TL;DR: A novel semiparametric hazards regression model is developed by modeling the hazard function as a product of a parametric baseline hazard function and a nonparametric component that uses SBART to incorporate clustering, unknown functional forms of the main effects, and interaction effects of various covariates.
Abstract: Popular parametric and semiparametric hazards regression models for clustered survival data are inappropriate and inadequate when the unknown effects of different covariates and clustering are complex. This calls for a flexible modeling framework to yield efficient survival prediction. Moreover, for some survival studies involving time to occurrence of some asymptomatic events, survival times are typically interval censored between consecutive clinical inspections. In this article, we propose a robust semiparametric model for clustered interval-censored survival data under a paradigm of Bayesian ensemble learning, called Soft Bayesian Additive Regression Trees or SBART (Linero and Yang, 2018), which combines multiple sparse (soft) decision trees to attain excellent predictive accuracy. We develop a novel semiparametric hazards regression model by modeling the hazard function as a product of a parametric baseline hazard function and a nonparametric component that uses SBART to incorporate clustering, unknown functional forms of the main effects, and interaction effects of various covariates. In addition to being applicable for left-censored, right-censored, and interval-censored survival data, our methodology is implemented using a data augmentation scheme which allows for existing Bayesian backfitting algorithms to be used. We illustrate the practical implementation and advantages of our method via simulation studies and an analysis of a prostate cancer surgery study where dependence on the experience and skill level of the physicians leads to clustering of survival times. We conclude by discussing our method's applicability in studies involving high dimensional data with complex underlying associations.

Journal ArticleDOI
01 Aug 2020
TL;DR: The application of a semiparametric (Cox) model and various parametric (Weibull, exponential, log-normal, and log-logistic) survival models to lung cancer data is illustrated using freely available R software.
Abstract: Background Cox regression is the most widely used survival model in oncology. Parametric survival models are an alternative to the Cox regression model. In this study, we have illustrated the application of a semiparametric model and various parametric (Weibull, exponential, log-normal, and log-logistic) models to lung cancer data by using R software. Aims The aim of the study is to identify factors associated with lung cancer survival and to compare the Cox regression and parametric models. Methods Data on 66 African American (AA) lung cancer patients (available online at http://clincancerres.aacrjournals.org) were used. To identify predictors of overall survival, stage of patient, sex, age, smoking, and tumor grade were taken into account. Both parametric and semiparametric models were fitted. Performance of parametric models was compared by the Akaike information criterion (AIC). The "survival" package in R was used to perform the analysis. Posterior densities were obtained for the parameters through a Bayesian approach using WinBUGS. Results The model-fitting process was documented. Parametric models were fitted only for stage after controlling for age. The AIC value was minimum (462.4087) for the log-logistic model as compared with the other parametric models, so the log-logistic model was the best fit for the AA lung cancer data under study. Conclusion Exploring parametric survival models in the daily practice of cancer research is challenging, perhaps owing to the popularity of Cox regression and a lack of knowledge about how to perform parametric modeling. This paper provides an application of parametric survival models using freely available R software. It is expected that the present work will be useful for applying parametric survival models.
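The model comparison the abstract describes boils down to computing AIC = 2k − 2·log L for each fitted model and choosing the smallest. A sketch with illustrative log-likelihoods and parameter counts (made-up numbers, not the paper's fitted values):

```python
def aic(loglik, n_params):
    """Akaike information criterion: AIC = 2k - 2*logL (smaller is better)."""
    return 2 * n_params - 2 * loglik

# Illustrative maximized log-likelihoods and parameter counts for the four
# parametric families considered in the paper (values are hypothetical).
fits = {
    "exponential":  (-235.1, 2),
    "weibull":      (-230.4, 3),
    "log-normal":   (-229.8, 3),
    "log-logistic": (-228.2, 3),
}
scores = {name: aic(ll, k) for name, (ll, k) in fits.items()}
best = min(scores, key=scores.get)  # family with the smallest AIC
```

Because AIC penalizes each extra parameter by 2, a richer family wins only if its log-likelihood gain outweighs that penalty.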

Posted Content
TL;DR: This work considers the estimation problem given by a weighted surrogate-loss classification reduction of policy learning with any score function, either direct, inverse-propensity weighted, or doubly robust, and shows that, under a correct specification assumption, the weighted classification formulation need not be efficient for policy parameters.
Abstract: Recent work on policy learning from observational data has highlighted the importance of efficient policy evaluation and has proposed reductions to weighted (cost-sensitive) classification. But, efficient policy evaluation need not yield efficient estimation of policy parameters. We consider the estimation problem given by a weighted surrogate-loss classification reduction of policy learning with any score function, either direct, inverse-propensity weighted, or doubly robust. We show that, under a correct specification assumption, the weighted classification formulation need not be efficient for policy parameters. We draw a contrast to actual (possibly weighted) binary classification, where correct specification implies a parametric model, while for policy learning it only implies a semiparametric model. In light of this, we instead propose an estimation approach based on generalized method of moments, which is efficient for the policy parameters. We propose a particular method based on recent developments on solving moment problems using neural networks and demonstrate the efficiency and regret benefits of this method empirically.

Journal ArticleDOI
TL;DR: In this article, the authors propose a semiparametric covariance/scatter matrix estimator for elliptical data, extending to complex-valued data an R-estimator derived by exploiting Le Cam's theory of one-step efficient estimators and rank-based statistics.
Abstract: Covariance matrices play a major role in statistics, signal processing and machine learning applications. This paper focuses on the semiparametric covariance/scatter matrix estimation problem in elliptical distributions. The class of elliptical distributions can be seen as a semiparametric model where the finite-dimensional vector of interest is given by the location vector and by the (vectorized) covariance/scatter matrix, while the density generator represents an infinite-dimensional nuisance function. The main aim of this work is then to provide possible estimators of the finite-dimensional parameter vector able to reconcile the two dichotomic concepts of robustness and (semiparametric) efficiency. An R-estimator satisfying these requirements has been recently proposed by Hallin, Oja and Paindaveine for real-valued elliptical data by exploiting Le Cam's theory of one-step efficient estimators and rank-based statistics. In this paper, we first recall the building blocks underlying the derivation of this real-valued R-estimator, and then propose its extension to complex-valued data. Moreover, through numerical simulations, its estimation performance and robustness to outliers are investigated in a finite-sample regime.

Journal ArticleDOI
01 May 2020
TL;DR: A hybrid method combining a nonparametric kernel-based approach with least absolute deviations is suggested, allowing the parameters of the model and the fuzzy nonlinear function of the innovations to be estimated simultaneously; the results indicate that the proposed method is potentially effective for predicting fuzzy time series data.
Abstract: In time series analysis, as in other statistical problems, we may confront imprecise quantities. One case is a situation in which the observations related to the underlying systems are imprecise. This paper proposes a semi-parametric autoregressive model for those real-world applications whose observed data are reported as fuzzy numbers. To this end, a hybrid method including a nonparametric kernel-based approach and least absolute deviations is suggested which allows us to estimate the parameters of the model and the fuzzy nonlinear function of the innovations simultaneously. In order to examine the performance and effectiveness of the proposed fuzzy semi-parametric time series model, some common goodness-of-fit criteria are employed. The results obtained from a practical example of simulated fuzzy time series data indicate that the proposed method is potentially effective for predicting fuzzy time series data.

Journal ArticleDOI
TL;DR: A method is developed to compute the penalized least squares estimators (PLSEs) of the parametric and nonparametric components given independent and identically distributed (i.i.d.) data, and the consistency and rates of convergence of the estimators are proved.
Abstract: We consider estimation and inference in a single index regression model with an unknown but smooth link function. In contrast to the standard approach of using kernels or regression splines, we use smoothing splines to estimate the smooth link function. We develop a method to compute the penalized least squares estimators (PLSEs) of the parametric and the nonparametric components given independent and identically distributed (i.i.d.) data. We prove the consistency and find the rates of convergence of the estimators. We establish asymptotic normality under mild assumptions and prove asymptotic efficiency of the parametric component under homoscedastic errors. A finite sample simulation corroborates our asymptotic theory. We also analyze a car mileage data set and an ozone concentration data set. The identifiability and existence of the PLSEs are also investigated.

Journal ArticleDOI
TL;DR: In this article, credit-granting institutions need to estimate the probability of loan default, which represents the chance a customer fails to make repayments as promised, and this estimation is intertwined.
Abstract: Credit-granting institutions need to estimate the probability of loan default, which represents the chance a customer fails to make repayments as promised. Critically this estimation is intertwined...

Journal ArticleDOI
TL;DR: The proposed novel integrative interaction approach under a semiparametric model, in which genetic and environmental factors are included as the parametric and nonparametric components, respectively, can identify markers with important implications and performs favourably in terms of prediction accuracy, identification stability, and computation cost.
Abstract: In genomic analysis, it is significant though challenging to identify markers associated with cancer outcomes or phenotypes. Based on the biological mechanisms of cancers and the characteristics of datasets, we propose a novel integrative interaction approach under a semiparametric model, in which genetic and environmental factors are included as the parametric and nonparametric components, respectively. The goal of this approach is to identify the genetic factors and gene-gene interactions associated with cancer outcomes, while estimating the nonlinear effects of environmental factors. The proposed approach is based on the threshold gradient-directed regularisation technique. Simulation studies indicate that the proposed approach outperforms alternative methods at identifying the main effects and interactions, and has favourable estimation and prediction accuracy. We analysed non-small-cell lung carcinoma datasets from the Cancer Genome Atlas, and the results demonstrate that the proposed approach can identify markers with important implications and that it performs favourably in terms of prediction accuracy, identification stability, and computation cost.

Journal ArticleDOI
TL;DR: New estimators of the unknown model parameter and consequently the ROC curve from the Lehmann family are presented, and their properties are proved.

Journal ArticleDOI
01 Jun 2020
TL;DR: Based on B-spline basis approximation for the nonparametric parts, the authors propose a new estimating function for quantile regression that incorporates the correlation structure between repeated measures to improve estimation efficiency.
Abstract: This paper concerns efficient estimation and variable selection in the partial linear varying coefficient quantile regression model with longitudinal data. To improve estimation efficiency in quantile regression, based on B-spline basis approximation for the nonparametric parts, we propose a new estimating function, which can incorporate the correlation structure between repeated measures. In order to reduce computational burdens, the induced smoothing method is used. The new method is empirically shown to be much more efficient and robust than the popular generalized estimating equations based methods. Under mild conditions, the asymptotically normal distribution of the estimators for the parametric components and the optimal convergence rate of the estimators for the nonparametric functions are established. Furthermore, to do variable selection, a smooth-threshold estimating equation is proposed, which can use the correlation structure and select the nonparametric and parametric parts simultaneously. Theoretically, the variable selection procedure performs well, with consistency in variable selection and the oracle property in estimation. Simulation studies and real data analysis are included to show the finite sample performance.

Posted Content
TL;DR: A semiparametric model using Bayesian tree ensembles for estimating the causal effect of a continuous treatment or exposure which does not require a priori parametric specification of the influence of control variables, and allows for identification of effect modification by pre-specified moderators.
Abstract: In estimating the causal effect of a continuous exposure or treatment, it is important to control for all confounding factors. However, most existing methods require parametric specification for how control variables influence the outcome or generalized propensity score, and inference on treatment effects is usually sensitive to this choice. Additionally, it is often the goal to estimate how the treatment effect varies across observed units. To address this gap, we propose a semiparametric model using Bayesian tree ensembles for estimating the causal effect of a continuous treatment or exposure which (i) does not require a priori parametric specification of the influence of control variables, and (ii) allows for identification of effect modification by pre-specified moderators. The main parametric assumption we make is that the effect of the exposure on the outcome is linear, with the steepness of this relationship determined by a nonparametric function of the moderators, and we provide heuristics to diagnose the validity of this assumption. We apply our methods to revisit a 2001 study of how abortion rates affect incidence of crime.

Proceedings Article
12 Jul 2020
TL;DR: In this article, the estimation problem given by a weighted surrogate-loss classification reduction of policy learning with any score function, either direct, inverse-propensity weighted, or doubly robust, is considered.
Abstract: Recent work on policy learning from observational data has highlighted the importance of efficient policy evaluation and has proposed reductions to weighted (cost-sensitive) classification. But, efficient policy evaluation need not yield efficient estimation of policy parameters. We consider the estimation problem given by a weighted surrogate-loss classification reduction of policy learning with any score function, either direct, inverse-propensity weighted, or doubly robust. We show that, under a correct specification assumption, the weighted classification formulation need not be efficient for policy parameters. We draw a contrast to actual (possibly weighted) binary classification, where correct specification implies a parametric model, while for policy learning it only implies a semiparametric model. In light of this, we instead propose an estimation approach based on generalized method of moments, which is efficient for the policy parameters. We propose a particular method based on recent developments on solving moment problems using neural networks and demonstrate the efficiency and regret benefits of this method empirically.

Journal ArticleDOI
TL;DR: A new semiparametric model for Bayesian networks is developed that is more flexible and robust than parametric or linear models, providing a further generalization of the Gaussian Bayesian network.
Abstract: The Bayesian network is crucial for computer technology and artificial intelligence when dealing with probabilities. In this paper, we extended a new semiparametric model for Bayesian netwo...

Journal ArticleDOI
TL;DR: This paper presents the joint modeling framework implemented in JSM, as well as the standard error estimation methods, and illustrates the package with two real data examples: a liver cirrhosis data set and the Mayo Clinic primary biliary cirrhosis data set.
Abstract: This paper is devoted to the R package JSM which performs joint statistical modeling of survival and longitudinal data. In biomedical studies it has been increasingly common to collect both baseline and longitudinal covariates along with a possibly censored survival time. Instead of analyzing the survival and longitudinal outcomes separately, joint modeling approaches have attracted substantive attention in the recent literature and have been shown to correct biases from separate modeling approaches and enhance information. Most existing approaches adopt a linear mixed effects model for the longitudinal component and the Cox proportional hazards model for the survival component. We extend the Cox model to a more general class of transformation models for the survival process, where the baseline hazard function is completely unspecified leading to semiparametric survival models. We also offer a non-parametric multiplicative random effects model for the longitudinal process in JSM in addition to the linear mixed effects model. In this paper, we present the joint modeling framework that is implemented in JSM, as well as the standard error estimation methods, and illustrate the package with two real data examples: a liver cirrhosis data set and the Mayo Clinic primary biliary cirrhosis data set.

Journal ArticleDOI
TL;DR: In this paper, the authors consider identification and estimation of a fixed-effects model with an interval-censored dependent variable, proposing two versions of the model: a parametric model with logistic errors and a semiparametric model with errors having an unspecified distribution.
Abstract: This paper considers identification and estimation of a fixed‐effects model with an interval‐censored dependent variable. In each time period, the researcher observes the interval (with known endpoints) in which the dependent variable lies but not the value of the dependent variable itself. Two versions of the model are considered: a parametric model with logistic errors and a semiparametric model with errors having an unspecified distribution. In both cases, the error disturbances can be heteroskedastic over cross‐sectional units as long as they are stationary within a cross‐sectional unit; the semiparametric model also allows for serial correlation of the error disturbances. A conditional‐logit‐type composite likelihood estimator is proposed for the logistic fixed‐effects model, and a composite maximum‐score‐type estimator is proposed for the semiparametric model. In general, the scale of the coefficient parameters is identified by these estimators, meaning that the causal effects of interest are estimated directly in cases where the latent dependent variable is of primary interest (e.g., pure data‐coding situations). Monte Carlo simulations and an empirical application to birthweight outcomes illustrate the performance of the parametric estimator.

Posted Content
TL;DR: In this paper, a semiparametric model of network formation in the presence of unobserved agent-specific heterogeneity is proposed, in which the distributions of the unobserved factors are not parametrically specified.
Abstract: This paper analyzes a semiparametric model of network formation in the presence of unobserved agent-specific heterogeneity. The objective is to identify and estimate the preference parameters associated with homophily on observed attributes when the distributions of the unobserved factors are not parametrically specified. This paper offers two main contributions to the literature on network formation. First, it establishes a new point identification result for the vector of parameters that relies on the existence of a special regressor. The identification proof is constructive and characterizes a closed-form for the parameter of interest. Second, it introduces a simple two-step semiparametric estimator for the vector of parameters with a first-step kernel estimator. The estimator is computationally tractable and can be applied to both dense and sparse networks. Moreover, I show that the estimator is consistent and has a limiting normal distribution as the number of individuals in the network increases. Monte Carlo experiments demonstrate that the estimator performs well in finite samples and in networks with different levels of sparsity.

Journal ArticleDOI
TL;DR: This article proposes a doubly robust semiparametric estimator based on a weighted version of the Nelson-Aalen estimator and a conditional regression estimator under an assumed semiparametric multiplicative rate model for recurrent event data.
Abstract: Many longitudinal databases record the occurrence of recurrent events over time. In this article, we propose a new method to estimate the average causal effect of a binary treatment for recurrent event data in the presence of confounders. We propose a doubly robust semiparametric estimator based on a weighted version of the Nelson-Aalen estimator and a conditional regression estimator under an assumed semiparametric multiplicative rate model for recurrent event data. We show that the proposed doubly robust estimator is consistent and asymptotically normal. In addition, a model diagnostic plot of residuals is presented to assess the adequacy of our proposed semiparametric model. We then evaluate the finite sample behavior of the proposed estimators under a number of simulation scenarios. Finally, we illustrate the proposed methodology via a database of circus artist injuries.
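The weighted Nelson-Aalen estimator at the heart of the proposal builds on the plain Nelson-Aalen cumulative-hazard estimator, which adds (number of events)/(number still at risk) at each observed event time. A sketch of the unweighted version on toy right-censored data — the paper's estimator additionally weights subjects to adjust for confounding:

```python
def nelson_aalen(times, events):
    """Plain Nelson-Aalen cumulative-hazard estimator.
    times:  observation times; events: 1 = event observed, 0 = censored.
    Returns [(t, H(t))] at each event time, where H accumulates d_i / n_i."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    H, curve, i = 0.0, [], 0
    while i < len(order):
        t = times[order[i]]
        d = n_t = 0
        while i < len(order) and times[order[i]] == t:  # group tied times
            d += events[order[i]]
            n_t += 1
            i += 1
        if d:                      # only event times contribute a jump
            H += d / at_risk
            curve.append((t, H))
        at_risk -= n_t             # everyone observed at t leaves the risk set
    return curve

# Toy data: four subjects, one censored at t = 2.
curve = nelson_aalen([1.0, 2.0, 2.0, 3.0], [1, 1, 0, 1])
```

Censored subjects shrink the risk set without adding a jump, which is how the estimator handles incomplete follow-up.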

Journal ArticleDOI
TL;DR: Experimental results show that the proposed algorithm achieves higher accuracy while controlling the size of the final model, and also offers high performance in terms of run time and efficiency, when processing very large datasets.
Abstract: In recent years there has been a noticeable increase in the number of available Big Data infrastructures. This fact has promoted the adaptation of traditional machine learning techniques to be capable of addressing large scale problems in distributed environments. Kernel methods like support vector machines (SVMs) suffer from scalability problems due to their nonparametric nature and the complexity of their training procedures. In this paper, we propose a new and efficient distributed implementation of a training procedure for nonlinear semiparametric (budgeted) SVMs called distributed iterative reweighted least squares (IRWLS). This algorithm uses k-means to select the centroids of the semiparametric model and a new distributed algorithmic implementation of the IRWLS optimization procedure to find the weights of the model. We have implemented the proposed algorithm in Apache Spark and we have benchmarked it against other state-of-the-art methods, either full SVM (p-pack SVM) or budgeted (budgeted stochastic gradient descent). Experimental results show that the proposed algorithm achieves higher accuracy while controlling the size of the final model, and also offers high performance in terms of run time and efficiency, when processing very large datasets (the computation time grows linearly with the number of training patterns).
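The budgeted-model idea above — restrict the kernel expansion to a small fixed set of centroids so the model size stays bounded — can be sketched in a few lines. This toy version hard-codes two centroids instead of running k-means, and fits the weights by gradient descent on a ridge-regularized squared loss as a simple stand-in for the paper's distributed IRWLS procedure; the data are illustrative assumptions.

```python
import math

def rbf(x, c, gamma=1.0):
    """Gaussian kernel between a point and a centroid."""
    return math.exp(-gamma * (x - c) ** 2)

def fit_budgeted(xs, ys, centroids, lr=0.1, steps=2000, lam=1e-3):
    """Budgeted (semiparametric) kernel machine f(x) = sum_j w_j K(x, c_j):
    the number of centroids, not the number of training points, fixes the
    model size. Weights fit by gradient descent on ridge-regularized
    squared loss (a stand-in for IRWLS)."""
    w = [0.0] * len(centroids)
    n = len(xs)
    for _ in range(steps):
        grad = [lam * wj for wj in w]
        for x, y in zip(xs, ys):
            err = sum(wj * rbf(x, c) for wj, c in zip(w, centroids)) - y
            for j, c in enumerate(centroids):
                grad[j] += err * rbf(x, c) / n
        w = [wj - lr * g for wj, g in zip(w, grad)]
    return w

# Toy 1-D two-class data; in the paper the centroids come from k-means.
xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
ys = [-1.0, -1.0, -1.0, 1.0, 1.0, 1.0]
centroids = [-1.5, 1.5]
w = fit_budgeted(xs, ys, centroids)

def predict(x):
    return sum(wj * rbf(x, c) for wj, c in zip(w, centroids))
```

However many training points arrive, the fitted model carries only one weight per centroid, which is the property that makes the approach scale in a distributed setting.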

Journal ArticleDOI
TL;DR: Sun et al. extend the smoothed least squares dummy variable estimator to a functional-coefficient model with two-way fixed effects, allowing for unobservable heterogeneity in both dimensions of the data: cross-section and time.

Journal ArticleDOI
Feng Guo, Wei Ma, Lei Wang
TL;DR: In this article, the estimation of parametric copula models when the data have nonignorable nonresponse is investigated; the propensity is assumed to follow a general semiparametric model, but the distributi...
Abstract: This paper investigates the estimation of parametric copula models when the data have nonignorable nonresponse. We assume that the propensity follows a general semiparametric model, but the distributi...