
Showing papers on "Pairwise comparison published in 2005"


Reference EntryDOI
15 Jul 2005
TL;DR: The Analytic Hierarchy Process (AHP) as discussed by the authors is a theory of relative measurement of intangible criteria, where a scale of priorities is derived from pairwise comparison measurements only after the elements to be measured are known.
Abstract: The Analytic Hierarchy Process (AHP) is a theory of relative measurement of intangible criteria. With this approach to relative measurement, a scale of priorities is derived from pairwise comparison measurements only after the elements to be measured are known. The ability to do pairwise comparisons is our biological heritage, and we need it to cope with a world where everything is relative and constantly changing, so that there are no fixed standards by which to measure things. In traditional measurement, one has a scale that one applies to measure any element that has the property the scale is for, and the elements are measured one by one, not by comparing them with each other. In the AHP, paired comparisons are made with judgments using numerical values taken from the AHP absolute fundamental scale of 1 to 9. A scale of relative values is derived from all these paired comparisons; it too belongs to an absolute scale that is invariant under the identity transformation, like the system of real numbers. The AHP is useful for making multicriteria decisions involving benefits, opportunities, costs, and risks. The ideas are developed in stages and illustrated with examples of real-life decisions, and the subject is presented transparently so that it is easy to understand why it is done the way it is. The AHP has a generalization to dependence and feedback, the Analytic Network Process (ANP), which is not discussed here. Keywords: analytic hierarchy process; decision making; prioritization; benefits; costs; complexity

946 citations
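
As a concrete illustration of the derivation the abstract sketches, here is a minimal numpy sketch (not from the paper; the comparison values are invented) that extracts a priority vector from a reciprocal pairwise comparison matrix via its principal eigenvector and reports Saaty's consistency index:

```python
import numpy as np

def ahp_priorities(A):
    """Derive a priority vector from a reciprocal pairwise comparison
    matrix A (Saaty 1-9 scale) via the principal right eigenvector."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)           # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                          # normalize priorities to sum to 1
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)  # consistency index (lambda_max - n)/(n - 1)
    return w, ci

# Three criteria compared pairwise on the 1-9 scale (illustrative values).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, ci = ahp_priorities(A)
print("priorities:", w.round(3), "CI:", round(ci, 4))
```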


Proceedings ArticleDOI
20 Jun 2005
TL;DR: A two-step algorithm is proposed for solving the problem of clustering in domains where the affinity relations are not dyadic (pairwise), but rather triadic, tetradic or higher, which is an instance of the hypergraph partitioning problem.
Abstract: We consider the problem of clustering in domains where the affinity relations are not dyadic (pairwise), but rather triadic, tetradic or higher. The problem is an instance of the hypergraph partitioning problem. We propose a two-step algorithm for solving this problem. In the first step we use a novel scheme to approximate the hypergraph using a weighted graph. In the second step a spectral partitioning algorithm is used to partition the vertices of this graph. The algorithm is capable of handling hyperedges of all orders including order two, thus incorporating information of all orders simultaneously. We present a theoretical analysis that relates our algorithm to an existing hypergraph partitioning algorithm and explain the reasons for its superior performance. We report the performance of our algorithm on a variety of computer vision problems and compare it to several existing hypergraph partitioning algorithms.

276 citations
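
The paper's graph approximation is a novel scheme of its own; the sketch below only illustrates the generic two-step pipeline, with a simple clique-style expansion standing in for the authors' approximation (an assumption), followed by a Fiedler-vector bipartition:

```python
import numpy as np

def clique_expand(n, hyperedges, weights):
    """Approximate a weighted hypergraph by a weighted graph: each
    hyperedge of size s spreads its weight over its s*(s-1)/2 vertex pairs."""
    W = np.zeros((n, n))
    for e, w in zip(hyperedges, weights):
        e = list(e)
        s = len(e)
        for i in range(s):
            for j in range(i + 1, s):
                W[e[i], e[j]] += w / (s * (s - 1) / 2)
                W[e[j], e[i]] = W[e[i], e[j]]   # keep W symmetric
    return W

def spectral_bipartition(W):
    """Bipartition by the sign of the Fiedler vector of the graph Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1] >= 0               # eigenvector of 2nd-smallest eigenvalue

# Two triadic affinities among vertices {0,1,2} and {3,4,5}, plus one weak pairwise link.
edges = [(0, 1, 2), (3, 4, 5), (2, 3)]
W = clique_expand(6, edges, weights=[1.0, 1.0, 0.1])
print(spectral_bipartition(W))
```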


Proceedings ArticleDOI
17 Oct 2005
TL;DR: A framework which can incorporate arbitrary pairwise constraints between body parts, such as scale compatibility, relative position, symmetry of clothing and smooth contour connections between parts is developed.
Abstract: The goal of this work is to recover human body configurations from static images. Without assuming a priori knowledge of scale, pose or appearance, this problem is extremely challenging and demands the use of all possible sources of information. We develop a framework which can incorporate arbitrary pairwise constraints between body parts, such as scale compatibility, relative position, symmetry of clothing and smooth contour connections between parts. We detect candidate body parts from bottom-up using parallelism, and use various pairwise configuration constraints to assemble them together into body configurations. To find the most probable configuration, we solve an integer quadratic programming problem with a standard technique using linear approximations. Approximate IQP allows us to incorporate much more information than the traditional dynamic programming and remains computationally efficient. 15 hand-labeled images are used to train the low-level part detector and learn the pairwise constraints. We show test results on a variety of images.

247 citations


Journal ArticleDOI
TL;DR: A genetic model-free approach is presented, which does not assume that the underlying genetic model is known in advance but still makes use of the information available on all genotypes, offering a unified framework that efficiently estimates both the genetic effect and the underlying genetic model.
Abstract: Background To evaluate gene-disease associations, genetic epidemiologists collect information on the disease risk in subjects with different genotypes (for a bi-allelic polymorphism: gg, Gg, GG). Meta-analyses of such studies usually reduce the problem to a single comparison, either by performing two separate pairwise comparisons or by assuming a specific underlying genetic model (recessive, co-dominant, dominant). A biological justification for the choice of the genetic model is seldom available. Methods We present a genetic model-free approach, which does not assume that the underlying genetic model is known in advance but still makes use of the information available on all genotypes. The approach uses OR(GG), the odds ratio between the homozygous genotypes, to capture the magnitude of the genetic effect, and lambda, the heterozygote log odds ratio as a proportion of the homozygote log odds ratio, to capture the genetic mode of inheritance. The analysis assumes that the same unknown genetic model, i.e. the same lambda, applies in all studies, and this is investigated graphically. The approach is illustrated using five examples of published meta-analyses. Results Analyses based on specific genetic models can produce misleading estimates of the odds ratios when an inappropriate model is assumed. The genetic model-free approach gives appropriately wider confidence intervals than genetic model-based analyses because it allows for uncertainty about the genetic model. In terms of assessment of model fit, it performs at least as well as a bivariate pairwise analysis in our examples. Conclusions The genetic model-free approach offers a unified approach that efficiently estimates the genetic effect and the underlying genetic model. A bivariate pairwise analysis should be used if the assumption of a common genetic model across studies is in doubt.

183 citations
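
A worked example of the two parameters the method rests on: with odds ratios measured against the gg baseline, lambda is the ratio of the heterozygote to the homozygote log odds ratio (0 suggests recessive, 0.5 roughly co-dominant, 1 dominant). The numbers below are invented for illustration:

```python
import math

def genetic_lambda(or_Gg, or_GG):
    """lambda = log(OR_Gg) / log(OR_GG): 0 -> recessive,
    0.5 -> co-dominant (multiplicative), 1 -> dominant."""
    return math.log(or_Gg) / math.log(or_GG)

# Illustrative odds ratios relative to the gg genotype; 2.25 = 1.5**2,
# so the heterozygote effect is exactly half on the log scale.
print(genetic_lambda(or_Gg=1.5, or_GG=2.25))  # 0.5: co-dominant pattern
```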


Journal ArticleDOI
TL;DR: An item response theory (IRT) approach is proposed for constructing and scoring multidimensional pairwise preference items, evaluated using Monte Carlo simulations. The results show that the MUPP approach to test construction and scoring provides accurate parameter recovery in both one- and two-dimensional simulations, even with relatively few (say, 15%) unidimensional pairings.
Abstract: This article proposes an item response theory (IRT) approach to constructing and scoring multidimensional pairwise preference items. Individual statements are administered and calibrated using a unidimensional single-stimulus model. Tests are created by combining multidimensional items with a small number of unidimensional pairings needed to identify the latent metric. Trait scores are then obtained using a multidimensional Bayes modal estimation procedure based on a mathematical model called MUPP, which is illustrated and tested here using Monte Carlo simulations. Simulation results show that the MUPP approach to test construction and scoring provides accurate parameter recovery in both one- and two-dimensional simulations, even with relatively few (say, 15%) unidimensional pairings. The implications of these results for constructing and scoring fake-resistant personality items are discussed.

139 citations
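
The pairing rule below follows the form commonly given for the MUPP preference probability; the logistic endorsement curve is a simplifying stand-in (an assumption, not the paper's single-stimulus ideal-point model), and all parameter values are invented:

```python
import math

def endorse(theta, a, b):
    """Probability of endorsing a statement (illustrative 2PL stand-in for
    the paper's unidimensional single-stimulus model)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def prefer_s_over_t(theta_s, theta_t, a_s, b_s, a_t, b_t):
    """MUPP-style pairing rule: prefer statement s iff s is endorsed and t
    is not, conditional on exactly one of the two being endorsed."""
    ps, pt = endorse(theta_s, a_s, b_s), endorse(theta_t, a_t, b_t)
    return ps * (1 - pt) / (ps * (1 - pt) + (1 - ps) * pt)

# A multidimensional item: the two statements tap different traits
# (theta_1, theta_2) of the same respondent.
print(round(prefer_s_over_t(0.8, -0.2, 1.2, 0.0, 1.0, 0.5), 3))
```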


Posted Content
01 Jan 2005
TL;DR: In this article, a fuzzy analytic hierarchy process (FAHP) analysis is employed to accommodate the inherent uncertainty in the judgement-making process by the decision maker(s) in the form of doubt, hesitancy, and procrastination.
Abstract: Capital budgeting as a decision process is among the most important of all management decisions. The importance (and consequences) of the subsequent outcome may bring a level of uncertainty to the judgement-making process by the decision maker(s) in the form of doubt, hesitancy, and procrastination. This study considers one such problem, namely the selection of the type of fleet car to be adopted by a small car rental company, a selection that accounts for a large proportion of the company's working capital. With a number of criteria to consider, a fuzzy analytic hierarchy process (FAHP) analysis is undertaken to accommodate the inherent uncertainty. Developments are made to the FAHP method utilised to consider the preference results with differing levels of precision in the pairwise judgements made.

114 citations
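
The paper develops its own refinements of FAHP; as background, here is a sketch of one common fuzzy-AHP formulation (Buckley-style geometric means of triangular fuzzy judgments with centroid defuzzification), not necessarily the variant used in the paper, with invented judgment values:

```python
import numpy as np

# Triangular fuzzy pairwise judgments (l, m, u) for 3 criteria; the
# reciprocal of (l, m, u) is (1/u, 1/m, 1/l). Values are illustrative.
TFN = {(0, 1): (2, 3, 4), (0, 2): (4, 5, 6), (1, 2): (1, 2, 3)}

def fuzzy_weights(n, tfn):
    """Buckley-style FAHP sketch: fuzzy geometric mean per row, then
    centroid defuzzification and normalization to crisp weights."""
    M = [[(1.0, 1.0, 1.0)] * n for _ in range(n)]
    for (i, j), (l, m, u) in tfn.items():
        M[i][j] = (l, m, u)
        M[j][i] = (1 / u, 1 / m, 1 / l)
    rows = []
    for i in range(n):
        # componentwise geometric mean of the row's (l, m, u) triples
        g = tuple(np.prod([M[i][j][k] for j in range(n)]) ** (1 / n)
                  for k in range(3))
        rows.append(g)
    crisp = np.array([(l + m + u) / 3 for (l, m, u) in rows])  # centroid
    return crisp / crisp.sum()

print(fuzzy_weights(3, TFN).round(3))
```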


Proceedings ArticleDOI
20 Jun 2005
TL;DR: This work discusses a unifying formulation for labelled and unlabelled data that can incorporate constrained data for model fitting; the proposed algorithm is computationally efficient and generates superior groupings compared with alternative techniques.
Abstract: Classification problems arise abundantly in many computer vision tasks, being of supervised, semi-supervised or unsupervised nature. Even when class labels are not available, a user still might favor certain grouping solutions over others. This bias can be expressed either by providing a clustering criterion or cost function and, in addition to that, by specifying pairwise constraints on the assignment of objects to classes. In this work, we discuss a unifying formulation for labelled and unlabelled data that can incorporate constrained data for model fitting. Our approach models the constraint information by the maximum entropy principle. This modeling strategy allows us (i) to handle constraint violations and soft constraints, and, at the same time, (ii) to speed up the optimization process. Experimental results on face classification and image segmentation indicate that the proposed algorithm is computationally efficient and generates superior groupings when compared with alternative techniques.

98 citations
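
The authors' maximum-entropy formulation is not reproduced here; the toy below merely illustrates what soft pairwise constraints mean in a clustering objective, using a penalized k-means assignment step as a stand-in:

```python
import numpy as np

def constrained_kmeans(X, k, must, cannot, penalty=1.0, iters=20, seed=0):
    """Toy soft-constrained k-means: a point's assignment cost adds a
    penalty for each pairwise constraint the candidate label would violate.
    (Illustrates soft pairwise constraints, not the paper's model.)"""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        for i, x in enumerate(X):
            cost = ((centers - x) ** 2).sum(axis=1)
            for j in must.get(i, []):     # soft must-link: penalize differing labels
                cost += penalty * (np.arange(k) != labels[j])
            for j in cannot.get(i, []):   # soft cannot-link: penalize equal labels
                cost += penalty * (np.arange(k) == labels[j])
            labels[i] = int(np.argmin(cost))
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

X = np.array([[0, 0], [0, 1], [5, 5], [5, 6], [0.5, 0.5], [5.5, 5.5]], dtype=float)
print(constrained_kmeans(X, 2, must={4: [0]}, cannot={5: [0]}))
```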


Posted Content
TL;DR: In this article, the authors propose a method to estimate the deterministic component of bidder valuations and apply it to the 1995-1996 C-block auction, based on a pairwise stability condition, which implies that two bidders cannot exchange licenses in a way that increases total surplus.
Abstract: FCC spectrum auctions sell licenses to provide mobile phone service in designated geographic territories. We propose a method to structurally estimate the deterministic component of bidder valuations and apply it to the 1995-1996 C-block auction. We base our estimation of bidder values on a pairwise stability condition, which implies that two bidders cannot exchange licenses in a way that increases total surplus. Pairwise stability holds in many theoretical models of simultaneous ascending auctions, including some models of intimidatory collusion and demand reduction. Pairwise stability is also approximately satisfied in data that we examine from economic experiments. The lack of post-auction resale also suggests pairwise stability. Using our estimates of deterministic valuations, we measure the allocative efficiency of the C-block outcome.

98 citations


Book ChapterDOI
06 Nov 2005
TL;DR: A large-scale dataset is developed based on a real-world case study, namely the web directories of Google, Looksmart and Yahoo!, and an empirical evaluation is performed which compares the most representative solutions for taxonomy matching.
Abstract: Matching hierarchical structures, like taxonomies or web directories, is the premise for enabling interoperability among heterogeneous data organizations. While the number of new matching solutions is increasing, the evaluation issue is still open. This work addresses the problem of comparison for pairwise matching solutions. A methodology is proposed to overcome the issue of scalability. A large-scale dataset is developed based on a real-world case study, namely the web directories of Google, Looksmart and Yahoo!. Finally, an empirical evaluation is performed which compares the most representative solutions for taxonomy matching. We argue that the proposed dataset can play a key role in supporting the empirical analysis for the research effort in the area of taxonomy matching.

97 citations


Proceedings ArticleDOI
29 Aug 2005
TL;DR: A case-based framework for requirements prioritization, called case-based ranking, is adopted; it exploits machine learning techniques to overcome the scalability problem, and empirical results show that on average this approach outperforms AHP with respect to the trade-off between expert elicitation effort and requirement prioritization accuracy.
Abstract: Case-based approaches to requirements prioritization proved to be much more effective than first-principle methods at being tailored to a specific problem, that is, they take advantage of the implicit knowledge that is available, given a problem representation. In these approaches, first-principle prioritization criteria are replaced by a pairwise preference elicitation process. Nevertheless, case-based approaches using the analytic hierarchy process (AHP) technique become impractical when the collection holds more than about twenty requirements, since the elicitation effort grows as the square of the number of requirements (n requirements demand n(n-1)/2 pairwise comparisons, e.g. 190 comparisons for 20 requirements). We adopt a case-based framework for requirements prioritization, called case-based ranking, which exploits machine learning techniques to overcome the scalability problem. This method reduces the acquisition effort by combining human preference elicitation and automatic preference approximation. Our goal in this paper is to describe the framework in detail and to present empirical evaluations which aim at showing its effectiveness in overcoming the scalability problem. The results show that on average our approach outperforms AHP with respect to the trade-off between expert elicitation effort and requirement prioritization accuracy.

89 citations


Journal ArticleDOI
TL;DR: This work provides necessary and sufficient conditions on the network link marginal payoffs such that the set of pairwise stable, pairwise-Nash and proper equilibrium networks coincide, where pairwise stable networks are robust to one-link deviations, while pairwise-Nash networks are robust to one-link creation but multi-link severance.
Abstract: Suppose that individual payoffs depend on the network connecting them. Consider the following simultaneous move game of network formation: players announce independently the links they wish to form, and links are formed only under mutual consent. We provide necessary and sufficient conditions on the network link marginal payoffs such that the set of pairwise stable, pairwise-Nash and proper equilibrium networks coincide, where pairwise stable networks are robust to one-link deviations, while pairwise-Nash networks are robust to one-link creation but multi-link severance. Under these conditions, proper equilibria in pure strategies are fully characterized by one-link deviation checks.
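
Pairwise stability is a directly checkable condition. Below is a sketch of a brute-force checker under the usual Jackson-Wolinsky definition (no player gains by severing a link; no absent link benefits one player strictly and the other weakly), with an invented payoff function:

```python
import itertools
import numpy as np

def is_pairwise_stable(g, payoff):
    """Check pairwise stability of an undirected network g (boolean
    adjacency matrix) under a payoff function payoff(g, i)."""
    n = len(g)
    for i, j in itertools.combinations(range(n), 2):
        h = g.copy()
        h[i, j] = h[j, i] = not g[i, j]   # toggle the link ij
        if g[i, j]:
            # Existing link: neither endpoint should gain by severing it.
            if payoff(h, i) > payoff(g, i) or payoff(h, j) > payoff(g, j):
                return False
        else:
            # Absent link: it must not benefit one endpoint strictly
            # while the other weakly consents.
            if (payoff(h, i) > payoff(g, i) and payoff(h, j) >= payoff(g, j)) or \
               (payoff(h, j) > payoff(g, j) and payoff(h, i) >= payoff(g, i)):
                return False
    return True

# Toy payoff: benefit per link minus a quadratic cost in own degree.
def payoff(g, i):
    d = g[i].sum()
    return 1.0 * d - 0.3 * d ** 2

g = np.zeros((3, 3), dtype=bool)
g[0, 1] = g[1, 0] = True
print(is_pairwise_stable(g, payoff))
```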

Journal ArticleDOI
TL;DR: In this article, a new approach to the statistical analysis of pairwise-present covariance structure data is proposed based on maximizing the complete data likelihood function, and the associated test statistic and standard errors are corrected for misspecification using Satorra-Bentler corrections.
Abstract: This article proposes a new approach to the statistical analysis of pairwise-present covariance structure data. The estimator is based on maximizing the complete data likelihood function, and the associated test statistic and standard errors are corrected for misspecification using Satorra-Bentler corrections. A Monte Carlo study was conducted to compare the proposed method (pairwise maximum likelihood [ML]) to 2 other methods for dealing with incomplete nonnormal data: direct ML estimation with the Yuan-Bentler corrections for nonnormality (direct ML) and the asymptotically distribution free (ADF) method applied to available cases (pairwise ADF). Data were generated from a 4-factor model with 4 indicators per factor; sample size varied from 200 to 5,000; data were either missing completely at random (MCAR) or missing at random (MAR); and the proportion of missingness was either 15% or 30%. Measures of relative performance included model fit and relative accuracy in parameter estimates and their standard errors.
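
The Satorra-Bentler corrected fitting is beyond a short sketch, but the object such estimators start from is easy to show: a pairwise-present (pairwise deletion) covariance matrix, computed here on toy data with missing values:

```python
import numpy as np

def pairwise_present_cov(X):
    """Covariance matrix where each entry uses all cases observed on that
    pair of variables (pairwise deletion); X uses np.nan for missing."""
    p = X.shape[1]
    S = np.full((p, p), np.nan)
    for i in range(p):
        for j in range(i, p):
            ok = ~np.isnan(X[:, i]) & ~np.isnan(X[:, j])
            if ok.sum() > 1:
                xi, xj = X[ok, i], X[ok, j]
                S[i, j] = S[j, i] = ((xi - xi.mean()) * (xj - xj.mean())).mean()
    return S

X = np.array([[1.0, 2.0, np.nan],
              [2.0, np.nan, 3.0],
              [3.0, 4.0, 5.0],
              [4.0, 5.0, 7.0]])
print(pairwise_present_cov(X).round(2))
```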

Journal ArticleDOI
TL;DR: A statistically sound two-stage co-expression detection algorithm that controls both statistical significance (false discovery rate, FDR) and biological significance (minimum acceptable strength, MAS) of the discovered co-expressions is designed and implemented.
Abstract: Motivation: Many exploratory microarray data analysis tools such as gene clustering and relevance networks rely on detecting pairwise gene co-expression. Traditional screening of pairwise co-expression either controls biological significance or statistical significance, but not both. The former approach does not provide stochastic error control, and the latter approach screens many co-expressions with excessively low correlation. Methods: We have designed and implemented a statistically sound two-stage co-expression detection algorithm that controls both statistical significance (False Discovery Rate, FDR) and biological significance (Minimum Acceptable Strength, MAS) of the discovered co-expressions. Based on estimation of pairwise gene correlation, the algorithm provides an initial co-expression discovery that controls only FDR, which is then followed by a second-stage co-expression discovery which controls both FDR and MAS. It also computes and thresholds the set of FDR p-values for each correlation that satisfied the MAS criterion. Results: We validated asymptotic null distributions of the Pearson and Kendall correlation coefficients and the two-stage error-control procedure using simulated data. We then used yeast galactose metabolism data (Ideker et al. 2001) to illustrate the advantage of our method for clustering genes and constructing a relevance network. In gene clustering, the algorithm screens a seeded cluster of co-expressed genes with controlled FDR and MAS. In constructing the relevance network, the algorithm discovers a set of edges with controlled FDR and MAS. Availability: The method has been implemented in an R package "GeneNT" that is freely available from: http://www-personal.umich.edu/~zhud/genent.htm
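
A compressed sketch of the idea (not the GeneNT implementation, and collapsing the paper's two stages into one screen): test each gene pair against the MAS threshold with a Fisher z statistic, then control the FDR across all pairs with Benjamini-Hochberg. Sample size and correlations below are invented:

```python
import numpy as np
from scipy import stats

def coexpression_screen(R, n, mas=0.5, fdr=0.05):
    """Screen gene pairs: test H0: |rho| <= mas via a Fisher z statistic,
    then apply Benjamini-Hochberg FDR control over all pairs.
    (A one-pass sketch of the FDR+MAS idea, not the paper's procedure.)"""
    iu = np.triu_indices_from(R, k=1)
    r = R[iu]
    z = (np.arctanh(np.abs(r)) - np.arctanh(mas)) * np.sqrt(n - 3)
    p = stats.norm.sf(z)                       # one-sided p-values
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= fdr * np.arange(1, m + 1) / m   # BH step-up
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    keep = np.zeros(m, dtype=bool)
    keep[order[:k]] = True
    return list(zip(iu[0][keep], iu[1][keep], r[keep].round(2)))

R = np.array([[1.0, 0.9, 0.2],
              [0.9, 1.0, 0.1],
              [0.2, 0.1, 1.0]])
print(coexpression_screen(R, n=50))
```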

Journal ArticleDOI
TL;DR: Novel mathematical programming models are constructed for fuzzy multiple attribute decision making (FMADM) problems under uncertainty, and corresponding solution methods are proposed.

Journal ArticleDOI
TL;DR: This work focuses on Bayesian model selection for the variable selection problem in large model spaces, and the issue of choice of prior distributions for the visited models is also important.
Abstract: We focus on Bayesian model selection for the variable selection problem in large model spaces. The challenge is to search the huge model space adequately, while accurately approximating model posterior probabilities for the visited models. The issue of choice of prior distributions for the visited models is also important.

Book ChapterDOI
05 Sep 2005
TL;DR: An exact model based on Markov chains is proposed to formulate the variation of gene frequency and reveals the pairwise equivalence phenomenon in the number of parents and indicates the acceleration of genetic drift in MPGAs.
Abstract: This paper investigates genetic drift in multi-parent genetic algorithms (MPGAs). An exact model based on Markov chains is proposed to formulate the variation of gene frequency. This model identifies the correlation between the adopted number of parents and the mean convergence time. Moreover, it reveals the pairwise equivalence phenomenon in the number of parents and indicates the acceleration of genetic drift in MPGAs. The good fit between theoretical and experimental results further verifies the capability of this model.

Journal ArticleDOI
15 May 2005
TL;DR: This work adapts a "one-test-at-a-time" greedy method to take importance of pairs into account, so that when run to completion all pairwise interactions are tested, but when terminated after any intermediate number of tests, those deemed most important are tested.
Abstract: Interaction testing is widely used in screening for faults. In software testing, it provides a natural mechanism for testing systems to be deployed on a variety of hardware and software configurations. Several algorithms published in the literature are used as tools to automatically generate these test suites; AETG is a well known example of a family of greedy algorithms that generate one test at a time. In many applications where interaction testing is needed, the entire test suite is not run as a result of time or cost constraints. In these situations, it is essential to prioritize the tests. Here we adapt a "one-test-at-a-time" greedy method to take importance of pairs into account. The method can be used to generate a set of tests in order, so that when run to completion all pairwise interactions are tested, but when terminated after any intermediate number of tests, those deemed most important are tested. Computational results on the method are reported.
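
A sketch of a "one-test-at-a-time" greedy generator with pair weights, in the spirit the abstract describes but not the paper's exact algorithm; the factors and the importance weighting below are invented, and the exhaustive search over candidate tests is only viable for toy inputs:

```python
import itertools

def prioritized_pairwise(factors, weight):
    """Greedily build tests one at a time, each chosen to cover the
    heaviest remaining weight of uncovered value pairs; run to completion
    it covers all pairs, truncated it covers the most important first."""
    fids = list(factors)
    uncovered = {}
    for f, g in itertools.combinations(fids, 2):
        for u, v in itertools.product(factors[f], factors[g]):
            uncovered[(f, u, g, v)] = weight(f, u, g, v)
    tests = []
    while uncovered:
        best, best_w = None, -1.0
        for combo in itertools.product(*(factors[f] for f in fids)):
            test = dict(zip(fids, combo))
            w = sum(wt for (f, u, g, v), wt in uncovered.items()
                    if test[f] == u and test[g] == v)
            if w > best_w:
                best, best_w = test, w
        tests.append(best)
        for key in [k for k in uncovered
                    if best[k[0]] == k[1] and best[k[2]] == k[3]]:
            del uncovered[key]
    return tests

factors = {"os": ["linux", "win"], "db": ["pg", "my"], "net": ["v4", "v6"]}
# Hypothetical importance: pairs involving the OS factor matter most.
wt = lambda f, u, g, v: 2.0 if "os" in (f, g) else 1.0
for t in prioritized_pairwise(factors, wt):
    print(t)
```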

Journal ArticleDOI
TL;DR: In this article, a pseudo-likelihood estimation method for the grouped continuous model and its extension to mixed ordinal and continuous data, which advocates simply pooling marginal pairwise likelihoods to approximate the full likelihood, is proposed as an alternative to maximum likelihood estimation.

Journal ArticleDOI
TL;DR: An inferential strategy based on the pairwise likelihood, which only requires the computation of bivariate distributions, which has the potential to handle large data sets and improve on standard inferential procedures by means of bootstrap methods.
Abstract: Inference in generalized linear models with crossed effects is often made cumbersome by the high-dimensional intractable integrals involved in the likelihood function. We propose an inferential strategy based on the pairwise likelihood, which only requires the computation of bivariate distributions. The benefits of our approach are the simplicity of implementation and the potential to handle large data sets. The estimators based on the pairwise likelihood are generally consistent and asymptotically normally distributed. The pairwise likelihood makes it possible to improve on standard inferential procedures by means of bootstrap methods. The performance of the proposed methodology is illustrated by simulations and application to the well-known salamander mating data set.
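
The paper targets crossed-effects generalized linear models; the mechanics of a pairwise likelihood are easier to see in a Gaussian random-intercept toy model (an illustrative simplification, not the paper's setting), where each within-cluster pair contributes one bivariate normal log density:

```python
import itertools
import numpy as np
from scipy import stats

def pairwise_loglik(clusters, mu, sigma2_u, sigma2_e):
    """Pairwise log-likelihood for y_ij = mu + u_i + e_ij: only bivariate
    normal densities are ever evaluated, which is the computational point
    of the pairwise-likelihood approach."""
    cov = np.array([[sigma2_u + sigma2_e, sigma2_u],
                    [sigma2_u, sigma2_u + sigma2_e]])
    ll = 0.0
    for y in clusters:
        for a, b in itertools.combinations(y, 2):   # all pairs within a cluster
            ll += stats.multivariate_normal.logpdf([a, b], mean=[mu, mu], cov=cov)
    return ll

clusters = [[1.2, 0.8, 1.1], [2.3, 2.0], [0.1, 0.4, 0.2]]
print(round(pairwise_loglik(clusters, mu=1.0, sigma2_u=0.5, sigma2_e=0.2), 3))
```

Maximizing this pooled objective over (mu, sigma2_u, sigma2_e) gives the pairwise-likelihood estimator; the full likelihood is never needed.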

Book ChapterDOI
26 Oct 2005
TL;DR: Results from fuzzy set theory and fuzzy morphology are used to extend the definitions of existing overlap measures to accommodate multiple fractional labels and a quantitative link between overlap and registration error is established.
Abstract: Effective validation techniques are an essential pre-requisite for segmentation and non-rigid registration techniques to enter clinical use. These algorithms can be evaluated by calculating the overlap of corresponding test and gold-standard regions. Common overlap measures compare pairs of binary labels but it is now common for multiple labels to exist and for fractional (partial volume) labels to be used to describe multiple tissue types contributing to a single voxel. Evaluation studies may involve multiple image pairs. In this paper we use results from fuzzy set theory and fuzzy morphology to extend the definitions of existing overlap measures to accommodate multiple fractional labels. Simple formulas are provided which define single figures of merit to quantify the total overlap for ensembles of pairwise or groupwise label comparisons. A quantitative link between overlap and registration error is established by defining the overlap tolerance. Experiments are performed on publicly available labeled brain data to demonstrate the new measures in a comparison of pairwise and groupwise registration.
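
The min/max generalizations from fuzzy set theory that such extensions build on are compact: intersection becomes a voxelwise minimum and union a voxelwise maximum. A sketch on invented fractional labels (the paper's groupwise ensembles and overlap tolerance are not reproduced here):

```python
import numpy as np

def fuzzy_tanimoto(a, b):
    """Fuzzy-set generalization of the Tanimoto/Jaccard overlap:
    intersection -> voxelwise min, union -> voxelwise max."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def fuzzy_dice(a, b):
    """Fuzzy generalization of the Dice coefficient."""
    return 2 * np.minimum(a, b).sum() / (a.sum() + b.sum())

# Fractional (partial-volume) labels for one tissue class in a test
# segmentation and a gold standard.
test = np.array([1.0, 0.6, 0.3, 0.0])
gold = np.array([1.0, 0.8, 0.0, 0.1])
print(round(fuzzy_tanimoto(test, gold), 3), round(fuzzy_dice(test, gold), 3))
```

With binary labels both functions reduce to the familiar crisp overlap measures, which is what makes them drop-in extensions for evaluation pipelines.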

01 Jan 2005
TL;DR: In this article, the connections between rank reversals and the potential inconsistency of the utility assessments are investigated in the case of ratio-scale pairwise comparisons data. The analysis is carried out with recently developed statistical modelling techniques, so that the inconsistency of the assessments is measured according to statistical estimation theory.
Abstract: In multi-criteria decision analysis, the overall performance of decision alternatives is evaluated with respect to several, generally conflicting decision criteria. One approach to perform the multi-criteria decision analysis is to use ratio-scale pairwise comparisons concerning the performance of decision alternatives and the importance of decision criteria. In this approach, a classical problem has been the phenomenon of rank reversals. In particular, when a new decision alternative is added to a decision problem, and while the assessments concerning the original decision alternatives remain unchanged, the new alternative may cause rank reversals between the utility estimates of the original decision alternatives. This paper studies the connections between rank reversals and the potential inconsistency of the utility assessments in the case of ratio-scale pairwise comparisons data. The analysis was carried out by recently developed statistical modelling techniques, so that the inconsistency of the assessments was measured according to statistical estimation theory. Several types of decision problems were analysed, and the results showed that rank reversals caused by inconsistency are natural and acceptable. On the other hand, rank reversals caused by the traditional arithmetic-mean aggregation rule are not in line with the ratio-scale measurement of utilities, whereas geometric-mean aggregation does not cause undesired rank reversals.

Journal ArticleDOI
TL;DR: An overview of the well-known impossibility-possibility theorem in constructing a social welfare function from individual functions is given, which shows that it is possible to derive such a function in two ways.
Abstract: This paper gives a brief overview of the well-known impossibility-possibility theorem in constructing a social welfare function from individual functions. The Analytic Hierarchy Process uses a fundamental scale of absolute numbers to represent judgments about dominance in paired comparisons. It is shown that it is possible to derive such a function in two ways. One is from the synthesized functions of the judgments of each of the individuals. The other is obtained by first combining corresponding pairwise comparison judgments made by all the individuals, thus obtaining a matrix of combined judgments for the group and then deriving a welfare function for the group. With consistency the four conditions imposed by Arrow are satisfied. With inconsistency, an additional condition is needed.
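
A sketch of the second route the abstract mentions: combine the individuals' pairwise comparison matrices entrywise by the geometric mean (the standard AHP group aggregation, which preserves reciprocity of the combined matrix), then derive the group priority vector. The judgment values are invented:

```python
import numpy as np

def aggregate_judgments(matrices):
    """Entrywise geometric mean of individual pairwise comparison
    matrices, then the group priority vector from the principal
    eigenvector of the combined matrix."""
    G = np.exp(np.mean([np.log(A) for A in matrices], axis=0))
    eigvals, eigvecs = np.linalg.eig(G)
    w = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
    return w / w.sum()

# Two individuals judging three alternatives (illustrative 1-9 scale values).
A1 = np.array([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]])
A2 = np.array([[1, 3, 3], [1/3, 1, 1], [1/3, 1, 1]])
print(aggregate_judgments([A1, A2]).round(3))
```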

Journal ArticleDOI
TL;DR: A hybrid model that uses concepts from fuzzy logic and the analytical hierarchy process (AHP) is proposed; results show that this method provides intuitively promising results and can be used to explain the route choice process of drivers.

Journal ArticleDOI
TL;DR: In this article, the authors define the concept of pairwise sensitivity with respect to initial conditions for the endomorphisms on Lebesgue metric spaces and compute the largest sensitivity constant.

Journal ArticleDOI
15 May 2005
TL;DR: This paper presents example system requirements and corresponding models for applying the combinatorial approach to those requirements, using terminology and modeling notation from the AETG system to provide concrete examples.
Abstract: The combinatorial approach to software testing uses models to generate a minimal number of test inputs so that selected combinations of input values are covered. The most common coverage criterion is two-way, or pairwise, coverage of value combinations, though for higher confidence three-way or higher coverage may be required. This paper presents example system requirements and corresponding models for applying the combinatorial approach to those requirements. These examples are intended to serve as a tutorial for applying the combinatorial approach to software testing. Although this paper focuses on pairwise coverage, the discussion is equally valid when higher coverage criteria such as three-way (triples) are used. We use terminology and modeling notation from the AETG system to provide concrete examples.

Proceedings Article
18 Aug 2005
TL;DR: This work considers learning from a hypergraph, and develops a general framework which is applicable to classification and clustering for complex relational data and has applied to real-world web classification problems and obtained encouraging results.
Abstract: In many applications, relationships among objects of interest are more complex than pairwise. Simply approximating complex relationships as pairwise ones can lead to loss of information. An alternative for these applications is to analyze complex relationships among data directly, without the need to first represent the complex relationships into pairwise ones. A natural way to describe complex relationships is to use hypergraphs. A hypergraph is a graph in which edges can connect more than two vertices. Thus we consider learning from a hypergraph, and develop a general framework which is applicable to classification and clustering for complex relational data. We have applied our framework to real-world web classification problems and obtained encouraging results.

Proceedings ArticleDOI
17 Jul 2005
TL;DR: In this article, the relative motion between the frames of a video sequence is estimated by considering global consistency conditions for the overall multi-frame motion estimation problem, which is more accurate than the commonly applied pairwise image registration methods.
Abstract: We address the problem of estimating the relative motion between the frames of a video sequence. In comparison with the commonly applied pairwise image registration methods, we consider global consistency conditions for the overall multi-frame motion estimation problem, which is more accurate. We review the recent work on this subject and propose an optimal framework, which can apply the consistency conditions as both hard constraints in the estimation problem, or as soft constraints in the form of stochastic (Bayesian) priors. The framework is applicable to virtually any motion model and enables us to develop a robust approach, which is resilient against the effects of outliers and noise. The effectiveness of the proposed approach is confirmed by a super-resolution application on synthetic and real data sets.
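
For pure translations the consistency condition has a simple least-squares form: pairwise motion estimates should compose transitively, i.e. behave like differences of latent per-frame offsets. A toy 1-D sketch of that idea (an illustrative special case; the paper handles general motion models and Bayesian soft constraints):

```python
import numpy as np

def consistent_translations(n, pairwise):
    """Enforce global consistency on noisy pairwise translation estimates
    t_ij (frame i -> frame j) by least-squares fitting per-frame offsets x
    with x_j - x_i = t_ij and the gauge fixed by x_0 = 0."""
    rows, rhs = [], []
    for (i, j), t in pairwise.items():
        r = np.zeros(n)
        r[j], r[i] = 1.0, -1.0
        rows.append(r)
        rhs.append(t)
    A = np.array(rows)[:, 1:]                  # drop column 0: pin x_0 = 0
    x = np.linalg.lstsq(A, np.array(rhs), rcond=None)[0]
    return np.concatenate([[0.0], x])

# Noisy pairwise estimates that violate transitivity: t01 + t12 != t02.
pairwise = {(0, 1): 1.0, (1, 2): 1.1, (0, 2): 2.3}
print(consistent_translations(3, pairwise).round(3))
```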

Journal ArticleDOI
TL;DR: A simulation method is proposed to generate a set of classifier outputs with specified individual accuracies and fixed pairwise agreement that can be used to study the behaviour of class-type combination methods such as voting rules over multiple dependent classifiers.

Journal ArticleDOI
01 Jan 2005
TL;DR: This work gives an overview of two distinct methods for parallelization that achieve asymptotic and practical advantages over traditional techniques, and analyzes the best choice among a subset of these methods across a broad range of parameters.
Abstract: Particle simulations in fields ranging from biochemistry to astrophysics require evaluation of the interactions between all pairs of particles separated by less than some fixed interaction radius. The extent to which such simulations can be parallelized has historically been limited by the time required for inter-processor communication. Recently, Snir (1) and Shaw (2) independently introduced two distinct methods for parallelization that achieve asymptotic and practical advantages over traditional techniques. We give an overview of these methods and show that they represent special cases of a more general class of methods. We describe other methods in this class that can confer advantages over any previously described method in terms of communication bandwidth and latency. Practically speaking, the best choice among the broad category of methods depends on such parameters as the interaction radius, the size of the simulated system, and the number of processors. We analyze the best choice among a subset of these methods across a broad range of parameters.

Book ChapterDOI
01 Jun 2005
TL;DR: In this article, the authors used insights from the literature on estimation of nonlinear panel data models to construct estimators of a number of semiparametric models with a partially linear index.
Abstract: This paper uses insights from the literature on estimation of nonlinear panel data models to construct estimators of a number of semiparametric models with a partially linear index, including the partially linear logit model, the partially linear censored regression model, and the censored regression model with selection. We develop the relevant asymptotic theory for these estimators and we apply the theory to derive the asymptotic distribution of the estimator for the partially linear logit model. We evaluate the finite sample behavior of this estimator using a Monte Carlo study.