scispace - formally typeset

Discriminative model

About: Discriminative model is a research topic. Over its lifetime, 16,926 publications on this topic have received 558,663 citations.


Papers
Proceedings ArticleDOI
Michael Collins
06 Jul 2002
TL;DR: Experimental results on part-of-speech tagging and base noun phrase chunking are given, in both cases showing improvements over results for a maximum-entropy tagger.
Abstract: We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modification of the proof of convergence of the perceptron algorithm for classification problems. We give experimental results on part-of-speech tagging and base noun phrase chunking, in both cases showing improvements over results for a maximum-entropy tagger.

2,221 citations
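The training scheme the abstract describes (Viterbi decoding of each training example, then a simple additive update toward the gold features and away from the predicted ones) can be sketched as a structured perceptron for a toy tagging task. This is a minimal illustration, not Collins's implementation; the feature templates and tag set here are invented for the example.

```python
from collections import defaultdict

def viterbi(words, tags, w):
    """First-order Viterbi decoding under weight vector w."""
    n = len(words)
    score = [dict() for _ in range(n)]  # score[i][t]: best sequence ending in t
    back = [dict() for _ in range(n)]
    for t in tags:
        score[0][t] = w[("emit", words[0], t)] + w[("trans", "<s>", t)]
    for i in range(1, n):
        for t in tags:
            best_prev = max(tags, key=lambda p: score[i - 1][p] + w[("trans", p, t)])
            score[i][t] = (score[i - 1][best_prev] + w[("trans", best_prev, t)]
                           + w[("emit", words[i], t)])
            back[i][t] = best_prev
    last = max(tags, key=lambda t: score[n - 1][t])
    seq = [last]
    for i in range(n - 1, 0, -1):
        seq.append(back[i][seq[-1]])
    return list(reversed(seq))

def features(words, tagseq):
    """Count emission and transition indicator features for a tagged sentence."""
    feats = defaultdict(int)
    prev = "<s>"
    for word, tag in zip(words, tagseq):
        feats[("emit", word, tag)] += 1
        feats[("trans", prev, tag)] += 1
        prev = tag
    return feats

def train(data, tags, epochs=5):
    """Perceptron training: decode, then additively promote gold features
    and demote features of the incorrect prediction."""
    w = defaultdict(float)
    for _ in range(epochs):
        for words, gold in data:
            guess = viterbi(words, tags, w)
            if guess != gold:
                for f, c in features(words, gold).items():
                    w[f] += c
                for f, c in features(words, guess).items():
                    w[f] -= c
    return w

data = [(["the", "dog", "runs"], ["D", "N", "V"]),
        (["a", "cat", "sleeps"], ["D", "N", "V"])]
w = train(data, ["D", "N", "V"])
print(viterbi(["the", "cat", "runs"], ["D", "N", "V"], w))
```

On this separable toy data the updates stop after the first epoch, which is the behavior the paper's convergence argument (adapted from the classification perceptron proof) guarantees.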

Journal ArticleDOI
TL;DR: FCS is a semi-parametric and flexible alternative that specifies the multivariate model by a series of conditional models, one for each incomplete variable, but its statistical properties are difficult to establish.
Abstract: The goal of multiple imputation is to provide valid inferences for statistical estimates from incomplete data. To achieve that goal, imputed values should preserve the structure in the data, as well as the uncertainty about this structure, and include any knowledge about the process that generated the missing data. Two approaches for imputing multivariate data exist: joint modeling (JM) and fully conditional specification (FCS). JM is based on parametric statistical theory, and leads to imputation procedures whose statistical properties are known. JM is theoretically sound, but the joint model may lack flexibility needed to represent typical data features, potentially leading to bias. FCS is a semi-parametric and flexible alternative that specifies the multivariate model by a series of conditional models, one for each incomplete variable. FCS provides tremendous flexibility and is easy to apply, but its statistical properties are difficult to establish. Simulation work shows that FCS behaves very well in ...

2,119 citations
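The FCS idea — specify one conditional model per incomplete variable and cycle through them, imputing each variable from the current values of the others — can be sketched as follows. This is a simplified, non-stochastic illustration using plain linear regressions, not the full MICE procedure (which draws imputations from predictive distributions); all names are invented for the example.

```python
import numpy as np

def fcs_impute(X, n_iter=10):
    """Fully conditional specification, simplified: cycle through incomplete
    columns, refitting a linear model of each column on all the others and
    replacing that column's missing entries with the model's predictions."""
    X = X.copy()
    miss = np.isnan(X)
    # Initialize missing entries with column means.
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])  # intercept + predictors
            obs = ~miss[:, j]
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ beta
    return X

# Toy data with an exact linear structure: y = 2x + 1, one y missing.
x = np.arange(10.0)
X = np.column_stack([x, 2 * x + 1])
X[3, 1] = np.nan
print(fcs_impute(X)[3, 1])  # recovers 7.0, since the conditional model is exact
```

A proper FCS implementation would also propagate the uncertainty of the fitted conditionals (e.g. by drawing regression coefficients and residual noise), which is what makes multiple imputation valid for inference.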

Journal ArticleDOI
TL;DR: It is shown that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems and can therefore lead to a more robust tracker with fewer parameter tweaks.
Abstract: In this paper, we address the problem of tracking an object in a video given its location in the first frame and no other information. Recently, a class of tracking techniques called “tracking by detection” has been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrade the classifier and can cause drift. In this paper, we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems and can therefore lead to a more robust tracker with fewer parameter tweaks. We propose a novel online MIL algorithm for object tracking that achieves superior results with real-time performance. We present thorough experimental results (both qualitative and quantitative) on a number of challenging video clips.

2,101 citations
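The key MIL ingredient the abstract relies on — label bags of instances rather than individual instances, so one correct patch inside a positive bag suffices — is commonly modeled with a noisy-OR bag probability. The sketch below shows that bag model and the bag-level log-likelihood an online MIL booster would maximize; it is an illustration of the MIL objective, not the paper's MILTrack algorithm.

```python
import math

def bag_probability(instance_probs):
    """Noisy-OR: a bag is positive if at least one instance is positive,
    so P(bag) = 1 - prod(1 - p_i) over instance probabilities p_i."""
    p = 1.0
    for pi in instance_probs:
        p *= (1.0 - pi)
    return 1.0 - p

def bag_log_likelihood(bags):
    """bags: list of (label, [instance probabilities]); returns the
    bag-level Bernoulli log-likelihood used to score weak classifiers."""
    ll = 0.0
    for y, probs in bags:
        p = bag_probability(probs)
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # numeric guard
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

# A positive bag scores well if ANY instance is confident, even when the
# other instances (imprecisely cropped patches) look like background.
bags = [(1, [0.9, 0.1, 0.1]),   # one good patch among sloppy crops
        (0, [0.05, 0.10])]      # background bag
print(bag_log_likelihood(bags))
```

This is exactly why slight tracker inaccuracies stop being fatal: the positive bag does not force every extracted patch to be labeled as the object.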

Journal ArticleDOI
TL;DR: In this article, the authors discuss the current research in building models of conditional variances using the Autoregressive Conditional Heteroskedastic (ARCH) and Generalized ARCH (GARCH) formulations.
Abstract: This paper will discuss the current research in building models of conditional variances using the Autoregressive Conditional Heteroskedastic (ARCH) and Generalized ARCH (GARCH) formulations. The discussion will be motivated by a simple asset pricing theory which is particularly appropriate for examining futures contracts with risk averse agents. A new class of models defined to be integrated in variance is then introduced. This new class of models includes the variance analogue of a unit root in the mean as a special case. The models are argued to be both theoretically important for the asset pricing models and empirically relevant. The conditional density is then generalized from a normal to a Student-t with unknown degrees of freedom. By estimating the degrees of freedom, implications about the conditional kurtosis of these models and time aggregated models can be drawn. A further generalization allows the conditional variance to be a non-linear function of the squared innovations. Throughout empirical...

2,055 citations
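The conditional-variance recursion at the heart of the GARCH formulation is short enough to write down directly. The sketch below implements the standard GARCH(1,1) update; the "integrated in variance" class the abstract introduces corresponds to the boundary case alpha + beta = 1 (the variance analogue of a unit root). Parameter values are illustrative only.

```python
def garch_variance(returns, omega, alpha, beta, sigma2_0):
    """GARCH(1,1) conditional variance:
        sigma2_t = omega + alpha * eps_{t-1}**2 + beta * sigma2_{t-1}
    When alpha + beta == 1 the process is integrated in variance (IGARCH):
    shocks to the conditional variance never decay."""
    sigma2 = [sigma2_0]
    for eps in returns[:-1]:
        sigma2.append(omega + alpha * eps ** 2 + beta * sigma2[-1])
    return sigma2

# Toy residual series: a large shock at t=1 raises the variance forecast at t=2.
print(garch_variance([1.0, 2.0, 0.5], omega=0.1, alpha=0.1, beta=0.8,
                     sigma2_0=1.0))
```

With alpha + beta = 0.9 < 1 the variance here is mean-reverting; setting alpha = 0.2, beta = 0.8 would give the integrated case discussed in the paper.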

Proceedings ArticleDOI
17 Jun 2007
TL;DR: This work shows that Fisher kernels can actually be understood as an extension of the popular bag-of-visterms, and proposes to apply this framework to image categorization where the input signals are images and where the underlying generative model is a visual vocabulary: a Gaussian mixture model which approximates the distribution of low-level features in images.
Abstract: Within the field of pattern classification, the Fisher kernel is a powerful framework which combines the strengths of generative and discriminative approaches. The idea is to characterize a signal with a gradient vector derived from a generative probability model and to subsequently feed this representation to a discriminative classifier. We propose to apply this framework to image categorization where the input signals are images and where the underlying generative model is a visual vocabulary: a Gaussian mixture model which approximates the distribution of low-level features in images. We show that Fisher kernels can actually be understood as an extension of the popular bag-of-visterms. Our approach demonstrates excellent performance on two challenging databases: an in-house database of 19 object/scene categories and the recently released VOC 2006 database. It is also very practical: it has low computational needs both at training and test time and vocabularies trained on one set of categories can be applied to another set without any significant loss in performance.

1,874 citations
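The Fisher-kernel signature the abstract describes — the gradient of a generative model's log-likelihood with respect to its parameters, fed to a discriminative classifier — can be sketched for the mean parameters of a diagonal-covariance GMM. This follows one common normalization (dividing by N·sqrt(w_k)); it is an illustration of the idea, not the authors' full pipeline, which also uses gradients with respect to weights and variances.

```python
import numpy as np

def fisher_vector_means(X, weights, means, sigmas):
    """Gradient of the average GMM log-likelihood w.r.t. component means
    (diagonal covariances): for each component k,
        G_k = (1 / (N * sqrt(w_k))) * sum_n gamma_nk * (x_n - mu_k) / sigma_k
    where gamma_nk are posterior responsibilities."""
    K, D = means.shape
    N = X.shape[0]
    # Log of component-weighted Gaussian densities, shape (N, K).
    log_p = np.stack([
        np.log(weights[k])
        - 0.5 * np.sum(np.log(2 * np.pi * sigmas[k] ** 2))
        - 0.5 * np.sum(((X - means[k]) / sigmas[k]) ** 2, axis=1)
        for k in range(K)], axis=1)
    log_p -= log_p.max(axis=1, keepdims=True)   # stabilize before exp
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)   # responsibilities
    return np.concatenate([
        (gamma[:, [k]] * (X - means[k]) / sigmas[k]).sum(axis=0)
        / (N * np.sqrt(weights[k]))
        for k in range(K)])

# With a single centered component, symmetric data yields a zero gradient:
# the "visual vocabulary" already explains the descriptors perfectly.
fv = fisher_vector_means(np.array([[1.0], [-1.0]]),
                         weights=np.array([1.0]),
                         means=np.array([[0.0]]),
                         sigmas=np.array([[1.0]]))
print(fv)
```

Note the dimensionality: K components in D dimensions give a K·D vector regardless of how many local descriptors the image contains, which is what makes this a drop-in replacement for the bag-of-visterms histogram.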


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 92% related
Deep learning: 79.8K papers, 2.1M citations, 91% related
Feature extraction: 111.8K papers, 2.1M citations, 90% related
Feature (computer vision): 128.2K papers, 1.7M citations, 89% related
Image segmentation: 79.6K papers, 1.8M citations, 88% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2024    1
2023    2,384
2022    4,963
2021    1,844
2020    1,877
2019    1,758