
Showing papers by "Laurens van der Maaten published in 2011"


Proceedings Article
28 Jun 2011
TL;DR: A technique is presented that exploits label information to improve the object representation of Fisher kernels by employing ideas from metric learning and shows the strong performance of classifiers trained on the resulting object representations on problems in handwriting recognition, speech recognition, facial expression analysis, and bio-informatics.
Abstract: Fisher kernels provide a commonly used vectorial representation of structured objects. The paper presents a technique that exploits label information to improve the object representation of Fisher kernels by employing ideas from metric learning. In particular, the new technique trains a generative model in such a way that the distance between the log-likelihood gradients induced by two objects with the same label is as small as possible, and the distance between the gradients induced by two objects with different labels is as large as possible. We illustrate the strong performance of classifiers trained on the resulting object representations on problems in handwriting recognition, speech recognition, facial expression analysis, and bio-informatics.
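The core idea above can be illustrated on a toy model. The following is a minimal sketch (not the paper's implementation): the Fisher score of an isotropic Gaussian with a shared per-dimension mean is its log-likelihood gradient, and the contrastive quantity compares within-class and between-class distances among those gradients, in the spirit of the objective described.

```python
import numpy as np

def fisher_score(x, mu, sigma2):
    """Gradient of the per-dimension Gaussian log-likelihood w.r.t. the mean:
    d/dmu log N(x; mu, sigma2) = (x - mu) / sigma2."""
    return (x - mu) / sigma2

# Toy data: two classes drawn from well-separated Gaussians.
rng = np.random.default_rng(0)
xa = rng.normal(0.0, 1.0, size=(5, 3))   # class A
xb = rng.normal(3.0, 1.0, size=(5, 3))   # class B

mu, sigma2 = 1.5, 1.0
ga = fisher_score(xa, mu, sigma2)        # gradient representations
gb = fisher_score(xb, mu, sigma2)

# Contrastive comparison in the spirit of the paper's objective:
# same-label gradient pairs should be close, different-label pairs far apart.
within = (np.mean([np.linalg.norm(u - v) for u in ga for v in ga])
          + np.mean([np.linalg.norm(u - v) for u in gb for v in gb])) / 2
between = np.mean([np.linalg.norm(u - v) for u in ga for v in gb])
loss = within - between                  # lower is better
print(within, between, loss)
```

The paper trains the generative model's parameters to drive such a quantity down; here the model is fixed, and the snippet only shows how label information enters through gradient distances.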

87 citations


Proceedings Article
14 Jun 2011
TL;DR: This paper explores a generalization of conditional random fields (CRFs) in which binary stochastic hidden units appear between the data and the labels, and derives efficient algorithms for inference and learning in these models by observing that the hidden units are conditionally independent given the data and the labels.
Abstract: The paper explores a generalization of conditional random fields (CRFs) in which binary stochastic hidden units appear between the data and the labels. Hidden-unit CRFs are potentially more powerful than standard CRFs because they can represent nonlinear dependencies at each frame. The hidden units in these models also learn to discover latent distributed structure in the data that improves classification. We derive efficient algorithms for inference and learning in these models by observing that the hidden units are conditionally independent given the data and the labels. Finally, we show that hidden-unit CRFs perform well in experiments on a range of tasks, including optical character recognition, text classification, protein structure prediction, and part-of-speech tagging.
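The conditional-independence observation is what makes inference tractable: given the data and the label at a frame, each hidden unit's posterior factorizes into an independent Bernoulli with a logistic mean. A minimal sketch of that single step, with hypothetical parameter names (W, U, b are illustrative, not the paper's notation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical parameters for one frame: D input features, K hidden units,
# C label classes.
rng = np.random.default_rng(1)
D, K, C = 4, 3, 2
W = rng.normal(size=(K, D))   # data-to-hidden weights
U = rng.normal(size=(K, C))   # label-to-hidden weights
b = rng.normal(size=K)        # hidden biases

x = rng.normal(size=D)        # observed frame
y = np.eye(C)[1]              # one-hot label

# Because the hidden units are conditionally independent given (x, y),
# each unit's posterior is an independent Bernoulli with a logistic mean:
p_h = sigmoid(W @ x + U @ y + b)
print(p_h)                    # one activation probability per hidden unit
```

In the full model these per-unit posteriors feed the gradient computations for learning, while inference over label sequences still uses standard CRF dynamic programming.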

79 citations


01 Jan 2011
TL;DR: In this article, a feature point based stereo matching algorithm with global energy minimization is presented, where the initial disparity map is estimated by considering matching SURF key points between two images inside each homogeneous colour region by an adaptive box matching approach.
Abstract: In this paper, we present a novel feature point based stereo matching algorithm with global energy minimization. The initial disparity map is estimated by matching SURF key points between two images inside each homogeneous colour region using an adaptive box matching approach. Next, we improve the initial disparity map with a RANSAC based plane fitting technique which relies on the accuracy of the pixel disparities inside the homogeneous colour regions. Finally, the disparity map is further enhanced by incorporating energy constraints on smoothness between neighbouring regions using graph cuts. The methodology is tested on the Middlebury test set, and the results indicate that the proposed method is competitive with current state-of-the-art stereo matching algorithms while decreasing the overall computational effort of the matching process by using already available feature points.
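The plane-fitting stage assumes disparities inside a homogeneous colour region lie on a plane d = a*x + b*y + c, and uses RANSAC to reject mismatched points. A minimal self-contained sketch of that stage on synthetic data (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def fit_plane_ransac(pts, disp, iters=200, thresh=0.5, seed=0):
    """RANSAC fit of disparity = a*x + b*y + c inside one region."""
    rng = np.random.default_rng(seed)
    A = np.column_stack([pts, np.ones(len(pts))])  # [x, y, 1] design matrix
    best_inliers, best_model = None, None
    for _ in range(iters):
        idx = rng.choice(len(pts), size=3, replace=False)
        try:
            model = np.linalg.solve(A[idx], disp[idx])  # exact 3-point plane
        except np.linalg.LinAlgError:
            continue                                    # degenerate sample
        resid = np.abs(A @ model - disp)
        inliers = resid < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, model
    return best_model, best_inliers

# Synthetic region: disparities on the plane d = 0.1x + 0.2y + 5,
# with a few gross mismatches the RANSAC stage should reject.
rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, size=(50, 2))
disp = 0.1 * pts[:, 0] + 0.2 * pts[:, 1] + 5.0
disp[:5] += 20.0
model, inliers = fit_plane_ransac(pts, disp)
print(model)  # recovered plane coefficients (a, b, c)
```

In the paper's pipeline, the fitted plane then replaces unreliable per-pixel disparities within the region before the graph-cut smoothing step.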

7 citations


Proceedings Article
14 Jun 2011
TL;DR: Since the introduction of LLE (Roweis and Saul, 2000) and Isomap (Tenenbaum et al., 2000), a large number of non-linear dimensionality reduction techniques (manifold learners) have been proposed and can be viewed as instantiations of Kernel PCA.
Abstract: Since the introduction of LLE (Roweis and Saul, 2000) and Isomap (Tenenbaum et al., 2000), a large number of non-linear dimensionality reduction techniques (manifold learners) have been proposed. Many of these non-linear techniques can be viewed as instantiations of Kernel PCA; they employ a cleverly designed kernel matrix that preserves local data structure in the “feature space” (Bengio et al., 2004). The kernel matrices of the first manifold learners were handcrafted: for instance, LLE uses an inverse squared graph Laplacian of the reconstruction weight matrix as kernel matrix, Isomap uses a centered geodesic distance matrix, and Laplacian Eigenmaps uses an inverse neighborhood graph Laplacian (Belkin and Niyogi, 2002).
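The Isomap case above is concrete enough to sketch end to end: build a k-nearest-neighbour graph, compute geodesic (shortest-path) distances, double-centre the squared distance matrix to obtain the kernel, and take its top eigenvector. A minimal sketch on a 2-D spiral (k and the data are illustrative choices, not from the text):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

# 2-D spiral: intrinsically one-dimensional data.
t = np.linspace(0.0, 3 * np.pi, 60)
X = np.column_stack([t * np.cos(t), t * np.sin(t)])

D = squareform(pdist(X))
k = 6
G = np.full_like(D, np.inf)          # inf = non-edge for dense csgraph input
for i in range(len(X)):
    nn = np.argsort(D[i])[1:k + 1]   # k nearest neighbours (excluding self)
    G[i, nn] = D[i, nn]
G = np.minimum(G, G.T)               # symmetrize the k-NN graph
geo = shortest_path(G, method="D")   # geodesic distances via Dijkstra

# The Isomap "kernel" is the double-centred squared geodesic distance matrix:
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
K = -0.5 * J @ (geo ** 2) @ J

vals, vecs = np.linalg.eigh(K)       # eigenvalues in ascending order
embedding = vecs[:, -1:] * np.sqrt(vals[-1])   # 1-D kernel-PCA embedding
print(embedding.shape)
```

This makes the "kernel matrix that preserves local data structure" explicit: only the construction of K differs between Isomap, LLE, and Laplacian Eigenmaps, while the final eigendecomposition is ordinary kernel PCA.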

1 citation