Author

J Eichhorn

Bio: J Eichhorn is an academic researcher from the Max Planck Society. The author has contributed to research on topics including Support vector machines and Principal component analysis, has an h-index of 5, and has co-authored 7 publications receiving 534 citations.

Papers
Book Chapter (DOI)
11 Apr 2005
TL;DR: The PASCAL Visual Object Classes (VOC) Challenge ran from February to March 2005, with the goal of recognizing objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects).
Abstract: The PASCAL Visual Object Classes Challenge ran from February to March 2005. The goal of the challenge was to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). Four object classes were selected: motorbikes, bicycles, cars and people. Twelve teams entered the challenge. In this chapter we provide details of the datasets, algorithms used by the teams, evaluation criteria, and results achieved.

381 citations

Journal Article (DOI)
TL;DR: This finding suggests that, although the amount of higher-order correlation in natural images can in fact be significant, the feature of orientation selectivity does not yield a large contribution to redundancy reduction within the linear filter bank models of V1 simple cells.
Abstract: Orientation selectivity is the most striking feature of simple cell coding in V1 that has been shown to emerge from the reduction of higher-order correlations in natural images in a large variety of statistical image models. The most parsimonious one among these models is linear Independent Component Analysis (ICA), whereas second-order decorrelation transformations such as Principal Component Analysis (PCA) do not yield oriented filters. Because of this finding, it has been suggested that the emergence of orientation selectivity may be explained by higher-order redundancy reduction. To assess the tenability of this hypothesis, it is an important empirical question how much more redundancy can be removed with ICA in comparison to PCA or other second-order decorrelation methods. Although some previous studies have concluded that the amount of higher-order correlation in natural images is generally insignificant, other studies reported an extra gain for ICA of more than 100%. A consistent conclusion about the role of higher-order correlations in natural images can be reached only by the development of reliable quantitative evaluation methods. Here, we present a very careful and comprehensive analysis using three evaluation criteria related to redundancy reduction: In addition to the multi-information and the average log-loss, we compute complete rate–distortion curves for ICA in comparison with PCA. Without exception, we find that the advantage of the ICA filters is small. At the same time, we show that a simple spherically symmetric distribution with only two parameters can fit the data significantly better than the probabilistic model underlying ICA. This finding suggests that, although the amount of higher-order correlation in natural images can in fact be significant, the feature of orientation selectivity does not yield a large contribution to redundancy reduction within the linear filter bank models of V1 simple cells.
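As a rough, hypothetical illustration of the kind of comparison described above (not the paper's evaluation pipeline), the sketch below fits PCA and ICA to small natural-image patches and compares the summed marginal entropy of the unit-variance components, a crude proxy for the redundancy-reduction criteria used in the paper; the sample image, patch size, component count and histogram binning are arbitrary choices.

```python
# Rough illustration: compare PCA and ICA on 8x8 image patches via the summed
# marginal entropy of unit-variance components (a crude redundancy proxy).
import numpy as np
from sklearn.datasets import load_sample_image
from sklearn.decomposition import PCA, FastICA
from sklearn.feature_extraction.image import extract_patches_2d

img = load_sample_image("china.jpg").mean(axis=2)            # grayscale sample image
patches = extract_patches_2d(img, (8, 8), max_patches=20000, random_state=0)
X = patches.reshape(len(patches), -1)
X = X - X.mean(axis=0)

def summed_marginal_entropy(Z, bins=64):
    """Histogram estimate of the sum of per-component differential entropies."""
    Z = Z / Z.std(axis=0)                                    # unit variance per component
    total = 0.0
    for z in Z.T:
        p, edges = np.histogram(z, bins=bins, density=True)
        w = np.diff(edges)
        nz = p > 0
        total += -np.sum(p[nz] * np.log(p[nz]) * w[nz])
    return total

Z_pca = PCA(n_components=36, whiten=True).fit_transform(X)
Z_ica = FastICA(n_components=36, random_state=0, max_iter=1000).fit_transform(X)
print("PCA:", summed_marginal_entropy(Z_pca))
print("ICA:", summed_marginal_entropy(Z_ica))                # typically only slightly lower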

90 citations

01 Jul 2004
TL;DR: An efficient image representation based on local descriptors is combined with a Support Vector Machine classifier, using kernels defined on sets of vectors, in order to perform object categorization.
Abstract: In this paper, we propose to combine an efficient image representation based on local descriptors with a Support Vector Machine classifier in order to perform object categorization. For this purpose, we apply kernels defined on sets of vectors. After testing different combinations of kernel / local descriptors, we have been able to identify a very performant one.
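To make the kernel-on-sets idea concrete, here is a minimal sketch of one simple instance: a mean "match" kernel between bags of local descriptors, plugged into an SVM through a precomputed Gram matrix. The descriptors, bag sizes and bandwidth are made up for the toy example, and the paper's best-performing kernel/descriptor combination may well differ.

```python
# Toy set kernel for bags of local descriptors: K(A, B) = mean pairwise RBF similarity.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import SVC

def match_kernel(bag_a, bag_b, gamma=1.0 / 128):
    d2 = cdist(bag_a, bag_b, "sqeuclidean")
    return float(np.exp(-gamma * d2).mean())

def gram(bags_x, bags_y):
    return np.array([[match_kernel(a, b) for b in bags_y] for a in bags_x])

# Each "image" is a variable-size set of 128-d descriptors (SIFT-like, simulated).
rng = np.random.default_rng(0)
bags = [rng.normal(loc=c, size=(rng.integers(20, 40), 128))
        for c in (0.0, 0.0, 0.0, 0.5, 0.5, 0.5)]
labels = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="precomputed").fit(gram(bags, bags), labels)
print(clf.predict(gram(bags, bags)))      # new images: clf.predict(gram(new_bags, bags))
```

The mean match kernel is positive definite (it is the inner product of the bags' mean feature-space embeddings), which is what makes it usable inside an SVM.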

63 citations

Proceedings Article
09 Dec 2003
TL;DR: This work reports and compares the performance of different learning algorithms on data from cortical recordings, where the task is to predict the orientation of visual stimuli from the activity of a population of simultaneously recorded neurons.
Abstract: We report and compare the performance of different learning algorithms based on data from cortical recordings. The task is to predict the orientation of visual stimuli from the activity of a population of simultaneously recorded neurons. We compare several ways of improving the coding of the input (i.e., the spike data) as well as of the output (i.e., the orientation), and report the results obtained using different kernel algorithms.
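As a hedged sketch of this general setup (simulated data, not the cortical recordings used in the paper), the example below decodes orientation from Poisson spike counts with kernel ridge regression, encoding the circular output as (cos 2θ, sin 2θ) in the spirit of the output-coding comparisons mentioned above; the tuning curves, kernel and hyperparameters are arbitrary.

```python
# Simulated orientation decoding from population spike counts (not the paper's data).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n_trials, n_neurons = 400, 30
theta = rng.uniform(0, np.pi, n_trials)                   # stimulus orientation in [0, pi)
pref = np.linspace(0, np.pi, n_neurons, endpoint=False)   # preferred orientations
rates = 5 + 20 * np.cos(2 * (theta[:, None] - pref[None, :])) ** 2
counts = rng.poisson(rates)

# Encode the circular target as (cos 2theta, sin 2theta) so that 0 and pi coincide.
Y = np.column_stack([np.cos(2 * theta), np.sin(2 * theta)])

model = KernelRidge(kernel="rbf", gamma=1e-3, alpha=1.0).fit(counts[:300], Y[:300])
pred = model.predict(counts[300:])
theta_hat = 0.5 * np.arctan2(pred[:, 1], pred[:, 0]) % np.pi

# Circular absolute error between predicted and true orientation.
err = np.abs(((theta_hat - theta[300:]) + np.pi / 2) % np.pi - np.pi / 2)
print("median absolute error (rad):", np.median(err))
```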

28 citations

Dissertation (DOI)
J Eichhorn
05 Mar 2007
TL;DR: The thesis presents the development and adaptation of kernel functions for decoding of neural activity and for image categorisation, together with an application of support vector machines, one prominent example of kernel algorithms, to the task of object categorisation.
Abstract: In this thesis we are concerned with the application of supervised learning methods to two problems of rather different nature – one originating from computational neuroscience, the other one from computer vision. The kernel algorithms that will be used allow classification of complex objects that need not be elements of a Euclidean vector space. For example, in the applications presented below these objects are time series of neural activity and images described by a collection of local descriptors. The flexibility of kernel algorithms is achieved through the use of a kernel function that specifies similarity of the objects as a numerical value. To make an application successful, one has to find appropriate kernel functions that adequately describe similarity and at the same time fulfil certain mathematical requirements. The focus of our work is the development and adaptation of kernel functions for decoding of neural activity and for image categorisation. Each topic is treated separately in one of the two parts of this thesis.

In part I the application of kernel algorithms for decoding of neural activity is explored. Sequences of action potentials that were measured in response to a visual stimulus are used to reconstruct characteristic attributes of the stimulus. Most of the current methods in neuroscience consider only the number of action potentials in a certain time interval and neglect the temporal distribution of these events. With the kernel functions for neural activity that are proposed in this thesis an extended analysis of spike trains is possible. The similarity of two sequences is determined not only by the frequency of spikes but also by potential temporal patterns. An evaluation of the kernels is performed on artificially generated data as well as on real recordings from a neurophysiological experiment. Experiments on this second type of data allow some conclusions about the actual importance of temporal patterns for the encoding of stimulus attributes in the organism under consideration. In a second set of experiments the simultaneously recorded activity of multiple neurons is taken as the basis of reconstruction. Here the results show that the tested kernel algorithms can perform reconstruction in most cases with a significantly higher precision than current methods of computational neuroscience.

The second part of this thesis presents an application of support vector machines as one prominent example of kernel algorithms to the task of object categorisation. Computer vision research has found that it …
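The following is a minimal sketch of one classical spike-train kernel of the kind discussed above: the inner product of Gaussian-smoothed spike trains. It is meant only to illustrate how such a kernel can be sensitive to temporal patterns rather than to spike counts alone; it is not claimed to be one of the specific kernels developed in the thesis, and the spike times below are invented.

```python
import numpy as np

def spike_train_kernel(s, t, tau=0.01):
    """Inner product of two spike trains after Gaussian smoothing with width tau (s).

    K(s, t) = sum_i sum_j exp(-(s_i - t_j)^2 / (4 tau^2)); as tau grows it approaches
    a pure spike-count kernel, and as tau shrinks it becomes sensitive to spike timing.
    """
    s, t = np.asarray(s, dtype=float), np.asarray(t, dtype=float)
    if s.size == 0 or t.size == 0:
        return 0.0
    d = s[:, None] - t[None, :]
    return float(np.exp(-d ** 2 / (4 * tau ** 2)).sum())

# Three trains with identical spike counts but different temporal patterns.
a = [0.010, 0.020, 0.030, 0.040]
b = [0.010, 0.021, 0.031, 0.041]   # timing close to a
c = [0.005, 0.050, 0.080, 0.095]   # same count, different timing
print(spike_train_kernel(a, b), spike_train_kernel(a, c))   # first value is larger
```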

5 citations


Cited by
Journal Article (DOI)
TL;DR: The state of the art in evaluated methods for both classification and detection is reviewed, along with analyses of whether the methods are statistically different, what they are learning from the images, and what the methods find easy or confusing.
Abstract: The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.
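To illustrate the ranking-based evaluation underlying the challenge, here is a simplified average-precision computation on a toy ranked list; the official VOC protocol (interpolated AP, overlap criteria for detection boxes) differs in detail, and the scores below are made up.

```python
import numpy as np

def average_precision(scores, labels):
    """Toy AP for a ranked list: mean precision at each correctly retrieved item."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    precision = np.cumsum(labels) / np.arange(1, len(labels) + 1)
    return float(precision[labels == 1].mean())

# One class, six test images ranked by classifier confidence (invented numbers).
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
labels = [1, 0, 1, 1, 0, 0]                 # 1 = image actually contains the class
print(average_precision(scores, labels))    # ~0.81
```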

15,935 citations

Journal Article (DOI)
06 Jun 1986, JAMA
TL;DR: The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.
Abstract: I have developed "tennis elbow" from lugging this book around the past four weeks, but it is worth the pain, the effort, and the aspirin. It is also worth the (relatively speaking) bargain price. Including appendixes, this book contains 894 pages of text. The entire panorama of the neural sciences is surveyed and examined, and it is comprehensive in its scope, from genomes to social behaviors. The editors explicitly state that the book is designed as "an introductory text for students of biology, behavior, and medicine," but it is hard to imagine any audience, interested in any fragment of neuroscience at any level of sophistication, that would not enjoy this book. The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.

7,563 citations

Journal Article (DOI)
TL;DR: In this article, a large collection of images with ground truth labels is built to be used for object detection and recognition research; such data is useful for supervised learning and quantitative evaluation.
Abstract: We seek to build a large collection of images with ground truth labels to be used for object detection and recognition research. Such data is useful for supervised learning and quantitative evaluation. To achieve this, we developed a web-based tool that allows easy image annotation and instant sharing of such annotations. Using this annotation tool, we have collected a large dataset that spans many object categories, often containing multiple instances over a wide variety of images. We quantify the contents of the dataset and compare against existing state of the art datasets used for object recognition and detection. Also, we show how to extend the dataset to automatically enhance object labels with WordNet, discover object parts, recover a depth ordering of objects in a scene, and increase the number of labels using minimal user supervision and images from the web.
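As a small, hypothetical illustration of the WordNet enrichment mentioned above (not the authors' actual pipeline), the sketch below looks up the hypernym chain of an object label with NLTK's WordNet interface; sense ambiguity is naively resolved by taking the first noun sense, which the real system would handle more carefully.

```python
# Requires nltk and its WordNet corpus: pip install nltk; nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def hypernym_chain(label):
    """Return the hypernym path for the first noun sense of an object label, if any."""
    synsets = wn.synsets(label, pos=wn.NOUN)
    if not synsets:
        return []
    path = synsets[0].hypernym_paths()[0]   # root-to-synset chain of broader concepts
    return [s.name() for s in path]

print(hypernym_chain("car"))   # e.g. ends in ...'motor_vehicle.n.01', 'car.n.01'
```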

3,501 citations

01 Jan 2006

3,012 citations

Proceedings Article (DOI)
25 Oct 2008
TL;DR: This work explores the use of Amazon's Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web, and proposes a technique for bias correction that significantly improves annotation quality on two tasks.
Abstract: Human linguistic annotation is crucial for many natural language processing tasks but can be expensive and time-consuming. We explore the use of Amazon's Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web. We investigate five tasks: affect recognition, word similarity, recognizing textual entailment, event temporal ordering, and word sense disambiguation. For all five, we show high agreement between Mechanical Turk non-expert annotations and existing gold standard labels provided by expert labelers. For the task of affect recognition, we also show that using non-expert labels for training machine learning algorithms can be as effective as using gold standard annotations from experts. We propose a technique for bias correction that significantly improves annotation quality on two tasks. We conclude that many large labeling tasks can be effectively designed and carried out in this method at a fraction of the usual expense.
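To make the idea of aggregating noisy non-expert labels concrete, here is a toy sketch that weights each annotator's vote by an accuracy estimate obtained from a small gold-labelled subset. This is a generic log-odds weighting, not the paper's specific bias-correction model, and all item and annotator names are invented.

```python
# Toy aggregation of non-expert votes, weighted by accuracy estimated on gold items.
import numpy as np
from collections import defaultdict

votes = {                                   # votes[item] = [(annotator_id, label), ...]
    "item1": [("a1", 1), ("a2", 1), ("a3", 0)],
    "item2": [("a1", 0), ("a2", 1), ("a3", 0)],
}
gold = {"item1": 1}                         # small expert-labelled subset

# Per-annotator accuracy on gold items, with add-one smoothing to avoid 0 or 1.
correct, total = defaultdict(lambda: 1), defaultdict(lambda: 2)
for item, true_label in gold.items():
    for annotator, label in votes[item]:
        total[annotator] += 1
        correct[annotator] += int(label == true_label)

def weighted_label(item):
    """Log-odds weighted vote: reliable annotators count more than noisy ones."""
    score = 0.0
    for annotator, label in votes[item]:
        acc = correct[annotator] / total[annotator]
        weight = np.log(acc / (1 - acc))
        score += weight if label == 1 else -weight
    return int(score > 0)

print({item: weighted_label(item) for item in votes})
```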

2,237 citations