Papers
13 Jun 2010 · TL;DR: The Extended Cohn-Kanade (CK+) database is presented, with baseline results from Active Appearance Models (AAMs) and a linear support vector machine (SVM) classifier under leave-one-subject-out cross-validation for both AU and emotion detection on the posed data.
Abstract: In 2000, the Cohn-Kanade (CK) database was released for the purpose of promoting research into automatically detecting individual facial expressions. Since then, the CK database has become one of the most widely used test-beds for algorithm development and evaluation. During this period, three limitations have become apparent: 1) while AU codes are well validated, emotion labels are not, as they refer to what was requested rather than what was actually performed; 2) a common performance metric against which to evaluate new algorithms is lacking; and 3) standard protocols for common databases have not emerged. As a consequence, the CK database has been used for both AU and emotion detection (even though labels for the latter have not been validated), comparison with benchmark algorithms is missing, and the use of random subsets of the original database makes meta-analyses difficult. To address these and other concerns, we present the Extended Cohn-Kanade (CK+) database. The number of sequences is increased by 22% and the number of subjects by 27%. The target expression for each sequence is fully FACS coded, and emotion labels have been revised and validated. In addition, non-posed sequences for several types of smiles and their associated metadata have been added. We present baseline results using Active Appearance Models (AAMs) and a linear support vector machine (SVM) classifier with leave-one-subject-out cross-validation for both AU and emotion detection on the posed data. The emotion and AU labels, along with the extended image data and tracked landmarks, will be made available in July 2010.
3,439 citations
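To make the evaluation protocol concrete, here is a minimal sketch of a leave-one-subject-out linear-SVM baseline of the kind the abstract describes, in Python with scikit-learn. The AAM feature extraction is not reproduced; `features`, `labels`, and `subjects` are hypothetical placeholders for AAM-derived feature vectors, emotion labels, and subject IDs.

```python
# Hedged sketch: leave-one-subject-out evaluation of a linear SVM, as in the
# CK+ baseline protocol. All data below is synthetic placeholder material;
# real features would come from AAM shape/appearance fitting (not shown).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
features = rng.normal(size=(120, 136))    # placeholder AAM-derived features
labels = rng.integers(0, 7, size=120)     # placeholder labels for 7 emotions
subjects = rng.integers(0, 20, size=120)  # placeholder subject ID per sequence

scores = []
for train, test in LeaveOneGroupOut().split(features, labels, groups=subjects):
    clf = LinearSVC(C=1.0, max_iter=10000).fit(features[train], labels[train])
    scores.append(clf.score(features[test], labels[test]))
print(f"mean leave-one-subject-out accuracy: {np.mean(scores):.3f}")
```

Splitting the folds by subject rather than by sequence is the point of the protocol: no subject appears in both training and test data, so the classifier cannot exploit identity cues.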
16 Jun 2012 · TL;DR: A conceptually clear and intuitive algorithm for contrast-based saliency estimation that outperforms all state-of-the-art approaches and can be formulated in a unified way using high-dimensional Gaussian filters.
Abstract: Saliency estimation has become a valuable tool in image processing. Yet, existing approaches exhibit considerable variation in methodology, and it is often difficult to attribute improvements in result quality to specific algorithm properties. In this paper we reconsider some of the design choices of previous methods and propose a conceptually clear and intuitive algorithm for contrast-based saliency estimation. Our algorithm consists of four basic steps. First, our method decomposes a given image into compact, perceptually homogeneous elements that abstract unnecessary detail. Based on this abstraction we compute two measures of contrast that rate the uniqueness and the spatial distribution of these elements. From the element contrast we then derive a saliency measure that produces a pixel-accurate saliency map which uniformly covers the objects of interest and consistently separates fore- and background. We show that the complete contrast and saliency estimation can be formulated in a unified way using high-dimensional Gaussian filters. This contributes to the conceptual simplicity of our method and lends itself to a highly efficient implementation with linear complexity. In a detailed experimental evaluation we analyze the contribution of each individual feature and show that our method outperforms all state-of-the-art approaches.
1,711 citations
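The two contrast measures in the abstract can be written down compactly. Below is a hedged O(N²) sketch: uniqueness is the color contrast of each element to all others weighted by spatial proximity, distribution is the spatial variance of each element's color, and the two are fused into one saliency value per element. The paper evaluates these sums in linear time with high-dimensional Gaussian filters; the direct summation and parameter values here are illustrative assumptions, and the superpixel abstraction step that would produce `colors` and `positions` is not shown.

```python
# Naive O(N^2) sketch of the element-contrast saliency measures described
# above; the original formulates the same sums as Gaussian filtering.
import numpy as np

def element_saliency(colors, positions, sigma_p=0.25, sigma_c=20.0, k=6.0):
    # Pairwise squared distances in color space and (normalized) image space.
    dc2 = ((colors[:, None, :] - colors[None, :, :]) ** 2).sum(-1)
    dp2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)

    # Uniqueness: color contrast to all elements, weighted by spatial proximity.
    w_p = np.exp(-dp2 / (2 * sigma_p**2))
    w_p /= w_p.sum(1, keepdims=True)
    uniqueness = (w_p * dc2).sum(1)

    # Distribution: how widely each element's color is scattered over the
    # image; a low spatial variance indicates a compact (likely salient) object.
    w_c = np.exp(-dc2 / (2 * sigma_c**2))
    w_c /= w_c.sum(1, keepdims=True)
    mu = w_c @ positions  # color-weighted mean position per element
    distribution = (w_c * ((positions[None, :, :] - mu[:, None, :]) ** 2).sum(-1)).sum(1)

    # Fuse: elements that are unique AND spatially compact are salient.
    return uniqueness * np.exp(-k * distribution)

# Example: 200 random "elements" (in practice: superpixels from e.g. SLIC).
rng = np.random.default_rng(0)
s = element_saliency(rng.uniform(0, 100, (200, 3)), rng.uniform(0, 1, (200, 2)))
```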
27 Jun 2016 · TL;DR: This work presents a new benchmark dataset and evaluation methodology for the area of video object segmentation, named DAVIS (Densely Annotated VIdeo Segmentation), and provides a comprehensive analysis of several state-of-the-art segmentation approaches using three complementary metrics.
Abstract: Over the years, datasets and benchmarks have proven their fundamental importance in computer vision research, enabling targeted progress and objective comparisons in many fields. At the same time, legacy datasets may impede the evolution of a field due to saturated algorithm performance and the lack of contemporary, high-quality data. In this work we present a new benchmark dataset and evaluation methodology for the area of video object segmentation. The dataset, named DAVIS (Densely Annotated VIdeo Segmentation), consists of fifty high-quality, Full HD video sequences, spanning multiple occurrences of common video object segmentation challenges such as occlusions, motion blur and appearance changes. Each video is accompanied by densely annotated, pixel-accurate and per-frame ground truth segmentations. In addition, we provide a comprehensive analysis of several state-of-the-art segmentation approaches using three complementary metrics that measure the spatial extent of the segmentation, the accuracy of the silhouette contours and the temporal coherence. The results uncover strengths and weaknesses of current approaches, opening up promising directions for future work.
1,656 citations
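Two of the three complementary metrics are straightforward to sketch: region similarity as intersection-over-union of the binary masks, and contour accuracy as an F-measure between mask boundaries. The version below is a simplified sketch under stated assumptions: it matches boundary pixels exactly, whereas the benchmark allows a small tolerance via morphological dilation, and the temporal-coherence metric is omitted entirely.

```python
# Simplified sketches of DAVIS-style per-frame metrics: region similarity J
# (Jaccard index) and contour accuracy F. Masks are boolean HxW arrays.
import numpy as np

def jaccard(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else inter / union

def boundary(mask):
    # Mask pixels with at least one background 4-neighbor.
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

def contour_f(pred, gt):
    bp, bg = boundary(pred), boundary(gt)
    if bp.sum() == 0 or bg.sum() == 0:
        return float(bp.sum() == bg.sum())
    precision = (bp & bg).sum() / bp.sum()  # exact match; DAVIS dilates first
    recall = (bp & bg).sum() / bg.sum()
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

# Example on toy masks: a predicted square vs. a shifted ground-truth square.
pred = np.zeros((64, 64), bool); pred[10:30, 10:30] = True
gt = np.zeros((64, 64), bool);   gt[12:32, 12:32] = True
print(f"J = {jaccard(pred, gt):.3f}, F = {contour_f(pred, gt):.3f}")
```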
03 Oct 2010 · TL;DR: The proposed technology is based on the electrovibration principle, uses no moving parts, and provides a wide range of tactile feedback sensations to fingers moving across a touch surface, enabling the design of a wide variety of interfaces that allow the user to feel virtual elements through touch.
Abstract: We present a new technology for enhancing touch interfaces with tactile feedback. The proposed technology is based on the electrovibration principle, does not use any moving parts and provides a wide range of tactile feedback sensations to fingers moving across a touch surface. When combined with an interactive display and touch input, it enables the design of a wide variety of interfaces that allow the user to feel virtual elements through touch. We present the principles of operation and an implementation of the technology. We also report the results of three controlled psychophysical experiments and a subjective user evaluation that describe and characterize users' perception of this technology. We conclude with an exploration of the design space of tactile touch screens using two comparable setups, one based on electrovibration and another on mechanical vibrotactile actuation.
740 citations
15 Feb 2018 · TL;DR: This work analyzes four gradient-based attribution methods, formally proves conditions of equivalence and approximation between them, and constructs a unified framework which enables a direct comparison as well as an easier implementation.
Abstract: Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gained increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is more, no exhaustive empirical comparison has been performed in the past. In this work, we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them. By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation. Finally, we propose a novel evaluation metric, called Sensitivity-n, and test the gradient-based attribution methods alongside a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures.
554 citations
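As a toy illustration of the proposed metric (not the paper's implementation), the sketch below computes gradient*input attributions for a linear scoring function, where the gradient is simply the weight vector, and then estimates Sensitivity-n: the correlation between the summed attributions of a random feature subset and the drop in the output when that subset is removed. For a linear model gradient*input satisfies this property exactly, so the correlation comes out near 1; the interesting cases in the paper are nonlinear DNNs.

```python
# Toy Sensitivity-n estimate for gradient*input on a linear model f(x) = w.x,
# where the gradient w.r.t. the input is just w (no autodiff needed).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)     # weights of a toy linear scoring function
x = rng.normal(size=16)     # a single input to explain

attribution = w * x         # gradient*input: (df/dx_i) * x_i

def sensitivity_n(attribution, x, f, n, trials=200):
    attr_sums, output_drops = [], []
    for _ in range(trials):
        subset = rng.choice(x.size, size=n, replace=False)
        masked = x.copy()
        masked[subset] = 0.0  # remove the subset's features (zero baseline)
        attr_sums.append(attribution[subset].sum())
        output_drops.append(f(x) - f(masked))
    return np.corrcoef(attr_sums, output_drops)[0, 1]

f = lambda v: w @ v
print(f"Sensitivity-4: {sensitivity_n(attribution, x, f, n=4):.3f}")  # ~1.0
```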
Authors
Showing the top 15 of 474 authors, ranked by H-index.
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Markus Gross | 91 | 588 | 32881 |
| Peter W. Carr | 77 | 517 | 22507 |
| Jessica K. Hodgins | 70 | 265 | 18655 |
| Wojciech Matusik | 68 | 323 | 17101 |
| Jodi Forlizzi | 67 | 237 | 17292 |
| Scott E. Hudson | 63 | 261 | 13265 |
| Ivan Poupyrev | 60 | 179 | 15443 |
| Iain Matthews | 57 | 140 | 20326 |
| Irfan Essa | 56 | 253 | 16635 |
| Bhiksha Raj | 51 | 359 | 13064 |
| Leonid Sigal | 50 | 220 | 11028 |
| Marc Alexa | 49 | 180 | 13791 |
| Ariel Shamir | 48 | 146 | 10116 |
| Gunter Niemeyer | 47 | 153 | 17135 |
| Paul Beardsley | 46 | 137 | 7091 |