Institution
French Institute for Research in Computer Science and Automation
Government • Le Chesnay, France
About: The French Institute for Research in Computer Science and Automation is a government research organization based in Le Chesnay, France. It is known for its research contributions on the topics: Population & Cluster analysis. The organization has 13012 authors who have published 38653 publications, receiving 1318995 citations. It is also known as INRIA (Institut national de recherche en informatique et en automatique).
Papers published on a yearly basis
Papers
20 Jun 2011
TL;DR: This work introduces a novel descriptor based on motion boundary histograms, which is robust to camera motion and consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos.
Abstract: Feature trajectories have been shown to be effective for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We also investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports.
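The core tracking step the abstract describes (sample points densely, then advance each point by the local optical flow) can be sketched in a few lines of NumPy. This is a simplified illustration under our own assumptions, not the authors' code: it takes precomputed flow fields as input and omits the median filtering of the flow, trajectory pruning, and descriptor computation the paper performs.

```python
import numpy as np

def track_points(points, flows):
    """Track grid-sampled points through a sequence of dense optical
    flow fields. `points` is an (N, 2) array of (x, y) positions;
    `flows` is a list of (H, W, 2) arrays where flow[y, x] = (dx, dy).
    Returns an (N, T+1, 2) array of trajectories for T flow fields."""
    pts = points.astype(float)
    trajs = [pts.copy()]
    for flow in flows:
        h, w = flow.shape[:2]
        # Look up the flow at each point's nearest pixel (the paper
        # instead median-filters the flow in a small neighbourhood).
        xi = np.clip(np.round(pts[:, 0]).astype(int), 0, w - 1)
        yi = np.clip(np.round(pts[:, 1]).astype(int), 0, h - 1)
        pts = pts + flow[yi, xi]        # displace each point by the local flow
        trajs.append(pts.copy())
    return np.stack(trajs, axis=1)

# Toy example: a uniform rightward flow of 1 px/frame over 5 frames.
grid = np.stack(np.meshgrid(np.arange(0, 32, 8),
                            np.arange(0, 32, 8)), -1).reshape(-1, 2)
flows = [np.tile([1.0, 0.0], (32, 32, 1)) for _ in range(5)]
trajs = track_points(grid, flows)   # each point drifts 5 px to the right
```

In the paper, trajectories are also limited in length and re-sampled to avoid drift; this sketch only shows the displacement bookkeeping.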
2,383 citations
14 Jun 2009
TL;DR: A new online optimization algorithm for dictionary learning is proposed, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples, and leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.
Abstract: Sparse coding---that is, modelling data vectors as sparse linear combinations of basis elements---is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on learning the basis set, also called dictionary, to adapt it to specific data, an approach that has recently proven to be very effective for signal reconstruction and classification in the audio and image processing domains. This paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples. A proof of convergence is presented, along with experiments with natural images demonstrating that it leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.
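The online scheme the TL;DR refers to alternates sparse coding of each sample with a rank-one update of sufficient statistics, followed by block coordinate descent on the dictionary columns. The sketch below is a minimal reimplementation under our own simplifications, not the authors' released code: sparse codes come from plain ISTA rather than the lasso/LARS solver used in the paper, and mini-batching is omitted.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=50):
    """Sparse-code x on dictionary D with ISTA (a simple stand-in for
    the lasso solver used in the paper)."""
    alpha = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2 + 1e-12   # Lipschitz constant of the gradient
    for _ in range(n_iter):
        g = D.T @ (D @ alpha - x)
        z = alpha - g / L
        alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return alpha

def online_dictionary_learning(X, n_atoms, lam=0.1, seed=0):
    """One pass over the data: accumulate A = sum(a a^T), B = sum(x a^T),
    then refresh each dictionary column by block coordinate descent,
    keeping column norms at most 1."""
    rng = np.random.default_rng(seed)
    m = X.shape[1]
    D = rng.standard_normal((m, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((n_atoms, n_atoms))
    B = np.zeros((m, n_atoms))
    for x in X:
        a = ista(D, x, lam)
        A += np.outer(a, a)
        B += np.outer(x, a)
        for j in range(n_atoms):            # column-wise dictionary update
            if A[j, j] > 1e-10:
                u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
                D[:, j] = u / max(np.linalg.norm(u), 1.0)
    return D

X = np.random.default_rng(1).standard_normal((50, 8))
D = online_dictionary_learning(X, n_atoms=12)
```

Because only the small statistics A and B grow with the number of samples, memory stays constant as the dataset scales, which is what makes the algorithm suitable for millions of training samples.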
2,313 citations
TL;DR: The main idea is to consider the object boundaries in one image as semi-permeable membranes and to let the other image, considered as a deformable grid model, diffuse through these interfaces, by the action of effectors situated within the membranes.
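The diffusion idea in this TL;DR (intensity mismatch acts as a local force that lets one image flow through the other's boundaries, the basis of the "demons" registration family) can be illustrated with a 1-D sketch. All names here and the box-filter regularizer are our simplifications; the actual method operates on 3-D medical images with Gaussian smoothing of the displacement field.

```python
import numpy as np

def demons_1d(fixed, moving, n_iter=200, smooth=3):
    """Minimal 1-D demons-style registration: at each step the intensity
    mismatch, scaled by the fixed image's gradient, nudges a displacement
    field, which is then smoothed (the diffusion/regularization step)."""
    x = np.arange(len(fixed), dtype=float)
    disp = np.zeros_like(x)
    grad = np.gradient(fixed)
    kernel = np.ones(smooth) / smooth        # box filter as a cheap smoother
    for _ in range(n_iter):
        warped = np.interp(x + disp, x, moving)   # resample the moving image
        residual = fixed - warped
        # Demons-style force: bounded update that reduces the mismatch.
        force = residual * grad / (grad ** 2 + residual ** 2 + 1e-9)
        disp = np.convolve(disp + force, kernel, mode="same")
    return disp

# Toy example: recover a 3-pixel shift between two Gaussian bumps.
x = np.arange(64, dtype=float)
fixed = np.exp(-((x - 30.0) / 4.0) ** 2)
moving = np.exp(-((x - 33.0) / 4.0) ** 2)
disp = demons_1d(fixed, moving)
warped = np.interp(x + disp, x, moving)   # much closer to `fixed` than `moving` was
```

The denominator bounds each update, so the "membranes" (high-gradient boundaries) dominate where the images disagree, while flat regions contribute almost nothing.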
2,277 citations
TL;DR: A new online optimization algorithm is proposed, based on stochastic approximations, which scales up gracefully to large data sets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems.
Abstract: Sparse coding--that is, modelling data vectors as sparse linear combinations of basis elements--is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set, adapting it to specific data. Variations of this problem include dictionary learning in signal processing, non-negative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems. A proof of convergence is presented, along with experiments with natural images and genomic data demonstrating that it leads to state-of-the-art performance in terms of speed and optimization for both small and large datasets.
2,256 citations
Columbia University, University of Oxford, New York University, Nathan Kline Institute for Psychiatric Research, University College London, University of Pennsylvania, University of California, Los Angeles, University of Iowa, McGill University, French Institute of Health and Medical Research, French Institute for Research in Computer Science and Automation, John Radcliffe Hospital, Imperial College London, Mauna Kea Technologies
TL;DR: This study is the largest evaluation of nonlinear deformation algorithms applied to brain image registration ever conducted and suggests that the findings are generalizable to new subject populations that are labeled or evaluated using different labeling protocols.
2,214 citations
Authors
Showing all 13078 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Cordelia Schmid | 135 | 464 | 103925 |
| Bernt Schiele | 130 | 568 | 70032 |
| Francis Bach | 110 | 484 | 54944 |
| Jian Sun | 109 | 360 | 239387 |
| Pascal Fua | 102 | 614 | 49751 |
| Nicholas Ayache | 97 | 624 | 43140 |
| Olivier Bernard | 96 | 790 | 37878 |
| Laurent D. Cohen | 94 | 417 | 42709 |
| Peter Sturm | 93 | 548 | 39119 |
| Guy Orban | 93 | 455 | 26178 |
| Sebastien Ourselin | 91 | 1116 | 34683 |
| François Fleuret | 91 | 936 | 42585 |
| Katrin Amunts | 89 | 438 | 35069 |
| Tamer Basar | 88 | 977 | 34903 |
| Nassir Navab | 88 | 1375 | 41537 |