Institution

French Institute for Research in Computer Science and Automation

Government organization, Le Chesnay, France

About: The French Institute for Research in Computer Science and Automation is a government organization based in Le Chesnay, France. It is known for research contributions in the topics of Population and Cluster analysis. The organization has 13012 authors who have published 38653 publications receiving 1318995 citations. The organization is also known as INRIA and as the National Institute for Research in Information Science and Automatic Control.


Papers
Proceedings Article
20 Jun 2011
TL;DR: This work introduces a novel descriptor based on motion boundary histograms, which is robust to camera motion and consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos.
Abstract: Feature trajectories have been shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or by matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We also investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e., KTH, YouTube, Hollywood2 and UCF sports.

2,383 citations
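As a rough illustration of the dense-trajectory idea described above (a minimal sketch, not the authors' released code), the snippet below samples points on a regular grid and propagates them frame to frame with a dense optical flow field. OpenCV's Farneback flow stands in for the state-of-the-art flow algorithm the paper relies on, and the grid step and trajectory length are illustrative values.

```python
# Minimal sketch of dense-trajectory tracking: sample points on a grid and
# displace them frame-to-frame using a dense optical flow field.
# OpenCV's Farneback flow is used here as a stand-in flow algorithm.
import cv2
import numpy as np

def track_dense_points(frames, step=5, max_len=15):
    """frames: list of grayscale uint8 images. Returns a list of trajectories,
    each a list of (x, y) positions."""
    h, w = frames[0].shape
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    points = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    trajectories = [[tuple(p)] for p in points]
    finished = []

    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        kept = []
        for traj in trajectories:
            x, y = traj[-1]
            xi, yi = int(round(x)), int(round(y))
            if not (0 <= xi < w and 0 <= yi < h):
                finished.append(traj)        # point left the frame
                continue
            dx, dy = flow[yi, xi]            # flow displacement at the point
            traj.append((x + dx, y + dy))
            if len(traj) >= max_len:         # cap trajectory length
                finished.append(traj)
            else:
                kept.append(traj)
        trajectories = kept
    return finished + trajectories
```

In the paper, trajectory shape is combined with appearance and motion descriptors (HOG, HOF and the motion boundary histograms); the sketch only covers the point-tracking step.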

Proceedings Article
14 Jun 2009
TL;DR: A new online optimization algorithm for dictionary learning is proposed, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples, and leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.
Abstract: Sparse coding---that is, modelling data vectors as sparse linear combinations of basis elements---is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on learning the basis set, also called dictionary, to adapt it to specific data, an approach that has recently proven to be very effective for signal reconstruction and classification in the audio and image processing domains. This paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples. A proof of convergence is presented, along with experiments with natural images demonstrating that it leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.

2,313 citations
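For a concrete, hedged example of online dictionary learning in this spirit, scikit-learn's MiniBatchDictionaryLearning provides a mini-batch, stochastic-approximation style solver. The patch extraction, dictionary size, and sparsity parameter below are assumptions chosen for illustration, not the paper's experimental settings.

```python
# Hedged illustration: online dictionary learning on image patches using
# scikit-learn's MiniBatchDictionaryLearning (a mini-batch solver of this kind).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
image = rng.random((256, 256))                       # placeholder for a natural image
patches = extract_patches_2d(image, (8, 8), max_patches=5000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)                   # remove per-patch DC component

dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                   batch_size=200, random_state=0)
codes = dico.fit(X).transform(X)                     # sparse codes for each patch
D = dico.components_                                 # learned dictionary (basis set)
print(D.shape, codes.shape)                          # (100, 64), (5000, 100)
```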

Journal Article
TL;DR: The main idea is to consider the object boundaries in one image as semi-permeable membranes and to let the other image, considered as a deformable grid model, diffuse through these interfaces, by the action of effectors situated within the membranes.

2,277 citations
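A short sketch of the per-voxel update commonly associated with this diffusion ("demons") view of registration, in our own notation (f is the static image, m the deforming image, and ∇f the gradient of the static image); this is an illustrative formula, not a transcription of the paper:

```latex
% Illustrative demons-style displacement update (notation is ours):
\[
  \vec{v} \;=\; \frac{(m - f)\,\nabla f}{\lVert \nabla f \rVert^{2} + (m - f)^{2}}
\]
```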

Posted Content
TL;DR: A new online optimization algorithm is proposed, based on stochastic approximations, which scales up gracefully to large data sets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems.
Abstract: Sparse coding--that is, modelling data vectors as sparse linear combinations of basis elements--is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set, adapting it to specific data. Variations of this problem include dictionary learning in signal processing, non-negative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems. A proof of convergence is presented, along with experiments with natural images and genomic data demonstrating that it leads to state-of-the-art performance in terms of speed and optimization for both small and large datasets.

2,256 citations
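To make the underlying optimization concrete, a common way to write the dictionary learning objective this line of work addresses is sketched below; D is the dictionary with unit-norm columns (constraint set C), the α_i are sparse codes, and λ is the sparsity trade-off. The notation is ours, for illustration.

```latex
% Sketch of the dictionary learning objective: empirical cost over samples
% x_1,...,x_n with an l1 penalty enforcing sparse codes (notation is ours).
\[
  \min_{D \in \mathcal{C}} \; \frac{1}{n} \sum_{i=1}^{n}
  \min_{\alpha_i} \left( \tfrac{1}{2}\,\lVert x_i - D\alpha_i \rVert_2^2
  + \lambda \lVert \alpha_i \rVert_1 \right)
\]
```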


Authors

Showing all 13078 results

Name                       H-index   Papers   Citations
Cordelia Schmid            135       464      103,925
Bernt Schiele              130       568      70,032
Francis Bach               110       484      54,944
Jian Sun                   109       360      239,387
Pascal Fua                 102       614      49,751
Nicholas Ayache            97        624      43,140
Olivier Bernard            96        790      37,878
Laurent D. Cohen           94        417      42,709
Peter Sturm                93        548      39,119
Guy Orban                  93        455      26,178
Sebastien Ourselin         91        1,116    34,683
François Fleuret           91        936      42,585
Katrin Amunts              89        438      35,069
Tamer Basar                88        977      34,903
Nassir Navab               88        1,375    41,537
Network Information
Related Institutions (5)
Microsoft: 86.9K papers, 4.1M citations, 94% related
Google: 39.8K papers, 2.1M citations, 93% related
Carnegie Mellon University: 104.3K papers, 5.9M citations, 93% related
Eindhoven University of Technology: 52.9K papers, 1.5M citations, 90% related
Polytechnic University of Catalonia: 45.3K papers, 949.3K citations, 90% related

Performance Metrics
No. of papers from the institution in previous years
Year    Papers
2023    28
2022    149
2021    1,374
2020    1,499
2019    1,637
2018    1,597