
Diane Larlus

Researcher at Xerox

Publications - 82
Citations - 6174

Diane Larlus is an academic researcher from Xerox. She has contributed to research in topics such as Computer science and Object (computer science). She has an h-index of 27 and has co-authored 69 publications receiving 4722 citations. Previous affiliations of Diane Larlus include Technische Universität Darmstadt and Naver Corporation.

Papers
Posted Content

Re-ID done right: towards good practices for person re-identification.

TL;DR: A qualitative analysis of the trained representation indicates that, while compact, it is able to capture information from localized and discriminative regions, in a manner akin to an implicit attention mechanism.
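A minimal sketch of how a compact global descriptor pooled from a convolutional feature map can behave like implicit attention: max-pooling lets the most strongly activated, discriminative regions dominate the descriptor. The ResNet-50 backbone and max-pooling choice here are assumptions for illustration, not necessarily the paper's exact setup.

```python
# Hedged sketch: compact person descriptor from a CNN feature map.
# Max-pooling over spatial locations means the descriptor is dominated by
# the most strongly activated (discriminative) regions -- an "implicit
# attention" effect.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

backbone = resnet50(weights=None)
features = torch.nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
features.eval()

def embed(images: torch.Tensor) -> torch.Tensor:
    """images: (B, 3, H, W) -> L2-normalized descriptors (B, 2048)."""
    with torch.no_grad():
        fmap = features(images)                       # (B, 2048, h, w)
    desc = F.adaptive_max_pool2d(fmap, 1).flatten(1)  # keep the strongest responses
    return F.normalize(desc, dim=1)

# At test time, gallery identities are ranked by descriptor similarity.
query, gallery = torch.randn(1, 3, 256, 128), torch.randn(10, 3, 256, 128)
scores = embed(query) @ embed(gallery).T              # (1, 10) similarity scores
```
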
Posted Content

End-to-end Learning of Deep Visual Representations for Image Retrieval

TL;DR: This article uses a large-scale but noisy landmark dataset and develops an automatic cleaning method that produces a suitable training set for deep retrieval. It builds on the recent R-MAC descriptor, which can be interpreted as a deep and differentiable architecture, and presents improvements to enhance it.
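A minimal sketch of R-MAC-style pooling (regional maximum activations of convolutions): max-pool the feature map over several regions, normalize each regional descriptor, and sum them. Because every step is differentiable, the pipeline can be trained end to end for retrieval. The fixed-grid region layout below is a simplifying assumption, not the original multi-scale scheme.

```python
# Hedged sketch of R-MAC-style regional pooling over a CNN feature map.
import torch
import torch.nn.functional as F

def rmac(fmap: torch.Tensor, grid: int = 2) -> torch.Tensor:
    """fmap: (B, C, H, W) -> (B, C) aggregated regional descriptor."""
    B, C, H, W = fmap.shape
    regions = [fmap]                                   # full-image region
    hs, ws = H // grid, W // grid
    for i in range(grid):                              # simplified fixed grid of regions
        for j in range(grid):
            regions.append(fmap[:, :, i*hs:(i+1)*hs, j*ws:(j+1)*ws])
    pooled = [F.normalize(r.amax(dim=(2, 3)), dim=1) for r in regions]  # per-region max-pool
    return F.normalize(torch.stack(pooled).sum(dim=0), dim=1)           # aggregate and renormalize

desc = rmac(torch.randn(4, 512, 16, 16))               # (4, 512) retrieval descriptors
```
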
Book Chapter

Semi-convolutional Operators for Instance Segmentation

TL;DR: In this paper, the authors show theoretically and empirically that dense pixel embeddings able to separate object instances cannot easily be constructed with convolutional operators alone. They show that simple modifications, which they call semi-convolutional, have a much better chance of succeeding at this task.
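A minimal sketch of the semi-convolutional idea: a purely convolutional embedding is translation invariant, so identical-looking pixels from two copies of the same object get identical embeddings; mixing the pixel's own coordinates into the embedding removes that ambiguity. Layer shapes and the way coordinates are combined are illustrative assumptions.

```python
# Hedged sketch of a semi-convolutional pixel embedding.
import torch
import torch.nn as nn

class SemiConvEmbedding(nn.Module):
    def __init__(self, in_ch: int, dim: int = 8):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, dim, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, _, H, W = x.shape
        emb = self.conv(x)                                  # (B, dim, H, W), translation invariant
        ys = torch.linspace(-1, 1, H, device=x.device)
        xs = torch.linspace(-1, 1, W, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")      # pixel coordinate grids
        coords = torch.stack([gy, gx]).expand(B, -1, -1, -1)
        pad = torch.zeros(B, emb.shape[1] - 2, H, W, device=x.device)
        # adding the pixel's own position breaks the translation invariance
        # that prevents separating identical-looking instances
        return emb + torch.cat([coords, pad], dim=1)

emb = SemiConvEmbedding(3)(torch.randn(2, 3, 64, 64))       # (2, 8, 64, 64) pixel embeddings
```
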
Proceedings Article

Combining appearance models and Markov Random Fields for category level object segmentation

TL;DR: The proposed method successfully segments object categories with highly varying appearances in the presence of cluttered backgrounds and large viewpoint changes, and outperforms published results on the Pascal VOC 2007 dataset.
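A minimal sketch of the general idea of combining the two components: an appearance model provides per-pixel class costs (unary terms), and a Markov Random Field adds a pairwise smoothness term penalizing label changes between neighbouring pixels; minimizing the summed energy yields the segmentation. The simple ICM optimizer below is illustrative only and is not the inference scheme used in the paper.

```python
# Hedged sketch: appearance-model unaries plus a Potts-style MRF smoothness term.
import numpy as np

def mrf_energy(labels, unary, pairwise_weight=1.0):
    """labels: (H, W) ints; unary: (H, W, K) per-class appearance costs."""
    H, W = labels.shape
    e = unary[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    e += pairwise_weight * (labels[1:, :] != labels[:-1, :]).sum()   # vertical neighbour edges
    e += pairwise_weight * (labels[:, 1:] != labels[:, :-1]).sum()   # horizontal neighbour edges
    return e

def icm(unary, pairwise_weight=1.0, iters=5):
    """Iterated conditional modes: greedy per-pixel energy minimization."""
    labels = unary.argmin(axis=2)                    # start from unary-only labelling
    H, W, K = unary.shape
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                costs = unary[y, x].copy()
                for ny, nx in [(y-1, x), (y+1, x), (y, x-1), (y, x+1)]:
                    if 0 <= ny < H and 0 <= nx < W:
                        costs += pairwise_weight * (np.arange(K) != labels[ny, nx])
                labels[y, x] = costs.argmin()
    return labels

unary = np.random.rand(32, 32, 3)                    # e.g. negative log-likelihoods from an appearance model
segmentation = icm(unary, pairwise_weight=0.5)       # (32, 32) label map
```
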
Posted Content

Learning Visual Representations with Caption Annotations

TL;DR: It is argued that captioned images are easily crawlable and can be exploited to supervise the training of visual representations. The proposed hybrid models, with dedicated visual and textual encoders, show that the visual representations learned as a by-product of solving this task transfer well to a variety of target tasks.
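A minimal sketch of caption supervision with dedicated visual and textual encoders. The toy encoders and the contrastive image-caption matching loss below are illustrative stand-ins, not necessarily the paper's architecture or training objective; the point is that only the captions supervise training, and the visual encoder is what gets transferred afterwards.

```python
# Hedged sketch: hybrid visual/textual model trained from image-caption pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridModel(nn.Module):
    def __init__(self, vocab_size=10000, dim=256):
        super().__init__()
        self.visual = nn.Sequential(                     # toy image encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.textual = nn.EmbeddingBag(vocab_size, dim)  # toy caption encoder (bag of tokens)

    def forward(self, images, captions):
        v = F.normalize(self.visual(images), dim=1)      # (B, dim) image embeddings
        t = F.normalize(self.textual(captions), dim=1)   # (B, dim) caption embeddings
        logits = v @ t.T / 0.07                          # image-caption similarity matrix
        target = torch.arange(len(images))               # matched pairs lie on the diagonal
        return F.cross_entropy(logits, target) + F.cross_entropy(logits.T, target)

model = HybridModel()
loss = model(torch.randn(8, 3, 64, 64), torch.randint(0, 10000, (8, 12)))
loss.backward()   # after training, the visual encoder is reused on downstream target tasks
```
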