Author

Andrew Zisserman

Other affiliations: University of Edinburgh, Microsoft, University of Leeds
Bio: Andrew Zisserman is an academic researcher from the University of Oxford. The author has contributed to research on topics including real images and convolutional neural networks. The author has an h-index of 167 and has co-authored 808 publications receiving 261,717 citations. Previous affiliations of Andrew Zisserman include the University of Edinburgh and Microsoft.


Papers
01 Jan 2010
TL;DR: In this article, a combination of an image-level dense visual words classifier and an object-level part-based detector was used for semantic indexing; the two methods yielded significantly different performance depending on the feature, as expected from their design.
Abstract: Our team participated in the “light” version of the semantic indexing task. All runs used a combination of an image-level dense visual words classifier and an object-level part-based detector. For each of the ten features, these two methods were ranked based on their performance on a validation set and associated to successive runs by decreasing performance (we also used a number of different techniques to recombine the scores). The two methods yielded significantly different performance depending on the feature, as expected from their design: the χ²-SVM can be used for all feature types, including scene-like features such as Cityscape, Nighttime, and Singing, but is outperformed by the object detector for object-like features, such as Boat or ship, Bus, and Person riding a bicycle. Our team did not participate in the collaborative annotation effort. Instead, annotations were carried out internally for all ten features to control quality and keyframe extraction, and to obtain region-of-interest annotations to train the object detectors. Compared to last year, the image-level classifier was significantly faster due to the use of a fast dense SIFT feature extractor and of an explicit feature map to approximate the χ² kernel SVM.
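
The "explicit feature map" trick mentioned in the abstract replaces the χ² kernel with a finite-dimensional map Ψ such that ⟨Ψ(x), Ψ(y)⟩ ≈ k(x, y), so a fast linear SVM can stand in for the kernel machine. Below is a minimal sketch of the idea using scikit-learn's AdditiveChi2Sampler, which implements a sampled feature map of this kind; the histogram data and labels are synthetic placeholders, not the TRECVID features.

```python
# Sketch: approximating a chi-squared kernel SVM with an explicit feature map,
# in the spirit described in the abstract (not the authors' original code).
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.random((200, 64))          # stand-in for dense visual-word histograms
X /= X.sum(axis=1, keepdims=True)  # L1-normalise, as is usual for histograms
y = rng.integers(0, 2, size=200)   # hypothetical binary labels for one feature

# Explicit map: a linear SVM on Psi(X) approximates a chi^2-kernel SVM,
# but trains in time roughly linear in the number of samples.
psi = AdditiveChi2Sampler(sample_steps=2)
clf = LinearSVC(C=1.0).fit(psi.fit_transform(X), y)
print(clf.score(psi.transform(X), y))
```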

6 citations

Journal ArticleDOI
01 May 1987
TL;DR: An algorithm is described for detecting and localizing depth discontinuities in sparse stereo depth data obtained from textured regions of images and results are given for discontinuity detection and geometric description (planes only) on both simulated and real images.
Abstract: An algorithm is described for detecting and localizing depth discontinuities in sparse stereo depth data obtained from textured regions of images. The discontinuities are detected by fitting a membrane over the sparse depth data and running an edge detector over the reconstructed surface. Multigrid techniques are used in the surface reconstruction. The discontinuities are used to aid segmentation of the textured region into closed subregions. A geometric description of each subregion can be obtained by fitting planes, spheres, etc., to the stereo depth data. Results are given for discontinuity detection and geometric description (planes only) on both simulated and real images.
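
To make the membrane-plus-edge-detector pipeline concrete, here is a simplified single-grid sketch (the paper itself uses multigrid relaxation): unknown cells are relaxed toward the discrete Laplace equation while cells holding sparse depth data stay clamped, and discontinuities are flagged where the reconstructed surface has a large gradient. The function names and the toy two-plane scene are illustrative.

```python
# Simplified membrane fit over sparse depth data, followed by a crude
# edge detector on the reconstructed surface (single grid, not multigrid).
import numpy as np

def fit_membrane(depth, known, iters=500):
    """depth: 2-D array, valid only where known==True."""
    z = np.where(known, depth, depth[known].mean())
    for _ in range(iters):
        # Jacobi step for the discrete Laplacian (toroidal borders for brevity).
        avg = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                      np.roll(z, 1, 1) + np.roll(z, -1, 1))
        z = np.where(known, depth, avg)     # keep data points clamped
    return z

def discontinuities(z, thresh):
    gy, gx = np.gradient(z)
    return np.hypot(gx, gy) > thresh        # large surface gradient = edge

# Toy scene: two fronto-parallel planes at different depths, 10% pixels known.
rng = np.random.default_rng(1)
truth = np.full((64, 64), 1.0); truth[:, 32:] = 2.0
known = rng.random(truth.shape) < 0.1
surface = fit_membrane(truth, known)
print(discontinuities(surface, 0.1).any())
```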

5 citations

Book
01 Jan 2008
TL;DR: A proceedings volume whose contributions span image segmentation (including segmentation in the presence of shadows and highlights), computational photography, multi-category object recognition, localization and tracking, and stereo matching.
Abstract: Segmentation.- Image Segmentation in the Presence of Shadows and Highlights.- Image Segmentation by Branch-and-Mincut.- What Is a Good Image Segment? A Unified Approach to Segment Extraction.- Computational Photography.- Light-Efficient Photography.- Flexible Depth of Field Photography.- Priors for Large Photo Collections and What They Reveal about Cameras.- Understanding Camera Trade-Offs through a Bayesian Analysis of Light Field Projections.- Poster Session IV.- CenSurE: Center Surround Extremas for Realtime Feature Detection and Matching.- Searching the World's Herbaria: A System for Visual Identification of Plant Species.- A Column-Pivoting Based Strategy for Monomial Ordering in Numerical Gröbner Basis Calculations.- Co-recognition of Image Pairs by Data-Driven Monte Carlo Image Exploration.- Movie/Script: Alignment and Parsing of Video and Text Transcription.- Using 3D Line Segments for Robust and Efficient Change Detection from Multiple Noisy Images.- Action Recognition with a Bio-inspired Feedforward Motion Processing Model: The Richness of Center-Surround Interactions.- Linking Pose and Motion.- Automated Delineation of Dendritic Networks in Noisy Image Stacks.- Calibration from Statistical Properties of the Visual World.- Regular Texture Analysis as Statistical Model Selection.- Higher Dimensional Affine Registration and Vision Applications.- Semantic Concept Classification by Joint Semi-supervised Learning of Feature Subspaces and Support Vector Machines.- Learning from Real Images to Model Lighting Variations for Face Images.- Toward Global Minimum through Combined Local Minima.- Differential Spatial Resection - Pose Estimation Using a Single Local Image Feature.- Riemannian Anisotropic Diffusion for Tensor Valued Images.- FaceTracer: A Search Engine for Large Collections of Images with Faces.- What Does the Sky Tell Us about the Camera?.- Three Dimensional Curvilinear Structure Detection Using Optimally Oriented Flux.- Scene Segmentation for Behaviour Correlation.- Robust Visual Tracking Based on an Effective Appearance Model.- Key Object Driven Multi-category Object Recognition, Localization and Tracking Using Spatio-temporal Context.- A Pose-Invariant Descriptor for Human Detection and Segmentation.- Texture-Consistent Shadow Removal.- Scene Discovery by Matrix Factorization.- Simultaneous Detection and Registration for Ileo-Cecal Valve Detection in 3D CT Colonography.- Constructing Category Hierarchies for Visual Recognition.- Sample Sufficiency and PCA Dimension for Statistical Shape Models.- Locating Facial Features with an Extended Active Shape Model.- Dynamic Integration of Generalized Cues for Person Tracking.- Extracting Moving People from Internet Videos.- Multiple Instance Boost Using Graph Embedding Based Decision Stump for Pedestrian Detection.- Object Detection from Large-Scale 3D Datasets Using Bottom-Up and Top-Down Descriptors.- Making Background Subtraction Robust to Sudden Illumination Changes.- Closed-Form Solution to Non-rigid 3D Surface Registration.- Implementing Decision Trees and Forests on a GPU.- General Imaging Geometry for Central Catadioptric Cameras.- Estimating Radiometric Response Functions from Image Noise Variance.- Solving Image Registration Problems Using Interior Point Methods.- 3D Face Model Fitting for Recognition.- A Multi-scale Vector Spline Method for Estimating the Fluids Motion on Satellite Images.- Continuous Energy Minimization Via Repeated Binary Fusion.- Unified Crowd Segmentation.- Quick Shift and Kernel Methods for Mode Seeking.- A Fast Algorithm for Creating a Compact and Discriminative Visual Codebook.- A Dynamic Conditional Random Field Model for Joint Labeling of Object and Scene Classes.- Local Regularization for Multiclass Classification Facing Significant Intraclass Variations.- Saliency Based Opportunistic Search for Object Part Extraction and Labeling.- Stereo Matching: An Outlier Confidence Approach.- Improving Shape Retrieval by Learning Graph Transduction.- Cat Head Detection - How to Effectively Exploit Shape and Texture Features.- Motion Context: A New Representation for Human Action Recognition.- Active Reconstruction.- Temporal Dithering of Illumination for Fast Active Vision.- Compressive Structured Light for Recovering Inhomogeneous Participating Media.- Passive Reflectometry.- Fusion of Feature- and Area-Based Information for Urban Buildings Modeling from Aerial Imagery.

5 citations

Posted Content
TL;DR: In this paper, the authors proposed a network architecture which aggregates and embeds the face descriptors produced by deep convolutional neural networks into a compact fixed-length representation.
Abstract: The objective of this paper is to learn a compact representation of image sets for template-based face recognition. We make the following contributions: first, we propose a network architecture which aggregates and embeds the face descriptors produced by deep convolutional neural networks into a compact fixed-length representation. This compact representation requires minimal memory storage and enables efficient similarity computation. Second, we propose a novel GhostVLAD layer that includes ghost clusters, which do not contribute to the aggregation. We show that a quality weighting on the input faces emerges automatically, such that informative images contribute more than those with low quality, and that the ghost clusters enhance the network's ability to deal with poor-quality images. Third, we explore how input feature dimension, number of clusters and different training techniques affect the recognition performance. Given this analysis, we train a network that far exceeds the state of the art on the IJB-B face recognition dataset. This is currently one of the most challenging public benchmarks, and we surpass the state of the art on both the identification and verification protocols.
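
A rough numpy sketch of the aggregation described above (illustrative only, not the authors' trained layer): descriptors are soft-assigned over real plus ghost clusters, and only the residuals of the real clusters are aggregated, so descriptors routed mostly to ghost clusters contribute little to the output.

```python
# Minimal sketch of GhostVLAD-style aggregation over a face template.
import numpy as np

def ghostvlad(X, centers, n_ghost):
    """X: (N, D) face descriptors; centers: (K + n_ghost, D) cluster centres."""
    K = centers.shape[0] - n_ghost
    logits = X @ centers.T                      # (N, K+G) assignment scores
    logits -= logits.max(axis=1, keepdims=True)
    a = np.exp(logits)
    a /= a.sum(axis=1, keepdims=True)           # softmax over ALL clusters
    # Aggregate residuals for the K real clusters only; ghosts are dropped.
    V = np.stack([(a[:, k, None] * (X - centers[k])).sum(axis=0)
                  for k in range(K)])           # (K, D)
    V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-12   # intra-normalise
    v = V.ravel()
    return v / (np.linalg.norm(v) + 1e-12)      # final L2-normalised vector

rng = np.random.default_rng(0)
faces = rng.normal(size=(5, 128))               # a template of 5 descriptors
cents = rng.normal(size=(8 + 2, 128))           # 8 real + 2 ghost clusters
print(ghostvlad(faces, cents, n_ghost=2).shape) # (1024,) fixed-length output
```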

5 citations

Proceedings Article
01 Jan 2019
TL;DR: This paper considers the setting in which cameras can be well approximated as static, e.g. in video surveillance scenarios, so that scene pseudo depth maps can be inferred easily from object scale on the image plane, and proposes a geometry-aware model for video object detection.
Abstract: In this paper we propose a geometry-aware model for video object detection. Specifically, we consider the setting in which cameras can be well approximated as static, e.g. in video surveillance scenarios, and scene pseudo depth maps can therefore be inferred easily from the object scale on the image plane. We make the following contributions: First, we extend the recent anchor-free detector (CornerNet [17]) to video object detection. In order to exploit spatio-temporal information while maintaining high efficiency, the proposed model accepts video clips as input, and only makes predictions for the starting and ending frames, i.e. heatmaps of object bounding box corners and the corresponding embeddings for grouping. Second, to tackle the challenge of scale variation in object detection, scene geometry information, e.g. derived depth maps, is explicitly incorporated into the deep networks for multi-scale feature selection and for the network prediction. Third, we validate the proposed architectures on an autonomous driving dataset generated from the Carla simulator [5], and on a real dataset for human detection (the DukeMTMC dataset [28]). Compared with existing competitive single-stage or two-stage detectors, the proposed geometry-aware spatio-temporal network achieves significantly better results.
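
The "pseudo depth from object scale" observation rests on pinhole geometry: a fixed-size object farther from a static camera subtends fewer pixels. A back-of-envelope sketch, with all numbers illustrative:

```python
# Under a pinhole model, an object of known physical height H metres that
# spans h pixels lies at roughly Z = f * H / h (f = focal length in pixels).
def pseudo_depth(f_pixels, obj_height_m, bbox_height_px):
    """Approximate metric depth of an object from its image-plane scale."""
    return f_pixels * obj_height_m / bbox_height_px

# A 1.7 m pedestrian imaged at 85 px by a camera with f = 1000 px:
print(pseudo_depth(1000.0, 1.7, 85.0))   # -> 20.0 m: smaller box = farther
```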

5 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
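
The core reformulation is that a block outputs F(x) + x rather than an unreferenced mapping, so the identity is trivially representable and gradients flow through the skip path. A minimal PyTorch sketch of such a block (illustrative; not the paper's exact configuration):

```python
# Residual block: learn the residual F(x), output F(x) + x.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)   # skip connection: F(x) + x

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)            # torch.Size([1, 64, 32, 32])
```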

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
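
The design point behind the very small filters: two stacked 3x3 convolutions cover the receptive field of one 5x5 with fewer parameters (2·9·C² vs 25·C²) and an extra non-linearity between them. A small PyTorch sketch of such a block, not the released VGG models:

```python
# VGG-style block: a stack of 3x3 conv + ReLU layers followed by pooling.
import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))           # halve resolution after the block
    return nn.Sequential(*layers)

block = vgg_block(64, 64, n_convs=2)
print(sum(p.numel() for p in block.parameters()))  # ~2*9*64^2 weights + biases
print(block(torch.randn(1, 64, 32, 32)).shape)     # torch.Size([1, 64, 16, 16])
```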

55,235 citations

Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

49,914 citations

Proceedings ArticleDOI
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei
20 Jun 2009
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Abstract: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.

49,639 citations

Book ChapterDOI
05 Oct 2015
TL;DR: Ronneberger et al. propose a network and training strategy that relies on strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopy stacks.
Abstract: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
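
A toy version of the contracting/expanding architecture described above, with a single skip connection carrying fine detail across; this sketch uses padded convolutions for simplicity, whereas the original used unpadded convolutions with cropping:

```python
# Minimal U-Net-style network: contract, expand, concatenate the skip.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = double_conv(in_ch, 32), double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)      # 64 = 32 upsampled + 32 skip
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)                    # contracting path
        s2 = self.enc2(self.pool(s1))
        u = self.up(s2)                      # expanding path
        u = self.dec1(torch.cat([u, s1], dim=1))  # skip connection
        return self.head(u)                 # per-pixel class scores

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)  # (1, 2, 64, 64)
```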

49,590 citations