Author

Andrew Zisserman

Other affiliations: University of Edinburgh, Microsoft, University of Leeds
Bio: Andrew Zisserman is an academic researcher at the University of Oxford. He has contributed to research on topics including real images and convolutional neural networks. He has an h-index of 167 and has co-authored 808 publications receiving 261,717 citations. His previous affiliations include the University of Edinburgh and Microsoft.


Papers
Posted Content
TL;DR: The best performing model improves the state-of-the-art word error rate on the challenging BBC-Oxford Lip Reading Sentences 2 (LRS2) benchmark dataset by over 20 percent.
Abstract: The goal of this paper is to develop state-of-the-art models for lip reading -- visual speech recognition. We develop three architectures and compare their accuracy and training times: (i) a recurrent model using LSTMs; (ii) a fully convolutional model; and (iii) the recently proposed transformer model. The recurrent and fully convolutional models are trained with a Connectionist Temporal Classification loss and use an explicit language model for decoding; the transformer is a sequence-to-sequence model. Our best performing model improves the state-of-the-art word error rate on the challenging BBC-Oxford Lip Reading Sentences 2 (LRS2) benchmark dataset by over 20 percent. As a further contribution we investigate the fully convolutional model when used for online (real time) lip reading of continuous speech, and show that it achieves high performance with low latency.
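To make the training setup concrete, here is a minimal sketch of how a Connectionist Temporal Classification loss is applied to per-frame character logits, as for the recurrent and fully convolutional models above. It uses PyTorch's nn.CTCLoss; all shapes, the vocabulary size, and the target lengths are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Sketch: CTC training on per-frame character logits. Shapes are illustrative.
T, N, C = 75, 4, 40   # frames, batch size, character classes (blank at index 0)
logits = torch.randn(T, N, C, requires_grad=True).log_softmax(2)  # (time, batch, classes)
targets = torch.randint(1, C, (N, 20), dtype=torch.long)          # labels exclude the blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 20, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(logits, targets, input_lengths, target_lengths)
loss.backward()
```

CTC lets the model emit a label distribution per video frame without needing frame-level alignments, which is why the recurrent and fully convolutional variants pair it with an external language model at decoding time.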

65 citations

Proceedings ArticleDOI
21 Jul 2002
TL;DR: Environment matting is a powerful technique for modelling the complex light-transport properties of real-world optically active elements: transparent, refractive and reflective objects.
Abstract: Environment matting is a powerful technique for modelling the complex light-transport properties of real-world optically active elements: transparent, refractive and reflective objects. Zongker et al. [1999] and Chuang et al. [2000] show how environment mattes can be computed for real objects under carefully controlled laboratory conditions. However, for many objects of interest, such calibration is difficult to arrange. For example, we might wish to determine the distortion caused by filming through an ancient window where the glass has flowed; we may have access only to archive footage; or we might simply want a more convenient means of acquiring the matte.

65 citations

Book ChapterDOI
02 Jun 1998
TL;DR: It is shown that the homography between the images induced by the plane of the curve can be computed from two views given only the epipolar geometry, and that the trifocal tensor can be used to transfer a conic or the curvature from two views to a third.
Abstract: In this paper there are two innovations. First, the geometry of imaged curves is developed in two and three views. A set of results are given for both conics and non-algebraic curves. It is shown that the homography between the images induced by the plane of the curve can be computed from two views given only the epipolar geometry, and that the trifocal tensor can be used to transfer a conic or the curvature from two views to a third. The second innovation is an algorithm for automatically matching individual curves between images. The algorithm uses both photometric information and the multiple view geometric results. For image pairs the homography facilitates the computation of a neighbourhood cross-correlation based matching score for putative curve correspondences. For image triplets cross-correlation matching scores are used in conjunction with curve transfer based on the trifocal geometry to disambiguate matches. Algorithms are developed for both short and wide baselines. The algorithms are robust to deficiencies in the curve segment extraction and partial occlusion. Experimental results are given for image pairs and triplets, for varying motions between views, and for different scene types. The method is applicable to curve matching in stereo and trinocular rigs, and as a starting point for curve matching through monocular image sequences.
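As a rough illustration of the photometric side of the matching algorithm, the sketch below scores a putative curve correspondence by averaging normalized cross-correlation over patches at corresponding curve points. In the paper the corresponding points would come from mapping through the plane-induced homography; here the images, point lists, and window size are hypothetical inputs.

```python
import numpy as np

def ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized patches."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def curve_match_score(img_a, img_b, pts_a, pts_b, half=7):
    """Mean NCC over patches centred on corresponding curve points.
    Points too close to the image border are simply skipped."""
    scores = []
    for (xa, ya), (xb, yb) in zip(pts_a, pts_b):
        pa = img_a[ya - half:ya + half + 1, xa - half:xa + half + 1]
        pb = img_b[yb - half:yb + half + 1, xb - half:xb + half + 1]
        if pa.shape == pb.shape and pa.size:
            scores.append(ncc(pa, pb))
    return float(np.mean(scores)) if scores else 0.0
```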

64 citations

Posted Content
TL;DR: A simple baseline for action localization on the AVA dataset is introduced, built upon the Faster R-CNN bounding box detection framework, adapted to operate on pure spatiotemporal features - in this case produced exclusively by an I3D model pretrained on Kinetics.
Abstract: We introduce a simple baseline for action localization on the AVA dataset. The model builds upon the Faster R-CNN bounding box detection framework, adapted to operate on pure spatiotemporal features - in our case produced exclusively by an I3D model pretrained on Kinetics. This model obtains 21.9% average AP on the validation set of AVA v2.1, up from 14.5% for the best RGB spatiotemporal model used in the original AVA paper (which was pretrained on Kinetics and ImageNet), and up from 11.3% for the publicly available baseline using a ResNet-101 image feature extractor pretrained on ImageNet. Our final model obtains 22.8%/21.9% mAP on the val/test sets and outperforms all submissions to the AVA challenge at CVPR 2018.
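A minimal sketch of the box-pooling step such a model needs: region features are extracted from the spatiotemporal feature map with RoI-Align after collapsing the temporal axis. It uses torchvision.ops.roi_align; the tensor shapes, channel count, and spatial scale are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
from torchvision.ops import roi_align

# Illustrative shapes: an I3D feature map (batch, channels, T, H, W).
feats = torch.randn(1, 832, 8, 14, 14)
feats_2d = feats.mean(dim=2)  # collapse the temporal axis -> (batch, C, H, W)

# One person box in input-image coordinates: (batch_index, x1, y1, x2, y2).
boxes = torch.tensor([[0, 10.0, 10.0, 100.0, 120.0]])

# spatial_scale maps image coordinates onto the 14x14 feature grid (assumed 224px input).
pooled = roi_align(feats_2d, boxes, output_size=(7, 7), spatial_scale=14 / 224)
print(pooled.shape)  # torch.Size([1, 832, 7, 7]) -> fed to the classification head
```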

64 citations

Proceedings ArticleDOI
01 Sep 2009
TL;DR: An efficient object retrieval system based on the identification of abstract deformable ‘shape’ classes using the self-similarity descriptor is presented, and is shown to be superior to appearance-based approaches for matching non-rigid shape classes.
Abstract: We present an efficient object retrieval system based on the identification of abstract deformable ‘shape’ classes using the self-similarity descriptor of Shechtman and Irani [13]. Given a user-specified query object, we retrieve other images which share a common ‘shape’ even if their appearance differs greatly in terms of colour, texture, edges and other common photometric properties. In order to use the self-similarity descriptor for efficient retrieval we make three contributions: (i) we sparsify the descriptor points by locating discriminative regions within each image, thus reducing the computational expense of shape matching; (ii) we extend [13] to enable matching despite changes in scale; and (iii) we show that vector quantizing the descriptor does not inhibit performance, thus providing the basis of a large-scale shape-based retrieval system using a bag-of-visual-words approach. Performance is demonstrated on the challenging ETHZ deformable shape dataset and a full episode from the television series Lost, and is shown to be superior to appearance-based approaches for matching non-rigid shape classes.
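Contribution (iii) rests on standard bag-of-visual-words machinery; the sketch below vector-quantizes local descriptors with k-means and builds a normalized word histogram per image. Descriptor dimensionality, vocabulary size, and the random data are placeholders, not the paper's self-similarity descriptors.

```python
import numpy as np
from sklearn.cluster import KMeans

# Build a visual vocabulary from descriptors pooled over a training corpus.
rng = np.random.default_rng(0)
train_descs = rng.normal(size=(5000, 30))   # placeholder local descriptors
vocab = KMeans(n_clusters=256, n_init=4, random_state=0).fit(train_descs)

def bow_histogram(descs: np.ndarray) -> np.ndarray:
    """Quantize each descriptor to its nearest visual word, then L1-normalise."""
    words = vocab.predict(descs)
    hist = np.bincount(words, minlength=256).astype(np.float64)
    return hist / max(hist.sum(), 1.0)

query_hist = bow_histogram(rng.normal(size=(400, 30)))
```

Once images are reduced to sparse word histograms, retrieval scales with inverted-index techniques rather than pairwise descriptor matching, which is what makes the large-scale setting feasible.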

64 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting model won first place in the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
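The core idea is compact enough to sketch: a residual block computes a residual function F(x) and adds the identity shortcut x, so extra depth can default to a near-identity mapping. The PyTorch module below is a simplified basic block; it omits the strided and projection-shortcut variants of the published architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Simplified basic residual block: output is F(x) + x (identity shortcut)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # the residual connection

y = ResidualBlock(64)(torch.randn(1, 64, 56, 56))  # shape preserved: (1, 64, 56, 56)
```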

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
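A short sketch of the architectural principle: stacks of very small 3×3 convolutions between poolings, so that depth increases while each layer stays cheap. Two stacked 3×3 layers cover the receptive field of one 5×5 layer with fewer parameters and an extra nonlinearity. The block sizes below are illustrative, not the full published configuration.

```python
import torch.nn as nn

def vgg_block(in_ch: int, out_ch: int, convs: int) -> nn.Sequential:
    """A VGG-style stage: `convs` 3x3 conv+ReLU layers followed by 2x2 max-pooling."""
    layers = []
    for i in range(convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

# First two stages of a VGG-16-style stem (illustrative, not the full network).
stem = nn.Sequential(vgg_block(3, 64, 2), vgg_block(64, 128, 2))
```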

55,235 citations

Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

49,914 citations

Proceedings ArticleDOI
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei
20 Jun 2009
TL;DR: A new database called “ImageNet” is introduced: a large-scale ontology of images built upon the backbone of the WordNet structure that is much larger in scale and diversity, and much more accurate, than current image datasets.
Abstract: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.
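The WordNet backbone the abstract refers to can be inspected directly. The sketch below walks the noun hierarchy with NLTK's WordNet interface (it assumes the nltk package with the wordnet corpus downloaded); ImageNet attaches images to exactly these noun synsets.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

# Each ImageNet category corresponds to a WordNet noun synset.
dog = wn.synsets('dog', pos=wn.NOUN)[0]
print(dog.name())                                # e.g. 'dog.n.01'
print([h.name() for h in dog.hypernyms()])       # parents in the semantic hierarchy
print(len(list(wn.all_synsets(pos=wn.NOUN))))    # total noun synsets (~80,000+)
```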

49,639 citations

Book ChapterDOI
05 Oct 2015
TL;DR: Ronneberger et al. present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Abstract: There is broad consensus that successful training of deep networks requires many thousands of annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
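A one-level sketch of the U-shape described above: a contracting step that captures context, an expanding step that restores resolution, and a skip connection concatenating the two. For simplicity it uses padded convolutions, unlike the unpadded ones in the paper, and the channel counts are illustrative.

```python
import torch
import torch.nn as nn

def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 conv+ReLU layers, the repeated unit of both paths."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """One contracting level, one expanding level, one skip connection."""
    def __init__(self, in_ch: int = 1, classes: int = 2):
        super().__init__()
        self.down = double_conv(in_ch, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottom = double_conv(64, 128)
        self.up = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.out_conv = double_conv(128, 64)
        self.head = nn.Conv2d(64, classes, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skip = self.down(x)                    # high-resolution context features
        x = self.bottom(self.pool(skip))       # contracting path
        x = self.up(x)                         # expanding path
        x = torch.cat([skip, x], dim=1)        # skip connection across the "U"
        return self.head(self.out_conv(x))

logits = TinyUNet()(torch.randn(1, 1, 64, 64))  # -> (1, 2, 64, 64) per-pixel scores
```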

49,590 citations