Andrew Zisserman
Researcher at University of Oxford
Publications - 808
Citations - 312028
Andrew Zisserman is an academic researcher at the University of Oxford. The author has contributed to research in topics: Convolutional neural network & Real image. The author has an h-index of 167 and has co-authored 808 publications receiving 261717 citations. Previous affiliations of Andrew Zisserman include the University of Edinburgh and Microsoft.
Papers
Proceedings ArticleDOI
Efficient recognition of rotationally symmetric surfaces and straight homogeneous generalized cylinders
TL;DR: The recognition technique is shown to extend to the case of straight homogeneous generalized cylinders, and the stability of the cross-ratios is reported and compared to affine invariants.
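The TL;DR above rests on the cross-ratio being a projective invariant. As a minimal illustration (not the paper's implementation, and with an arbitrary example homography), the cross-ratio of four collinear points is unchanged by a 1D projective transformation:

```python
# Minimal sketch: the cross-ratio of four collinear points is invariant
# under projective transformations, which is what makes it usable as a
# recognition invariant. The homography coefficients below are arbitrary.

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by 1D coordinates."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projective_map(x, h=(2.0, 1.0, 0.5, 3.0)):
    """Apply a 1D homography x -> (p*x + q) / (r*x + s)."""
    p, q, r, s = h
    return (p * x + q) / (r * x + s)

pts = [0.0, 1.0, 2.0, 4.0]
cr_before = cross_ratio(*pts)
cr_after = cross_ratio(*(projective_map(x) for x in pts))
print(abs(cr_before - cr_after) < 1e-9)  # True: cross-ratio is preserved
```

The stability question the paper reports on concerns how this invariance degrades under image noise, which the sketch does not model.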
Book ChapterDOI
Affine and Projective Structure from Motion
TL;DR: In this article, the affine and projective group is used to recover 3D structure from multiple images, without requiring knowledge of camera intrinsic parameters or camera motion, and the structure is recovered up to a transformation by a 3D linear group.
Book ChapterDOI
Surface reconstruction from multiple views using apparent contours and surface texture
Geoffrey Cross, Andrew Zisserman +1 more
TL;DR: A novel approach to reconstructing the complete surface of an object from multiple views, in which the camera circumnavigates the object, combining the information available from the apparent contour with that available from the imaged surface texture.
Book ChapterDOI
Single-Histogram class models for image segmentation
TL;DR: In this paper, an object class is represented by a set of bag-of-visual-words histograms, one per training exemplar, and classification is then achieved by k-nearest-neighbour search over the exemplars.
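The representation described in the TL;DR above can be sketched as follows. This is a hedged toy illustration, not the authors' code: the word indices stand in for quantised local descriptors, the vocabulary size and exemplar labels are invented, and a plain L1 distance with 1-nearest-neighbour stands in for whatever distance and k the paper uses.

```python
# Hedged sketch of histogram class models (not the authors' implementation):
# each training exemplar is a normalised histogram over a visual-word
# vocabulary; a test histogram is classified by nearest-neighbour search.
from collections import Counter

def word_histogram(words, vocab_size):
    """Normalised bag-of-visual-words histogram from quantised word indices."""
    counts = Counter(words)
    total = len(words)
    return [counts[w] / total for w in range(vocab_size)]

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

def classify(test_hist, exemplars):
    """exemplars: list of (class_label, histogram); return nearest label."""
    return min(exemplars, key=lambda e: l1_distance(test_hist, e[1]))[0]

# Toy data: word indices are hypothetical, standing in for quantised
# local descriptors mapped to a small 4-word vocabulary.
vocab = 4
exemplars = [
    ("grass", word_histogram([0, 0, 1, 0, 0], vocab)),
    ("sky",   word_histogram([3, 3, 2, 3, 3], vocab)),
]
print(classify(word_histogram([0, 1, 0, 0], vocab), exemplars))  # grass
```

Normalising the histograms keeps the comparison independent of how many local descriptors each image contributes.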
Proceedings ArticleDOI
Slow-Fast Auditory Streams for Audio Recognition
TL;DR: In this article, a two-stream convolutional network for audio recognition is proposed, which operates on time-frequency spectrogram inputs and achieves state-of-the-art results on both VGG-Sound and EPIC-KITCHENS-100 datasets.
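The two-stream idea summarised above can be sketched in its simplest form. This is a hedged illustration of the slow/fast temporal-sampling principle only, not the paper's architecture: the frame counts, the stride `alpha`, and the toy spectrogram are all assumptions, and the convolutional pathways themselves are omitted.

```python
# Hedged sketch of slow/fast stream sampling (not the authors' model):
# the fast stream sees the time-frequency spectrogram at full temporal
# rate, while the slow stream sees a temporally subsampled copy.
def split_streams(spectrogram, alpha=4):
    """spectrogram: list of time frames (each a list of frequency bins).
    Returns (slow, fast); the slow stream keeps every alpha-th frame."""
    slow = spectrogram[::alpha]
    fast = spectrogram
    return slow, fast

# Toy spectrogram: 16 time frames of 8 frequency bins each.
frames = [[t * 0.1] * 8 for t in range(16)]
slow, fast = split_streams(frames)
print(len(slow), len(fast))  # 4 16
```

In the actual model each stream would feed its own convolutional pathway, with the two fused for recognition; the sketch only shows the asymmetric temporal sampling that gives the streams their names.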