Andrew Zisserman

Researcher at University of Oxford

Publications: 808
Citations: 312,028

Andrew Zisserman is an academic researcher at the University of Oxford. He has contributed to research topics including convolutional neural networks and real images. He has an h-index of 167 and has co-authored 808 publications receiving 261,717 citations. His previous affiliations include the University of Edinburgh and Microsoft.

Papers
Proceedings Article

Segmenting Scenes by Matching Image Composites

TL;DR: The paper performs MRF-based segmentation that optimises over matches while respecting boundary information, and shows improved performance over previous methods in detecting the principal occluding and contact boundaries of a scene, on data gathered from the LabelMe database.
Proceedings ArticleDOI

New approach to obtain height measurements from video

TL;DR: A new measurement algorithm is presented that, drawing on results from projective geometry and computer vision, generates height measurements and their associated errors from a single known physical measurement in an image.
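Measuring a height from a single image, given one known reference height, is commonly done with the single-view metrology relation alpha * Z = -||b x t|| / ((l . b) ||v x t||), where b and t are the homogeneous image points of an object's base and top, v is the vertical vanishing point, and l is the vanishing line of the ground plane. The sketch below illustrates that relation only; it is not the paper's implementation, and the vanishing point, vanishing line, and image coordinates are illustrative assumed values.

```python
import numpy as np

def alpha_height(b, t, v, l):
    """Projective height factor alpha*Z for an object with image base b and
    top t, given the vertical vanishing point v and the vanishing line l of
    the ground plane (all in homogeneous image coordinates)."""
    return -np.linalg.norm(np.cross(b, t)) / (np.dot(l, b) * np.linalg.norm(np.cross(v, t)))

# Assumed geometry for illustration: vertical vanishing point at infinity
# and ground-plane vanishing line at infinity (affine viewing conditions).
v = np.array([0.0, 1.0, 0.0])
l = np.array([0.0, 0.0, 1.0])

# Calibrate alpha from a reference object of known height (2 m, assumed).
ref_base, ref_top = np.array([0.0, 0.0, 1.0]), np.array([0.0, 2.0, 1.0])
alpha = alpha_height(ref_base, ref_top, v, l) / 2.0

# Measure a target object whose image extent is half the reference's.
tgt_base, tgt_top = np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
height = alpha_height(tgt_base, tgt_top, v, l) / alpha  # ~1 m
```

Once alpha is fixed by the single reference measurement, every other vertical distance between the ground plane and a point can be read off the image the same way.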
Posted Content

With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations

TL;DR: Nearest-Neighbor Contrastive Learning of Visual Representations (NNCLR) samples the nearest neighbors from the dataset in the latent space and treats them as positives, which provides more semantic variation than pre-defined transformations.
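The nearest-neighbor sampling at the core of NNCLR can be sketched as follows: instead of pairing a view only with an augmentation of itself, each embedding's positive is replaced by its nearest neighbor (by cosine similarity) drawn from a support set of previously seen embeddings. This is a minimal illustration, not the authors' implementation, and the toy vectors are assumed values.

```python
import numpy as np

def nnclr_positives(embeddings, support_set):
    """For each embedding, return its nearest neighbour in the support set
    (cosine similarity in latent space) to use as the contrastive positive."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    s = support_set / np.linalg.norm(support_set, axis=1, keepdims=True)
    nn_idx = (e @ s.T).argmax(axis=1)  # index of nearest support vector per row
    return support_set[nn_idx]

# Toy usage: positives come from the support set, not from augmentation.
embeddings = np.array([[1.0, 0.0], [0.0, 1.0]])
support_set = np.array([[0.0, 1.0], [0.9, 0.1]])
positives = nnclr_positives(embeddings, support_set)
```

The returned positives would then feed a standard InfoNCE-style contrastive loss in place of the augmented view, which is what gives the method semantic variation beyond hand-designed transformations.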
Posted Content

VGGFace2: A dataset for recognising faces across pose and age.

TL;DR: VGGFace2 is a large-scale face dataset containing 3.31 million images of 9,131 subjects, with an average of 362.6 images per subject.
Journal ArticleDOI

Automatic and Efficient Human Pose Estimation for Sign Language Videos

TL;DR: A fully automatic arm and hand tracker detects joint positions over continuous sign-language video sequences of more than an hour in length; it outperforms the state-of-the-art long-term tracker of Buehler et al. and achieves superior joint-localisation results to the pose-estimation method of Yang and Ramanan.