
Neil Birkbeck

Researcher at Google

Publications: 101
Citations: 1638

Neil Birkbeck is an academic researcher at Google. He has contributed to research in topics including video quality and computer science, has an h-index of 19, and has co-authored 88 publications receiving 1,052 citations. His previous affiliations include Princeton University and the University of Alberta.

Papers
Journal ArticleDOI

ST-GREED: Space-Time Generalized Entropic Differences for Frame Rate Dependent Video Quality Prediction

TL;DR: An objective VQA model called Space-Time GeneRalized Entropic Difference (GREED) is devised, which analyzes the statistics of spatial and temporal band-pass video coefficients and achieves state-of-the-art performance on the LIVE-YT-HFR database when compared with existing VQA models.
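For intuition, the entropic-difference idea can be sketched roughly as follows. This is a minimal illustration only, assuming a frame-difference temporal band-pass and a moment-matched generalized Gaussian fit; the published GREED model also uses spatial band-pass channels, multiple temporal subbands, and a weighting/combination scheme not reproduced here, and all function names below are hypothetical.

```python
# Simplified illustration of entropic differences on temporal band-pass
# coefficients (not the full GREED model).
import numpy as np
from scipy.special import gamma


def fit_ggd(coeffs):
    """Fit a zero-mean generalized Gaussian (shape alpha, scale beta)
    by moment matching, as commonly done in NSS-based quality models."""
    coeffs = coeffs.ravel()
    sigma_sq = np.mean(coeffs ** 2) + 1e-12
    e_abs = np.mean(np.abs(coeffs)) + 1e-12
    rho = sigma_sq / (e_abs ** 2)

    # Match the empirical ratio to the theoretical
    # Gamma(1/a) * Gamma(3/a) / Gamma(2/a)^2 over a grid of shapes.
    alphas = np.arange(0.2, 10.0, 0.001)
    ratios = gamma(1.0 / alphas) * gamma(3.0 / alphas) / gamma(2.0 / alphas) ** 2
    alpha = alphas[np.argmin(np.abs(ratios - rho))]
    beta = np.sqrt(sigma_sq * gamma(1.0 / alpha) / gamma(3.0 / alpha))
    return alpha, beta


def ggd_entropy(alpha, beta):
    """Differential entropy of a generalized Gaussian distribution."""
    return 1.0 / alpha + np.log(2.0 * beta * gamma(1.0 / alpha) / alpha)


def temporal_entropic_difference(ref_frames, dis_frames):
    """Absolute entropy gap between reference and distorted
    temporal band-pass (frame-difference) coefficients."""
    ref_bp = np.diff(ref_frames.astype(np.float64), axis=0)
    dis_bp = np.diff(dis_frames.astype(np.float64), axis=0)
    h_ref = ggd_entropy(*fit_ggd(ref_bp))
    h_dis = ggd_entropy(*fit_ggd(dis_bp))
    return abs(h_ref - h_dis)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(128, 20, size=(30, 64, 64))   # toy "video"
    dis = ref + rng.normal(0, 8, size=ref.shape)   # noisier version
    print("temporal entropic difference:", temporal_entropic_difference(ref, dis))
```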
Proceedings ArticleDOI

Rich features for perceptual quality assessment of UGC videos

TL;DR: Wang et al. propose a DNN-based framework to thoroughly analyze the importance of content, technical quality, and compression level in the perceptual quality assessment of UGC videos.
Journal ArticleDOI

Predicting the Quality of Compressed Videos with Pre-Existing Distortions

TL;DR: 1stepVQA overcomes limitations of Full-Reference, Reduced-Reference and No-Reference VQA models by exploiting the statistical regularities of both natural videos and distorted videos, and is able to more accurately predict the quality of compressed videos, given imperfect reference videos.
Proceedings ArticleDOI

Geometry-driven quantization for omnidirectional image coding

TL;DR: A method is proposed to adapt the quantization tables of typical block-based transform codecs when the input to the encoder is a panoramic image resulting from equirectangular projection of a spherical image. Results show that a rate reduction can be achieved, at the same perceptual quality of the spherical signal, with respect to standard quantization.
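The underlying intuition can be sketched as follows: in an equirectangular image, pixels near the poles cover a smaller solid angle on the sphere, so quantization can be made coarser there. This is a minimal, hypothetical sketch only; the actual table adaptation and scaling law in the paper may differ, and the names, placeholder table, and clamp value below are assumptions.

```python
# Sketch: scale a baseline 8x8 quantization table per block row of an
# equirectangular image, quantizing more coarsely toward the poles.
import numpy as np

BASE_QTABLE = np.full((8, 8), 16.0)   # placeholder table; a real codec supplies its own


def latitude_of_block_row(block_row, num_block_rows):
    """Latitude (radians) of a block row's centre: 0 at the equator,
    +/- pi/2 at the poles, assuming a 180-degree vertical field of view."""
    v = (block_row + 0.5) / num_block_rows   # normalized vertical position in [0, 1]
    return (0.5 - v) * np.pi


def adapted_qtable(block_row, num_block_rows, max_scale=8.0):
    """Quantization table scaled by 1/cos(latitude), clamped near the poles."""
    lat = latitude_of_block_row(block_row, num_block_rows)
    scale = min(1.0 / max(np.cos(lat), 1e-6), max_scale)
    return BASE_QTABLE * scale


def quantize_block(dct_block, qtable):
    """Uniform quantization of an 8x8 block of transform coefficients."""
    return np.round(dct_block / qtable)


if __name__ == "__main__":
    num_block_rows = 1024 // 8
    for row in (0, num_block_rows // 2, num_block_rows - 1):
        q = adapted_qtable(row, num_block_rows)
        print(f"block row {row}: quantizer step {q[0, 0]:.1f}")
```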
Book ChapterDOI

Segmentation of multiple knee bones from CT for orthopedic knee surgery planning.

TL;DR: A fully automated, highly precise, and computationally efficient segmentation approach is presented for multiple bones, achieving simultaneous segmentation of the femur, tibia, patella, and fibula with an overall surface-to-surface error of less than 1 mm.