SciSpace (formerly Typeset)
Author

Neil Birkbeck

Bio: Neil Birkbeck is an academic researcher from Google. The author has contributed to research topics including video quality and computer science, has an h-index of 19, and has co-authored 88 publications receiving 1,052 citations. Previous affiliations of Neil Birkbeck include Princeton University and the University of Alberta.


Papers
Proceedings ArticleDOI
26 Dec 2007
TL;DR: A variational brain tumor segmentation algorithm is proposed that extends current approaches from texture segmentation by using a high dimensional feature set calculated from MRI data and registered atlases and shows that using a conditional model to discriminate between normal and abnormal regions significantly improves the segmentation results compared to traditional generative models.
Abstract: Tumor segmentation from MRI data is an important but time consuming task performed manually by medical experts. Automating this process is challenging due to the high diversity in appearance of tumor tissue, among different patients and, in many cases, similarity between tumor and normal tissue. One other challenge is how to make use of prior information about the appearance of normal brain. In this paper we propose a variational brain tumor segmentation algorithm that extends current approaches from texture segmentation by using a high dimensional feature set calculated from MRI data and registered atlases. Using manually segmented data we learn a statistical model for tumor and normal tissue. We show that using a conditional model to discriminate between normal and abnormal regions significantly improves the segmentation results compared to traditional generative models. Validation is performed by testing the method on several cancer patient MRI scans.
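As a rough illustration of the conditional-model idea above, the sketch below trains a discriminative per-voxel classifier on a high-dimensional feature set and produces tumor probabilities that could serve as the data term of a variational segmentation energy. The synthetic data, feature dimensions, and choice of logistic regression are assumptions for illustration, not the authors' implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for per-voxel features computed from MRI channels, texture
# responses, and registered-atlas priors (purely synthetic here).
n_voxels, n_features = 5000, 16
X = rng.normal(size=(n_voxels, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_voxels) > 0).astype(int)

# Conditional model p(tumor | features), trained on labeled (manually
# segmented) voxels, as opposed to a generative model of each class.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Per-voxel tumor probabilities that a variational segmentation could use
# as its data term.
tumor_prob = clf.predict_proba(X)[:, 1]
print("mean predicted tumor probability:", round(float(tumor_prob.mean()), 3))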

132 citations

Journal ArticleDOI
TL;DR: This work conducts a comprehensive evaluation of leading no-reference/blind VQA (BVQA) features and models on a fixed evaluation architecture, yielding new empirical insights on both subjective video quality studies and objective VQA model design.
Abstract: Recent years have witnessed an explosion of user-generated content (UGC) videos shared and streamed over the Internet, thanks to the evolution of affordable and reliable consumer capture devices, and the tremendous popularity of social media platforms. Accordingly, there is a great need for accurate video quality assessment (VQA) models for UGC/consumer videos to monitor, control, and optimize this vast content. Blind quality prediction of in-the-wild videos is quite challenging, since the quality degradations of UGC content are unpredictable, complicated, and often commingled. Here we contribute to advancing the UGC-VQA problem by conducting a comprehensive evaluation of leading no-reference/blind VQA (BVQA) features and models on a fixed evaluation architecture, yielding new empirical insights on both subjective video quality studies and VQA model design. By employing a feature selection strategy on top of leading VQA model features, we are able to extract 60 of the 763 statistical features used by the leading models to create a new fusion-based BVQA model, which we dub the VIDeo quality EVALuator (VIDEVAL), that effectively balances the trade-off between VQA performance and efficiency. Our experimental results show that VIDEVAL achieves state-of-the-art performance at considerably lower computational cost than other leading models. Our study protocol also defines a reliable benchmark for the UGC-VQA problem, which we believe will facilitate further research on deep learning-based VQA modeling, as well as perceptually-optimized efficient UGC video processing, transcoding, and streaming. To promote reproducible research and public evaluation, an implementation of VIDEVAL has been made available online: https://github.com/vztu/VIDEVAL.
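The feature-selection-and-fusion recipe described above can be sketched as follows; this is a toy example on synthetic data, not the released VIDEVAL code, and the specific selector (univariate F-test) and regressor (RBF SVR) are assumptions. Only the 763-feature pool size and the 60 selected features echo the abstract.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_videos, n_candidate_features = 200, 763   # 763 mirrors the paper's feature pool
X = rng.normal(size=(n_videos, n_candidate_features))
mos = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=n_videos)  # toy MOS labels

# Keep the 60 most informative features, then regress quality on them.
model = make_pipeline(
    SelectKBest(f_regression, k=60),
    StandardScaler(),
    SVR(kernel="rbf"),
)
model.fit(X, mos)
print("training correlation:", round(float(np.corrcoef(model.predict(X), mos)[0, 1]), 3))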

113 citations

Journal ArticleDOI
01 Jan 2021
TL;DR: In this paper, the Rapid and Accurate Video Quality Evaluator (RAPIQUE) model is proposed for video quality prediction, which combines and leverages the advantages of both quality-aware scene statistics features and semantics-aware deep convolutional features.
Abstract: Blind or no-reference video quality assessment of user-generated content (UGC) has become a trending, challenging, heretofore unsolved problem. Accurate and efficient video quality predictors suitable for this content are thus in great demand to achieve more intelligent analysis and processing of UGC videos. Previous studies have shown that natural scene statistics and deep learning features are both sufficient to capture spatial distortions, which contribute to a significant aspect of UGC video quality issues. However, these models are either incapable or inefficient for predicting the quality of complex and diverse UGC videos in practical applications. Here we introduce an effective and efficient video quality model for UGC content, which we dub the Rapid and Accurate Video Quality Evaluator (RAPIQUE), which we show performs comparably to state-of-the-art (SOTA) models but with orders-of-magnitude faster runtime. RAPIQUE combines and leverages the advantages of both quality-aware scene statistics features and semantics-aware deep convolutional features, allowing us to design the first general and efficient spatial and temporal (space-time) bandpass statistics model for video quality modeling. Our experimental results on recent large-scale UGC video quality databases show that RAPIQUE delivers top performances on all the datasets at a considerably lower computational expense. We hope this work promotes and inspires further efforts towards practical modeling of video quality problems for potential real-time and low-latency applications.
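A crude sketch of the fusion idea, combining simple quality-aware bandpass-style statistics with a stand-in for semantics-aware deep features into one video descriptor; the actual RAPIQUE model uses much richer spatial and temporal bandpass statistics and a pretrained CNN backbone, so everything below is an illustrative assumption.

import numpy as np

def bandpass_stats(frame, eps=1e-6):
    # Crude globally normalized statistics of one grayscale frame
    # (a stand-in for proper MSCN/bandpass scene statistics).
    mu, sigma = frame.mean(), frame.std() + eps
    norm = (frame - mu) / sigma
    return np.array([norm.mean(), norm.std(), ((norm - norm.mean()) ** 3).mean()])

rng = np.random.default_rng(0)
frames = rng.random((8, 64, 64))                 # toy 8-frame grayscale clip
nss_feat = np.mean([bandpass_stats(f) for f in frames], axis=0)
deep_feat = rng.normal(size=128)                 # placeholder for CNN features
video_descriptor = np.concatenate([nss_feat, deep_feat])
print("descriptor length:", video_descriptor.shape[0])  # fed to a quality regressor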

100 citations

Journal ArticleDOI
TL;DR: In this article, the VIDeo quality EVALuator (VIDEVAL) is proposed to improve the performance of VQA models for UGC/consumer videos.
Abstract: Recent years have witnessed an explosion of user-generated content (UGC) videos shared and streamed over the Internet, thanks to the evolution of affordable and reliable consumer capture devices, and the tremendous popularity of social media platforms. Accordingly, there is a great need for accurate video quality assessment (VQA) models for UGC/consumer videos to monitor, control, and optimize this vast content. Blind quality prediction of in-the-wild videos is quite challenging, since the quality degradations of UGC videos are unpredictable, complicated, and often commingled. Here we contribute to advancing the UGC-VQA problem by conducting a comprehensive evaluation of leading no-reference/blind VQA (BVQA) features and models on a fixed evaluation architecture, yielding new empirical insights on both subjective video quality studies and objective VQA model design. By employing a feature selection strategy on top of efficient BVQA models, we are able to extract 60 out of 763 statistical features used in existing methods to create a new fusion-based model, which we dub the VIDeo quality EVALuator (VIDEVAL), that effectively balances the trade-off between VQA performance and efficiency. Our experimental results show that VIDEVAL achieves state-of-the-art performance at considerably lower computational cost than other leading models. Our study protocol also defines a reliable benchmark for the UGC-VQA problem, which we believe will facilitate further research on deep learning-based VQA modeling, as well as perceptually-optimized efficient UGC video processing, transcoding, and streaming. To promote reproducible research and public evaluation, an implementation of VIDEVAL has been made available online: https://github.com/vztu/VIDEVAL .
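On the benchmarking side, blind VQA models are conventionally scored by rank and linear correlation between predicted scores and subjective MOS (SROCC and PLCC); below is a minimal sketch with made-up numbers, while the VIDEVAL repository defines the full evaluation protocol.

import numpy as np
from scipy.stats import pearsonr, spearmanr

mos       = np.array([3.1, 4.2, 2.5, 3.8, 4.9, 1.7])   # subjective scores (made up)
predicted = np.array([3.0, 4.0, 2.9, 3.5, 4.7, 2.0])   # model outputs (made up)

srocc, _ = spearmanr(predicted, mos)   # monotonic (rank) agreement
plcc, _ = pearsonr(predicted, mos)     # linear agreement
print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}")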

74 citations

Book ChapterDOI
28 Jun 2013
TL;DR: A novel framework for rapid and accurate segmentation of a cohort of organs that integrates local and global image context through a product rule to simultaneously detect multiple landmarks on the target organs and exploits sparsity in the global context for efficient detection.
Abstract: We propose a novel framework for rapid and accurate segmentation of a cohort of organs. First, it integrates local and global image context through a product rule to simultaneously detect multiple landmarks on the target organs. The global posterior integrates evidence over all volume patches, while the local image context is modeled with a local discriminative classifier. Through non-parametric modeling of the global posterior, it exploits sparsity in the global context for efficient detection. The complete surface of the target organs is then inferred by robust alignment of a shape model to the resulting landmarks and finally deformed using discriminative boundary detectors. Using our approach, we demonstrate efficient detection and accurate segmentation of liver, kidneys, heart, and lungs in challenging low-resolution MR data in less than one second, and of prostate, bladder, rectum, and femoral heads in CT scans in roughly one to three seconds, in both cases with accuracy fairly close to inter-user variability.
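The product-rule combination of local and global context can be illustrated with a minimal sketch: multiply a local classifier score map by a global-context posterior and take the argmax as a landmark location. The volume size and the random score maps below are illustrative assumptions, not the paper's detectors.

import numpy as np

rng = np.random.default_rng(0)
shape = (32, 32, 32)                      # toy volume

local_score = rng.random(shape)           # local discriminative classifier response
global_post = rng.random(shape)           # global context posterior over voxels
global_post /= global_post.sum()          # normalize to a distribution

combined = local_score * global_post      # product rule
landmark = np.unravel_index(np.argmax(combined), shape)
print("detected landmark voxel:", landmark)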

72 citations


Cited by
Journal ArticleDOI
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) was organized in conjunction with the MICCAI 2012 and 2013 conferences; twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients.
Abstract: In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
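The two building blocks quoted above, Dice overlap and majority-vote fusion, can be sketched on synthetic binary masks as follows; the benchmark itself uses a hierarchical majority vote over multi-label tumor sub-regions, so this is only the plain version.

import numpy as np

def dice(a, b, eps=1e-8):
    # Dice overlap between two binary masks.
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

rng = np.random.default_rng(0)
truth = rng.random((64, 64)) > 0.7                                    # toy ground truth
algo_masks = [np.logical_xor(truth, rng.random(truth.shape) > 0.9)    # noisy "algorithm" outputs
              for _ in range(5)]

fused = np.mean(algo_masks, axis=0) >= 0.5                            # plain majority vote
print("per-algorithm Dice:", [round(dice(m, truth), 3) for m in algo_masks])
print("fused Dice:", round(dice(fused, truth), 3))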

3,699 citations

Journal ArticleDOI
TL;DR: A fast and accurate fully automatic method for brain tumor segmentation that is competitive with the state of the art in both accuracy and speed, and introduces a novel cascaded architecture that allows the system to more accurately model local label dependencies.

2,538 citations

Journal ArticleDOI
TL;DR: The state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas is reviewed, giving special attention to recent developments in radiological tumor assessment guidelines.
Abstract: MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines.

765 citations

Journal ArticleDOI
TL;DR: A novel brain tumor segmentation method that integrates fully convolutional neural networks (FCNNs) and Conditional Random Fields (CRFs) in a unified framework to obtain segmentation results with appearance and spatial consistency; the method segments brain images slice-by-slice, much faster than patch-based approaches.

611 citations

Journal ArticleDOI
TL;DR: This work summarises the major issues in this multi-step process, focussing in particular on the challenges of extracting radiomic features from data sets provided by computed tomography, positron emission tomography, and magnetic resonance imaging.
Abstract: Radiomics is an emerging translational field of research aiming to extract mineable high-dimensional data from clinical images. The radiomic process can be divided into distinct steps with definable inputs and outputs, such as image acquisition and reconstruction, image segmentation, feature extraction and qualification, analysis, and model building. Each step needs careful evaluation for the construction of robust and reliable models to be transferred into clinical practice for the purposes of prognosis, non-invasive disease tracking, and evaluation of disease response to treatment. After the definition of texture parameters (shape features; first-, second-, and higher-order features), we briefly discuss the origin of the term radiomics and the methods for selecting the parameters useful for a radiomic approach, including cluster analysis, principal component analysis, random forest, neural network, linear/logistic regression, and others. Reproducibility and clinical value of parameters should first be tested with internal cross-validation and then validated on independent external cohorts. This article summarises the major issues regarding this multi-step process, focussing in particular on the challenges of extracting radiomic features from data sets provided by computed tomography, positron emission tomography, and magnetic resonance imaging.
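As a small illustration of the first-order end of the feature families mentioned above, the sketch below computes a few first-order radiomic features from the voxels inside a region of interest; real pipelines (for example pyradiomics) compute many more shape, first-order, and texture features, and the image, mask, and bin count here are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(loc=100, scale=15, size=(64, 64))   # toy CT/MR slice
roi_mask = np.zeros_like(image, dtype=bool)
roi_mask[20:40, 20:40] = True                          # toy segmented ROI

voxels = image[roi_mask]
hist, _ = np.histogram(voxels, bins=32)
p = hist / hist.sum()
p = p[p > 0]

features = {
    "mean": float(voxels.mean()),
    "std": float(voxels.std()),
    "skewness": float(((voxels - voxels.mean()) ** 3).mean() / voxels.std() ** 3),
    "entropy": float(-(p * np.log2(p)).sum()),
}
print(features)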

579 citations