Author

J. Alison Noble

Bio: J. Alison Noble is an academic researcher from the University of Oxford. The author has contributed to research in topics including Segmentation & Image segmentation, has an h-index of 42, and has co-authored 291 publications receiving 7,033 citations. Previous affiliations of J. Alison Noble include Washington and Lee University and University College London.


Papers
Journal ArticleDOI
TL;DR: A new state-of-the-art performance for cell counting is set on standard synthetic image benchmarks, and it is shown that FCRNs trained entirely with synthetic data generalise well to real microscopy images, both for cell counting and for detection in the case of overlapping cells.
Abstract: This paper concerns automated cell counting and detection in microscopy images. The approach we take is to use convolutional neural networks (CNNs) to regress a cell spatial density map across the ...

395 citations
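
The density-regression idea described above can be illustrated with a small, self-contained sketch. This is not the paper's FCRN architecture; the network below (TinyFCRN), its layer sizes, and the input shape are illustrative assumptions, written in PyTorch. The key point is that the estimated cell count is the integral (sum) of the predicted density map.

# Minimal sketch of density-map regression for cell counting.
# Illustrative only: layer sizes and names are assumptions, not the paper's FCRN.
import torch
import torch.nn as nn

class TinyFCRN(nn.Module):
    """Fully convolutional regressor: image -> per-pixel cell density."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(64, 1, 1),  # one-channel density map
        )

    def forward(self, x):
        return self.features(x)

model = TinyFCRN()
image = torch.rand(1, 1, 128, 128)      # a grayscale microscopy patch
density = model(image)                  # predicted density map, same spatial size
estimated_count = density.sum().item()  # cell count = integral of the density map

In training, such a network would typically be fitted with a pixel-wise regression loss against density maps built by placing a small Gaussian at each annotated cell centre.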

Journal ArticleDOI
TL;DR: The proposed end-to-end convolutional neural network approach predicts displacement fields that align multiple labelled corresponding structures for individual image pairs during training, while only unlabelled image pairs are used as the network input for inference.

350 citations
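
The label-driven training idea in this TL;DR can be sketched as follows, assuming a PyTorch implementation: a registration network (left abstract here) predicts a dense displacement field from an image pair, the moving labels are warped with that field, and an overlap loss against the fixed labels drives learning; at inference only the images are needed. Function and variable names are illustrative, not the paper's code.

# Sketch of label-driven registration training (illustrative names and shapes).
import torch
import torch.nn.functional as F

def warp(moving, displacement):
    """Warp a (N, 1, H, W) tensor with a dense (N, H, W, 2) displacement field
    expressed in normalised [-1, 1] grid coordinates."""
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing='ij')
    identity = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)  # identity grid
    return F.grid_sample(moving, identity + displacement, align_corners=False)

def soft_dice_loss(warped_labels, fixed_labels, eps=1e-6):
    """Overlap loss between warped moving labels and fixed labels."""
    inter = (warped_labels * fixed_labels).sum()
    union = warped_labels.sum() + fixed_labels.sum()
    return 1.0 - (2.0 * inter + eps) / (union + eps)

# displacement = registration_net(fixed_image, moving_image)   # hypothetical network
# loss = soft_dice_loss(warp(moving_labels, displacement), fixed_labels)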

Journal ArticleDOI

280 citations

Book ChapterDOI
01 Oct 2012
TL;DR: A machine learning-based cell detection method applicable to different modalities is proposed, and state-of-the-art cell detection accuracy is achieved for H&E-stained histology, fluorescence, and phase-contrast images.
Abstract: Cell detection in microscopy images is an important step in the automation of cell-based experiments. We propose a machine learning-based cell detection method applicable to different modalities. The method consists of three steps: first, a set of candidate cell-like regions is identified. Then, each candidate region is evaluated using a statistical model of the cell appearance. Finally, dynamic programming picks a set of non-overlapping regions that match the model. The cell model requires few images with simple dot annotation for training and can be learned within a structured SVM framework. In the reported experiments, state-of-the-art cell detection accuracy is achieved for H&E-stained histology, fluorescence, and phase-contrast images.

195 citations
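
The three-step pipeline in this abstract can be summarised with a short sketch. The scoring below is a plain linear model over candidate features, and a greedy pass stands in for the paper's dynamic programming over candidate regions, so this is a simplification; the inputs (feature matrix, learned weight vector, overlap test) are assumed.

# Simplified sketch of the selection step: score candidates with a learned
# linear model, then keep a high-scoring non-overlapping subset (greedy pass
# in place of the paper's dynamic programming).
import numpy as np

def select_cells(features, w, overlap):
    """features: (n, d) array of per-candidate descriptors;
    w: (d,) learned weight vector; overlap(i, j) -> True if regions i, j overlap."""
    scores = features @ w
    order = np.argsort(-scores)                      # best-scoring candidates first
    chosen = []
    for i in order:
        if scores[i] <= 0:                           # model rejects the rest
            break
        if all(not overlap(i, j) for j in chosen):   # enforce non-overlap constraint
            chosen.append(int(i))
    return chosen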


Cited by
Journal ArticleDOI
TL;DR: This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year, to survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks.

8,730 citations

Journal ArticleDOI
TL;DR: This paper reviews recent as well as classic image registration methods to provide a comprehensive reference source for researchers involved in image registration, regardless of particular application area.

6,842 citations

Journal ArticleDOI
TL;DR: Progress in the field over the last 20 years is reviewed, and some of the challenges that remain for the years to come are suggested.
Abstract: The analysis of medical images has been woven into the fabric of the pattern analysis and machine intelligence (PAMI) community since the earliest days of these Transactions. Initially, the efforts in this area were seen as applying pattern analysis and computer vision techniques to another interesting dataset. However, over the last two to three decades, the unique nature of the problems presented within this area of study has led to the development of a new discipline in its own right. Examples of these problems include: the types of image information that are acquired, the fully three-dimensional image data, the nonrigid nature of object motion and deformation, and the statistical variation of both the underlying normal and abnormal ground truth. In this paper, we look at progress in the field over the last 20 years and suggest some of the challenges that remain for the years to come.

4,249 citations

Book ChapterDOI
07 May 2006
TL;DR: It is shown that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time.
Abstract: Where feature points are used in real-time frame-rate applications, a high-speed feature detector is necessary. Feature detectors such as SIFT (DoG), Harris, and SUSAN are good methods which yield high-quality features; however, they are too computationally intensive for use in real-time applications of any complexity. Here we show that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time. By comparison, neither the Harris detector (120%) nor the detection stage of SIFT (300%) can operate at full frame rate. Clearly a high-speed detector is of limited use if the features produced are unsuitable for downstream processing. In particular, the same scene viewed from two different positions should yield features which correspond to the same real-world 3D locations [1]. Hence the second contribution of this paper is a comparison of corner detectors based on this criterion applied to 3D scenes. This comparison supports a number of claims made elsewhere concerning existing corner detectors. Further, contrary to our initial expectations, we show that despite being principally constructed for speed, our detector significantly outperforms existing feature detectors according to this criterion.

3,828 citations
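
For readers who want to experiment with a detector of this kind, OpenCV ships a high-speed corner detector (FAST) in this spirit. The snippet below is a usage sketch only; the image path and threshold value are placeholders.

# Usage sketch: high-speed corner detection with OpenCV's FAST detector.
import cv2

img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)    # placeholder image path
fast = cv2.FastFeatureDetector_create(threshold=25)    # intensity-difference threshold
keypoints = fast.detect(img, None)                      # detected corner keypoints
print(f'{len(keypoints)} corners detected')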