
Bram van Ginneken

Researcher at Radboud University Nijmegen

Publications: 441
Citations: 42,580

Bram van Ginneken is an academic researcher from Radboud University Nijmegen. The author has contributed to research in topics including Segmentation & Image segmentation. The author has an h-index of 79 and has co-authored 412 publications receiving 31,252 citations. Previous affiliations of Bram van Ginneken include University of Groningen & Utrecht University.

Papers
Journal Article

A survey on deep learning in medical image analysis

TL;DR: This paper reviews the major deep learning concepts pertinent to medical image analysis, summarizes over 300 contributions to the field (most of which appeared in the last year), and surveys the use of deep learning for image classification, object detection, segmentation, registration, and other tasks.
Journal Article

Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer.

Babak Ehteshami Bejnordi, +73 more
12 Dec 2017
TL;DR: In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints.
Journal Article

Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique

TL;DR: The papers in this special section focus on the technology and applications supported by deep learning, which have proven to be powerful tools for a broad range of computer vision tasks.
Journal Article

Reflectance and texture of real-world surfaces

TL;DR: A new texture representation, the BTF (bidirectional texture function), which captures the variation in texture with illumination and viewing direction, is discussed, and a BTF database with image textures from over 60 different samples, each observed under more than 200 different combinations of viewing and illumination directions, is presented.