
Gregor Urban

Researcher at University of California, Irvine

Publications: 29
Citations: 2110

Gregor Urban is an academic researcher at the University of California, Irvine. His research focuses on deep learning and computer science. He has an h-index of 13 and has co-authored 27 publications receiving 1620 citations. Previous affiliations of Gregor Urban include University Hospital Heidelberg and the University of California, Berkeley.

Papers
Journal Article

Deep Learning Localizes and Identifies Polyps in Real Time With 96% Accuracy in Screening Colonoscopy.

TL;DR: Tests the ability of computer-assisted image analysis using convolutional neural networks (CNNs, a deep learning model for image analysis) to improve polyp detection, a surrogate of the adenoma detection rate (ADR); the approach could increase the ADR and decrease interval colorectal cancers, but requires validation in large multicenter trials.
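
Below is a minimal sketch of the general idea of per-frame polyp classification with a CNN, written in PyTorch. The layer sizes, input resolution, and two-class head are illustrative assumptions only and are not the architecture used in the paper.

```python
# Minimal sketch of per-frame polyp classification with a small CNN (PyTorch).
# NOTE: layer sizes and the 2-class head are illustrative assumptions,
# not the model described in the paper.
import torch
import torch.nn as nn

class FramePolypClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)              # logits: polyp vs. no polyp

# Example: classify a batch of 224x224 RGB colonoscopy frames.
frames = torch.randn(4, 3, 224, 224)
logits = FramePolypClassifier()(frames)
print(logits.shape)  # torch.Size([4, 2])
```
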
Journal Article

Deep MRI brain extraction: A 3D convolutional neural network for skull stripping

TL;DR: A 3D convolutional deep learning architecture is proposed to address shortcomings of existing skull-stripping methods; it is not limited to non-enhanced T1w images and may prove useful for large-scale studies and clinical trials.
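
The sketch below illustrates the basic shape of a 3D fully convolutional network that produces a per-voxel brain/non-brain decision, again in PyTorch. The channel counts and depth are assumptions for illustration, not the architecture from the paper.

```python
# Minimal sketch of a 3D fully convolutional network for voxel-wise
# brain masking (skull stripping) in PyTorch. Channel counts and depth
# are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class BrainMask3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),   # per-voxel brain/non-brain logit
        )

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        return self.net(volume)

# Example: a single-channel MRI sub-volume of 64^3 voxels.
vol = torch.randn(1, 1, 64, 64, 64)
mask_logits = BrainMask3DCNN()(vol)
print(mask_logits.shape)  # torch.Size([1, 1, 64, 64, 64])
```
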
Journal Article

Jet flavor classification in high-energy physics with deep neural networks

TL;DR: This work finds that the highest-level, lowest-dimensionality expert information sacrifices information needed for classification; that the performance of current state-of-the-art taggers can be matched or slightly exceeded by deep-network-based taggers using only track and vertex information; and that classification using only the lowest-level, highest-dimensionality tracking information remains a difficult task for deep networks.
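
As a rough illustration of what a tagger consuming low-level track and vertex information looks like, here is a minimal PyTorch sketch of a deep feedforward classifier over a fixed-length feature vector. The feature count, layer widths, and three flavor classes are assumptions, not the networks studied in the paper.

```python
# Minimal sketch of a deep feedforward jet-flavor tagger over a fixed-length
# vector of track/vertex features (PyTorch). Feature count, widths, and the
# 3 flavor classes (light/charm/bottom) are illustrative assumptions.
import torch
import torch.nn as nn

def make_tagger(n_features: int = 60, n_classes: int = 3) -> nn.Module:
    return nn.Sequential(
        nn.Linear(n_features, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, n_classes),             # flavor logits
    )

jets = torch.randn(32, 60)                     # 32 jets, 60 low-level features each
print(make_tagger()(jets).shape)               # torch.Size([32, 3])
```
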
Proceedings Article

Do Deep Convolutional Nets Really Need to be Deep and Convolutional?

TL;DR: This paper provides the first empirical demonstration that deep convolutional models really need to be both deep and convolutional, even when trained with methods such as distillation that allow small or shallow models to be trained to high accuracy.
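
For context, the sketch below shows the general pattern of logit distillation: a shallow "student" network is trained to match the logits of a deeper "teacher". The layer widths, temperature-free MSE-on-logits objective, and single training step are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of logit distillation in PyTorch: a shallow student is
# trained to reproduce a deeper teacher's logits. Widths and the
# MSE-on-logits objective are illustrative assumptions.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                        nn.Linear(256, 256), nn.ReLU(),
                        nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(100, 64), nn.ReLU(),
                        nn.Linear(64, 10))     # shallower, narrower model

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(128, 100)                      # unlabeled transfer batch
with torch.no_grad():
    target_logits = teacher(x)                 # soft targets from the teacher

opt.zero_grad()
loss = nn.functional.mse_loss(student(x), target_logits)
loss.backward()
opt.step()
print(float(loss))
```
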