
David Dagan Feng

Researcher at University of Sydney

Publications -  442
Citations -  9808

David Dagan Feng is an academic researcher from the University of Sydney. The author has contributed to research in topics: Image segmentation & Segmentation. The author has an h-index of 43, has co-authored 433 publications, and has received 7,892 citations. Previous affiliations of David Dagan Feng include Hong Kong Polytechnic University & Information Technology University.

Papers
Proceedings Article

Medical image classification with convolutional neural network

TL;DR: A customized convolutional neural network with a shallow convolution layer is designed to classify lung image patches from patients with interstitial lung disease, and the same architecture can be generalized to other medical image or texture classification tasks.
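A minimal sketch of what such a shallow-CNN patch classifier can look like in PyTorch is given below; the 32x32 patch size, channel counts, and five-class output are illustrative assumptions, not values taken from the paper.

```python
# Sketch of a shallow CNN for image patch classification (assumed sizes).
import torch
import torch.nn as nn

class ShallowCNN(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        # A single convolution layer followed by pooling: a "shallow" design
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),              # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Forward pass on a dummy batch of 8 single-channel 32x32 patches
model = ShallowCNN()
logits = model(torch.randn(8, 1, 32, 32))
print(logits.shape)                       # torch.Size([8, 5])
```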
Book Chapter

Fundamentals of Content-Based Image Retrieval

TL;DR: This chapter introduces some fundamental theories for content-based image retrieval and describes in detail some widely used methods for visual content description.
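As an illustration of the basic retrieval pipeline such a chapter covers (describe visual content, then rank by descriptor similarity), here is a hedged sketch using a global color histogram and Euclidean distance; the random "image database" and all names are placeholders, not content from the chapter.

```python
# Sketch of content-based image retrieval with a color histogram descriptor.
import numpy as np

def color_histogram(image, bins=8):
    """Concatenated per-channel histogram, normalized to sum to 1."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(image.shape[-1])]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def retrieve(query, database, top_k=3):
    """Return indices of the top_k database descriptors closest to the query."""
    dists = np.linalg.norm(database - query, axis=1)
    return np.argsort(dists)[:top_k]

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 64, 64, 3))    # stand-in image database
db = np.stack([color_histogram(img) for img in images])
query = color_histogram(images[42])
print(retrieve(query, db))                               # index 42 ranks first
```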
Journal Article

An Ensemble of Fine-Tuned Convolutional Neural Networks for Medical Image Classification.

TL;DR: A new method for classifying medical images uses an ensemble of different convolutional neural network (CNN) architectures; it achieves higher accuracy than established CNNs and is surpassed only by methods that draw on additional training data.
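A hedged sketch of the general idea (average the softmax outputs of several fine-tuned CNNs) follows; the torchvision backbones and two-class head are illustrative choices, not the exact ensemble used in the paper.

```python
# Sketch of ensembling CNN classifiers by averaging their class probabilities.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # illustrative, e.g. normal vs. abnormal

def make_member(backbone):
    # Swap the final fully connected layer so the backbone can be fine-tuned
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)
    return backbone

# Two different architectures as ensemble members; weights=None keeps the
# example self-contained, while fine-tuning from pretrained weights is the intent
members = [make_member(models.resnet18(weights=None)),
           make_member(models.resnet34(weights=None))]
for m in members:
    m.eval()

def ensemble_predict(x):
    with torch.no_grad():
        probs = [torch.softmax(m(x), dim=1) for m in members]
    return torch.stack(probs).mean(dim=0)   # average the class probabilities

print(ensemble_predict(torch.randn(4, 3, 224, 224)).shape)  # torch.Size([4, 2])
```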
Journal Article

In-Shoe Plantar Pressure Measurement and Analysis System Based on Fabric Pressure Sensing Array

TL;DR: An in-shoe plantar pressure measurement and analysis system based on a textile fabric sensor array is presented; the array is soft, light, highly pressure-sensitive, and has a long service life.
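For illustration only, the sketch below shows one way a single frame from such a pressure sensing array might be summarized (peak pressure and center of pressure); the 16x8 grid, calibration factor, and data are assumptions, not specifications of the described system.

```python
# Sketch of summarizing one frame from a pressure sensing array (assumed layout).
import numpy as np

ROWS, COLS = 16, 8        # assumed sensor grid (heel at row 0, toes at row 15)

def analyze_frame(raw_frame, kpa_per_count=0.5):
    """Convert raw sensor counts to kPa and summarize the pressure distribution."""
    pressure = raw_frame * kpa_per_count
    total = pressure.sum()
    # Center of pressure: pressure-weighted mean sensor coordinate
    rows, cols = np.mgrid[0:ROWS, 0:COLS]
    cop = (np.sum(rows * pressure) / total, np.sum(cols * pressure) / total)
    return {"peak_kpa": pressure.max(), "cop": cop}

frame = np.random.default_rng(1).integers(0, 1024, size=(ROWS, COLS))
print(analyze_frame(frame))
```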
Proceedings Article

Robust saliency detection via regularized random walks ranking

TL;DR: This paper proposes a novel bottom-up saliency detection approach that takes advantage of both region-based features and image details, and introduces regularized random walks ranking to formulate pixel-wise saliency maps from the superpixel-based background and foreground saliency estimations.
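The sketch below illustrates the graph-ranking core that such saliency methods build on, solving (D - alpha*W) f = y over an affinity matrix between nodes (e.g. superpixels); the toy graph is illustrative, and this is not the paper's exact regularized formulation.

```python
# Sketch of graph-based ranking: propagate seed labels over an affinity matrix.
import numpy as np

def ranking_scores(W, seeds, alpha=0.99):
    """Rank nodes of affinity matrix W by relevance to the seed nodes."""
    D = np.diag(W.sum(axis=1))         # degree matrix
    y = np.zeros(W.shape[0])
    y[seeds] = 1.0                     # indicator vector for the seeds
    return np.linalg.solve(D - alpha * W, y)

# Tiny 4-node chain graph; node 0 is treated as a (background) seed
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = ranking_scores(W, seeds=[0])
print(scores)   # scores decay with graph distance from the seed node
```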