
Haibo Mi

Researcher at National University of Defense Technology

Publications - 9
Citations - 255

Haibo Mi is an academic researcher from the National University of Defense Technology. The author has contributed to research in topics including deep learning and feature extraction, has an h-index of 5, and has co-authored 9 publications receiving 141 citations.

Papers
Journal Article

Deep Convolutional Neural Network-Based Early Automated Detection of Diabetic Retinopathy Using Fundus Image.

TL;DR: This paper explored the use of deep convolutional neural networks for the automatic classification of diabetic retinopathy from color fundus images, obtaining an accuracy of 94.5% on the authors' dataset and outperforming classical approaches.
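A minimal sketch of the kind of convolutional classifier this summary describes, not the authors' architecture: a small PyTorch CNN for fundus images, assuming 3-channel inputs resized to 224x224 and two output classes (diabetic retinopathy vs. healthy). Layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class FundusCNN(nn.Module):
    """Small illustrative CNN for fundus image classification."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of 4 fundus images -> class logits of shape (4, 2)
logits = FundusCNN()(torch.randn(4, 3, 224, 224))
print(logits.shape)
```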
Journal Article

An Adversarial Feature Distillation Method for Audio Classification

TL;DR: A distillation method is proposed that transfers knowledge from well-trained networks to a small network; the method compresses model size while improving audio classification precision, and experiments demonstrate that the small network can achieve better performance.
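A minimal sketch of feature-level knowledge distillation, assuming a frozen teacher and a small student that expose both logits and an intermediate feature map; the paper's adversarial component (a discriminator judging teacher vs. student features) is omitted here, and the loss weights are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_feat, teacher_feat,
                      labels, T: float = 4.0, alpha: float = 0.5, beta: float = 0.5):
    # Hard-label cross-entropy on the student's own predictions.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label KL divergence between temperature-scaled distributions.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    # Feature matching: push student features toward the teacher's.
    fm = F.mse_loss(student_feat, teacher_feat)
    return ce + alpha * kd + beta * fm
```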
Journal Article

General audio tagging with ensembling convolutional neural networks and statistical features.

TL;DR: In this article, an ensemble learning framework is applied to combine statistical features with the outputs of the deep classifiers in order to exploit complementary information, and a sample re-weighting strategy is employed to address the noisy-label problem.
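A minimal sketch of this ensembling idea under assumptions: per-clip CNN output probabilities (cnn_probs) and hand-crafted statistical features (stat_feats) are already available, and a gradient-boosting meta-learner stands in for whatever ensembling model the paper uses. The sample re-weighting rule shown is an illustrative stand-in for the paper's strategy.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_clips, n_classes, n_stats = 500, 10, 32
cnn_probs = rng.random((n_clips, n_classes))   # deep classifier outputs (placeholder)
stat_feats = rng.random((n_clips, n_stats))    # e.g. spectral statistics (placeholder)
labels = rng.integers(0, n_classes, size=n_clips)

# Concatenate complementary information and learn a meta-classifier on top.
X = np.hstack([cnn_probs, stat_feats])

# Sample re-weighting for noisy labels: down-weight clips whose CNN
# prediction disagrees with the (possibly noisy) label.
agreement = cnn_probs[np.arange(n_clips), labels]
sample_weight = 0.5 + 0.5 * agreement

meta = GradientBoostingClassifier().fit(X, labels, sample_weight=sample_weight)
print(meta.score(X, labels))
```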
Proceedings Article

Denoising Convolutional Autoencoder Based B-mode Ultrasound Tongue Image Feature Extraction

TL;DR: Quantitative comparisons among different unsupervised feature extraction approaches show that the denoising convolutional autoencoder (DCAE)-based method outperforms the other methods on both the reconstruction task and the 2010 silent speech interface challenge.
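A minimal sketch of a denoising convolutional autoencoder, assuming single-channel B-mode ultrasound frames resized to 64x64; the layer sizes and noise level are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DCAE(nn.Module):
    """Illustrative denoising convolutional autoencoder."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DCAE()
clean = torch.rand(8, 1, 64, 64)
noisy = clean + 0.1 * torch.randn_like(clean)        # corrupt the input
loss = nn.functional.mse_loss(model(noisy), clean)   # reconstruct the clean frame
loss.backward()
# After training, model.encoder(x) serves as the learned feature extractor.
```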
Journal Article

Multistructure-Based Collaborative Online Distillation

TL;DR: A cross-architecture online distillation approach is proposed that uses an ensemble method to aggregate networks of different structures, forming better teachers than traditional distillation methods and achieving strong improvements in network performance.
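A minimal sketch of cross-architecture online distillation under assumptions: two peer networks of different structures (resnet18 and mobilenet_v2 are stand-ins) are trained together, and the average of their softened predictions acts as the ensemble teacher for every peer. The aggregation rule, temperature, and loss weight are illustrative.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, mobilenet_v2

peers = [resnet18(num_classes=10), mobilenet_v2(num_classes=10)]
opt = torch.optim.SGD([p for m in peers for p in m.parameters()], lr=0.1)

def online_distillation_step(images, labels, T: float = 3.0, alpha: float = 0.5):
    logits = [m(images) for m in peers]
    # Ensemble teacher: average the peers' softened predictions (no gradient).
    with torch.no_grad():
        teacher = torch.stack([F.softmax(l / T, dim=1) for l in logits]).mean(0)
    loss = 0.0
    for l in logits:
        ce = F.cross_entropy(l, labels)                      # supervised loss
        kd = F.kl_div(F.log_softmax(l / T, dim=1), teacher,  # learn from the ensemble
                      reduction="batchmean") * (T * T)
        loss = loss + ce + alpha * kd
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(online_distillation_step(torch.randn(4, 3, 224, 224),
                               torch.randint(0, 10, (4,))))
```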