scispace - formally typeset

Xiaofeng Qi

Researcher at Sichuan University

Publications - 9
Citations - 278

Xiaofeng Qi is an academic researcher from Sichuan University. The author has contributed to research in the topics of Computer science and Pattern recognition (psychology). The author has an h-index of 4 and has co-authored 6 publications receiving 106 citations.

Papers
Journal ArticleDOI

Automated diagnosis of breast ultrasonography images using deep neural networks.

TL;DR: An automated breast cancer diagnosis model for ultrasonography images is developed using deep convolutional neural networks with multi-scale kernels and skip connections; it achieves performance comparable to human sonographers and can be applied in clinical scenarios.
Journal ArticleDOI

Automatic diagnosis for thyroid nodules in ultrasound images by deep neural networks

TL;DR: An attention-based feature aggregation network is proposed to automatically integrate the features extracted from multiple images in one examination, utilizing different views of the nodules to improve the performance of recognizing malignant nodules in the ultrasound images.
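The core idea of attention-based feature aggregation can be sketched in a few lines: score each per-image feature vector, softmax the scores into attention weights, and take the weighted sum as the examination-level representation. The sketch below is a minimal illustration assuming features have already been extracted by a CNN backbone; the function name, the linear scoring vector standing in for a small attention network, and the dimensions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def attention_aggregate(features, w):
    """Aggregate per-image feature vectors into one examination-level
    vector using softmax attention weights.

    features: (n_images, d) array, one feature vector per ultrasound view
    w: (d,) scoring vector (illustrative stand-in for an attention network)
    """
    scores = features @ w                          # one scalar score per image
    scores = scores - scores.max()                 # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # attention weights, sum to 1
    return alpha @ features                        # weighted sum over images

# toy example: three views of one nodule, 4-dimensional features
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4))
pooled = attention_aggregate(feats, np.ones(4))
print(pooled.shape)  # (4,)
```

Because the weights are data-dependent, informative views can dominate the pooled vector instead of being averaged away, which is the stated motivation for aggregating multiple views of a nodule.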
Journal ArticleDOI

Automated diagnosis of multi-plane breast ultrasonography images using deep neural networks

TL;DR: This paper formulates the diagnosis of breast cancer on ultrasonography images as a Multiple Instance Learning (MIL) problem, diagnosing a breast nodule by jointly analyzing it on multiple planes, and develops an attention-augmented deep neural network to solve this problem.
Journal ArticleDOI

Deep Attention-Based Imbalanced Image Classification.

TL;DR: Zhang et al. propose a deep attention-based imbalanced image classification (DAIIC) approach that automatically pays more attention to the minority classes in a data-driven manner, employing an attention network and a novel attention-augmented logistic regression function to encapsulate as many minority-class features as possible into the discriminative feature learning process.
Proceedings ArticleDOI

Interactive Audio-text Representation for Automated Audio Captioning with Contrastive Learning

TL;DR: This work proposes a novel automated audio captioning (AAC) system called CLIP-AAC that learns an interactive cross-modality representation from both acoustic and textual information; experiments indicate that both the pre-trained model and contrastive learning contribute to the performance gain of the AAC model.
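The contrastive component referenced above typically takes the form of a symmetric InfoNCE-style loss over paired audio and text embeddings, as popularized by CLIP: matched pairs along the diagonal of a similarity matrix are treated as positives, all other pairs as negatives. The sketch below shows that objective in miniature; the batch shapes and temperature value are assumptions for illustration, not details from the paper.

```python
import numpy as np

def contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired audio/text embeddings.

    audio_emb, text_emb: (n, d) arrays; row i of each is a matched pair.
    """
    # L2-normalize so dot products are cosine similarities
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = (a @ t.T) / temperature  # (n, n) similarity matrix

    def xent(logits):
        # cross-entropy with the diagonal (matched pair) as the target class
        logits = logits - logits.max(axis=1, keepdims=True)
        logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # average the audio->text and text->audio directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
matched = contrastive_loss(a, a.copy())  # aligned pairs give a low loss
```

Minimizing this loss pulls each audio clip toward its own caption embedding and pushes it away from the other captions in the batch, which is what "interactive cross-modality representation" refers to here.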