Wenqiang Du

Researcher at Northeastern University (China)

Publications: 6
Citations: 21

Wenqiang Du is an academic researcher from Northeastern University (China). The author has contributed to research in the topics of Computer science and Engineering. The author has an h-index of 1 and has co-authored 1 publication receiving 1 citation.

Papers
Journal Article

Research on the influence of heat treatment on residual stress of TC4 alloy produced by laser additive manufacturing based on laser ultrasonic technique

TL;DR: In this article, the residual stress in a TC4 deposited specimen is evaluated using laser-generated surface waves, and the results show that the residual stress increases with increasing cooling rate and solution temperature.

CNSRC 2022 Evaluation Plan

TL;DR: The CNSRC 2022 evaluation plan, presented by Dong et al. (Center for Speech and Language Technologies, Tsinghua University, China), Qingyang Hong (School of Informatics, Xiamen University), and Lantian Li (Center for Speech and Language Technologies, Tsinghua University).
Proceedings Article

C-P Map: A Novel Evaluation Toolkit for Speaker Verification

TL;DR: Experiments conducted on representative ASV systems show that the proposed C-P map offers a powerful evaluation toolkit for ASV performance analysis and comparison.
Journal Article

Automatic Speech Recognition for Uyghur, Kazakh, and Kyrgyz: An Overview

TL;DR: The authors presented an overview of speech recognition techniques developed for Uyghur, Kazakh, and Kyrgyz, aiming to highlight the techniques that are specifically effective for each language or generally effective for all three, and to identify the important factors in advancing speech recognition research for low-resource languages.
Journal Article

M2ASR-KIRGHIZ: A Free Kirghiz Speech Database and Accompanied Baselines

TL;DR: The M2ASR project, as discussed by the authors, published a free Kirghiz speech database together with associated language resources, comprising 128 hours of speech data from 163 speakers with corresponding transcriptions.