Topic

Classifier (UML)

About: Classifier (UML) is a research topic. Over the lifetime, 20,181 publications have been published within this topic, receiving 385,589 citations.


Papers
Proceedings ArticleDOI
18 Jul 2022
TL;DR: In this article, a target domain adaptation speech synthesis network (TDASS) is proposed for synthesizing personalized speech with text-to-speech (TTS) applications; it introduces a self-interested classifier to reduce the influence of non-target speakers.
Abstract: Synthesizing personalized speech with text-to-speech (TTS) applications is in high demand. However, previous TTS models require a large amount of target-speaker speech for training, and recording many utterances from the target speaker is costly and difficult. Data augmentation of the speech is one solution, but it degrades the quality of the synthesized speech. Multi-speaker TTS models have been proposed to address this issue, but the imbalance in the number of utterances per speaker leads to a voice-similarity problem. We propose the Target Domain Adaptation Speech Synthesis Network (TDASS) to address these issues. Built on the backbone of Tacotron2, a high-quality TTS model, TDASS introduces a self-interested classifier to reduce the influence of non-target speakers. In addition, a special gradient reversal layer with different operations for target and non-target speakers is added to the classifier. We evaluate the model on a Chinese speech corpus; the experiments show that the proposed method outperforms the baseline in terms of voice quality and voice similarity.

6 citations
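
The entry above hinges on a gradient reversal layer feeding a speaker classifier. Below is a minimal PyTorch sketch of a standard (DANN-style) gradient reversal layer only; the paper's variant, which applies different operations for target and non-target speakers, is not reproduced, and the tensor shapes and lambd value are illustrative assumptions.

# Minimal sketch of a standard gradient reversal layer (DANN-style), not the
# paper's speaker-dependent TDASS variant. Shapes and lambd are assumptions.
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales and negates gradients in backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the feature extractor.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Illustrative usage: encoder features pass through the reversal before the
# speaker (domain) classifier, pushing the encoder toward speaker-invariance.
feats = torch.randn(8, 256, requires_grad=True)
speaker_logits = torch.nn.Linear(256, 10)(grad_reverse(feats, lambd=0.5))
speaker_logits.sum().backward()
print(feats.grad.shape)  # torch.Size([8, 256]), gradients arrive reversed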

Journal ArticleDOI
TL;DR: In this article, a board-level diagnosis workflow is proposed that utilizes domain adaptation (DA) to transfer the knowledge learned from mature boards to a new board in the ramp-up phase.
Abstract: High integration densities and design complexity make board-level functional fault diagnosis extremely difficult. Machine-learning techniques can identify functional faults with high accuracy, but they require a large volume of data to achieve high prediction accuracy. This drawback limits the effectiveness of traditional machine-learning algorithms for training a model in the early stage of manufacturing, when only a limited amount of fail data and repair records is available. We propose a board-level diagnosis workflow that utilizes domain adaptation (DA) to transfer the knowledge learned from mature boards to a new board in the ramp-up phase. First, based on the requirements of fault diagnosis, we select an appropriate domain-adaptation method to reduce the differences between the mature boards and the new board. Second, these DA methods utilize information from both the mature and the new boards with carefully designed domain-alignment rules and train a functional fault diagnosis classifier. Experimental results using three complex boards in volume production and one new board in the ramp-up phase show that, with the help of DA and the proposed workflow, diagnosis accuracy is improved.

6 citations
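
The abstract above does not name a specific domain-adaptation method, so the sketch below uses plain correlation alignment (CORAL) as one illustrative choice: source features from mature boards are aligned to the new board's covariance before a classifier is trained. The feature matrices, labels, and LogisticRegression model are toy assumptions, not the paper's pipeline.

# Hedged sketch: CORAL-style alignment of mature-board (source) features to the
# new board (target), followed by training an ordinary classifier. All data and
# model choices below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def _sym_matrix_power(M, power):
    # Eigen-decomposition based power of a symmetric positive-definite matrix.
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.power(vals, power)) @ vecs.T

def coral_align(source_X, target_X, eps=1.0):
    # Whiten source features, then re-colour them with the target covariance.
    d = source_X.shape[1]
    cov_s = np.cov(source_X, rowvar=False) + eps * np.eye(d)
    cov_t = np.cov(target_X, rowvar=False) + eps * np.eye(d)
    return source_X @ _sym_matrix_power(cov_s, -0.5) @ _sym_matrix_power(cov_t, 0.5)

# Toy stand-ins: labelled syndrome features from mature boards (source) and a
# small, distribution-shifted feature set from the new board (target).
rng = np.random.default_rng(0)
X_src = rng.normal(size=(500, 16))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)     # hypothetical fault labels
X_tgt = rng.normal(loc=0.3, scale=1.2, size=(60, 16))   # shifted distribution

clf = LogisticRegression(max_iter=1000).fit(coral_align(X_src, X_tgt), y_src)
print(clf.predict(X_tgt)[:10])  # predicted fault classes for the new board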

Journal ArticleDOI
TL;DR: In this article, a maximum-likelihood (ML) classifier is proposed for determining the type of space-frequency block coding (SFBC) used in orthogonal frequency division multiplexing (OFDM) transmissions.
Abstract: The development of intelligent radios in wireless applications is mainly driven by the growing need for higher data rates together with constrained spectrum resources. An intelligent radio is one that can autonomously assess the communication environment and automatically update its communication parameters to achieve optimal performance. Determining the type of space-frequency block coding (SFBC) used in orthogonal frequency division multiplexing (OFDM) transmissions is one of the main tasks of an intelligent receiver. Previous approaches to this problem are restricted to uncoded transmissions, whereas practical systems typically employ error-correcting codes. This study develops a maximum-likelihood (ML) classifier that discriminates among SFBC-OFDM signals using the soft outputs of a channel decoder. The mathematical analysis shows that the maximization of the likelihood function can be carried out with an iterative expectation-maximization (EM) procedure. A channel estimator is also included in the proposed classifier as a vital step. The findings show that the classification performance of the proposed algorithm is considerably better than that of the classical classifiers reported in the literature, at the cost of an acceptable increase in computational complexity.

6 citations
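
The core decision in the classifier above is a maximum-likelihood choice among candidate SFBC hypotheses. The sketch below shows only that argmax decision on a toy Gaussian model; the paper's actual likelihood, built from decoder soft outputs and EM-based channel estimation, is not reproduced, and the signal templates and noise level are assumptions.

# Toy illustration of the ML decision rule: pick the SFBC hypothesis whose
# log-likelihood of the received block is largest. The hypothesis templates
# and Gaussian model below are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(1)

candidate_means = {
    "SM": np.zeros(64),
    "Alamouti-SFBC": 0.5 * np.ones(64),
}
noise_var = 1.0

def log_likelihood(r, mean, var):
    # i.i.d. Gaussian log-likelihood up to an additive constant.
    return -np.sum((r - mean) ** 2) / (2.0 * var)

# Simulate a received block generated under the Alamouti hypothesis.
r = candidate_means["Alamouti-SFBC"] + rng.normal(scale=np.sqrt(noise_var), size=64)

scores = {name: log_likelihood(r, m, noise_var) for name, m in candidate_means.items()}
print(scores, "->", max(scores, key=scores.get))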

Journal ArticleDOI
TL;DR: In this article, the authors present four methods for detecting Arabic spam reviews, with particular focus on the construction and evaluation of an ensemble approach that integrates a rule-based classifier with machine-learning techniques and uses content-based features built on N-grams and negation handling.

6 citations
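
One simple way to realize the ensemble described above is to fuse a rule-based check with an N-gram machine-learning classifier. The sketch below shows such a fusion with scikit-learn; the rule lexicon, OR-style fusion, and English placeholder texts are illustrative assumptions rather than the paper's actual Arabic pipeline.

# Sketch of a rule-based + N-gram ML ensemble for spam-review detection.
# Rules, fusion logic, and toy texts are assumptions for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# N-gram (word uni/bi-gram) machine-learning component.
ml_clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)

SPAM_CUES = {"free", "click", "http"}  # hypothetical rule lexicon (English placeholders)

def rule_based_spam(text):
    # A real rule-based component would also apply negation handling here.
    return any(cue in text.lower() for cue in SPAM_CUES)

def ensemble_predict(texts):
    # Simple OR-style fusion: either component may flag a review as spam (1).
    return [int(p or rule_based_spam(t)) for p, t in zip(ml_clf.predict(texts), texts)]

# Tiny toy training set (1 = spam review, 0 = genuine review).
train_texts = ["great product works well", "click here for a free offer",
               "terrible quality broke fast", "free free free visit http now"]
ml_clf.fit(train_texts, [0, 1, 0, 1])
print(ensemble_predict(["amazing value for money", "get it free now, click the link"]))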

Book ChapterDOI
05 Oct 2011
TL;DR: The results imply new sample-size lower bounds for the common agnostic PAC model - a lower bound of Ω(1/ε²) on the sample complexity of learning deterministic classifiers - as well as novel results about the utility of unlabeled examples in a semi-supervised learning setup.
Abstract: We introduce a new model of learning, Known-Labeling-Classifier-Learning (KLCL). The goal of such learning is to find a low-error classifier from some given target class of predictors when the correct labeling is known to the learner. This learning problem can be viewed as measuring the information conveyed by the identity of input examples, rather than by their labels. Given some class of predictors H, a labeling function, and an i.i.d. unlabeled sample generated by some unknown data distribution, the goal of our learner is to find a classifier in H whose error with respect to the sample-generating distribution and the given labeling function is as low as possible. When the labeling function does not belong to the target class, the error of members of the class (and thus their relative quality as label predictors) varies with the marginal of the underlying data distribution. We prove a trichotomy with respect to the KLCL sample complexity. Namely, we show that for any learnable concept class H, its KLCL sample complexity is either 0 or Θ(1/ε) or Ω(1/ε²). Furthermore, we give a simple combinatorial property of concept classes that characterizes this trichotomy. Our results imply new sample-size lower bounds for the common agnostic PAC model - a lower bound of Ω(1/ε²) on the sample complexity of learning deterministic classifiers - as well as novel results about the utility of unlabeled examples in a semi-supervised learning setup.

6 citations
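
Written out, the trichotomy claimed in the abstract above reads as follows (a LaTeX sketch of the statement only; ε denotes the accuracy parameter and H any learnable concept class, with the characterizing combinatorial property left to the paper):

% Trichotomy of the KLCL sample complexity stated in the abstract above.
\[
  m_{\mathrm{KLCL}}(H,\epsilon) = 0
  \quad\text{or}\quad
  m_{\mathrm{KLCL}}(H,\epsilon) = \Theta\!\left(\tfrac{1}{\epsilon}\right)
  \quad\text{or}\quad
  m_{\mathrm{KLCL}}(H,\epsilon) = \Omega\!\left(\tfrac{1}{\epsilon^{2}}\right).
\]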


Network Information
Related Topics (5)
Fuzzy classification: 27.3K papers, 849.9K citations, 72% related
Feature vector: 48.8K papers, 954.4K citations, 72% related
Natural language: 31.1K papers, 806.8K citations, 72% related
Fuzzy set: 44.4K papers, 1.1M citations, 71% related
Feature selection: 41.4K papers, 1M citations, 71% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2024    4
2023    4,888
2022    10,778
2021    1,251
2020    1,369
2019    1,476