Open Access · Journal Article · DOI

Feature Extraction from Speech Data for Emotion Recognition

Semiye Demircan, +1 more
- 01 Jan 2014 - 
- Vol. 2, Iss: 1, pp 28-30
TLDR
The pre-processing necessary for emotion recognition from speech data is performed, Mel Frequency Cepstral Coefficients (MFCC) are extracted from the signals, and the resulting features are classified with the k-NN algorithm.
Abstract
In recent years, work requiring human-machine interaction, such as speech recognition and emotion recognition from speech, has been increasing. Beyond recognizing the words themselves, features of the conversation such as melody, emotion, pitch, and emphasis are also studied. Research has shown that meaningful results can be obtained using prosodic features of speech. In this paper we perform the pre-processing necessary for emotion recognition from speech data and extract features from the speech signal. To recognize emotion, Mel Frequency Cepstral Coefficients (MFCC) are extracted from the signals and classified with the k-NN algorithm.
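The pipeline the abstract describes (pre-emphasis, framing and windowing, mel filterbank, DCT to obtain MFCCs, then k-NN classification) can be sketched in plain numpy. This is an illustrative sketch, not the authors' implementation: the frame length, hop size, filterbank size, pre-emphasis coefficient, and k value are all assumptions.

```python
import numpy as np

def mfcc_sketch(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    # Pre-emphasis: boost high frequencies (common speech pre-processing step)
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Split into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(sig) - n_fft) // hop
    frames = np.stack([sig[i * hop:i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    # Per-frame power spectrum
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel-spaced filterbank
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then DCT-II to decorrelate -> cepstral coefficients
    logmel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return logmel @ dct.T  # shape: (n_frames, n_ceps)

def knn_predict(train_X, train_y, x, k=3):
    # Majority vote among the k nearest training vectors (Euclidean distance)
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```

In a setup like the paper's, each utterance would be reduced to a fixed-length vector (e.g. the mean MFCC over frames) before being passed to `knn_predict` alongside labeled training utterances.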


Citations
Journal ArticleDOI

Machine learning

TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Journal ArticleDOI

Speech emotion recognition using deep 1D & 2D CNN LSTM networks

TL;DR: The experimental results show that the designed networks achieve excellent performance on the task of recognizing speech emotion, especially the 2D CNN LSTM network outperforms the traditional approaches, Deep Belief Network (DBN) and CNN on the selected databases.
Journal ArticleDOI

Speech Emotion Recognition Using Deep Learning Techniques: A Review

TL;DR: An overview of Deep Learning techniques is presented and some recent literature where these methods are utilized for speech-based emotion recognition is discussed, including databases used, emotions extracted, contributions made toward speech emotion recognition and limitations related to it.
Proceedings ArticleDOI

Speech based human emotion recognition using MFCC

TL;DR: Mel Frequency Cepstral Coefficient (MFCC) technique is used to recognize emotion of a speaker from their voice and the efficiency was found to be about 80%.
Journal ArticleDOI

A Novel Approach for Classification of Speech Emotions Based on Deep and Acoustic Features

TL;DR: In this article, a hybrid architecture based on acoustic and deep features was proposed to increase the classification accuracy in the problem of speech emotion recognition, which consists of feature extraction, feature selection and classification stages.
References
Journal ArticleDOI

Machine learning

TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Journal ArticleDOI

Emotional speech recognition: Resources, features, and methods

TL;DR: This paper overviews emotional speech recognition with three goals in mind, including providing an up-to-date record of the available emotional speech data collections, and it examines classification techniques that exploit timing information separately from those that ignore it.
Journal ArticleDOI

Speech emotion recognition using hidden Markov models

TL;DR: This paper proposes a text independent method of emotion classification of speech that makes use of short time log frequency power coefficients (LFPC) to represent the speech signals and a discrete hidden Markov model (HMM) as the classifier.
Proceedings Article

Speech emotion recognition using hidden Markov models

TL;DR: This paper introduces a first approach to emotion recognition using RAMSES, the UPC’s speech recognition system, based on standard speech recognition technology using hidden semi-continuous Markov models.
Book

Speech Recognition: Theory and C++ Implementation

TL;DR: HMM Training.