openSMILE: The Munich Versatile and Fast Open-Source Audio Feature Extractor
Citations
1,186 citations
Cites background from "openSMILE: The Munich Versatile and..."
...Even though openSMILE originates from the audio processing domain – as such, it has been featured in the 2010 ACM MM Open Source Software Competition [4] – it has recently been extended with basic video features, and, more importantly, its design is principally modality independent....
[...]
...Details on the ring-buffer architecture can be found in [4] and on the project webpage....
[...]
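The ring-buffer architecture mentioned in the excerpt above can be illustrated with a minimal sketch. The following is a generic single-reader ring buffer in Python, not openSMILE's actual C++ implementation; the class and method names are illustrative only.

```python
class RingBuffer:
    """Fixed-capacity ring buffer: a writer appends frames, a reader
    consumes them incrementally, as in a streaming feature extractor."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.write_pos = 0   # next slot to write
        self.read_pos = 0    # next slot to read
        self.count = 0       # frames currently stored

    def push(self, frame):
        """Store one frame; fail loudly if the reader has fallen behind."""
        if self.count == self.capacity:
            raise OverflowError("buffer full; reader has fallen behind")
        self.buf[self.write_pos] = frame
        self.write_pos = (self.write_pos + 1) % self.capacity
        self.count += 1

    def pop(self):
        """Return the oldest unread frame, or None if the buffer is empty."""
        if self.count == 0:
            return None
        frame = self.buf[self.read_pos]
        self.read_pos = (self.read_pos + 1) % self.capacity
        self.count -= 1
        return frame
```

An extractor built around such a buffer can compute features frame by frame while audio is still arriving, which is what makes this kind of design suitable for on-line, incremental processing.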
694 citations
Cites methods from "openSMILE: The Munich Versatile and..."
...Again, we use TUM’s open-source openSMILE feature extractor [27] and provide extracted feature sets on a per-chunk level (except for SVC)....
[...]
671 citations
Cites background or methods from "openSMILE: The Munich Versatile and..."
...Future studies will very likely address feature importance across databases (Eyben et al., 2010a) and further types of efficient feature selection (Rong et al., 2007; Altun and Polat, 2009)....
[...]
...In the Classifier Sub-Challenge, participants designed their own classifiers and had to use a selection of 384 standard acoustic features, computed with the openSMILE toolkit (Eyben et al., 2009, 2010c) provided by the organisers....
[...]
...The Munich open-source Emotion and Affect Recognition Toolkit (openEAR) [65] is the first of its kind to provide a free open source toolkit that integrates all three necessary components: feature extraction (by the fast openSMILE backend [66]), classifiers, and pre-trained models....
[...]
...A severe issue in cross/multi-corpora studies is the inhomogeneous labelling process, which often leads to inconsistent, incompatible or even distinct emotional classes (Eyben et al., 2010a)....
[...]
...In the Classifier Sub-Challenge, participants designed their own classifiers and had to use a selection of 384 standard acoustic features, computed with the openSMILE toolkit [65, 66] provided by the organisers....
[...]
570 citations
Cites background or methods from "openSMILE: The Munich Versatile and..."
...(Metallinou et al., 2008) and (Eyben et al., 2010a) fused audio and textual modalities for emotion recognition....
[...]
...To compute the features, we use openSMILE (Eyben et al., 2010b), an open-source software that automatically extracts audio features such as pitch and voice intensity....
[...]
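The pitch and voice-intensity features named in the excerpt above can be approximated in a few lines of NumPy. This is a deliberately simple sketch (RMS energy for intensity, an autocorrelation peak for F0), not the algorithms openSMILE actually implements; the function names and the 60–400 Hz search range are assumptions made for illustration.

```python
import numpy as np

def frame_intensity(frame):
    """Root-mean-square energy of one analysis frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def frame_pitch(frame, sample_rate, fmin=60.0, fmax=400.0):
    """Crude F0 estimate: pick the autocorrelation peak whose lag
    falls inside the plausible voice range [fmin, fmax] Hz."""
    frame = frame - np.mean(frame)                      # remove DC offset
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)                        # smallest lag to consider
    hi = min(int(sample_rate / fmin), len(ac) - 1)      # largest lag to consider
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sample_rate / lag
```

For example, on a 1024-sample frame of a 200 Hz sine wave at 16 kHz, `frame_pitch` returns a value close to 200 Hz and `frame_intensity` a value close to 1/√2.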
References
1,009 citations
"openSMILE: The Munich Versatile and..." refers to methods in this paper
...Related feature extraction tools used for speech research include e.g. the Hidden Markov Model Toolkit (HTK) [15], the PRAAT Software [3], the Speech Filing System (SFS), the Auditory Toolbox, a Matlab toolbox by Raul Fernandez [6], the Tracter framework [7], and the SNACK package for the Tcl scripting language....
[...]