scispace - formally typeset
Author

Ivan Turkalj

Bio: Ivan Turkalj is an academic researcher. The author has contributed to research in the topics of support vector machines and timbre. The author has an h-index of 1, having co-authored 1 publication that has received 3 citations.

Papers
Proceedings Article
01 Jan 2012
TL;DR: It turns out that a selection of 505 features out of the full feature set of 1155 elements only reduces the recognition rate of a linear SVM from 82% to 78%, and with the use of a polynomial instead of a linear kernel the recognition rate with the reduced feature set can even be increased to 84%.
Abstract: A series of experiments on the automatic classification of classical guitar sounds with support vector machines has been carried out to investigate the relevance of the features and to minimise the feature set needed for successful classification. The features used for classification were the time series of the partial tone amplitudes, the time series of the MFCCs, and the energy distribution of the nontonal percussive sound produced in the attack phase of the tone. Furthermore, the influence of sound parameters such as timbre, player, fret position, and string number on the recognition rate is investigated. Finally, several nonlinear kernels are compared in their classification performance. It turns out that a selection of 505 features out of the full feature set of 1155 elements only reduces the recognition rate of a linear SVM from 82% to 78%. With the use of a polynomial instead of a linear kernel, the recognition rate with the reduced feature set can even be increased to 84%.
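The experiment described above can be illustrated with a minimal sketch: select a subset of features, then compare a linear and a polynomial-kernel SVM on the reduced set. This is not the authors' code; the data here is synthetic, and only the dimensions (1155 features, 505 selected) mirror the paper — the selection method (univariate F-test) and all other settings are assumptions for illustration.

```python
# Illustrative sketch on synthetic data, not the paper's implementation.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for 1155-dimensional guitar feature vectors.
X, y = make_classification(n_samples=400, n_features=1155,
                           n_informative=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Keep the 505 highest-scoring features, mirroring the paper's reduced set
# (the paper's actual selection criterion may differ).
selector = SelectKBest(f_classif, k=505).fit(X_tr, y_tr)
X_tr_red, X_te_red = selector.transform(X_tr), selector.transform(X_te)

for name, clf, (a, b) in [
    ("linear, full set", SVC(kernel="linear"), (X_tr, X_te)),
    ("linear, reduced", SVC(kernel="linear"), (X_tr_red, X_te_red)),
    ("poly, reduced", SVC(kernel="poly", degree=3), (X_tr_red, X_te_red)),
]:
    clf.fit(a, y_tr)
    print(f"{name}: accuracy {clf.score(b, y_te):.2f}")
```

On real data, the interesting comparison is the same one the paper reports: how much accuracy the linear kernel loses on the reduced set, and whether a nonlinear kernel recovers it.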

3 citations


Cited by
Journal ArticleDOI
09 Jul 2019
TL;DR: A model called technique-embedded note tracking (TENT) uses the result of playing technique detection to inform note event estimation; it recognizes complicated techniques in monophonic guitar solos and improves the F-score of note event estimation by 14.7% compared to an existing method.
Abstract: The employment of playing techniques such as string bend and vibrato in electric guitar performance makes it difficult to transcribe the note events using general note tracking methods. These methods analyze the contour of fundamental frequency computed from a given audio signal, but they do not consider the variation in the contour caused by the playing techniques. To address this issue, we present a model called technique-embedded note tracking (TENT) that uses the result of playing technique detection to inform note event estimation. We evaluate the proposed model on a dataset of 42 unaccompanied lead guitar phrases. Our experiments showed that TENT can nicely recognize complicated skills in monophonic guitar solos and improve the F-score of note event estimation by 14.7% compared to an existing method. For reproducibility, we share the Python source code of our implementation of TENT at the following GitHub repo: https://github.com/srviest/SoloLa .

19 citations

Book ChapterDOI
15 Oct 2013
TL;DR: This paper presents an accurate and robust playing mode classifier for guitar audio signals that distinguishes between three modes routinely used in jazz improvisation: bass, solo melodic improvisation, and chords.
Abstract: When they improvise, musicians typically alternate between several playing modes on their instruments. Guitarists in particular, alternate between modes such as octave playing, mixed chords and bass, chord comping, solo melodies, walking bass, etc. Robust musical interactive systems call for a precise detection of these playing modes in real-time. In this context, the accuracy of mode classification is critical because it underlies the design of the whole interaction taking place. In this paper, we present an accurate and robust playing mode classifier for guitar audio signals. Our classifier distinguishes between three modes routinely used in jazz improvisation: bass, solo melodic improvisation, and chords. Our method uses a supervised classification technique applied to a large corpus of training data, recorded with different guitars (electric, jazz, nylon-strings, electro-acoustic). We detail our method and experimental results over various data sets. We show in particular that the performance of our classifier is comparable to that of a MIDI-based classifier. We describe the application of the classifier to live interactive musical systems and discuss the limitations and possible extensions of this approach.

7 citations

Proceedings ArticleDOI
28 Jul 2015
TL;DR: In this paper, the authors examined changes in pulse wave and SpO2 while subjects listened to music, using three classical pieces believed to evoke fear, happiness, and sadness, and found features indicating a high correlation between the two signals.
Abstract: This paper presents the results of examining changes in pulse wave and SpO2 while listening to music. This is a fundamental study whose purpose is to confirm whether emotion estimation can be performed using pulse wave and SpO2. Using three classical music pieces believed to evoke feelings of fear, happiness, and sadness, pulse wave and SpO2 were recorded from 10 subjects. The analysis found features indicating a high correlation.
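The kind of correlation analysis mentioned above can be sketched as a Pearson correlation between two physiological time series. This is not the study's code; the signals, sampling rate, and coupling strength below are all synthetic assumptions for illustration.

```python
# Illustrative sketch with synthetic signals, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 600)  # 60 s at an assumed 10 Hz sampling rate

# Synthetic "pulse wave": a ~1.2 Hz oscillation plus measurement noise.
pulse = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
# Synthetic "SpO2" trace, correlated with the pulse wave by construction.
spo2 = 0.8 * pulse + 0.2 * rng.standard_normal(t.size)

# Pearson correlation coefficient between the two signals.
r = np.corrcoef(pulse, spo2)[0, 1]
print(f"Pearson r = {r:.2f}")
```

A high r here simply reflects how the synthetic signals were constructed; on real recordings, the coefficient quantifies how strongly the two measurements co-vary.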

1 citation