
Shlomo Dubnov

Researcher at University of California, San Diego

Publications: 152
Citations: 3183

Shlomo Dubnov is an academic researcher from the University of California, San Diego. The author has contributed to research on topics including Improvisation and Computer science, has an h-index of 29, and has co-authored 140 publications receiving 2797 citations. Previous affiliations of Shlomo Dubnov include Information Technology University and IRCAM.

Papers
Journal Article

Automatic Classification of Musical Instrument Sounds

TL;DR: An exhaustive review of research on automatic classification of sounds from musical instruments presents and discusses different techniques for similarity-based clustering of sounds and for classification into pre-defined instrumental categories.
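As a hedged illustration of classification into pre-defined instrumental categories (not a pipeline taken from the paper), the sketch below summarises each sound as averaged MFCC features and classifies it with k-nearest neighbours; the librosa/scikit-learn calls and file names are assumptions made for the example.

```python
# Sketch: timbre classification with averaged MFCC features and k-NN.
# librosa / scikit-learn usage is illustrative, not taken from the paper.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(path, sr=22050, n_mfcc=13):
    """Summarise one sound file as the mean of its MFCC frames."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                 # one fixed-length vector per sound

def train_instrument_classifier(paths, labels, k=5):
    """Fit a k-NN classifier mapping feature vectors to instrument labels."""
    X = np.stack([mfcc_features(p) for p in paths])
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X, labels)
    return clf

# Hypothetical usage:
# clf = train_instrument_classifier(["flute_01.wav", "cello_01.wav"],
#                                   ["flute", "cello"])
# print(clf.predict([mfcc_features("unknown.wav")]))
```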

Instrumental Gestural Mapping Strategies as Expressivity Determinants in Computer Music Performance

TL;DR: This paper presents ongoing work on gesture mapping strategies and their application to sound synthesis by signal models controlled via a standard MIDI wind controller. Different mapping strategies are considered in order to achieve "fine" control of additive synthesis by coupling originally independent outputs from the wind controller.
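As a rough sketch of the coupling idea (a toy mapping of my own, not the one developed in the paper), two nominally independent MIDI outputs of a wind controller, breath pressure and lip pressure, can jointly shape the amplitudes of additive-synthesis partials:

```python
import numpy as np

def map_gesture_to_partials(breath, lip, n_partials=16):
    """Toy coupled mapping: MIDI breath pressure (0-127) sets the overall
    level, while the spectral roll-off depends on both breath and lip
    pressure, so the two controller outputs are no longer independent."""
    level = breath / 127.0
    brightness = 0.3 + 0.7 * (0.5 * breath + 0.5 * lip) / 127.0
    k = np.arange(1, n_partials + 1)
    amps = level * k ** (-2.0 * (1.0 - brightness))   # flatter spectrum when brighter
    return amps / max(amps.max(), 1e-9)               # normalised partial amplitudes

# Soft, dark tone vs. loud, bright tone:
# print(map_gesture_to_partials(breath=30, lip=20)[:4])
# print(map_gesture_to_partials(breath=120, lip=110)[:4])
```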
Journal Article

Using Factor Oracles for Machine Improvisation

TL;DR: The factor oracle, a data structure proposed by Crochemore et al. for string matching, is presented; its relation to previous models is shown, along with how it can be adapted for learning musical sequences and generating improvisations in a real-time context.
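For reference, on-line factor-oracle construction is compact. The sketch below follows the Allauzen-Crochemore-Raffinot algorithm over an arbitrary symbol sequence, with a naive improvisation walk added as an illustration; the variable names and the walk heuristic are mine, not taken from the paper.

```python
# Sketch of on-line factor-oracle construction (Allauzen, Crochemore, Raffinot)
# plus a naive improvisation walk; names and heuristics are illustrative.
import random

def build_factor_oracle(sequence):
    """Return (transitions, suffix_links); state i is the prefix of length i."""
    n = len(sequence)
    trans = [dict() for _ in range(n + 1)]
    sfx = [None] * (n + 1)                     # suffix link of state 0 stays None
    for i, sym in enumerate(sequence, start=1):
        trans[i - 1][sym] = i                  # forward transition along the input
        k = sfx[i - 1]
        while k is not None and sym not in trans[k]:
            trans[k][sym] = i                  # extra (oracle) transition
            k = sfx[k]
        sfx[i] = 0 if k is None else trans[k][sym]
    return trans, sfx

def improvise(sequence, length, jump_prob=0.2, seed=0):
    """Generate `length` symbols by mostly following the original sequence
    and occasionally jumping back along a suffix link, which lands on a
    position sharing a repeated context (the recombination step)."""
    if not sequence or length <= 0:
        return []
    trans, sfx = build_factor_oracle(sequence)
    rng, state, out = random.Random(seed), 0, []
    while len(out) < length:
        if state < len(sequence) and rng.random() > jump_prob:
            out.append(sequence[state]); state += 1
        elif sfx[state]:
            state = sfx[state]                 # jump to a repeated context
        elif state < len(sequence):
            out.append(sequence[state]); state += 1
        else:
            state = 0
    return out

# print(improvise(list("abaababb"), 12))
```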
Journal Article

Using machine-learning methods for musical style modeling

TL;DR: This research seeks to capture some of the regularity apparent in the composition process by using statistical and information-theoretic tools to analyze musical pieces and generate new works that imitate the style of the great masters.
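As a simplified stand-in for the statistical tools the paper discusses (the actual work uses richer predictors), a fixed-order Markov model over symbolic events already captures the basic analyze-then-generate loop; the note names below are made up for the example.

```python
# Simplified stand-in for style modelling: a fixed-order Markov chain over
# symbolic events (the paper's predictors are richer; this is illustrative).
import random
from collections import defaultdict

def train_markov(corpus, order=2):
    """Collect the observed continuations of every length-`order` context."""
    model = defaultdict(list)
    for piece in corpus:
        for i in range(len(piece) - order):
            model[tuple(piece[i:i + order])].append(piece[i + order])
    return model

def generate(model, seed_context, length, seed=0):
    """Sample a new sequence in the learned style, starting from seed_context."""
    rng, out = random.Random(seed), list(seed_context)
    order = len(seed_context)
    for _ in range(length):
        continuations = model.get(tuple(out[-order:]))
        if not continuations:
            break                              # unseen context: stop the sketch
        out.append(rng.choice(continuations))
    return out

# corpus = [["C4", "E4", "G4", "E4", "C4"], ["C4", "E4", "G4", "C5", "G4"]]
# print(generate(train_markov(corpus), ("C4", "E4"), 8))
```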
Proceedings Article

OMax brothers: a dynamic topology of agents for improvization learning

TL;DR: A multi-agent architecture is described for an improvization-oriented musician-machine interaction system that learns in real time from human performers and is capable of processing real-time audio/video as well as MIDI.
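A minimal sketch of the learner/improviser split in such a system (purely illustrative threading/queue code, not OMax's actual design, agent set, or API):

```python
# Illustrative learner/improviser split with a queue between two threads;
# not OMax's actual design.
import queue
import threading
import time

events_in = queue.Queue()          # stand-in for incoming MIDI/audio symbols
learned = []                       # stand-in for the learned model (e.g. an oracle)

def learner():
    """Consume performer events and grow the shared model."""
    while True:
        ev = events_in.get()
        if ev is None:             # sentinel: performance over
            break
        learned.append(ev)

def improviser(n_steps=5):
    """Periodically emit material based on whatever has been learned so far."""
    for _ in range(n_steps):
        time.sleep(0.1)
        if learned:
            print("improviser plays:", learned[-1])

threading.Thread(target=learner, daemon=True).start()
for note in [60, 62, 64, 65, 67]:  # a performer plays a short phrase
    events_in.put(note)
improviser()
events_in.put(None)
```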