
Showing papers by "J. Stephen Downie" published in 2007


Proceedings Article
01 Jan 2007
TL;DR: The relationships that mood has with genre, artist, and usage metadata are explored, and a cluster-based approach is recommended that overcomes specific term-related problems by creating a relatively small set of data-derived “mood spaces” that could form the ground-truth for a proposed MIREX “Automated Mood Classification” task.
Abstract: There is a growing interest in developing and then evaluating Music Information Retrieval (MIR) systems that can provide automated access to the mood dimension of music. Mood as a music access feature, however, is not well understood in that the terms used to describe it are not standardized and their application can be highly idiosyncratic. To better understand how we might develop methods for comprehensively developing and formally evaluating useful automated mood access techniques, we explore the relationships that mood has with genre, artist and usage metadata. Statistical analyses of term interactions across three metadata collections (AllMusicGuide.com, epinions.com and Last.fm) reveal important consistencies within the genre-mood and artist-mood relationships. These consistencies lead us to recommend a cluster-based approach that overcomes specific term-related problems by creating a relatively small set of data-derived “mood spaces” that could form the ground-truth for a proposed MIREX “Automated Mood Classification” task.
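As a rough illustration of the kind of term-interaction statistics this abstract describes, the following Python sketch tallies mood–genre co-occurrences from a toy per-artist metadata table. The data and representation are invented for illustration and are not the paper's actual collections or analysis pipeline.

```python
# Minimal sketch (not the paper's pipeline): tally how often mood terms
# co-occur with genre terms across hypothetical per-artist metadata, the
# kind of statistic examined before deriving cluster-based "mood spaces".
from collections import defaultdict

# Hypothetical toy metadata: artist -> (mood terms, genre terms).
artist_metadata = {
    "artist_a": ({"melancholy", "wistful"}, {"folk"}),
    "artist_b": ({"aggressive", "intense"}, {"metal"}),
    "artist_c": ({"melancholy", "intense"}, {"folk", "rock"}),
}

# Count how often each mood term appears alongside each genre term.
cooccurrence = defaultdict(lambda: defaultdict(int))
for moods, genres in artist_metadata.values():
    for mood in moods:
        for genre in genres:
            cooccurrence[mood][genre] += 1

# Normalize per mood term so profiles are comparable across moods of
# different overall frequency.
for mood, genre_counts in cooccurrence.items():
    total = sum(genre_counts.values())
    profile = {g: round(c / total, 2) for g, c in genre_counts.items()}
    print(mood, profile)
```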

125 citations


Proceedings Article
01 Jan 2007
TL;DR: A K-means clustering method is applied to create a simple yet meaningful cluster-based set of high-level mood categories as well as a ground-truth dataset for the evaluation of mood-based Music Information Retrieval (MIR) systems.
Abstract: A standardized mood classification testbed is needed for formal cross-algorithm comparison and evaluation. In this poster, we present a simplification of the problems associated with developing a ground-truth set for the evaluation of mood-based Music Information Retrieval (MIR) systems. Using a dataset derived from Last.fm tags and the USPOP audio collection, we have applied a K-means clustering method to create a simple yet meaningful cluster-based set of high-level mood categories as well as a ground-truth dataset.
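A minimal sketch of the clustering step described above, assuming a binary tag-by-track matrix as the tag representation (an assumption; the paper's exact features are not given here). The tags and matrix are toy examples.

```python
# Minimal sketch: K-means over mood tags represented as binary
# tag-by-track vectors built from hypothetical Last.fm-style data.
import numpy as np
from sklearn.cluster import KMeans

mood_tags = ["sad", "melancholy", "happy", "upbeat", "angry", "aggressive"]

# Hypothetical rows: one vector per tag, columns are tracks (1 = tag applied).
tag_track = np.array([
    [1, 1, 0, 0, 0, 0, 1, 0],   # sad
    [1, 1, 0, 0, 0, 0, 0, 1],   # melancholy
    [0, 0, 1, 1, 0, 0, 0, 0],   # happy
    [0, 0, 1, 1, 1, 0, 0, 0],   # upbeat
    [0, 0, 0, 0, 0, 1, 1, 0],   # angry
    [0, 0, 0, 0, 1, 1, 1, 0],   # aggressive
])

# Group the tags into a small number of mood clusters.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(tag_track)
for tag, label in zip(mood_tags, kmeans.labels_):
    print(f"{tag}: cluster {label}")
```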

46 citations


Proceedings Article
01 Dec 2007
TL;DR: Findings are presented from a series of analyses of human similarity judgments from the Symbolic Melodic Similarity and Audio Music Similarity tasks of the Music Information Retrieval Evaluation eXchange (MIREX) 2006.
Abstract: This paper presents findings of a series of analyses of human similarity judgments from the Symbolic Melodic Similarity and Audio Music Similarity tasks of the Music Information Retrieval Evaluation eXchange (MIREX) 2006. The categorical judgment data generated by the evaluators is analyzed with regard to judgment stability, inter-grader reliability, and patterns of disagreement, both within and between the two tasks. An exploration of this space yields implications for the design of MIREX-like evaluations.
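One common way to examine inter-grader reliability on categorical judgments is pairwise Cohen's kappa; the sketch below applies it to hypothetical BROAD-style labels and is not the paper's exact analysis or the MIREX 2006 data.

```python
# Minimal sketch: pairwise Cohen's kappa over hypothetical categorical
# judgments from three graders on the same candidate list. The labels
# (NS / SS / VS) and values are illustrative only.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

judgments = {
    "grader_1": ["NS", "SS", "VS", "SS", "NS", "VS"],
    "grader_2": ["NS", "SS", "SS", "SS", "NS", "VS"],
    "grader_3": ["SS", "SS", "VS", "NS", "NS", "VS"],
}

# Compute agreement for every pair of graders.
for g1, g2 in combinations(judgments, 2):
    kappa = cohen_kappa_score(judgments[g1], judgments[g2])
    print(f"{g1} vs {g2}: kappa = {kappa:.2f}")
```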

38 citations



Proceedings Article
01 Dec 2007
TL;DR: New feature types have been developed to obtain a more comprehensive understanding of the kinds of information present in queries, including indications of uncertainty, associated use, and the “aboutness” of the underlying musical work.
Abstract: This paper presents preliminary findings based on analyses of user-provided information features found in 566 queries seeking help in identifying particular musical works or artists. Queries were drawn from the answers.google.com (Google Answers) website. The types and frequencies of occurrence of different information features are compared with the results from previous studies of music queries. New feature types have also been developed to obtain a more comprehensive understanding of the kinds of information present in queries, including indications of uncertainty, associated use, and the “aboutness” of the underlying musical work. The presence of erroneous information in the queries is also discussed.
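The feature-frequency comparison described above can be illustrated with a simple tally over hand-coded queries; the feature labels below are hypothetical stand-ins, not the paper's coding scheme or data.

```python
# Minimal sketch: count how often each coded information feature appears
# across a set of (hypothetical) hand-coded queries.
from collections import Counter

coded_queries = [
    {"lyric_fragment", "uncertainty", "associated_use"},
    {"lyric_fragment", "genre", "date"},
    {"associated_use", "aboutness"},
    {"lyric_fragment", "uncertainty"},
]

feature_counts = Counter(f for query in coded_queries for f in query)
total = len(coded_queries)
for feature, count in feature_counts.most_common():
    print(f"{feature}: {count}/{total} queries ({count / total:.0%})")
```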

9 citations


Proceedings Article
01 Dec 2007
TL;DR: The Do-It-Yourself (DIY) web service of the Music Information Retrieval Evaluation eXchange (MIREX) represents a means by which researchers can remotely submit, execute, and evaluate their Music Information Retrieval algorithms against standardized datasets that are not otherwise freely distributable.
Abstract: The Do-It-Yourself (DIY) web service of the Music Information Retrieval Evaluation eXchange (MIREX) represents a means by which researchers can remotely submit, execute, and evaluate their Music Information Retrieval (MIR) algorithms against standardized datasets that are not otherwise freely distributable. Since its inception in 2005 at the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL), MIREX has, to date, required heavy interaction by IMIRSEL team members in the execution, debugging, and validation of submitted code. The goal of the MIREX DIY web service is to put such responsibilities squarely into the hands of submitters, and also to enable the evaluation of algorithms year-round, as opposed to only during annual exchanges.
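To make the submit-execute-evaluate workflow concrete, here is a hypothetical client-side sketch of how a remote submission service of this kind might be used. The base URL, endpoints, response fields, and job states are invented for illustration and do not describe the actual MIREX DIY API.

```python
# Hypothetical submit-and-poll client sketch; nothing here is the real
# MIREX DIY interface.
import time
import requests

BASE = "https://example.org/diy"  # placeholder, not the real service address

# Upload a packaged algorithm for remote execution against a held dataset.
with open("my_mir_algorithm.tar.gz", "rb") as archive:  # placeholder archive
    resp = requests.post(f"{BASE}/submissions", files={"archive": archive})
job_id = resp.json()["job_id"]          # assumed response field

# Poll until the remote run finishes, then fetch the evaluation results.
while True:
    status = requests.get(f"{BASE}/submissions/{job_id}").json()
    if status["state"] in ("finished", "failed"):   # assumed states
        break
    time.sleep(30)

print(status.get("evaluation"))  # e.g. task metrics computed server-side
```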

8 citations


Proceedings ArticleDOI
18 Jun 2007
TL;DR: The influence of task definitions and evaluation metrics on user perceptions of music similarity is discussed, and recommendations are provided for future Music Digital Library/Music Information Retrieval research pertaining to music similarity.
Abstract: This paper presents an analysis of 7,602 similarity judgments collected for the Symbolic Melodic Similarity (SMS) and Audio Music Similarity and Retrieval (AMS) evaluation tasks in the 2006 Music Information Retrieval Evaluation eXchange (MIREX). We discuss the influence of task definitions, as well as evaluation metrics on user perceptions of music similarity, and provide recommendations for future Music Digital Library/Music Information Retrieval research pertaining to music similarity.
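One way to relate coarse and fine-grained similarity judgments of the kind analyzed here is a rank correlation; the sketch below uses hypothetical scores, not the 7,602 MIREX 2006 judgments.

```python
# Minimal sketch: rank correlation between a coarse categorical judgment
# mapped to {0, 1, 2} and a 0-10 fine-grained score for the same
# query/candidate pairs. Values are illustrative only.
from scipy.stats import spearmanr

broad = [0, 0, 1, 1, 1, 2, 2, 2, 0, 2]
fine = [0.5, 1.2, 3.4, 4.1, 5.0, 7.8, 8.3, 9.1, 2.0, 6.9]

rho, p_value = spearmanr(broad, fine)
print(f"Spearman rho between coarse and fine judgments: {rho:.2f} (p={p_value:.3f})")
```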

5 citations


Proceedings ArticleDOI
23 Jul 2007
TL;DR: This demonstration presents the Do-It-Yourself (DIY) web service of the Music Information Retrieval Evaluation eXchange (MIREX), which provides standardized datasets and evaluation frameworks to evaluate Music Information Retrieval (MIR) systems and algorithms.
Abstract: This demonstration presents the Do-It-Yourself (DIY) web service of the Music Information Retrieval Evaluation eXchange (MIREX). As TREC does for text retrieval, MIREX provides standardized datasets and evaluation frameworks to evaluate Music Information Retrieval (MIR) systems and algorithms [1]. However, unlike TREC, where participants are given the datasets and execute their code locally, MIREX datasets cannot be distributed due to copyright restrictions. In previous years, MIREX participants submitted systems to the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL), where they were manually executed and evaluated.

1 citation