
Timothy Greer

Researcher at University of Southern California

Publications - 14
Citations - 69

Timothy Greer is an academic researcher at the University of Southern California. He has contributed to research on the topics of music information retrieval and computer science, has an h-index of 4, and has co-authored 14 publications receiving 47 citations.

Papers
Proceedings ArticleDOI

Sounds of the Human Vocal Tract.

TL;DR: Evidence is provided showing that beatboxers use non-linguistic articulations and airstream mechanisms to produce many sound effects that have not been attested in any language.
Proceedings ArticleDOI

A Multimodal View into Music's Effect on Human Neural, Physiological, and Emotional Experience

TL;DR: Music features related to dynamics, register, rhythm, and harmony were found to be particularly helpful in predicting these human reactions, and multivariate time series models with attention mechanisms were shown to be effective in predicting emotional ratings.
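A minimal sketch of this kind of model, assuming PyTorch: an LSTM reads per-frame music features and a learned attention layer pools the time steps into a single predicted emotional rating. The feature set, layer sizes, and frame counts below are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Weights each time step by a learned relevance score, then sums."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, h):                      # h: (batch, time, hidden)
        weights = torch.softmax(self.score(h), dim=1)
        return (weights * h).sum(dim=1)        # (batch, hidden)

class EmotionRegressor(nn.Module):
    """LSTM over per-frame music features, attention-pooled to one rating."""
    def __init__(self, n_features, hidden_dim=64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden_dim, batch_first=True)
        self.pool = AttentionPooling(hidden_dim)
        self.head = nn.Linear(hidden_dim, 1)   # e.g. a continuous arousal rating

    def forward(self, x):                      # x: (batch, time, n_features)
        h, _ = self.rnn(x)
        return self.head(self.pool(h)).squeeze(-1)

# Toy usage: 8 clips, 200 frames, 4 features (dynamics, register, rhythm, harmony
# stand-ins; the real feature set is described in the paper).
model = EmotionRegressor(n_features=4)
ratings = model(torch.randn(8, 200, 4))       # (8,) predicted ratings
```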
Proceedings ArticleDOI

Comparison of Basic Beatboxing Articulations Between Expert and Novice Artists Using Real-Time Magnetic Resonance Imaging.

TL;DR: Analysis of three common beatboxing sounds found that advanced beatboxers produce stronger ejectives and have greater control over different airstreams than novice beatboxers, which enhances the quality of their sounds.
Journal ArticleDOI

A computational lens into how music characterizes genre in film.

TL;DR: In this article, supervised neural network models with various pooling mechanisms were used to predict a film's genre from its soundtrack, comparing handcrafted music information retrieval (MIR) features with VGGish audio embedding features.
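A rough illustration of the pooling approach, assuming PyTorch: per-frame audio features (VGGish embeddings are 128-dimensional per roughly one-second frame) are pooled over time and passed to a small classifier head. The genre count, hidden sizes, and clip lengths here are made-up placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class GenreClassifier(nn.Module):
    """Pools a variable-length sequence of audio features into genre logits."""
    def __init__(self, feat_dim=128, n_genres=6, pooling="mean"):
        super().__init__()
        self.pooling = pooling
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, n_genres)
        )

    def forward(self, x):                      # x: (batch, time, feat_dim)
        if self.pooling == "mean":
            pooled = x.mean(dim=1)             # average over time
        else:
            pooled, _ = x.max(dim=1)           # "max" pooling alternative
        return self.head(pooled)               # (batch, n_genres)

# Toy usage with VGGish-style 128-dim embeddings over 90 frames; an MIR
# variant would swap feat_dim for the handcrafted feature count.
logits = GenreClassifier()(torch.randn(4, 90, 128))  # (4, 6) genre scores
```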
Proceedings ArticleDOI

Learning Shared Vector Representations of Lyrics and Chords in Music

TL;DR: This work represents lyrics and chords in a shared vector space using a phrase-aligned chord-and-lyrics corpus and shows that models using these shared representations predict a listener's emotional response to musical passages better than models that do not.
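One way such shared representations could be sketched, assuming PyTorch: separate lyric and chord encoders project phrase-aligned inputs into a common normalized space, and a simple cosine loss pulls aligned pairs together. The vocabulary sizes, dimensions, and training objective below are assumptions for illustration; the paper's actual method may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhraseEncoder(nn.Module):
    """Averages token embeddings for a phrase, then projects to the shared space."""
    def __init__(self, vocab_size, shared_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 64)
        self.proj = nn.Linear(64, shared_dim)

    def forward(self, ids):                    # ids: (batch, phrase_len)
        pooled = self.embed(ids).mean(dim=1)   # (batch, 64)
        return F.normalize(self.proj(pooled), dim=-1)

lyric_enc = PhraseEncoder(vocab_size=5000)     # word tokens (size is an assumption)
chord_enc = PhraseEncoder(vocab_size=100)      # chord-symbol tokens

# Phrase-aligned pairs should land near each other in the shared space:
lyrics = torch.randint(0, 5000, (8, 12))       # 8 phrases, 12 words each
chords = torch.randint(0, 100, (8, 4))         # the same 8 phrases, 4 chords each
loss = 1 - F.cosine_similarity(lyric_enc(lyrics), chord_enc(chords)).mean()
loss.backward()                                # pulls aligned pairs together
```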