
Benjamin Ma

Researcher at University of Southern California

Publications: 6
Citations: 33

Benjamin Ma is an academic researcher from the University of Southern California. The author has contributed to research in topics: Music information retrieval & Film genre. The author has an h-index of 3 and has co-authored 6 publications receiving 17 citations.

Papers
Proceedings Article (DOI)

A Multimodal View into Music's Effect on Human Neural, Physiological, and Emotional Experience

TL;DR: Music features related to dynamics, register, rhythm, and harmony were found to be particularly helpful in predicting listeners' neural, physiological, and emotional responses, and multivariate time series models with attention mechanisms were effective in predicting emotional ratings.
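As a rough illustration of the attention idea mentioned above (a sketch, not the authors' model), an attention mechanism can pool a multivariate feature time series into one clip-level vector by learning how much each time step matters. The weight vector `w` below is a hypothetical stand-in for learned attention parameters:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(frames, w):
    """Score each time step of a (T, D) feature series by a dot
    product with w, normalize the scores with softmax, and return
    the attention-weighted average over time (a D-dim vector)."""
    scores = softmax(frames @ w)   # (T,) weights summing to 1
    return scores @ frames         # weighted average of the frames

# With w = 0 every time step gets equal weight, so attention
# pooling reduces to plain mean pooling.
frames = np.ones((5, 3))
pooled = attention_pool(frames, np.zeros(3))
```

In a trained model, `w` (or a small scoring network in its place) is learned jointly with the predictor, letting the model emphasize musically salient moments.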
Journal Article (DOI)

A computational lens into how music characterizes genre in film.

TL;DR: In this article, supervised neural network models with various pooling mechanisms were used to predict a film's genre from its soundtrack, comparing handcrafted music information retrieval (MIR) features with VGGish audio embedding features.
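To make the pooling step concrete (a generic sketch, not the paper's architecture), frame-level audio features, whether handcrafted MIR descriptors or VGGish embeddings, arrive as a (num_frames, feat_dim) matrix and must be pooled into a single clip-level vector before classification. The 128-dim frame size below is only an example:

```python
import numpy as np

def mean_pool(frame_features):
    """Average a (num_frames, feat_dim) matrix of frame-level
    audio features over time into one clip-level vector."""
    return frame_features.mean(axis=0)

def max_pool(frame_features):
    """Alternative pooling: per-dimension maximum over time,
    which keeps the strongest activation of each feature."""
    return frame_features.max(axis=0)

# Hypothetical soundtrack clip: 100 frames of 128-dim embeddings.
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 128))
clip_vec = mean_pool(frames)   # shape (128,), fed to a classifier
```

The pooled vector is then passed to a genre classifier; comparing pooling mechanisms amounts to swapping this one function.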
Proceedings Article (DOI)

Learning Shared Vector Representations of Lyrics and Chords in Music

TL;DR: This work represents lyrics and chords in a shared vector space using a phrase-aligned chord-and-lyrics corpus and shows that models that use these shared representations predict a listener’s emotion while hearing musical passages better than models that do not use these representations.
Proceedings Article (DOI)

Predicting Human-Reported Enjoyment Responses in Happy and Sad Music

TL;DR: This work introduces a novel method to identify the auditory features that best predict listener-reported enjoyment ratings: the features are split into qualitative feature groups, predictive models are trained on each group, and their prediction performance is compared.
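The group-wise comparison can be illustrated with a toy ablation (a sketch under stated assumptions, not the paper's models or data): fit a simple least-squares predictor on each feature group and compare explained variance. `group_score` and the synthetic ratings below are hypothetical:

```python
import numpy as np

def group_score(X, y, idx):
    """Fit ordinary least squares on the feature subset `idx` and
    return R^2 -- how well that group alone predicts the ratings."""
    Xg = X[:, idx]
    coef, *_ = np.linalg.lstsq(Xg, y, rcond=None)
    resid = y - Xg @ coef
    return 1.0 - resid.var() / y.var()

# Synthetic stand-in: enjoyment ratings driven by feature 0 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = 2.0 * X[:, 0]

informative = group_score(X, y, [0])    # high R^2
uninformative = group_score(X, y, [3])  # low R^2
```

Ranking the groups by this score identifies which qualitative family of features carries the predictive signal.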
Proceedings Article (DOI)

Loss Function Approaches for Multi-label Music Tagging

TL;DR: In this paper, an ensemble-based convolutional neural network (CNN) model trained using various loss functions for tagging musical genres from audio is presented, and the effect of different loss functions and resampling strategies on prediction performance is investigated.
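As background on the multi-label setup (a generic sketch, not the paper's ensemble CNN or its specific loss variants), the baseline loss for music tagging treats each genre tag as an independent binary prediction and averages binary cross-entropy over tags:

```python
import numpy as np

def multilabel_bce(logits, targets):
    """Mean binary cross-entropy over tags: each tag is an
    independent yes/no prediction, so a clip can carry several
    genre labels at once."""
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid per tag
    eps = 1e-12                             # avoid log(0)
    return -np.mean(targets * np.log(probs + eps)
                    + (1.0 - targets) * np.log(1.0 - probs + eps))

# Confident, correct predictions yield a near-zero loss.
loss = multilabel_bce(np.array([10.0, -10.0]), np.array([1.0, 0.0]))
```

Alternative losses and resampling strategies, the subject of the paper, modify how rare tags are weighted in this objective.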