
Junghyun Kim

Researcher at Electronics and Telecommunications Research Institute

Publications -  56
Citations -  286

Junghyun Kim is an academic researcher from the Electronics and Telecommunications Research Institute. The author has contributed to research topics including Fingerprint (computing) and Signal, has an h-index of 9, and has co-authored 54 publications receiving 271 citations.

Papers
Proceedings Article

Music mood classification model based on arousal-valence values

TL;DR: A music mood classification model based on arousal-valence (AV) values is presented for a music recommendation system. As in previous models, some regions of the AV plane can be identified by representative mood tags, but other mood tags overlap across almost all regions.
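
The core idea, classifying a track by where it falls on the two-dimensional arousal-valence plane, can be sketched in a few lines. The quadrant boundaries and mood tags below are illustrative assumptions, not the model from the paper:

```python
# Minimal sketch of arousal-valence (AV) quadrant tagging.
# Quadrant boundaries and mood tags are illustrative assumptions.

def mood_from_av(arousal: float, valence: float) -> str:
    """Map an (arousal, valence) pair in [-1, 1]^2 to a coarse mood tag."""
    if arousal >= 0 and valence >= 0:
        return "exciting"   # high energy, positive
    if arousal >= 0 and valence < 0:
        return "tense"      # high energy, negative
    if arousal < 0 and valence < 0:
        return "sad"        # low energy, negative
    return "calm"           # low energy, positive

print(mood_from_av(0.7, 0.4))   # exciting
print(mood_from_av(-0.3, 0.6))  # calm
```
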
Journal ArticleDOI

Music detection from broadcast contents using convolutional neural networks with a Mel-scale kernel

TL;DR: The proposed method consistently outperformed the baseline system in all three languages, with F-scores ranging from 86.5% on British data to 95.9% on Korean drama data.
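
As a rough illustration of this class of model, here is a minimal PyTorch sketch of frame-level music detection on a mel spectrogram. It uses an ordinary Conv2d spanning the full mel axis as a stand-in; the paper's Mel-scale kernel, whose size varies with frequency, is not reproduced, and all layer sizes are assumptions:

```python
# Minimal sketch of per-frame music detection on a mel spectrogram.
# A plain Conv2d stands in for the paper's Mel-scale kernel;
# layer sizes and the number of mel bands are assumptions.

import torch
import torch.nn as nn

N_MELS = 80  # assumed number of mel bands

class MusicDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # First conv covers the whole mel axis and slides over time only.
        self.conv = nn.Conv2d(1, 32, kernel_size=(N_MELS, 7), padding=(0, 3))
        self.head = nn.Conv1d(32, 1, kernel_size=1)  # per-frame music logit

    def forward(self, mel):             # mel: (batch, 1, N_MELS, frames)
        h = torch.relu(self.conv(mel))  # (batch, 32, 1, frames)
        h = h.squeeze(2)                # (batch, 32, frames)
        return self.head(h)             # (batch, 1, frames)

x = torch.randn(2, 1, N_MELS, 100)         # two 100-frame clips
probs = torch.sigmoid(MusicDetector()(x))  # per-frame music probability
print(probs.shape)                         # torch.Size([2, 1, 100])
```
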
Patent

Music search apparatus and method using emotion model

TL;DR: In this paper, a music search apparatus using an emotion model includes a music database (DB) for storing sound source data about a plurality of pieces of music and Arousal-Valence (AV) coefficients of the respective pieces.
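
The retrieval step this describes, looking up tracks by their stored AV coefficients, reduces to a nearest-neighbor query over the music DB. A minimal sketch with invented track data (the patent's actual DB schema is not specified here):

```python
# Minimal sketch of AV-based music lookup: each track stores an
# (arousal, valence) coefficient pair, and a query returns the
# nearest tracks by Euclidean distance. Track data is invented.

import math

music_db = [
    ("Track A", 0.8, 0.6),   # (title, arousal, valence)
    ("Track B", -0.5, 0.7),
    ("Track C", -0.6, -0.4),
]

def search(arousal: float, valence: float, k: int = 2):
    """Return the k tracks whose AV coefficients are closest to the query."""
    return sorted(music_db,
                  key=lambda t: math.dist((arousal, valence), (t[1], t[2])))[:k]

print(search(0.7, 0.5))  # Track A ranks first
```
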
Patent

Method of managing group key for secure multicast communication

TL;DR: A group key management method for secure multicast communication includes: creating a tree having a root node, internal nodes and leaf nodes to manage the group keys of a receiver group by a group-key management server; generating user keys for all nodes except the root node on the basis of the Chinese Remainder Theorem; assigning the leaf nodes of the tree to the users of the receiver group; and sending the user keys of the leaf nodes to the corresponding users, as discussed by the authors.
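
The Chinese Remainder Theorem step is the key mechanism: per-user residues of a group key can be combined into a single broadcast value that each holder of a valid secret can invert. Below is a minimal sketch of that flat (tree-less) core with toy secrets; the patent's tree construction and realistic key sizes are not reproduced:

```python
# Minimal sketch of CRT-based group key distribution (tree omitted).
# Each user i holds a private pairwise-coprime modulus m_i and secret s_i.
# The server CRT-combines per-user residues of the group key into one
# broadcast value X; user i recovers the key as (X - s_i) mod m_i.

from math import prod

def crt(residues, moduli):
    """Solve x ≡ r_i (mod m_i) for pairwise-coprime moduli m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m) = modular inverse
    return x % M

# Toy user secrets: (modulus, secret); every modulus exceeds the key.
users = [(101, 17), (103, 55), (107, 89)]

group_key = 42  # must be smaller than every modulus
X = crt([(group_key + s) % m for m, s in users], [m for m, _ in users])

# Each user recovers the group key from the single broadcast value X.
for m, s in users:
    assert (X - s) % m == group_key
print("all users recovered key", group_key)
```
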
Patent

Method and apparatus for searching for recommended music using emotional information of music

TL;DR: In this paper, the authors present a method and apparatus for searching for recommended music using the emotional information of music; more particularly, when a user inputs a search condition, emotional values including a valence value and an arousal value are extracted from it, so that recommended music can be searched for using mixed emotions.
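
The mixed-emotion part of the claim amounts to mapping a multi-tag query onto a single point in the AV plane, which can then drive a nearest-neighbor search like the one sketched above. The tag coordinates and the equal-weight blend below are assumptions for illustration, not the patent's extraction rule:

```python
# Minimal sketch of turning a mixed-emotion query into one AV point.
# Tag coordinates and the equal-weight blend are assumptions.

TAG_AV = {            # assumed (arousal, valence) per mood tag
    "happy": (0.6, 0.8),
    "calm":  (-0.6, 0.5),
    "sad":   (-0.5, -0.7),
}

def mixed_query(tags, weights=None):
    """Blend several mood tags into a single (arousal, valence) query point."""
    weights = weights or [1 / len(tags)] * len(tags)
    a = sum(w * TAG_AV[t][0] for t, w in zip(tags, weights))
    v = sum(w * TAG_AV[t][1] for t, w in zip(tags, weights))
    return a, v

print(mixed_query(["happy", "calm"]))  # (0.0, 0.65): a contented mix
```
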