Kyu Woong Hwang
Researcher at Qualcomm
Publications - 59
Citations - 932
Kyu Woong Hwang is an academic researcher at Qualcomm. He has contributed to research on topics including context (language use) and audio mining, has an h-index of 18, and has co-authored 59 publications receiving 879 citations. His previous affiliations include the Electronics and Telecommunications Research Institute (ETRI) and KAIST.
Papers
Patent
Emergency handling system using informative alarm sound
TL;DR: In this patent, a method includes receiving an alarm sound that encodes information related to an emergency event and then transmitting, to a server, identification information of the mobile device along with the emergency-event information.
Patent
Augmented reality with sound and geometric analysis
TL;DR: In this paper, a method for responding in an augmented reality (AR) application of a mobile device to an external sound is disclosed, where the mobile device detects a target and receives the external sound from a sound source.
Patent
Method and apparatus for controlling access to applications
TL;DR: In this patent, a method for controlling access to a plurality of applications in an electronic device is disclosed. The method includes receiving a voice command from a speaker for accessing a target application among the plurality of applications, and verifying whether the voice command comes from a user authorized to access the applications based on a speaker model of the authorized user.
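The verification step described above can be sketched as comparing a voice-command embedding against an enrolled speaker model. This is an illustrative assumption, not the patent's actual method; the embedding vectors and the threshold value are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_authorized(command_embedding, speaker_model, threshold=0.85):
    """Grant access only if the voice command's embedding is close
    enough to the enrolled speaker's model (threshold is illustrative)."""
    return cosine_similarity(command_embedding, speaker_model) >= threshold
```

A real system would derive these embeddings from acoustic features of the utterance; the sketch only shows the final accept/reject comparison.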
Patent
Method and apparatus for adjusting detection threshold for activating voice assistant function
TL;DR: In this article, a method for activating a voice assistant function in a mobile device is disclosed, which includes receiving an input sound stream by a sound sensor and determining a context of the mobile device.
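The core idea in the summary above, choosing a detection threshold based on the device's context, can be sketched as a simple lookup. All context names and threshold values here are illustrative assumptions, not taken from the actual patent.

```python
# Hypothetical context-to-threshold mapping: in noisier or occluded
# contexts, require a higher keyword-detection confidence to activate.
CONTEXT_THRESHOLDS = {
    "in_pocket": 0.8,
    "in_car": 0.65,
    "on_desk": 0.5,
}
DEFAULT_THRESHOLD = 0.6

def should_activate(detection_score: float, context: str) -> bool:
    """Activate the voice assistant only if the keyword detection
    score exceeds the threshold selected for the current context."""
    threshold = CONTEXT_THRESHOLDS.get(context, DEFAULT_THRESHOLD)
    return detection_score >= threshold
```

The same detection score can thus trigger activation on a desk but not in a pocket, which is the behavior the adjustable threshold is meant to enable.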
Journal ArticleDOI
Environmental audio scene and activity recognition through mobile-based crowdsourcing
Kyu Woong Hwang, Soo-Young Lee +1 more
TL;DR: A crowdsourcing framework that models the combination of scene, event, and phone context is proposed to address challenges in environmental audio recognition. The authors found that audio scenes, events, and phone context are classified with 85.2%, 77.6%, and 88.9% accuracy, respectively.
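One simple way to combine predictions from separate classifiers, sketched here as an illustration and not as the paper's actual model, is naive late fusion: sum the log-probabilities each classifier assigns to a label and pick the label with the highest combined score.

```python
import math

def fuse_scores(score_dicts):
    """Combine class probabilities from independent classifiers by
    summing log-probabilities; return the highest-scoring label."""
    combined = {}
    for scores in score_dicts:
        for label, p in scores.items():
            combined[label] = combined.get(label, 0.0) + math.log(p)
    return max(combined, key=combined.get)
```

For example, fusing two classifiers that both lean toward the same label reinforces that label, while disagreement pulls the combined score down. The actual paper models richer dependencies among scene, event, and phone context than this independence assumption captures.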