Homin Kwon
Researcher at Arizona State University
Publications - 21
Citations - 249
Homin Kwon is an academic researcher from Arizona State University. The author's work focuses on wireless sensor networks and signal processing. The author has an h-index of 11 and has co-authored 21 publications receiving 234 citations. Previous affiliations of Homin Kwon include Samsung.
Papers
Journal Article
A Space-Variant Luminance Map based Color Image Enhancement
TL;DR: A color image enhancement method is presented that uses a space-variant luminance map (SVLM) to characterize local brightness; the method yielded the lowest absolute mean brightness error (AMBE) among the compared approaches while remaining low in computational complexity.
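The space-variant idea can be illustrated with a minimal sketch: a blurred luminance channel serves as the local brightness map, and each pixel is gamma-corrected according to that map, so dark regions are brightened more than bright ones. The box-blur map construction, the gamma mapping, and all constants below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def box_blur(channel, k=31):
    # Separable moving-average blur; a stand-in for whatever smoothing
    # the paper uses to build its space-variant luminance map.
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, channel)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)

def svlm_enhance(rgb_uint8, k=31):
    """Illustrative space-variant enhancement: each pixel's gamma depends
    on the blurred (local) luminance. Constants are assumed, not the paper's."""
    rgb = rgb_uint8.astype(np.float64) / 255.0
    lum = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    svlm = box_blur(lum, k)                      # space-variant luminance map
    gamma = 2.0 ** (2.0 * (svlm - 0.5))          # gamma < 1 where the map is dark
    enhanced = np.power(np.clip(lum, 1e-6, 1.0), gamma)
    ratio = enhanced / np.clip(lum, 1e-6, 1.0)   # scale all channels equally
    return (np.clip(rgb * ratio[..., None], 0.0, 1.0) * 255).astype(np.uint8)
```

Because the gamma varies per pixel with the local map rather than being a single global value, a backlit scene gets its shadows lifted without washing out already-bright regions.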
Journal Article
Experiments With Sensor Motes and Java-DSP
TL;DR: A software interface is presented that enables the Java-digital signal processing (J-DSP) visual programming environment to communicate bidirectionally with a wireless sensor network.
Journal Article
An overview of recent advances on distributed and agile sensing algorithms and implementation
Mahesh K. Banavar, Jun Jason Zhang, Bhavana Chakraborty, Homin Kwon, Ying Li, Huaiguang Jiang, Andreas Spanias, Cihan Tepedelenlioglu, Chaitali Chakrabarti, Antonia Papandreou-Suppappola +9 more
TL;DR: An overview of recent work on distributed and agile sensing algorithms and their implementation is provided, including methods for adapting the sensor transmit waveform to match the environment and to optimize a selected performance metric.
Proceedings Article
Real-Time Implementation of a Distributed Voice Activity Detector
TL;DR: A low-complexity voice activity detector and a gender classifier are presented for implementation on Crossbow sensor motes, along with a decision fusion algorithm that runs at the base station.
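The architecture described here, independent low-complexity detection on each mote plus fusion of the one-bit decisions at the base station, can be sketched as follows. The short-time energy threshold and the majority-vote rule are illustrative assumptions; the paper's actual detector and fusion rule may differ.

```python
import numpy as np

def mote_vad(frame, energy_threshold=0.01):
    """Hypothetical per-mote detector: declare speech when the frame's
    short-time energy exceeds a fixed threshold (assumed rule)."""
    return float(np.mean(np.square(frame))) > energy_threshold

def base_station_fusion(decisions):
    """Hypothetical decision fusion at the base station: majority vote
    over the one-bit decisions reported by the motes."""
    return sum(decisions) > len(decisions) / 2

# Three motes observe the same event at different SNRs; two detect speech.
frames = [0.5 * np.ones(160), 0.3 * np.ones(160), np.zeros(160)]
votes = [mote_vad(f) for f in frames]
print(base_station_fusion(votes))  # prints True
```

Sending one decision bit per frame instead of raw audio is what makes this kind of distributed detector practical on bandwidth- and power-constrained motes.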
Proceedings Article
Real-time acoustic monitoring using wireless sensor motes
TL;DR: This paper proposes a voice activity detector and a simple gender classifier for a distributed acoustic sensing system; both rely on low-complexity audio features, and the classifier applies a pre-trained regression tree to label incoming speech by gender.
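One low-complexity audio feature of the kind such a classifier could use is an autocorrelation pitch estimate, followed by a threshold split: typical adult male fundamental frequency sits roughly at 85-180 Hz and female at 165-255 Hz. The single-split stump and the 165 Hz boundary below are assumptions standing in for the paper's pre-trained regression tree, not its actual model.

```python
import numpy as np

def estimate_pitch(frame, fs=8000, fmin=70.0, fmax=300.0):
    """Estimate fundamental frequency by peak-picking the frame's
    autocorrelation within the plausible voice-pitch lag range."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # search lags for fmax..fmin Hz
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

def classify_gender(pitch_hz, split_hz=165.0):
    """Hypothetical single-split stump in place of the paper's
    pre-trained regression tree; the 165 Hz boundary is an assumption."""
    return "female" if pitch_hz >= split_hz else "male"
```

On synthetic input, a 120 Hz tone at 8 kHz sampling classifies as "male" and a 220 Hz tone as "female"; real speech would need voiced-frame selection before the pitch estimate is meaningful.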