scispace - formally typeset

Kwanho Kim

Researcher at KAIST

Publications -  35
Citations -  715

Kwanho Kim is an academic researcher from KAIST. His research focuses on Network-on-Chip and SIMD architectures. He has an h-index of 14, having co-authored 34 publications that received 700 citations.

Papers
Journal ArticleDOI

A 201.4 GOPS 496 mW Real-Time Multi-Object Recognition Processor With Bio-Inspired Neural Perception Engine

TL;DR: In the proposed hardware architecture, three recognition tasks (visual perception, descriptor generation, and object decision) are mapped directly to the neural perception engine, 16 SIMD processors comprising 128 processing elements, and the decision processor, respectively, and executed as a pipeline to maximize object-recognition throughput.
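The throughput benefit of mapping the three tasks to dedicated engines and running them as a pipeline can be sketched as follows. This is a minimal illustration, not the chip's actual timing: the per-stage latencies below are made-up placeholders, and only the general principle (pipeline interval = slowest stage, not the sum of stages) comes from the summary above.

```python
# Hypothetical stage latencies (ms) for the three recognition tasks;
# the values are illustrative, not measured from the processor.
stage_ms = {
    "visual_perception": 8.0,       # neural perception engine (assumed timing)
    "descriptor_generation": 12.0,  # 16 SIMD processors / 128 PEs (assumed timing)
    "object_decision": 5.0,         # decision processor (assumed timing)
}

# Without pipelining, each frame pays the sum of all stage latencies.
sequential_ms = sum(stage_ms.values())

# With the stages pipelined on separate engines, a new frame completes
# every interval equal to the slowest stage.
pipelined_ms = max(stage_ms.values())

print(f"sequential frame time:   {sequential_ms} ms")
print(f"pipelined frame interval: {pipelined_ms} ms")
print(f"throughput gain: {sequential_ms / pipelined_ms:.2f}x")
```

The same reasoning explains why dedicating hardware to each task pays off: throughput is bounded by the slowest engine, so balancing stage latencies is what maximizes frame rate.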
Journal ArticleDOI

A 125 GOPS 583 mW Network-on-Chip Based Parallel Processor With Bio-Inspired Visual Attention Engine

TL;DR: A network-on-chip (NoC) based parallel processor is presented for bio-inspired real-time object recognition with a visual attention algorithm; it achieves a peak performance of 125 GOPS and 22 frames/s object recognition while dissipating 583 mW at 1.2 V.
Proceedings ArticleDOI

A 125GOPS 583mW Network-on-Chip Based Parallel Processor with Bio-inspired Visual-Attention Engine

TL;DR: A 125 GOPS NoC-based parallel processor with a bio-inspired visual attention engine (VAE) exploits both data- and object-level parallelism while dissipating 583 mW through packet-based power management.
Proceedings ArticleDOI

Adaptive network-on-chip with wave-front train serialization scheme

TL;DR: An adaptive network-on-chip (NoC) is implemented with self-calibration and dynamic bandwidth control; it automatically calibrates skew between clock domains for reliable mesochronous communication. A new on-chip serialization scheme, wave-front train (WAFT), is used in the NoC chip to realize a high-performance serial link with minimal overhead.
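The basic idea behind any on-chip serialization scheme like the one named above is to carry a wide flit over a narrow link across several cycles. The sketch below shows only that generic split-and-reassemble principle; the 32-bit flit width, 8-bit lane width, and function names are illustrative assumptions, not details of the WAFT design.

```python
# Hypothetical sketch of on-chip serialization: a wide NoC flit is split
# into narrow "waves" sent over successive cycles on a serial link, then
# reassembled at the receiver. Widths are illustrative, not WAFT-specific.
def serialize(flit: int, width: int = 32, lanes: int = 8) -> list[int]:
    assert width % lanes == 0, "flit width must divide evenly into lanes"
    mask = (1 << lanes) - 1
    # Least-significant wave goes first on the link.
    return [(flit >> (i * lanes)) & mask for i in range(width // lanes)]

def deserialize(waves: list[int], lanes: int = 8) -> int:
    flit = 0
    for i, wave in enumerate(waves):
        flit |= wave << (i * lanes)
    return flit

flit = 0xDEADBEEF
waves = serialize(flit)
print([hex(w) for w in waves])          # four 8-bit waves
assert deserialize(waves) == flit       # lossless round trip
```

The design trade-off such schemes navigate is wire count versus cycles per flit: fewer lanes mean fewer wires between routers at the cost of more link cycles per flit.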
Journal ArticleDOI

Familiarity based unified visual attention model for fast and robust object recognition

TL;DR: The unified visual attention model (UVAM), which combines top-down familiarity and bottom-up saliency, is applied to SIFT-based object recognition, reducing recognition times by 2.7x and 2x, respectively, with no reduction in recognition accuracy.
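One way to picture a unified attention model of this kind is as a weighted combination of a bottom-up saliency map and a top-down familiarity map, thresholded to select which image regions receive expensive feature extraction. This is a minimal sketch of that general idea only; the equal weights, the threshold, and the `unified_attention` function are assumptions for illustration, not the UVAM formulation.

```python
import numpy as np

def unified_attention(saliency: np.ndarray,
                      familiarity: np.ndarray,
                      w_bu: float = 0.5,   # bottom-up weight (assumed)
                      w_td: float = 0.5,   # top-down weight (assumed)
                      thresh: float = 0.6  # attention threshold (assumed)
                      ) -> np.ndarray:
    """Combine the two maps and return a boolean mask of attended regions."""
    attention = w_bu * saliency + w_td * familiarity
    return attention >= thresh

# Toy 2x2 maps: each cell is one image tile, values in [0, 1].
saliency    = np.array([[1.0, 0.2],
                        [0.5, 0.1]])
familiarity = np.array([[0.9, 0.1],
                        [0.9, 0.2]])

mask = unified_attention(saliency, familiarity)
print(int(mask.sum()), "of", mask.size, "tiles attended")  # prints: 2 of 4
```

The speedup such a model can deliver comes from this masking step: descriptors (e.g. SIFT features) are computed only on attended tiles, so pruning unpromising regions early cuts recognition time without discarding the regions that matter for accuracy.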