Hoi-Jun Yoo
Researcher at KAIST
Publications: 623
Citations: 13052
Hoi-Jun Yoo is an academic researcher from KAIST. The author has contributed to research in topics: CMOS & Transceiver. The author has an h-index of 56 and has co-authored 588 publications receiving 11072 citations. Previous affiliations of Hoi-Jun Yoo include Samsung and IMEC.
Papers
Journal Article
The Human Body Characteristics as a Signal Transmission Medium for Intrabody Communication
TL;DR: In this paper, the human body characteristics as a signal transmission medium are studied for the application of intrabody communication, and a distributed RC model is developed to analyze the large variation of the channel properties according to the frequency and channel length.
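The distributed RC channel idea can be illustrated with a minimal sketch: the body channel is approximated as cascaded series-R / shunt-C sections combined via ABCD matrices, so attenuation can be evaluated as a function of frequency and channel length. The component values and segment counts below are illustrative assumptions, not values from the paper.

```python
import cmath

def rc_ladder_gain(n_segments, r_per_seg, c_per_seg, r_load, freq_hz):
    """Voltage gain of an n-segment RC ladder terminated by r_load.

    Models the channel as cascaded series-R / shunt-C sections
    (ABCD-matrix cascade); component values are illustrative only.
    """
    s = 2j * cmath.pi * freq_hz
    # ABCD matrix of one series-R, shunt-C section
    a, b = 1 + s * r_per_seg * c_per_seg, r_per_seg
    c, d = s * c_per_seg, 1
    A, B, C, D = 1, 0, 0, 1  # start from the identity matrix
    for _ in range(n_segments):
        A, B, C, D = A * a + B * c, A * b + B * d, C * a + D * c, C * b + D * d
    # Gain into the load: Vout/Vin = ZL / (A*ZL + B)
    return abs(r_load / (A * r_load + B))

# Attenuation grows with both frequency and channel length (more segments)
g_short = rc_ladder_gain(5, 1e3, 10e-12, 10e3, 1e6)
g_long = rc_ladder_gain(20, 1e3, 10e-12, 10e3, 1e6)
```

Longer channels (more RC segments) and higher frequencies both increase attenuation, which is the "large variation of the channel properties" the model is built to capture.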
Journal Article
Neuro-inspired computing chips
Wenqiang Zhang, Bin Gao, Jianshi Tang, Peng Yao, Shimeng Yu, Meng-Fan Chang, Hoi-Jun Yoo, He Qian, Huaqiang Wu +8 more
TL;DR: The development of neuro-inspired computing chips and their key benchmarking metrics are reviewed, a co-design tool chain is provided, and a roadmap for future large-scale chips together with a future electronic design automation tool chain is proposed.
Journal Article
1.25-Gb/s regulated cascode CMOS transimpedance amplifier for Gigabit Ethernet applications
Sung Min Park, Hoi-Jun Yoo +1 more
TL;DR: In this article, a transimpedance amplifier was realized in a 0.6-/spl mu/m digital CMOS technology for Gigabit Ethernet applications. It exploits the regulated cascode (RGC) configuration as the input stage, achieving an effective input transconductance as large as that of Si bipolar or GaAs MESFET devices.
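The benefit of the regulated cascode input stage can be seen from first-order hand analysis: the local feedback amplifier boosts the effective transconductance of the common-gate device by (1 + gm_fb * r_fb), lowering the input resistance by the same factor. The following is a minimal sketch of that textbook formula; the device values are illustrative assumptions, not figures from the paper.

```python
def rgc_input_resistance(gm_cg, gm_fb, r_fb):
    """Approximate input resistance of a regulated-cascode input stage.

    The local feedback amplifier (gm_fb, r_fb) multiplies the effective
    transconductance of the common-gate device gm_cg by (1 + gm_fb * r_fb),
    reducing the input resistance by the same factor. First-order
    hand-analysis formula; the values used below are illustrative.
    """
    return 1.0 / (gm_cg * (1.0 + gm_fb * r_fb))

plain_cg = rgc_input_resistance(5e-3, 0.0, 0.0)   # plain common gate: 1/gm = 200 ohm
rgc      = rgc_input_resistance(5e-3, 5e-3, 2e3)  # RGC: boosted by (1 + 10) -> ~18 ohm
```

The low input resistance pushes the dominant pole at the photodiode capacitance to higher frequency, which is what makes multi-gigabit bandwidth reachable in a low-gm CMOS process.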
Proceedings Article
UNPU: A 50.6TOPS/W unified deep neural network accelerator with 1b-to-16b fully-variable weight bit-precision
TL;DR: UNPU, a unified DNN accelerator with fully-variable (1b-to-16b) weight bit-precision, is presented for energy-optimal operation of DNNs within a mobile environment.
Proceedings Article
14.2 DNPU: An 8.1TOPS/W reconfigurable CNN-RNN processor for general-purpose deep neural networks
TL;DR: DNPU, a highly reconfigurable CNN-RNN processor with high energy efficiency, is presented to support general-purpose deep neural networks (DNNs).