Byeong Yong Kong
Researcher at Kongju National University
Publications: 34
Citations: 204
Byeong Yong Kong is an academic researcher at Kongju National University. He has contributed to research on topics including computer science and MIMO, has an h-index of 6, and has co-authored 28 publications receiving 131 citations. His previous affiliations include KAIST.
Papers
Journal ArticleDOI
Efficient Sorting Architecture for Successive-Cancellation-List Decoding of Polar Codes
TL;DR: This brief presents an efficient sorting architecture for successive-cancellation-list decoding of polar codes that requires less than 50% of the compare-and-swap units demanded by the area-efficient sorting networks in the literature.
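The compare-and-swap (CAS) unit the TL;DR counts is the elementary building block of hardware sorting networks. As a rough illustration (not the paper's architecture), the sketch below builds a fixed-schedule odd-even transposition network from CAS units and tallies how many it uses; hardware sorters for list decoding aim to shrink exactly this count, since each unit costs a comparator plus multiplexer logic:

```python
def compare_and_swap(v, i, j):
    """Basic CAS unit: orders v[i] <= v[j] in place."""
    if v[i] > v[j]:
        v[i], v[j] = v[j], v[i]

def odd_even_transposition_sort(v):
    """Sort with a fixed schedule of CAS units, counting the units used."""
    n = len(v)
    cas_units = 0
    for stage in range(n):
        for i in range(stage % 2, n - 1, 2):
            compare_and_swap(v, i, i + 1)
            cas_units += 1
    return v, cas_units

# e.g. path metrics of candidate decoding paths, to be ranked
metrics = [7, 3, 9, 1, 5, 8, 2, 6]
sorted_metrics, units = odd_even_transposition_sort(metrics)
```

For successive-cancellation-list decoding only the L smallest metrics need to survive, so specialized partial-sorting networks can discard many of these CAS units, which is the saving the brief targets.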
Proceedings ArticleDOI
Low-complexity symbol detection for massive MIMO uplink based on Jacobi method
Byeong Yong Kong, In-Cheol Park +1 more
TL;DR: Owing to the elimination of matrix inversion and the efficient initial estimate, the proposed algorithm achieves near-optimal error-rate performance with fewer computations than the state-of-the-art schemes.
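The Jacobi method replaces an explicit matrix inversion with a cheap fixed-point iteration, which is why it suits hardware detectors. A minimal numerical sketch (generic linear solver, not the paper's detector) of the idea:

```python
def jacobi_solve(A, b, iters=50):
    """Solve A x = b iteratively, with no matrix inversion (Jacobi method).

    Convergence is guaranteed when A is diagonally dominant, which tends to
    hold for the Gram matrices arising in massive-MIMO uplink detection.
    """
    n = len(b)
    # simple initial estimate from the diagonal, as in diagonal preconditioning
    x = [b[i] / A[i][i] for i in range(n)]
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# 2x2 system: 2x + y = 3, x + 2y = 3  ->  x = y = 1
solution = jacobi_solve([[2.0, 1.0], [1.0, 2.0]], [3.0, 3.0])
```

Each iteration costs only multiply-accumulates and one division per row, so a few iterations are far cheaper than inverting the matrix exactly.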
Journal ArticleDOI
Real-Time SSDLite Object Detection on FPGA
TL;DR: This article proposes an efficient computing system for real-time SSDLite object detection on FPGA devices, which includes novel hardware architecture and system optimization techniques.
Journal ArticleDOI
FIR Filter Synthesis Based on Interleaved Processing of Coefficient Generation and Multiplier-Block Synthesis
Byeong Yong Kong, In-Cheol Park +1 more
TL;DR: An efficient filter-synthesis algorithm is proposed to minimize the number of adders required in the design of finite-impulse-response (FIR) filters, and the concept of sensitivity is developed to reduce the complexity of computing the variable ranges of coefficients.
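Minimizing adders matters because each constant coefficient multiplication in a hardware FIR filter is realized with shifts and adds rather than a multiplier. A toy illustration of the principle (not the paper's synthesis algorithm):

```python
def mul_by_7(x):
    """Multiplier-less constant multiplication: 7x = 8x - x, one adder."""
    return (x << 3) - x

def mul_by_23(x):
    """23x = 16x + 8x - x, two adders if computed naively.

    A multiplier-block synthesizer would look for intermediate terms that
    can be shared across several coefficients, cutting the total adder count.
    """
    return (x << 4) + (x << 3) - x
```

Because shifts are free in hardware (just wiring), the adder count is the dominant cost, and sharing intermediate results across coefficients is where synthesis algorithms like the one above-cited win.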
Journal ArticleDOI
Retrain-Less Weight Quantization for Multiplier-Less Convolutional Neural Networks
TL;DR: An approximate signed-digit (ASD) representation is presented that quantizes the weights of convolutional neural networks (CNNs) to enable multiplier-less CNNs without any retraining. It attains accuracy comparable to that of full-precision models on image classification tasks without going through retraining.
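Representing each weight as a short sum of signed powers of two is what makes the network multiplier-less: every weight multiplication reduces to shifts and adds. A greedy toy sketch of the idea follows; it is illustrative only, and the paper's ASD scheme differs in detail (the exponent range and term count here are assumptions):

```python
def quantize_signed_digits(w, n_terms=2, exps=range(-8, 1)):
    """Greedily approximate w as a sum of n_terms signed power-of-two terms.

    A weight in this form is applied with shifts and adds only, so no
    hardware multiplier is needed at inference time.
    """
    approx, terms = 0.0, []
    for _ in range(n_terms):
        residual = w - approx
        if residual == 0.0:
            break
        # pick the signed power of two closest to the remaining residual
        best = min((s * 2.0 ** e for e in exps for s in (1, -1)),
                   key=lambda t: abs(residual - t))
        approx += best
        terms.append(best)
    return approx, terms

# 0.3 is approximated as 0.25 + 0.0625 = 0.3125
approx, terms = quantize_signed_digits(0.3)
```

Keeping the term count small bounds both the quantization error and the number of adders each weight costs, which is the trade-off such schemes tune.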