
Song-Nien Tang

Researcher at National Tsing Hua University

Publications: 5
Citations: 199

Song-Nien Tang is an academic researcher from National Tsing Hua University. The author has contributed to research in the topics of orthogonal frequency-division multiplexing and computer science. The author has an h-index of 2, and has co-authored 3 publications receiving 190 citations.

Papers
Journal ArticleDOI

A 2.4-GS/s FFT Processor for OFDM-Based WPAN Applications

TL;DR: Proposes a simplification method that reduces the hardware cost of the multiplication units in the multiple-path FFT approach, and presents a multidata scaling scheme that reduces wordlengths while preserving the signal-to-quantization-noise ratio.
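The wordlength-reduction idea can be illustrated with a generic shared-exponent (block-floating-point-style) sketch: a block of samples shares one right-shift amount so every value fits in a target wordlength. The function name and the scaling rule here are illustrative assumptions, not the paper's exact multidata scaling scheme.

```python
def block_scale(block, target_bits):
    """Right-shift a block of integers by one shared amount so each value
    fits in a signed target_bits wordlength; the shift acts as a common
    exponent for the whole block (block-floating-point style)."""
    limit = 1 << (target_bits - 1)  # magnitude limit for signed target_bits
    shift = 0
    # Grow the shared shift until every sample's magnitude fits the limit.
    while any(abs(x) >> shift >= limit for x in block):
        shift += 1
    return [x >> shift for x in block], shift

# Example: scale a block of three samples down to 6-bit signed values.
scaled, shift = block_scale([100, -50, 3], 6)
```

Sharing one exponent per block avoids storing a per-sample exponent, which is why such schemes can shorten datapath wordlengths with only a modest loss in signal-to-quantization-noise ratio.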
Journal ArticleDOI

An Area- and Energy-Efficient Multimode FFT Processor for WPAN/WLAN/WMAN Systems

TL;DR: The proposed multimode FFT chip is more area- and energy-efficient, providing relatively higher throughput per unit area and per unit power consumption, and it exhibits power scalability across FFT modes.
Proceedings ArticleDOI

An energy-efficient MMAS FFT processor for high-rate WPAN applications

TL;DR: By using a proposed mixed-memory-access-scheduling (MMAS) scheme in the multipath-delay-feedback (MDF) architecture, energy efficiency is improved: the processor provides the same high throughput rate at relatively lower power consumption.
Journal ArticleDOI

Area-Efficient Parallel Multiplication Units for CNN Accelerators With Output Channel Parallelization

TL;DR: Proposes area-efficient parallel multiplication unit (PMU) designs for a CNN accelerator that parallelizes over the output channels of a CNN layer, multiplying a common feature-map pixel by multiple CNN kernel weights in parallel.
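Output-channel parallelization can be sketched as follows: one input feature-map pixel is fetched once and multiplied by one kernel weight per output channel in the same cycle. The function name is an illustrative assumption; the paper's PMU designs concern the hardware sharing inside these multipliers, which this sketch does not model.

```python
def parallel_multiply(pixel, channel_weights):
    """Multiply a single feature-map pixel by one kernel weight per output
    channel; all products share the same pixel operand, so the pixel is
    read from memory only once per group of output channels."""
    return [pixel * w for w in channel_weights]

# Example: one pixel broadcast to four output-channel kernels.
products = parallel_multiply(3, [1, -2, 4, 0])
```

Because the pixel operand is common to every product, hardware in the multiplier array that depends only on the pixel can be shared across output channels, which is the source of the area savings the paper targets.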
Proceedings ArticleDOI

Long-Length Accumulation Unit with Efficient Biasing for Binary Weight CNNs

TL;DR: Proposes an efficient biasing scheme for binary-weight CNN (BCNN) accumulation units based on probabilistic derivation, reducing the operation bit-width by truncating several least significant bits (LSBs) of all operands while compensating for the truncation error.
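The truncate-and-compensate idea can be sketched generically: drop k LSBs from every operand before accumulating, then add one bias equal to the expected total truncation error (assuming uniformly distributed LSBs). The function name and the uniform-LSB assumption are illustrative; the paper derives its bias probabilistically for BCNN accumulation units specifically.

```python
def truncated_accumulate(operands, k):
    """Accumulate non-negative integers after dropping k LSBs from each,
    then add a single bias compensating the expected truncation error.
    Assumes the dropped LSBs are uniformly distributed, so each operand
    loses (2**k - 1) / 2 on average."""
    acc = sum(x >> k for x in operands)       # narrow, truncated additions
    bias = (len(operands) * (2**k - 1)) // 2  # expected total error
    return (acc << k) + bias                  # restore scale, compensate

# Example: accumulate three operands with 2 LSBs truncated.
result = truncated_accumulate([7, 9, 12], 2)
```

The accumulator itself only sees k-bit-narrower operands, which is where the area and energy savings come from; the single bias addition recovers the accumulated mean error at negligible cost.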