scispace - formally typeset
Yi-Chang Lu

Researcher at National Taiwan University

Publications - 87
Citations - 883

Yi-Chang Lu is an academic researcher from National Taiwan University. The author has contributed to research in topics including hardware acceleration and circuit design, has an h-index of 12, and has co-authored 85 publications receiving 811 citations. Previous affiliations of Yi-Chang Lu include Stanford University.

Papers
Journal ArticleDOI

Performance Benefits of Monolithically Stacked 3-D FPGA

TL;DR: The performance benefits of a monolithically stacked three-dimensional (3-D) field-programmable gate array (FPGA), whereby the programming overhead of an FPGA is stacked on top of a standard CMOS layer containing logic blocks and interconnects, are investigated.
Proceedings ArticleDOI

Performance benefits of monolithically stacked 3D-FPGA

TL;DR: The performance benefits of a monolithically stacked three-dimensional (3-D) field-programmable gate array (FPGA), whereby the programming overhead of an FPGA is stacked on top of a standard CMOS layer containing logic blocks and interconnects, are investigated.
Proceedings ArticleDOI

Thermal modeling for 3D-ICs with integrated microchannel cooling

TL;DR: A fast and accurate thermal-wake-aware thermal model for integrated-microchannel 3D ICs is presented; it achieves more than a 400× speedup with only 2.0% error in comparison with a commercial numerical simulation tool.
Journal ArticleDOI

Thermal Modeling and Analysis for 3-D ICs With Integrated Microchannel Cooling

TL;DR: This paper presents a fast and accurate thermal-wake-aware thermal model for integrated-microchannel 3-D ICs and shows that the proposed model can be used to reduce peak temperatures, which is important for 3-D IC designs.
Proceedings ArticleDOI

Deep Co-occurrence Feature Learning for Visual Object Recognition

TL;DR: A new network layer is introduced that extends a convolutional layer to encode the co-occurrence between visual parts detected by its numerous neurons, rather than a few pre-specified parts; the layer is end-to-end trainable.
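To make the co-occurrence idea concrete, the sketch below is a simplified, non-trainable NumPy illustration, not the paper's actual layer: it computes pairwise channel co-occurrences from ReLU feature maps, where the strongest joint response of two channels over all spatial-position pairs factorizes (for non-negative activations) into the product of their per-channel spatial maxima. The function name is illustrative.

```python
import numpy as np

def cooccurrence_features(fmap):
    """Simplified co-occurrence pooling over a conv feature map.

    fmap: array of shape (C, H, W) with non-negative (e.g. ReLU)
    activations. Returns a flattened (C*C,) co-occurrence vector.
    """
    C, H, W = fmap.shape
    flat = fmap.reshape(C, -1)   # (C, H*W) spatial activations per channel
    peaks = flat.max(axis=1)     # strongest response of each channel
    # For non-negative activations, max over all position pairs (p, q) of
    # f_i(p) * f_j(q) equals peaks[i] * peaks[j], i.e. an outer product.
    co = np.outer(peaks, peaks)  # (C, C) channel co-occurrence matrix
    return co.reshape(-1)
```

In the published layer this co-occurrence computation is differentiable and trained jointly with the rest of the network; the sketch only shows the pooling arithmetic on fixed activations.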