Tae-Lim Choi
Researcher at Seoul National University
Publications - 183
Citations - 7863
Tae-Lim Choi is an academic researcher from Seoul National University. The author has contributed to research in topics: Polymerization & Polymer. The author has an h-index of 40 and has co-authored 171 publications receiving 6801 citations. Previous affiliations of Tae-Lim Choi include Samsung & the UPRRP College of Natural Sciences.
Papers
Journal ArticleDOI
A General Model for Selectivity in Olefin Cross Metathesis
TL;DR: Application of this model has allowed for the prediction and development of selective cross metathesis reactions, culminating in unprecedented three-component intermolecular cross metathesis reactions.
Proceedings Article
Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications
TL;DR: In this article, a simple and effective scheme to compress the entire CNN, called one-shot whole network compression, is presented; it consists of three steps: rank selection with variational Bayesian matrix factorization, Tucker decomposition on the kernel tensor, and fine-tuning to recover the accumulated loss of accuracy.
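Since the TL;DR only names the three steps, here is a minimal NumPy sketch of the middle one: a Tucker-2 decomposition of a convolution kernel along its two channel modes via truncated HOSVD. The function name tucker2_compress and the ranks r_in/r_out are illustrative assumptions; in the paper the ranks come from variational Bayesian matrix factorization and accuracy is recovered by fine-tuning, neither of which is shown here.

```python
import numpy as np

def tucker2_compress(W, r_in, r_out):
    """Tucker-2 decomposition of a conv kernel via truncated HOSVD.

    W has shape (out_ch, in_ch, kh, kw); the two channel modes are
    factored so the original layer can be replaced by three smaller ones:
    1x1 conv (in_ch -> r_in), kh x kw conv (r_in -> r_out),
    and 1x1 conv (r_out -> out_ch).
    """
    out_ch, in_ch, kh, kw = W.shape
    # Mode-0 (output-channel) unfolding: keep the top r_out left singular vectors
    U_out = np.linalg.svd(W.reshape(out_ch, -1), full_matrices=False)[0][:, :r_out]
    # Mode-1 (input-channel) unfolding: keep the top r_in left singular vectors
    W1 = np.moveaxis(W, 1, 0).reshape(in_ch, -1)
    U_in = np.linalg.svd(W1, full_matrices=False)[0][:, :r_in]
    # Core tensor: project W onto both factor subspaces
    core = np.einsum('oihw,or,is->rshw', W, U_out, U_in)
    return U_in, core, U_out

# Illustrative usage: compress a 128x64 3x3 kernel and check the relative error
W = np.random.randn(128, 64, 3, 3)
U_in, core, U_out = tucker2_compress(W, r_in=16, r_out=32)
W_approx = np.einsum('rshw,or,is->oihw', core, U_out, U_in)
print(np.linalg.norm(W - W_approx) / np.linalg.norm(W))
```

The three returned factors map directly onto a 1x1 convolution, a kh x kw convolution at the reduced ranks, and another 1x1 convolution, which is where the reduction in parameters and multiply-accumulates comes from.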
Journal ArticleDOI
Synthesis and Activity of Ruthenium Alkylidene Complexes Coordinated with Phosphine and N-Heterocyclic Carbene Ligands
Tina M. Trnka, John P. Morgan, Melanie S. Sanford, Thomas E. Wilhelm, Matthias Scholl, Tae-Lim Choi, Sheng Ding, Michael W. Day, Robert H. Grubbs +8 more
TL;DR: This paper reports the synthesis and characterization of a variety of ruthenium complexes coordinated with phosphine and N-heterocyclic carbene (NHC) ligands, and evaluates the olefin metathesis activity of NHC-coordinated complexes in representative ring-closing metathesis (RCM) and ring-opening metathesis polymerization (ROMP) reactions.
Journal ArticleDOI
Controlled Living Ring-Opening-Metathesis Polymerization by a Fast-Initiating Ruthenium Catalyst
Tae-Lim Choi, Robert H. Grubbs +1 more
Posted Content
Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications
TL;DR: A simple and effective scheme to compress the entire CNN, called one-shot whole network compression, is proposed; it also addresses an important implementation-level issue with 1×1 convolution, a key operation in the inception module of GoogLeNet as well as in CNNs compressed by the proposed scheme.