Institution
Nanyang Technological University
Education • Singapore, Singapore
About: Nanyang Technological University is an education organization based in Singapore, Singapore. It is known for its research contributions in the topics of Computer science and Catalysis. The organization has 48003 authors who have published 112815 publications, receiving 3294199 citations. The organization is also known as NTU and Universiti Teknologi Nanyang.
Topics: Computer science, Catalysis, Graphene, Artificial neural network, Laser
Papers published on a yearly basis
Papers
25 Feb 2006 • Materials Science and Engineering A: Structural Materials: Properties, Microstructure and Processing
TL;DR: In this paper, the application of the wet impregnation technique in the development of Ni-free Cu-based composite anodes, doped CeO2-impregnated (La, Sr)MnO3 (LSM) cathodes and Ni anodes was discussed.
Abstract: Development of solid oxide fuel cells (SOFC) for operation at intermediate temperatures of 600–800 °C with hydrocarbon fuels requires a cathode and anode with high electrocatalytic activity for O2 reduction and direct oxidation of hydrocarbon fuels, respectively. Wet impregnation is a well-known method in the development of heterogeneous catalysts. Surprisingly, very few studies have concentrated on applying the wet impregnation technique to deposit nano-sized particles into the established electrode structure of the SOFC. This paper reviews and discusses the progress in the application of the wet impregnation technique in the development of Ni-free Cu-based composite anodes, doped CeO2-impregnated (La, Sr)MnO3 (LSM) cathodes and Ni anodes, Co3O4-infiltrated cathodes and precious metal-impregnated electrodes. Enhancement in the electrode microstructure and cell performance is substantial, showing the great potential of the wet impregnation method in the development of high-performance, nano-structured electrodes with specific functions. However, the long-term stability of the impregnated electrode structure needs to be addressed.
431 citations
TL;DR: This paper presents a family of subspace learning algorithms based on a new form of regularization, which transfers the knowledge gained in training samples to testing samples, and minimizes the Bregman divergence between the distribution of training samples and that of testing samples in the selected subspace.
Abstract: The regularization principles [31] lead to approximation schemes that deal with various learning problems, e.g., the regularization of the norm in a reproducing kernel Hilbert space for the ill-posed problem. In this paper, we present a family of subspace learning algorithms based on a new form of regularization, which transfers the knowledge gained in training samples to testing samples. In particular, the new regularization minimizes the Bregman divergence between the distribution of training samples and that of testing samples in the selected subspace, so it boosts the performance when training and testing samples are not independent and identically distributed. To test the effectiveness of the proposed regularization, we introduce it to popular subspace learning algorithms, e.g., principal components analysis (PCA) for cross-domain face modeling; and Fisher's linear discriminant analysis (FLDA), locality preserving projections (LPP), marginal Fisher's analysis (MFA), and discriminative locality alignment (DLA) for cross-domain face recognition and text categorization. Finally, we present experimental evidence on both face image data sets and text data sets, suggesting that the proposed Bregman divergence-based regularization is effective to deal with cross-domain learning problems.
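The Bregman divergence underlying this regularization has a simple closed form once a convex generator is chosen. As a minimal numpy sketch (not the paper's implementation; the function names and the choice of the squared-Euclidean generator are illustrative assumptions):

```python
import numpy as np

def bregman_divergence(p, q, F, grad_F):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    return F(p) - F(q) - np.dot(grad_F(q), p - q)

# With the squared-Euclidean generator F(x) = ||x||^2, the Bregman
# divergence reduces to the squared Euclidean distance ||p - q||^2.
F = lambda x: np.dot(x, x)
grad_F = lambda x: 2 * x

p = np.array([1.0, 2.0])
q = np.array([0.0, 0.0])
print(bregman_divergence(p, q, F, grad_F))  # 5.0 = ||p - q||^2
```

Other generators recover other familiar divergences (e.g., the negative-entropy generator yields the KL divergence), which is what makes the family flexible enough to compare training and testing distributions in a learned subspace.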
430 citations
02 Aug 2019
TL;DR: Self Attention Distillation (SAD), as discussed by the authors, is a knowledge distillation approach that allows a model to learn from itself and gain substantial improvement without any additional supervision or labels.
Abstract: Training deep models for lane detection is challenging due to the very subtle and sparse supervisory signals inherent in lane annotations. Without learning from much richer context, these models often fail in challenging scenarios, e.g., severe occlusion, ambiguous lanes, and poor lighting conditions. In this paper, we present a novel knowledge distillation approach, i.e., Self Attention Distillation (SAD), which allows a model to learn from itself and gains substantial improvement without any additional supervision or labels. Specifically, we observe that attention maps extracted from a model trained to a reasonable level would encode rich contextual information. The valuable contextual information can be used as a form of 'free' supervision for further representation learning through performing top-down and layer-wise attention distillation within the network itself. SAD can be easily incorporated in any feed-forward convolutional neural networks (CNN) and does not increase the inference time. We validate SAD on three popular lane detection benchmarks (TuSimple, CULane and BDD100K) using lightweight models such as ENet, ResNet-18 and ResNet-34. The lightest model, ENet-SAD, performs comparatively or even surpasses existing algorithms. Notably, ENet-SAD has 20× fewer parameters and runs 10× faster compared to the state-of-the-art SCNN, while still achieving compelling performance in all benchmarks.
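The layer-wise distillation step can be sketched in a few lines: collapse each layer's activation into an attention map, then penalise the shallower layer's map for deviating from the deeper one's. This is a toy numpy sketch under simplifying assumptions (same spatial size for both layers, no resizing, no deep-learning framework); the shapes and function names are illustrative, not the paper's code:

```python
import numpy as np

def attention_map(feat):
    """Collapse a (C, H, W) activation into a flat attention map:
    sum of squares over channels, then L2-normalise."""
    amap = (feat ** 2).sum(axis=0).ravel()
    return amap / (np.linalg.norm(amap) + 1e-12)

def sad_loss(shallow_feat, deep_feat):
    """Self attention distillation: the shallower layer's attention map
    mimics the deeper layer's map (in a real framework the deeper map
    would be detached so no gradient flows through the target)."""
    a_shallow = attention_map(shallow_feat)
    a_deep = attention_map(deep_feat)
    return ((a_shallow - a_deep) ** 2).mean()

rng = np.random.default_rng(0)
f3 = rng.standard_normal((64, 36, 100))   # toy block-3 activations
f4 = rng.standard_normal((128, 36, 100))  # toy block-4 activations
print(sad_loss(f3, f4))
```

Because the loss touches only intermediate activations during training, it adds no operations at inference time, which is why the abstract can claim unchanged inference speed.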
429 citations
TL;DR: A thin polymer shell helps V2O5 a lot and an excellent high-rate capability and ultrastable cycling up to 1000 cycles are demonstrated.
Abstract: A thin polymer shell helps V2O5 a lot. Short V2O5 nanobelts are grown directly on 3D graphite foam as a lithium-ion battery (LIB) cathode material. A further coating of a poly(3,4-ethylenedioxythiophene) (PEDOT) thin shell is the key to the high performance. An excellent high-rate capability and ultrastable cycling up to 1000 cycles are demonstrated.
429 citations
Stanford University; Nanyang Technological University; Agency for Science, Technology and Research; Wistar Institute; St. Vincent's Institute of Medical Research; University of Edinburgh; University of California, Santa Barbara; Washington University in St. Louis; University of Trieste; University of California, San Francisco; Central European Institute of Technology; St. Vincent's Health System
TL;DR: This work curated an extensive set of ADAR1 and ADAR2 targets and showed that many editing sites display distinct tissue-specific regulation by the ADAR enzymes in vivo, suggesting stronger cis-directed regulation of RNA editing for most sites, although the small set of conserved coding sites is under stronger trans-regulation.
Abstract: Adenosine-to-inosine (A-to-I) RNA editing is a conserved post-transcriptional mechanism mediated by ADAR enzymes that diversifies the transcriptome by altering selected nucleotides in RNA molecules. Although many editing sites have recently been discovered, the extent to which most sites are edited and how the editing is regulated in different biological contexts are not fully understood. Here we report dynamic spatiotemporal patterns and new regulators of RNA editing, discovered through an extensive profiling of A-to-I RNA editing in 8,551 human samples (representing 53 body sites from 552 individuals) from the Genotype-Tissue Expression (GTEx) project and in hundreds of other primate and mouse samples. We show that editing levels in non-repetitive coding regions vary more between tissues than editing levels in repetitive regions. Globally, ADAR1 is the primary editor of repetitive sites and ADAR2 is the primary editor of non-repetitive coding sites, whereas the catalytically inactive ADAR3 predominantly acts as an inhibitor of editing. Cross-species analysis of RNA editing in several tissues revealed that species, rather than tissue type, is the primary determinant of editing levels, suggesting stronger cis-directed regulation of RNA editing for most sites, although the small set of conserved coding sites is under stronger trans-regulation. In addition, we curated an extensive set of ADAR1 and ADAR2 targets and showed that many editing sites display distinct tissue-specific regulation by the ADAR enzymes in vivo. Further analysis of the GTEx data revealed several potential regulators of editing, such as AIMP2, which reduces editing in muscles by enhancing the degradation of the ADAR proteins. Collectively, our work provides insights into the complex cis- and trans-regulation of A-to-I editing.
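The per-site quantity profiled throughout the study, the editing level, is commonly estimated from RNA-seq read counts: inosine base-pairs like guanosine, so sequencers read edited adenosines as 'G'. A toy sketch of that estimate (a standard convention, not the paper's specific pipeline):

```python
def editing_level(a_reads, g_reads):
    """A-to-I editing level at a genomically 'A' site. Inosine is read
    as 'G' by sequencers, so level = G / (A + G). Returns None when
    the site has no coverage."""
    total = a_reads + g_reads
    if total == 0:
        return None
    return g_reads / total

# A site covered by 30 'A' reads and 10 'G' reads is 25% edited.
print(editing_level(30, 10))  # 0.25
```

Comparing such levels for the same site across tissues, individuals, and species is what lets the study separate cis-directed regulation (sequence-driven, tracking species) from trans-regulation (driven by ADAR enzyme activity, tracking tissue).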
429 citations
Authors
Showing all 48605 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Michael Grätzel | 248 | 1423 | 303599 |
| Yang Gao | 168 | 2047 | 146301 |
| Gang Chen | 167 | 3372 | 149819 |
| Chad A. Mirkin | 164 | 1078 | 134254 |
| Hua Zhang | 163 | 1503 | 116769 |
| Xiang Zhang | 154 | 1733 | 117576 |
| Vivek Sharma | 150 | 3030 | 136228 |
| Seeram Ramakrishna | 147 | 1552 | 99284 |
| Frede Blaabjerg | 147 | 2161 | 112017 |
| Yi Yang | 143 | 2456 | 92268 |
| Joseph J.Y. Sung | 142 | 1240 | 92035 |
| Shi-Zhang Qiao | 142 | 523 | 80888 |
| Paul M. Matthews | 140 | 617 | 88802 |
| Bin Liu | 138 | 2181 | 87085 |
| George C. Schatz | 137 | 1155 | 94910 |