scispace - formally typeset
Institution

Nanyang Technological University

Education · Singapore, Singapore
About: Nanyang Technological University is an education organization based in Singapore, Singapore. It is known for its research contributions in the topics of Computer science and Catalysis. The organization has 48003 authors who have published 112815 publications, receiving 3294199 citations. The organization is also known as NTU and Universiti Teknologi Nanyang.


Papers
Journal Article
TL;DR: This paper reviews the application of the wet impregnation technique to the development of Ni-free Cu-based composite anodes, doped CeO2-impregnated (La, Sr)MnO3 (LSM) cathodes and Ni anodes for solid oxide fuel cells.
Abstract: Development of solid oxide fuel cells (SOFC) for operation at intermediate temperatures of 600–800 °C with hydrocarbon fuels requires a cathode and an anode with high electrocatalytic activity for O2 reduction and for the direct oxidation of hydrocarbon fuels, respectively. Wet impregnation is a well-known method in the development of heterogeneous catalysts, yet surprisingly few studies have concentrated on applying the technique to deposit nano-sized particles into the established electrode structure of the SOFC. This paper reviews and discusses progress in the application of the wet impregnation technique to the development of Ni-free Cu-based composite anodes, doped CeO2-impregnated (La, Sr)MnO3 (LSM) cathodes and Ni anodes, Co3O4-infiltrated cathodes and precious metal-impregnated electrodes. The enhancement in electrode microstructure and cell performance is substantial, showing the great potential of the wet impregnation method for developing high-performance, nano-structured electrodes with specific functions. However, the long-term stability of the impregnated electrode structure still needs to be addressed.

431 citations

Journal Article
TL;DR: This paper presents a family of subspace learning algorithms based on a new form of regularization, which transfers the knowledge gained in training samples to testing samples, and minimizes the Bregman divergence between the distribution of training samples and that of testing samples in the selected subspace.
Abstract: Regularization principles [31] lead to approximation schemes for various learning problems, e.g., regularization of the norm in a reproducing kernel Hilbert space for ill-posed problems. In this paper, we present a family of subspace learning algorithms based on a new form of regularization, which transfers the knowledge gained from training samples to testing samples. In particular, the new regularization minimizes the Bregman divergence between the distribution of training samples and that of testing samples in the selected subspace, so it boosts performance when training and testing samples are not independent and identically distributed. To test the effectiveness of the proposed regularization, we introduce it to popular subspace learning algorithms, e.g., principal components analysis (PCA) for cross-domain face modeling; and Fisher's linear discriminant analysis (FLDA), locality preserving projections (LPP), marginal Fisher's analysis (MFA), and discriminative locality alignment (DLA) for cross-domain face recognition and text categorization. Finally, we present experimental evidence on both face image data sets and text data sets, suggesting that the proposed Bregman divergence-based regularization is effective for cross-domain learning problems.

430 citations
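The Bregman divergence that this regularization minimizes has a simple closed form once a convex generator function is chosen: D_phi(p, q) = phi(p) - phi(q) - <grad phi(q), p - q>. The short NumPy sketch below only illustrates that general definition and two familiar special cases (squared Euclidean distance and KL divergence); it is not the paper's implementation, and the function names are made up for illustration.

# Minimal sketch of a Bregman divergence, the quantity the paper minimizes
# between the projected training and testing distributions; illustrative only.
import numpy as np

def bregman_divergence(phi, grad_phi, p, q):
    """D_phi(p, q) = phi(p) - phi(q) - <grad phi(q), p - q>."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return phi(p) - phi(q) - np.dot(grad_phi(q), p - q)

# phi(x) = ||x||^2 recovers the squared Euclidean distance.
sq = lambda x: np.dot(x, x)
sq_grad = lambda x: 2 * x

# phi(x) = sum_i x_i log x_i (on the probability simplex) recovers KL(p || q).
neg_entropy = lambda x: np.sum(x * np.log(x))
neg_entropy_grad = lambda x: np.log(x) + 1

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(bregman_divergence(sq, sq_grad, p, q))                    # ||p - q||^2
print(bregman_divergence(neg_entropy, neg_entropy_grad, p, q))  # KL(p || q)

In the paper's setting, the divergence is evaluated between estimates of the training-sample and testing-sample distributions after projection into the learned subspace, so minimizing it encourages the two domains to look alike there.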

Proceedings Article
02 Aug 2019
TL;DR: Self Attention Distillation (SAD), as discussed by the authors, is a knowledge distillation approach that allows a model to learn from itself and gain substantial improvement without any additional supervision or labels.
Abstract: Training deep models for lane detection is challenging due to the very subtle and sparse supervisory signals inherent in lane annotations. Without learning from much richer context, these models often fail in challenging scenarios, e.g., severe occlusion, ambiguous lanes, and poor lighting conditions. In this paper, we present a novel knowledge distillation approach, i.e., Self Attention Distillation (SAD), which allows a model to learn from itself and gain substantial improvement without any additional supervision or labels. Specifically, we observe that attention maps extracted from a model trained to a reasonable level encode rich contextual information. This valuable contextual information can be used as a form of 'free' supervision for further representation learning through top-down and layer-wise attention distillation within the network itself. SAD can be easily incorporated in any feed-forward convolutional neural network (CNN) and does not increase the inference time. We validate SAD on three popular lane detection benchmarks (TuSimple, CULane and BDD100K) using lightweight models such as ENet, ResNet-18 and ResNet-34. The lightest model, ENet-SAD, performs comparably to or even surpasses existing algorithms. Notably, ENet-SAD has 20× fewer parameters and runs 10× faster compared to the state-of-the-art SCNN, while still achieving compelling performance on all benchmarks.

429 citations
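The mechanics of SAD are compact enough to sketch: an attention map is derived from each block's feature map, and each shallower block is trained to mimic the detached attention map of the next deeper block. The PyTorch-style snippet below is a hedged sketch under assumptions of our own (a hypothetical list of intermediate feature maps called feats, and mean-of-squares attention followed by a spatial softmax); it is not the authors' released code.

# Rough sketch of top-down, layer-wise self attention distillation; illustrative only.
import torch
import torch.nn.functional as F

def attention_map(feat, out_size):
    # Collapse channels: mean of squared activations gives an (N, 1, H, W) map.
    att = feat.pow(2).mean(dim=1, keepdim=True)
    # Resize so maps from blocks with different spatial sizes are comparable.
    att = F.interpolate(att, size=out_size, mode='bilinear', align_corners=False)
    # Normalize spatially so the loss compares attention distributions, not magnitudes.
    return F.softmax(att.flatten(1), dim=1)

def sad_loss(feats):
    # feats: intermediate feature maps ordered from shallow to deep.
    loss = feats[0].new_zeros(())
    for shallow, deep in zip(feats[:-1], feats[1:]):
        size = shallow.shape[-2:]
        target = attention_map(deep, size).detach()  # deeper attention acts as 'free' supervision
        loss = loss + F.mse_loss(attention_map(shallow, size), target)
    return loss

Because the distillation target comes from the network itself, this term would simply be added to the usual detection loss during training and dropped at inference, which is consistent with SAD not increasing inference time.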

Journal Article
TL;DR: A thin polymer shell greatly improves V2O5: an excellent high-rate capability and ultrastable cycling up to 1000 cycles are demonstrated.
Abstract: A thin polymer shell helps V2O5 a lot. Short V2O5 nanobelts are grown directly on 3D graphite foam as a lithium-ion battery (LIB) cathode material. A further coating of a poly(3,4-ethylenedioxythiophene) (PEDOT) thin shell is the key to the high performance. An excellent high-rate capability and ultrastable cycling up to 1000 cycles are demonstrated.

429 citations

Journal Article
11 Oct 2017 · Nature
TL;DR: This work curated an extensive set of ADAR1 and ADAR2 targets and showed that many editing sites display distinct tissue-specific regulation by the ADAR enzymes in vivo, suggesting stronger cis-directed regulation of RNA editing for most sites, although the small set of conserved coding sites is under stronger trans-regulation.
Abstract: Adenosine-to-inosine (A-to-I) RNA editing is a conserved post-transcriptional mechanism mediated by ADAR enzymes that diversifies the transcriptome by altering selected nucleotides in RNA molecules. Although many editing sites have recently been discovered, the extent to which most sites are edited and how the editing is regulated in different biological contexts are not fully understood. Here we report dynamic spatiotemporal patterns and new regulators of RNA editing, discovered through an extensive profiling of A-to-I RNA editing in 8,551 human samples (representing 53 body sites from 552 individuals) from the Genotype-Tissue Expression (GTEx) project and in hundreds of other primate and mouse samples. We show that editing levels in non-repetitive coding regions vary more between tissues than editing levels in repetitive regions. Globally, ADAR1 is the primary editor of repetitive sites and ADAR2 is the primary editor of non-repetitive coding sites, whereas the catalytically inactive ADAR3 predominantly acts as an inhibitor of editing. Cross-species analysis of RNA editing in several tissues revealed that species, rather than tissue type, is the primary determinant of editing levels, suggesting stronger cis-directed regulation of RNA editing for most sites, although the small set of conserved coding sites is under stronger trans-regulation. In addition, we curated an extensive set of ADAR1 and ADAR2 targets and showed that many editing sites display distinct tissue-specific regulation by the ADAR enzymes in vivo. Further analysis of the GTEx data revealed several potential regulators of editing, such as AIMP2, which reduces editing in muscles by enhancing the degradation of the ADAR proteins. Collectively, our work provides insights into the complex cis- and trans-regulation of A-to-I editing.

429 citations
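For readers unfamiliar with how per-site editing levels in a survey like this are quantified, the usual convention is simple: inosine is read as guanosine by the sequencer, so the editing level at an adenosine site is the fraction of G-supporting reads among the A- plus G-supporting reads covering that site. The snippet below is a generic illustration of that ratio with a hypothetical coverage cutoff; it is not the GTEx analysis pipeline.

# Generic A-to-I editing-level calculation from RNA-seq read counts at one site.
# Edited adenosines (inosine) base-pair like guanosine, so they appear as 'G' reads.
def editing_level(a_reads, g_reads, min_coverage=10):
    # Return the edited fraction, or None if coverage is too low to estimate it.
    total = a_reads + g_reads
    if total < min_coverage:
        return None
    return g_reads / total

print(editing_level(37, 13))  # 13 of 50 reads support editing -> 0.26

Comparing such per-site levels across tissues, individuals and species is what exposes the cis- versus trans-regulation patterns described in the abstract.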


Authors

Showing all 48605 results

Name | H-index | Papers | Citations
Michael Grätzel | 248 | 1423 | 303599
Yang Gao | 168 | 2047 | 146301
Gang Chen | 167 | 3372 | 149819
Chad A. Mirkin | 164 | 1078 | 134254
Hua Zhang | 163 | 1503 | 116769
Xiang Zhang | 154 | 1733 | 117576
Vivek Sharma | 150 | 3030 | 136228
Seeram Ramakrishna | 147 | 1552 | 99284
Frede Blaabjerg | 147 | 2161 | 112017
Yi Yang | 143 | 2456 | 92268
Joseph J.Y. Sung | 142 | 1240 | 92035
Shi-Zhang Qiao | 142 | 523 | 80888
Paul M. Matthews | 140 | 617 | 88802
Bin Liu | 138 | 2181 | 87085
George C. Schatz | 137 | 1155 | 94910
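The H-index column above follows the standard definition: the largest h such that the author has h papers with at least h citations each. A minimal sketch of that computation (the citation counts in the example are made up):

# Standard h-index: the largest h such that h papers each have at least h citations.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4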
Network Information
Related Institutions (5)
Hong Kong University of Science and Technology: 52.4K papers, 1.9M citations (96% related)
National University of Singapore: 165.4K papers, 5.4M citations (96% related)
Georgia Institute of Technology: 119K papers, 4.6M citations (95% related)
Tsinghua University: 200.5K papers, 4.5M citations (95% related)
Royal Institute of Technology: 68.4K papers, 1.9M citations (94% related)

Performance Metrics

No. of papers from the institution in previous years:

Year | Papers
2023 | 201
2022 | 1,324
2021 | 7,990
2020 | 8,387
2019 | 7,843
2018 | 7,247