Institution

Nanyang Technological University

Education · Singapore, Singapore
About: Nanyang Technological University is an education organization based in Singapore, Singapore. It is known for its research contributions in the topics of Computer science and Catalysis. The organization has 48003 authors who have published 112815 publications receiving 3294199 citations. The organization is also known as NTU and Universiti Teknologi Nanyang.


Papers
Journal Article
01 Sep 2014 · Small
TL;DR: A highly sensitive tactile sensor is devised by applying microstructured graphene arrays as sensitive layers; it has an ultra-fast response time of only 0.2 ms, rendering it promising for tactile sensing in artificial skin and human-machine interfaces.
Abstract: A highly sensitive tactile sensor is devised by applying microstructured graphene arrays as sensitive layers. The combination of graphene and anisotropic microstructures endows this sensor with an ultra-high sensitivity of -5.53 kPa⁻¹, an ultra-fast response time of only 0.2 ms, and good reliability, rendering it promising for tactile sensing in artificial skin and human-machine interfaces.
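For context, and as an assumption about the usual reporting convention rather than a statement from the paper itself, the sensitivity of such a piezoresistive sensor is normally quoted as the slope of the relative signal change versus applied pressure:

```latex
% Common definition of pressure sensitivity (assumed convention; the measured
% quantity may be resistance or current depending on the readout).
S = \frac{\delta\,(\Delta R / R_0)}{\delta P}
```

On that reading, -5.53 kPa⁻¹ is the slope in the sensor's most sensitive pressure range, and the negative sign simply indicates that the measured signal decreases as pressure increases.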

513 citations

Book Chapter
13 Dec 2018
TL;DR: The proposed model combines convolutional neural networks (CNNs) on graphs to identify spatial structures and an RNN to find dynamic patterns in data structured by an arbitrary graph.
Abstract: This paper introduces Graph Convolutional Recurrent Network (GCRN), a deep learning model able to predict structured sequences of data. Precisely, GCRN is a generalization of classical recurrent neural networks (RNN) to data structured by an arbitrary graph. The structured sequences can represent series of frames in videos, spatio-temporal measurements on a network of sensors, or random walks on a vocabulary graph for natural language modeling. The proposed model combines convolutional neural networks (CNN) on graphs to identify spatial structures and RNN to find dynamic patterns. We study two possible architectures of GCRN, and apply the models to two practical problems: predicting moving MNIST data, and modeling natural language with the Penn Treebank dataset. Experiments show that exploiting simultaneously graph spatial and dynamic information about data can improve both precision and learning speed.
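As a rough illustration of the idea (an assumption-laden sketch, not the authors' GCRN: the one-hop graph convolution, the GRU arrangement, and all names and sizes below are made up for illustration), one can apply a graph convolution to every frame to capture spatial structure and feed the resulting per-node feature sequences to a recurrent unit to capture dynamics:

```python
# Toy sketch of combining graph convolution (spatial) with a GRU (temporal).
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One-hop graph convolution: H' = relu(A_norm @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_norm):
        # x: (batch, nodes, in_dim); a_norm: (nodes, nodes) normalized adjacency
        return torch.relu(self.lin(torch.matmul(a_norm, x)))

class SimpleGCRN(nn.Module):
    """Graph convolution per time step, then a GRU over each node's feature sequence."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.gc = GraphConv(in_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, seq, a_norm):
        # seq: (batch, time, nodes, in_dim)
        b, t, n, _ = seq.shape
        spatial = torch.stack([self.gc(seq[:, i], a_norm) for i in range(t)], dim=1)
        # fold nodes into the batch so the GRU runs one sequence per node
        flat = spatial.permute(0, 2, 1, 3).reshape(b * n, t, -1)
        out, _ = self.gru(flat)
        return out.reshape(b, n, t, -1).permute(0, 2, 1, 3)

# usage: batches of 3-step sequences over a 4-node graph with 2 features per node
a = torch.eye(4)                      # placeholder normalized adjacency
x = torch.randn(8, 3, 4, 2)           # (batch, time, nodes, features)
model = SimpleGCRN(in_dim=2, hidden_dim=16)
print(model(x, a).shape)              # torch.Size([8, 3, 4, 16])
```

The abstract mentions two possible GCRN architectures; this sketch only corresponds, loosely, to the simpler "graph convolution feeding a recurrent unit" flavor.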

513 citations

Journal Article
TL;DR: A simple and scalable strategy is demonstrated for synthesizing hierarchical porous NiCo₂O₄ nanowires, which exhibit a high specific capacitance with excellent rate performance and cycling stability.
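For reference, and assuming the standard electrochemical convention rather than anything stated in this snippet, the specific capacitance of such electrodes is typically extracted from galvanostatic discharge measurements as:

```latex
% Specific capacitance from galvanostatic discharge (assumed standard convention):
% discharge current I, discharge time \Delta t, active-material mass m,
% potential window \Delta V.
C_s = \frac{I\,\Delta t}{m\,\Delta V}
```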

512 citations

Journal Article
TL;DR: This paper formalizes the concept of evolutionary multitasking and proposes an algorithm to handle multiple optimization problems simultaneously using a single population of evolving individuals and develops a cross-domain optimization platform that allows one to solve diverse problems concurrently.
Abstract: The design of evolutionary algorithms has typically been focused on efficiently solving a single optimization problem at a time. Despite the implicit parallelism of population-based search, no attempt has yet been made to multitask, i.e., to solve multiple optimization problems simultaneously using a single population of evolving individuals. Accordingly, this paper introduces evolutionary multitasking as a new paradigm in the field of optimization and evolutionary computation. We first formalize the concept of evolutionary multitasking and then propose an algorithm to handle such problems. The methodology is inspired by biocultural models of multifactorial inheritance, which explain the transmission of complex developmental traits to offspring through the interactions of genetic and cultural factors. Furthermore, we develop a cross-domain optimization platform that allows one to solve diverse problems concurrently. The numerical experiments reveal several potential advantages of implicit genetic transfer in a multitasking environment. Most notably, we discover that the creation and transfer of refined genetic material can often lead to accelerated convergence for a variety of complex optimization functions.
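To make the single-population idea concrete, here is a heavily simplified toy sketch (not the paper's multifactorial evolutionary algorithm; the two test functions, the cross-task mating probability, and every parameter value are hypothetical): each individual carries a "skill factor" naming the one task it is evaluated on, and crossover between individuals assigned to different tasks provides the implicit genetic transfer discussed above.

```python
# Toy sketch of evolutionary multitasking with one shared population.
import numpy as np

rng = np.random.default_rng(0)
DIM, POP, GENS, RMP = 10, 40, 200, 0.3   # RMP: probability of mating across tasks

tasks = [
    lambda x: np.sum(x ** 2),                 # sphere, optimum at the origin
    lambda x: np.sum((x - 0.5) ** 2) + 1.0,   # shifted sphere (hypothetical second task)
]

pop = rng.uniform(-1, 1, size=(POP, DIM))
skill = rng.integers(0, len(tasks), size=POP)          # task each individual is scored on
fit = np.array([tasks[skill[i]](pop[i]) for i in range(POP)])

for _ in range(GENS):
    i, j = rng.integers(0, POP, size=2)
    if skill[i] == skill[j] or rng.random() < RMP:
        alpha = rng.random(DIM)
        child = alpha * pop[i] + (1 - alpha) * pop[j]  # blend crossover (genetic transfer)
    else:
        child = pop[i] + 0.1 * rng.normal(size=DIM)    # mutation only
    child_skill = skill[rng.choice([i, j])]            # inherit one parent's task
    child_fit = tasks[child_skill](child)
    # replace the worst individual of that task if the child improves on it
    mask = skill == child_skill
    worst = np.flatnonzero(mask)[np.argmax(fit[mask])]
    if child_fit < fit[worst]:
        pop[worst], fit[worst], skill[worst] = child, child_fit, child_skill

for t in range(len(tasks)):
    print(f"task {t}: best fitness {fit[skill == t].min():.4f}")
```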

512 citations

Proceedings Article
07 Jun 2015
TL;DR: This work shows that contour detection accuracy can be improved by instead making use of deep features learned from convolutional neural networks (CNNs); rather than using the networks as a black-box feature extractor, it customizes the training strategy by partitioning contour (positive) data into subclasses and fitting each subclass with different model parameters.
Abstract: Contour detection serves as the basis of a variety of computer vision tasks such as image segmentation and object recognition. Mainstream approaches to this problem focus on designing engineered gradient features. In this work, we show that contour detection accuracy can be improved by instead making use of deep features learned from convolutional neural networks (CNNs). Rather than using the networks as a black-box feature extractor, we customize the training strategy by partitioning contour (positive) data into subclasses and fitting each subclass with different model parameters. A new loss function, named positive-sharing loss, in which each subclass shares the loss for the whole positive class, is proposed to learn the parameters. Compared with the softmax loss function, the proposed one introduces an extra regularizer that emphasizes the losses for the positive and negative classes, which facilitates exploring more discriminative features. Our experimental results demonstrate that the learned deep features achieve top performance on the Berkeley Segmentation Dataset and Benchmark (BSDS500) and obtain competitive cross-dataset generalization results on the NYUD dataset.
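One plausible reading of the positive-sharing idea is sketched below under that assumption (this is not necessarily the paper's exact loss; the function name and the 1e-8 stabilizer are made up): a positive pixel is scored against the summed probability of all positive subclasses, so the subclasses share the loss for the whole positive class.

```python
# Sketch of a "positive-sharing" style loss: K positive subclasses + 1 negative class.
import torch
import torch.nn.functional as F

def positive_sharing_loss(logits, labels):
    # logits: (N, K+1), index 0 = negative class, 1..K = positive shape subclasses
    # labels: (N,) with 0 for negative pixels and 1..K for positive pixels
    probs = F.softmax(logits, dim=1)
    neg_prob = probs[:, 0]
    pos_prob = probs[:, 1:].sum(dim=1)        # subclasses share one positive score
    is_pos = labels > 0
    return torch.where(is_pos,
                       -torch.log(pos_prob + 1e-8),
                       -torch.log(neg_prob + 1e-8)).mean()

# usage: 5 pixels, 4 positive subclasses plus the negative class
logits = torch.randn(5, 5, requires_grad=True)
labels = torch.tensor([0, 2, 4, 0, 1])
print(positive_sharing_loss(logits, labels))
```

The abstract also describes fitting each positive subclass with its own parameters; that per-subclass term is omitted here for brevity.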

512 citations


Authors

Showing all 48605 results

Name | H-index | Papers | Citations
Michael Grätzel | 248 | 1423 | 303599
Yang Gao | 168 | 2047 | 146301
Gang Chen | 167 | 3372 | 149819
Chad A. Mirkin | 164 | 1078 | 134254
Hua Zhang | 163 | 1503 | 116769
Xiang Zhang | 154 | 1733 | 117576
Vivek Sharma | 150 | 3030 | 136228
Seeram Ramakrishna | 147 | 1552 | 99284
Frede Blaabjerg | 147 | 2161 | 112017
Yi Yang | 143 | 2456 | 92268
Joseph J.Y. Sung | 142 | 1240 | 92035
Shi-Zhang Qiao | 142 | 523 | 80888
Paul M. Matthews | 140 | 617 | 88802
Bin Liu | 138 | 2181 | 87085
George C. Schatz | 137 | 1155 | 94910
Network Information
Related Institutions (5)
Hong Kong University of Science and Technology: 52.4K papers, 1.9M citations (96% related)
National University of Singapore: 165.4K papers, 5.4M citations (96% related)
Georgia Institute of Technology: 119K papers, 4.6M citations (95% related)
Tsinghua University: 200.5K papers, 4.5M citations (95% related)
Royal Institute of Technology: 68.4K papers, 1.9M citations (94% related)

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 201
2022 | 1,324
2021 | 7,990
2020 | 8,387
2019 | 7,843
2018 | 7,247