Institution
Nanyang Technological University
Education • Singapore, Singapore
About: Nanyang Technological University is an education organization based in Singapore, Singapore. It is known for its research contributions in the topics of Computer science & Catalysis. The organization has 48003 authors who have published 112815 publications receiving 3294199 citations. The organization is also known as: NTU & Universiti Teknologi Nanyang.
Topics: Computer science, Catalysis, Graphene, Artificial neural network, Laser
Papers published on a yearly basis
Papers
TL;DR: A highly sensitive tactile sensor is devised by applying microstructured graphene arrays as sensitive layers and has an ultra-fast response time of only 0.2 ms, rendering it promising for the application of tactile sensing in artificial skin and human-machine interface.
Abstract: A highly sensitive tactile sensor is devised by applying microstructured graphene arrays as sensitive layers. The combination of graphene and anisotropic microstructures endows this sensor with an ultra-high sensitivity of −5.53 kPa⁻¹, an ultra-fast response time of only 0.2 ms, as well as good reliability, rendering it promising for the application of tactile sensing in artificial skin and human-machine interface.
513 citations
13 Dec 2018
TL;DR: The proposed model combines convolutional neural networks (CNN) on graphs to identify spatial structures and RNN to find dynamic patterns in data structured by an arbitrary graph.
Abstract: This paper introduces Graph Convolutional Recurrent Network (GCRN), a deep learning model able to predict structured sequences of data. Precisely, GCRN is a generalization of classical recurrent neural networks (RNN) to data structured by an arbitrary graph. The structured sequences can represent series of frames in videos, spatio-temporal measurements on a network of sensors, or random walks on a vocabulary graph for natural language modeling. The proposed model combines convolutional neural networks (CNN) on graphs to identify spatial structures and RNN to find dynamic patterns. We study two possible architectures of GCRN, and apply the models to two practical problems: predicting moving MNIST data, and modeling natural language with the Penn Treebank dataset. Experiments show that exploiting simultaneously graph spatial and dynamic information about data can improve both precision and learning speed.
513 citations
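The GCRN abstract above pairs graph convolutions with a recurrent cell. As a rough illustration of that combination, here is a minimal NumPy sketch of a GRU cell whose dense projections are swapped for one-hop graph convolutions. This is not the paper's implementation (which studies Chebyshev-polynomial spectral filters and LSTM variants); the adjacency matrix, shapes, and parameter names below are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def normalized_adjacency(A):
    # Symmetrically normalised adjacency with self-loops: D^-1/2 (A+I) D^-1/2.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gconv(A_hat, X, W):
    # One-hop graph convolution: mix neighbour features, then project.
    return A_hat @ X @ W

def gcrn_gru_step(A_hat, x_t, h_prev, p):
    # A GRU step whose input/state projections are graph convolutions,
    # so each node's hidden state depends on its neighbours' features.
    z = sigmoid(gconv(A_hat, x_t, p["Wxz"]) + gconv(A_hat, h_prev, p["Whz"]))
    r = sigmoid(gconv(A_hat, x_t, p["Wxr"]) + gconv(A_hat, h_prev, p["Whr"]))
    h_new = np.tanh(gconv(A_hat, x_t, p["Wxh"]) + gconv(A_hat, r * h_prev, p["Whh"]))
    return z * h_prev + (1.0 - z) * h_new

# Toy usage: 4 nodes on a ring, 3 input features, 5 hidden units, 6 time steps.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
A_hat = normalized_adjacency(A)
F_in, F_h, T = 3, 5, 6
p = {k: 0.1 * rng.standard_normal(s) for k, s in
     [("Wxz", (F_in, F_h)), ("Whz", (F_h, F_h)), ("Wxr", (F_in, F_h)),
      ("Whr", (F_h, F_h)), ("Wxh", (F_in, F_h)), ("Whh", (F_h, F_h))]}
h = np.zeros((4, F_h))
for t in range(T):
    h = gcrn_gru_step(A_hat, rng.standard_normal((4, F_in)), h, p)
print(h.shape)  # (4, 5): one hidden vector per node
```

Stacking this step over time gives each node a hidden state that mixes its own history with its neighbours', which is the spatio-temporal coupling the abstract describes.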
TL;DR: A simple and scalable strategy for synthesizing hierarchical porous NiCo₂O₄ nanowires which exhibit a high specific capacitance with excellent rate performance and cycling stability is demonstrated.
512 citations
TL;DR: This paper formalizes the concept of evolutionary multitasking and proposes an algorithm to handle multiple optimization problems simultaneously using a single population of evolving individuals and develops a cross-domain optimization platform that allows one to solve diverse problems concurrently.
Abstract: The design of evolutionary algorithms has typically been focused on efficiently solving a single optimization problem at a time. Despite the implicit parallelism of population-based search, no attempt has yet been made to multitask, i.e., to solve multiple optimization problems simultaneously using a single population of evolving individuals. Accordingly, this paper introduces evolutionary multitasking as a new paradigm in the field of optimization and evolutionary computation. We first formalize the concept of evolutionary multitasking and then propose an algorithm to handle such problems. The methodology is inspired by biocultural models of multifactorial inheritance, which explain the transmission of complex developmental traits to offspring through the interactions of genetic and cultural factors. Furthermore, we develop a cross-domain optimization platform that allows one to solve diverse problems concurrently. The numerical experiments reveal several potential advantages of implicit genetic transfer in a multitasking environment. Most notably, we discover that the creation and transfer of refined genetic material can often lead to accelerated convergence for a variety of complex optimization functions.
512 citations
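The abstract above describes a single population whose individuals each carry a skill factor (the task they work on) and occasionally mate across tasks. The sketch below illustrates that multifactorial loop in NumPy under heavy simplification: the two toy objectives, the random mating probability, and the steady-state replacement rule are assumptions made for illustration, not the paper's benchmark setup or its exact operators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy objectives over a unified [0, 1]^D search space (illustrative only).
def sphere(x):       # task 0: minimise squared distance to 0.5
    return np.sum((x - 0.5) ** 2)

def rastrigin(x):    # task 1: a shifted, multimodal objective
    z = 10.0 * (x - 0.4)
    return np.sum(z * z - 10.0 * np.cos(2.0 * np.pi * z) + 10.0)

TASKS = [sphere, rastrigin]
D, POP, GENS, RMP = 10, 40, 2000, 0.3    # RMP: random mating probability

pop = rng.random((POP, D))                 # one shared population for all tasks
skill = rng.integers(0, len(TASKS), POP)   # skill factor of each individual
fit = np.array([TASKS[s](x) for x, s in zip(pop, skill)])

for _ in range(GENS):
    i, j = rng.integers(0, POP, 2)
    # Assortative mating: same-skill parents always cross; cross-task parents
    # cross only with probability RMP (the implicit genetic transfer).
    if skill[i] == skill[j] or rng.random() < RMP:
        alpha = rng.random(D)
        child = alpha * pop[i] + (1.0 - alpha) * pop[j]
        child_skill = skill[i] if rng.random() < 0.5 else skill[j]
    else:
        child = np.clip(pop[i] + 0.1 * rng.standard_normal(D), 0.0, 1.0)
        child_skill = skill[i]
    # Selective evaluation: score the child only on its inherited task.
    child_fit = TASKS[child_skill](child)
    # Steady-state replacement of the worst individual with that skill.
    idx = np.flatnonzero(skill == child_skill)
    worst = idx[np.argmax(fit[idx])]
    if child_fit < fit[worst]:
        pop[worst], fit[worst] = child, child_fit

for t in range(len(TASKS)):
    print(f"task {t}: best fitness {fit[skill == t].min():.4f}")
```

Because every individual is evaluated on only one task, the two problems are solved concurrently at roughly the cost of one, and the RMP-gated crossover is the channel through which genetic material refined on one task can accelerate the other.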
07 Jun 2015
TL;DR: This work shows that contour detection accuracy can be improved by making use of deep features learned from convolutional neural networks (CNNs): rather than using the networks as a black-box feature extractor, it customizes the training strategy by partitioning contour (positive) data into subclasses and fitting each subclass with different model parameters.
Abstract: Contour detection serves as the basis of a variety of computer vision tasks such as image segmentation and object recognition. The mainstream works to address this problem focus on designing engineered gradient features. In this work, we show that contour detection accuracy can be improved by instead making use of the deep features learned from convolutional neural networks (CNNs). Rather than using the networks as a black-box feature extractor, we customize the training strategy by partitioning contour (positive) data into subclasses and fitting each subclass with different model parameters. A new loss function, named positive-sharing loss, in which each subclass shares the loss for the whole positive class, is proposed to learn the parameters. Compared to the softmax loss function, the proposed one introduces an extra regularizer that emphasizes the losses for the positive and negative classes, which facilitates exploring more discriminative features. Our experimental results demonstrate that the learned deep features achieve top performance on the Berkeley Segmentation Dataset and Benchmark (BSDS500) and obtain competitive cross-dataset generalization results on the NYUD dataset.
512 citations
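The key construct in the abstract above is the positive-sharing loss, in which every contour subclass shares the loss of the whole positive class. The sketch below shows one plausible reading of that idea, not the paper's exact formulation (its regularizer and training details are not reproduced here): subclass probabilities are summed into a single positive-class probability before taking the negative log-likelihood.

```python
import numpy as np

def positive_sharing_loss(logits, labels):
    # logits: (N, K+1); column 0 is the negative (background) class,
    # columns 1..K are contour subclasses. Positive samples share the loss
    # of the whole positive class, so their subclass probabilities are
    # summed before the negative log-likelihood is taken.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    p = e / e.sum(axis=1, keepdims=True)
    p_neg = p[:, 0]
    p_pos = p[:, 1:].sum(axis=1)          # shared positive-class probability
    is_pos = labels > 0
    nll = np.where(is_pos, -np.log(p_pos + 1e-12), -np.log(p_neg + 1e-12))
    return nll.mean()

# Toy usage: 4 samples, 1 background class + 3 contour subclasses.
rng = np.random.default_rng(0)
logits = rng.standard_normal((4, 4))
labels = np.array([0, 1, 3, 2])           # 0 = background, >0 = subclass id
print(positive_sharing_loss(logits, labels))
```

Under this reading, the subclass columns still force the network to separate contour shapes internally, while the shared sum keeps the training signal focused on the positive-versus-negative decision that contour detection ultimately needs.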
Authors
Name | H-index | Papers | Citations |
---|---|---|---|
Michael Grätzel | 248 | 1423 | 303599 |
Yang Gao | 168 | 2047 | 146301 |
Gang Chen | 167 | 3372 | 149819 |
Chad A. Mirkin | 164 | 1078 | 134254 |
Hua Zhang | 163 | 1503 | 116769 |
Xiang Zhang | 154 | 1733 | 117576 |
Vivek Sharma | 150 | 3030 | 136228 |
Seeram Ramakrishna | 147 | 1552 | 99284 |
Frede Blaabjerg | 147 | 2161 | 112017 |
Yi Yang | 143 | 2456 | 92268 |
Joseph J.Y. Sung | 142 | 1240 | 92035 |
Shi-Zhang Qiao | 142 | 523 | 80888 |
Paul M. Matthews | 140 | 617 | 88802 |
Bin Liu | 138 | 2181 | 87085 |
George C. Schatz | 137 | 1155 | 94910 |