Institution
Nanyang Technological University
Education • Singapore, Singapore
About: Nanyang Technological University is an education organization based in Singapore, Singapore. It is known for its research contributions in the topics of Computer science & Catalysis. The organization has 48003 authors who have published 112815 publications receiving 3294199 citations. The organization is also known as: NTU & Universiti Teknologi Nanyang.
Topics: Computer science, Catalysis, Graphene, Artificial neural network, Laser
Papers published on a yearly basis
Papers
13 Jun 2005
TL;DR: Empirical evidence is given to show that the one-versus-all method using the winner-takes-all strategy and the one-versus-one method implemented by max-wins voting are inferior to another one-versus-one method: one that uses Platt's posterior probabilities together with the pairwise coupling idea of Hastie and Tibshirani.
Abstract: Multiclass SVMs are usually implemented by combining several two-class SVMs. The one-versus-all method using winner-takes-all strategy and the one-versus-one method implemented by max-wins voting are popularly used for this purpose. In this paper we give empirical evidence to show that these methods are inferior to another one-versus-one method: one that uses Platt's posterior probabilities together with the pairwise coupling idea of Hastie and Tibshirani. The evidence is particularly strong when the training dataset is sparse.
639 citations
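The pairwise coupling step mentioned in the TL;DR above can be sketched in a few lines. The following is a minimal pure-Python illustration of the Hastie–Tibshirani iterative coupling scheme, assuming synthetic pairwise probabilities built from a known class distribution; in the paper these would instead come from Platt-scaled two-class SVM outputs, and the function names here are our own.

```python
# Sketch of Hastie-Tibshirani pairwise coupling: recover class
# probabilities p from pairwise estimates r[i][j] ~ p_i / (p_i + p_j).
# The r values below are synthetic; in the paper's setting they would be
# Platt-scaled posterior outputs of two-class SVMs.

def pairwise_coupling(r, n_iter=200):
    """Iteratively fit p so that p_i / (p_i + p_j) matches r[i][j]."""
    k = len(r)
    p = [1.0 / k] * k                      # start from a uniform guess
    for _ in range(n_iter):
        for i in range(k):
            num = sum(r[i][j] for j in range(k) if j != i)
            den = sum(p[i] / (p[i] + p[j]) for j in range(k) if j != i)
            p[i] *= num / den              # multiplicative update
        s = sum(p)
        p = [pi / s for pi in p]           # renormalize after each sweep
    return p

# Build pairwise probabilities consistent with a known p, then check
# that the coupling procedure recovers it.
true_p = [0.5, 0.3, 0.2]
k = len(true_p)
r = [[true_p[i] / (true_p[i] + true_p[j]) if i != j else 0.0
      for j in range(k)] for i in range(k)]
est = pairwise_coupling(r)
print([round(x, 3) for x in est])
```

Because the synthetic `r` is exactly consistent, the estimates converge back to `true_p`; with real SVM outputs the pairwise probabilities are noisy and the procedure returns the best coupled fit instead.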
TL;DR: Additive manufacturing (AM), commonly known as three-dimensional (3D) printing or rapid prototyping, was introduced in the late 1980s, and considerable progress has been made in the field since then, as the authors discuss.
Abstract: Additive manufacturing (AM), commonly known as three-dimensional (3D) printing or rapid prototyping, was introduced in the late 1980s. Although considerable progress has been made in this field, much research work is still needed to overcome the various challenges that remain. Recently, one of the actively researched areas has been the additive manufacturing of smart materials and structures. Smart materials are materials that can change their shape or properties under the influence of external stimuli. With the introduction of smart materials, AM-fabricated components are able to alter their shape or properties over time (the 4th dimension) in response to the applied external stimuli. Hence, this gives rise to a new term, ‘4D printing’, to include structural reconfiguration over time. In this paper, recent major progress in 4D printing is reviewed, including 3D printing of enhanced smart nanocomposites, shape memory al...
639 citations
TL;DR: The proposed approaches aid designers working on complex engineering problems by reducing the probability of employing inappropriate local search methods in an MA while, at the same time, yielding robust and improved design search performance.
Abstract: Over the last decade, memetic algorithms (MAs) have relied on the use of a variety of different methods as the local improvement procedure. Some recent studies on the choice of local search method employed have shown that this choice significantly affects the efficiency of problem searches. Given the restricted theoretical knowledge available in this area and the limited progress made on mitigating the effects of incorrect local search method choice, we present strategies for MA control that decide, at runtime, which local method is chosen to locally improve the next chromosome. The use of multiple local methods during an MA search in the spirit of Lamarckian learning is here termed Meta-Lamarckian learning. Two adaptive strategies for Meta-Lamarckian learning are proposed in this paper. Experimental studies with Meta-Lamarckian learning strategies on continuous parametric benchmark problems are also presented. Further, the best strategy proposed is applied to a real-world aerodynamic wing design problem and encouraging results are obtained. It is shown that the proposed approaches aid designers working on complex engineering problems by reducing the probability of employing inappropriate local search methods in an MA while, at the same time, yielding robust and improved design search performance.
636 citations
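The core idea of the Meta-Lamarckian entry above, choosing a local search method at runtime based on how well each method has performed so far, can be sketched simply. This is a simplified illustration, not one of the paper's two actual strategies: the two local methods, the sphere objective, and the roulette-wheel reward scheme below are our own assumptions.

```python
import random

random.seed(0)

# Simplified sketch in the spirit of Meta-Lamarckian learning: each local
# search method accumulates a reward equal to the fitness improvement it
# produced, and the next method is chosen by roulette-wheel selection over
# those rewards. Objective and local methods are illustrative only.

def sphere(x):                       # minimise the sum of squares
    return sum(v * v for v in x)

def ls_small_step(x):                # local method 1: fine perturbation
    return [v + random.uniform(-0.01, 0.01) for v in x]

def ls_large_step(x):                # local method 2: coarse perturbation
    return [v + random.uniform(-0.5, 0.5) for v in x]

methods = [ls_small_step, ls_large_step]
reward = [1.0] * len(methods)        # optimistic start so every method is tried

x0 = [random.uniform(-5, 5) for _ in range(5)]
x = list(x0)
for _ in range(2000):
    # Roulette-wheel choice proportional to accumulated reward.
    pick = random.random() * sum(reward)
    idx = 0
    while pick > reward[idx]:
        pick -= reward[idx]
        idx += 1
    y = methods[idx](x)
    gain = sphere(x) - sphere(y)
    if gain > 0:                     # Lamarckian: keep the improved solution
        x = y
        reward[idx] += gain          # credit the method that produced it

print(round(sphere(x), 4))
```

The coarse method earns most of its reward early, when large moves still improve the solution, while the fine method keeps accruing credit near the optimum; the selection probabilities shift accordingly, which is the runtime adaptation the paper's strategies formalise.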
01 Feb 2017
TL;DR: PipeLayer, a ReRAM-based PIM accelerator for CNNs that supports both training and testing, is presented. It proposes a highly parallel design based on the notions of parallelism granularity and weight replication, which enables the highly pipelined execution of both training and testing without introducing the potential stalls of previous work.
Abstract: Convolutional neural networks (CNNs) are the heart of deep learning applications. Recent works PRIME [1] and ISAAC [2] demonstrated the promise of using resistive random access memory (ReRAM) to perform neural computations in memory. We found that training cannot be efficiently supported with the current schemes. First, they do not consider weight update and the complex data dependency in the training procedure. Second, ISAAC attempts to increase system throughput with a very deep pipeline, which is only beneficial when a large number of consecutive images can be fed into the architecture. In training, the notion of a batch (e.g. 64) limits the number of images that can be processed consecutively, because the images in the next batch need to be processed based on the updated weights. Third, the deep pipeline in ISAAC is vulnerable to pipeline bubbles and execution stalls. In this paper, we present PipeLayer, a ReRAM-based PIM accelerator for CNNs that supports both training and testing. We analyze data dependency and weight update in training algorithms and propose an efficient pipeline to exploit inter-layer parallelism. To exploit intra-layer parallelism, we propose a highly parallel design based on the notions of parallelism granularity and weight replication. With these design choices, PipeLayer enables the highly pipelined execution of both training and testing without introducing the potential stalls of previous work. The experimental results show that PipeLayer achieves an average speedup of 42.45x compared with a GPU platform. The average energy saving of PipeLayer compared with the GPU implementation is 7.17x.
633 citations
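The batch-size argument in the abstract above, that a deep pipeline only pays off when many images stream through back-to-back, can be made concrete with a toy utilisation model. The pipeline depth of 20 below is illustrative, not a figure from the paper.

```python
# Toy model of why training batches limit a deep inference pipeline.
# With a pipeline of `depth` stages and `n` images flowing back-to-back
# before a flush, the run takes depth + n - 1 stage-cycles (fill + drain),
# during which n * depth stage-operations do useful work out of a capacity
# of depth * (depth + n - 1). Utilisation therefore simplifies to:
#     n / (depth + n - 1)

def pipeline_utilisation(depth, n):
    """Fraction of stage-cycles doing useful work when n images
    stream through a depth-stage pipeline before a flush."""
    return n / (depth + n - 1)

# Training: a batch (e.g. 64) bounds n, because the next batch must wait
# for updated weights. Inference: n can be very large.
print(round(pipeline_utilisation(20, 64), 3))     # → 0.771
print(round(pipeline_utilisation(20, 10000), 3))  # → 0.998
```

The gap between the two numbers is the effect the abstract describes: in training, the pipeline is flushed every batch, so a deeper pipeline spends proportionally more of its time filling and draining.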
Authors
Showing all 48605 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Michael Grätzel | 248 | 1423 | 303599 |
| Yang Gao | 168 | 2047 | 146301 |
| Gang Chen | 167 | 3372 | 149819 |
| Chad A. Mirkin | 164 | 1078 | 134254 |
| Hua Zhang | 163 | 1503 | 116769 |
| Xiang Zhang | 154 | 1733 | 117576 |
| Vivek Sharma | 150 | 3030 | 136228 |
| Seeram Ramakrishna | 147 | 1552 | 99284 |
| Frede Blaabjerg | 147 | 2161 | 112017 |
| Yi Yang | 143 | 2456 | 92268 |
| Joseph J.Y. Sung | 142 | 1240 | 92035 |
| Shi-Zhang Qiao | 142 | 523 | 80888 |
| Paul M. Matthews | 140 | 617 | 88802 |
| Bin Liu | 138 | 2181 | 87085 |
| George C. Schatz | 137 | 1155 | 94910 |