Institution

Nanyang Technological University

Education · Singapore, Singapore
About: Nanyang Technological University is an education organization based in Singapore, Singapore. It is known for research contributions in the topics of Computer science & Catalysis. The organization has 48003 authors who have published 112815 publications receiving 3294199 citations. The organization is also known as: NTU & Universiti Teknologi Nanyang.


Papers
Book Chapter · DOI
13 Jun 2005
TL;DR: Empirical evidence is given to show that the one-versus-all method using the winner-takes-all strategy and the one-versus-one method implemented by max-wins voting are inferior to another one-versus-one method: one that uses Platt's posterior probabilities together with the pairwise coupling idea of Hastie and Tibshirani.
Abstract: Multiclass SVMs are usually implemented by combining several two-class SVMs. The one-versus-all method using winner-takes-all strategy and the one-versus-one method implemented by max-wins voting are popularly used for this purpose. In this paper we give empirical evidence to show that these methods are inferior to another one-versus-one method: one that uses Platt's posterior probabilities together with the pairwise coupling idea of Hastie and Tibshirani. The evidence is particularly strong when the training dataset is sparse.

639 citations
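As a rough illustration of the three combination schemes compared above (not the authors' code), the sketch below builds them with scikit-learn; the toy dataset, RBF kernel and split are arbitrary assumptions, and probability=True is used because it fits Platt-style sigmoids on each pairwise SVM and couples them into class probabilities, a close analogue of the Platt-plus-pairwise-coupling scheme described in the abstract.

# Sketch: three ways to build a multiclass classifier from binary SVMs.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# 1) One-versus-all with winner-takes-all: predict the class whose binary
#    SVM returns the largest decision value.
ova = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")).fit(X_tr, y_tr)
acc_ova = ova.score(X_te, y_te)

# 2) One-versus-one with max-wins voting: SVC's built-in multiclass rule.
ovo_vote = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
acc_vote = ovo_vote.score(X_te, y_te)

# 3) One-versus-one with Platt posterior probabilities combined by pairwise
#    coupling: probability=True calibrates each pairwise SVM and couples the
#    pairwise probabilities into class posteriors; predict via their argmax.
ovo_prob = SVC(kernel="rbf", gamma="scale", probability=True).fit(X_tr, y_tr)
pred = ovo_prob.classes_[np.argmax(ovo_prob.predict_proba(X_te), axis=1)]
acc_prob = float(np.mean(pred == y_te))

print(f"OVA, winner-takes-all:          {acc_ova:.3f}")
print(f"OVO, max-wins voting:           {acc_vote:.3f}")
print(f"OVO, Platt + pairwise coupling: {acc_prob:.3f}")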

Journal Article · DOI
TL;DR: Additive manufacturing (AM), commonly known as three-dimensional (3D) printing or rapid prototyping, was introduced in the late 1980s, as the authors discuss, and considerable progress has been made in the field since then.
Abstract: Additive manufacturing (AM), commonly known as three-dimensional (3D) printing or rapid prototyping, was introduced in the late 1980s. Although considerable progress has been made in this field, much research work is still needed to overcome the various challenges that remain. Recently, one of the actively researched areas is the additive manufacturing of smart materials and structures. Smart materials are materials that have the ability to change their shape or properties under the influence of external stimuli. With the introduction of smart materials, AM-fabricated components are able to alter their shape or properties over time (the 4th dimension) in response to applied external stimuli. Hence, this gives rise to a new term, ‘4D printing’, to describe structural reconfiguration over time. In this paper, recent major progress in 4D printing is reviewed, including 3D printing of enhanced smart nanocomposites, shape memory al...

639 citations

Journal Article · DOI
TL;DR: The proposed approaches aid designers working on complex engineering problems by reducing the probability of employing inappropriate local search methods in an MA, while at the same time yielding robust and improved design search performance.
Abstract: Over the last decade, memetic algorithms (MAs) have relied on the use of a variety of different methods as the local improvement procedure. Some recent studies on the choice of local search method employed have shown that this choice significantly affects the efficiency of problem searches. Given the restricted theoretical knowledge available in this area and the limited progress made on mitigating the effects of incorrect local search method choice, we present strategies for MA control that decide, at runtime, which local method is chosen to locally improve the next chromosome. The use of multiple local methods during an MA search in the spirit of Lamarckian learning is here termed Meta-Lamarckian learning. Two adaptive strategies for Meta-Lamarckian learning are proposed in this paper. Experimental studies with Meta-Lamarckian learning strategies on continuous parametric benchmark problems are also presented. Further, the best strategy proposed is applied to a real-world aerodynamic wing design problem and encouraging results are obtained. It is shown that the proposed approaches aid designers working on complex engineering problems by reducing the probability of employing inappropriate local search methods in an MA, while at the same time yielding robust and improved design search performance.

636 citations
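A rough sketch of the idea, not the paper's algorithm: a toy memetic loop on an assumed benchmark (Rastrigin) in which the local search method applied to the chosen chromosome is selected at runtime by a reward-proportional rule, so that methods that have produced larger improvements are chosen more often. The method pool, credit rule, and all parameters are assumptions for illustration.

# Sketch: Meta-Lamarckian-style adaptive choice of local search in a memetic loop.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
DIM, POP, GENS = 5, 20, 30
METHODS = ["Nelder-Mead", "Powell", "L-BFGS-B"]   # assumed pool of local searches

def rastrigin(x):
    # classic continuous benchmark, global minimum 0 at the origin
    return 10 * len(x) + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

pop = rng.uniform(-5.12, 5.12, size=(POP, DIM))
credit = np.ones(len(METHODS))                    # running reward per method

for gen in range(GENS):
    fitness = np.array([rastrigin(ind) for ind in pop])
    # simple GA step: binary tournament selection + Gaussian mutation
    i, j = rng.integers(POP, size=(2, POP))
    parents = pop[np.where(fitness[i] < fitness[j], i, j)]
    pop = parents + rng.normal(scale=0.3, size=parents.shape)

    # Lamarckian local refinement of the current best individual; the local
    # search method is picked by roulette wheel over accumulated credit.
    fitness = np.array([rastrigin(ind) for ind in pop])
    best = int(np.argmin(fitness))
    k = rng.choice(len(METHODS), p=credit / credit.sum())
    res = minimize(rastrigin, pop[best], method=METHODS[k], options={"maxiter": 50})
    credit[k] += max(fitness[best] - res.fun, 0.0)  # reward = improvement found
    pop[best] = res.x                               # write the result back (Lamarckian)

print("best fitness :", min(rastrigin(ind) for ind in pop))
print("method credit:", dict(zip(METHODS, np.round(credit, 2))))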

Proceedings Article · DOI
01 Feb 2017
TL;DR: PipeLayer is presented, a ReRAM-based PIM accelerator for CNNs that supports both training and testing. It proposes a highly parallel design based on the notions of parallelism granularity and weight replication, which enables highly pipelined execution of both training and testing without introducing the potential stalls of previous work.
Abstract: Convolutional neural networks (CNNs) are the heart of deep learning applications. Recent works PRIME [1] and ISAAC [2] demonstrated the promise of using resistive random access memory (ReRAM) to perform neural computations in memory. We found that training cannot be efficiently supported with the current schemes. First, they do not consider weight update and the complex data dependencies in the training procedure. Second, ISAAC attempts to increase system throughput with a very deep pipeline, which is beneficial only when a large number of consecutive images can be fed into the architecture. In training, the notion of a batch (e.g. 64) limits the number of images that can be processed consecutively, because the images in the next batch need to be processed based on the updated weights. Third, the deep pipeline in ISAAC is vulnerable to pipeline bubbles and execution stalls. In this paper, we present PipeLayer, a ReRAM-based PIM accelerator for CNNs that supports both training and testing. We analyze data dependencies and weight updates in training algorithms and propose an efficient pipeline to exploit inter-layer parallelism. To exploit intra-layer parallelism, we propose a highly parallel design based on the notions of parallelism granularity and weight replication. With these design choices, PipeLayer enables highly pipelined execution of both training and testing, without introducing the potential stalls of previous work. The experimental results show that PipeLayer achieves an average speedup of 42.45x compared with a GPU platform. The average energy saving of PipeLayer compared with the GPU implementation is 7.17x.

633 citations
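The batch-boundary stall that motivates this work can be seen with a toy step count, sketched below. This is an assumption-laden illustration of layer-wise pipelining (one image enters per step, fill/drain costs num_layers - 1 steps), not a model of PipeLayer or ISAAC themselves, and it ignores the backward pass and weight-update cost entirely.

# Sketch: why a deep inter-layer pipeline pays a fill/drain penalty per
# training batch, while inference pays it only once for a long image stream.
def pipelined_steps(num_layers, num_images, batch=None):
    """Toy model: one image enters the layer pipeline per step and each image
    needs num_layers steps to drain. If `batch` is set, the pipeline must
    drain before the next batch starts (weights are updated in between), so
    the fill/drain overhead is paid once per batch rather than once overall."""
    if batch is None:                            # inference: one long stream
        return num_images + num_layers - 1
    full, rem = divmod(num_images, batch)
    chunks = [batch] * full + ([rem] if rem else [])
    return sum(n + num_layers - 1 for n in chunks)

LAYERS, IMAGES = 20, 10_000
print("inference, single stream:", pipelined_steps(LAYERS, IMAGES))
print("training, batches of 64 :", pipelined_steps(LAYERS, IMAGES, batch=64))

Under these assumed numbers, the batched run pays the pipeline fill/drain cost once per 64-image batch instead of once for the whole stream, which is the kind of stall the pipelined training design described above aims to avoid.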


Authors

Showing all 48605 results

Name | H-index | Papers | Citations
Michael Grätzel | 248 | 1423 | 303599
Yang Gao | 168 | 2047 | 146301
Gang Chen | 167 | 3372 | 149819
Chad A. Mirkin | 164 | 1078 | 134254
Hua Zhang | 163 | 1503 | 116769
Xiang Zhang | 154 | 1733 | 117576
Vivek Sharma | 150 | 3030 | 136228
Seeram Ramakrishna | 147 | 1552 | 99284
Frede Blaabjerg | 147 | 2161 | 112017
Yi Yang | 143 | 2456 | 92268
Joseph J.Y. Sung | 142 | 1240 | 92035
Shi-Zhang Qiao | 142 | 523 | 80888
Paul M. Matthews | 140 | 617 | 88802
Bin Liu | 138 | 2181 | 87085
George C. Schatz | 137 | 1155 | 94910
Network Information
Related Institutions (5)
Hong Kong University of Science and Technology · 52.4K papers, 1.9M citations · 96% related
National University of Singapore · 165.4K papers, 5.4M citations · 96% related
Georgia Institute of Technology · 119K papers, 4.6M citations · 95% related
Tsinghua University · 200.5K papers, 4.5M citations · 95% related
Royal Institute of Technology · 68.4K papers, 1.9M citations · 94% related

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 201
2022 | 1,324
2021 | 7,990
2020 | 8,387
2019 | 7,843
2018 | 7,247