Journal ArticleDOI
Towards spike-based machine intelligence with neuromorphic computing.
TL;DR: An overview of the developments in neuromorphic computing for both algorithms and hardware is provided, and the fundamentals of learning and hardware frameworks are highlighted, with emphasis on algorithm–hardware codesign.
Abstract: Guided by brain-like ‘spiking’ computational frameworks, neuromorphic computing—brain-inspired computing for machine intelligence—promises to realize artificial intelligence while reducing the energy requirements of computing platforms. This interdisciplinary field began with the implementation of silicon circuits for biological neural routines, but has evolved to encompass the hardware implementation of algorithms with spike-based encoding and event-driven representations. Here we provide an overview of the developments in neuromorphic computing for both algorithms and hardware and highlight the fundamentals of learning and hardware frameworks. We discuss the main challenges and the future prospects of neuromorphic computing, with emphasis on algorithm–hardware codesign. The authors review the advantages and future prospects of neuromorphic computing, a multidisciplinary engineering concept for energy-efficient artificial intelligence with brain-inspired functionality.
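The spike-based frameworks the review surveys are built from event-driven neuron models. A minimal sketch of a leaky integrate-and-fire (LIF) neuron, the standard unit in such systems, is shown below; all constants (time constant, threshold, reset value) are illustrative choices, not values from the paper.

```python
def lif_neuron(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron over a list of input
    currents; return the time steps at which it spiked."""
    v = v_reset
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward rest
        # and is driven by the input current.
        v += (dt / tau) * (-(v - v_reset) + i_in)
        if v >= v_thresh:       # threshold crossing -> emit a spike event
            spikes.append(t)
            v = v_reset         # hard reset after the spike
    return spikes
```

The output is a sparse list of spike times rather than a dense activation, which is what makes event-driven hardware implementations of such models energy-efficient.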
Citations
Journal ArticleDOI
Neuro-inspired computing chips
Wenqiang Zhang,Bin Gao,Jianshi Tang,Peng Yao,Shimeng Yu,Meng-Fan Chang,Hoi-Jun Yoo,He Qian,Huaqiang Wu +8 more
TL;DR: The development of neuro-inspired computing chips and their key benchmarking metrics are reviewed, a co-design tool chain is provided, and a roadmap for future large-scale chips together with a future electronic design automation tool chain is proposed.
Posted Content
Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks
TL;DR: It is shown that incorporating learnable membrane time constants makes the network less sensitive to initial values and can speed up learning; pooling methods in SNNs are also reevaluated, and max-pooling is found not to cause significant information loss while having the advantage of low computation cost and binary compatibility.
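The idea behind a learnable membrane time constant can be sketched as follows: instead of fixing the leak, each neuron keeps a trainable scalar w and derives its decay factor through a sigmoid, so the effective time constant can be optimized by gradient descent alongside the synaptic weights. The sigmoid parameterization below is a common choice for keeping the leak in a valid range, not necessarily the exact form used in the cited paper.

```python
import math

def decay_factor(w):
    """Map an unconstrained trainable scalar w to a decay factor in (0, 1),
    so the membrane is always leaky regardless of what training does to w."""
    return 1.0 / (1.0 + math.exp(-w))

def membrane_step(v, i_in, w):
    """One membrane-potential update with a learnable leak: the decay
    factor blends the previous potential with the new input."""
    k = decay_factor(w)
    return k * v + (1.0 - k) * i_in
```

Because `decay_factor` is differentiable in w, gradients from the task loss can flow into the time constant itself, which is what makes the network less sensitive to how the leak was initialized.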
Posted Content
Going Deeper With Directly-Trained Larger Spiking Neural Networks
TL;DR: A threshold-dependent batch normalization (tdBN) method based on the emerging spatio-temporal backpropagation, termed "STBP-tdBN", enabling direct training of a very deep SNN and the efficient implementation of its inference on neuromorphic hardware is proposed.
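The core of a threshold-dependent normalization can be sketched as rescaling pre-activations so their standard deviation matches the firing threshold rather than 1, keeping deep layers in a regime where neurons neither saturate nor fall silent. This toy version over a flat list is an assumption-laden simplification of the per-channel, spatio-temporal statistics used in the cited method.

```python
def td_batch_norm(x, v_thresh=1.0, alpha=1.0, eps=1e-5):
    """Normalize a batch of pre-activations to zero mean, then rescale so
    the standard deviation is alpha * v_thresh instead of 1."""
    mean = sum(x) / len(x)
    var = sum((xi - mean) ** 2 for xi in x) / len(x)
    scale = alpha * v_thresh / (var + eps) ** 0.5   # threshold-dependent scale
    return [(xi - mean) * scale for xi in x]
```

With `v_thresh = 1.0` this reduces to ordinary batch normalization; a larger threshold widens the distribution of membrane inputs in proportion.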
Journal ArticleDOI
Recent Advances on Neuromorphic Devices Based on Chalcogenide Phase-Change Materials
Ming Xu,Xianliang Mai,Jun Lin,Wei Zhang,Yi Li,Yuhui He,Hao Tong,Xiang Hou,Peng Zhou,Xiangshui Miao +9 more
Journal ArticleDOI
Curved neuromorphic image sensor array using a MoS2-organic heterostructure inspired by the human visual recognition system.
Changsoon Choi,Juyoung Leem,Min Sung Kim,Amir Taqieddin,Chullhee Cho,Kyoung Won Cho,Gil Ju Lee,Hyojin Seung,Hyung Jong Bae,Young Min Song,Taeghwan Hyeon,Narayana R. Aluru,SungWoo Nam,Dae-Hyeong Kim +13 more
TL;DR: The curved neuromorphic image sensor array integrated with a plano-convex lens derives a pre-processed image from a set of noisy optical inputs without redundant data storage, processing, and communications as well as without complex optics.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax is shown to achieve state-of-the-art performance on ImageNet classification.
Proceedings ArticleDOI
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal ArticleDOI
Learning representations by back-propagating errors
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
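The weight-adjustment rule described in this TL;DR can be sketched for a single linear unit: repeatedly move each weight against the gradient of a squared-error measure between the actual and desired output. The learning rate and the single-unit simplification are illustrative assumptions.

```python
def backprop_step(w, x, target, lr=0.1):
    """One gradient step for a linear unit y = sum(w_i * x_i)
    with loss 0.5 * (y - target)**2."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    err = y - target
    # dL/dw_i = (y - target) * x_i; step against the gradient
    return [wi - lr * err * xi for wi, xi in zip(w, x)]
```

Repeating this step shrinks the output error; in a multi-layer network the same gradient is propagated backwards through the hidden layers, which is what lets those layers come to represent important features of the task domain.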
Journal ArticleDOI
A logical calculus of the ideas immanent in nervous activity
Warren S. McCulloch,Walter Pitts +1 more
TL;DR: In this article, it is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under another and gives the same results, although perhaps not in the same time.
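The "all-or-none" logical units formalized in this paper are easy to sketch: binary inputs, fixed weights, and a hard threshold. The AND/OR examples below illustrate the standard textbook construction rather than a specific net from the paper.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) iff the weighted input sum
    reaches the threshold, otherwise stay silent (0)."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Different thresholds over the same weights realize different logic gates
def AND(a, b):
    return mp_neuron([a, b], [1, 1], 2)

def OR(a, b):
    return mp_neuron([a, b], [1, 1], 1)
```

That distinct weight/threshold choices can realize the same input–output behavior is a small instance of the paper's equivalence result: many different nets compute the same function, though not necessarily in the same time.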
Proceedings Article
Rectified Linear Units Improve Restricted Boltzmann Machines
Vinod Nair,Geoffrey E. Hinton +1 more
TL;DR: Restricted Boltzmann machines were originally developed with binary stochastic hidden units; replacing these with rectified linear units is shown to learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.
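The two hidden-unit types this paper contrasts can be sketched side by side: a binary stochastic unit samples 0 or 1 from a sigmoid probability, while a rectified linear unit simply passes positive input through. This is a bare sketch of the unit definitions, not of the RBM training procedure.

```python
import math
import random

def binary_stochastic(x):
    """Binary stochastic hidden unit: output 1 with probability sigmoid(x)."""
    p = 1.0 / (1.0 + math.exp(-x))
    return 1 if random.random() < p else 0

def relu(x):
    """Rectified linear unit: pass positive input through, clip the rest."""
    return max(0.0, x)
```

Unlike the binary unit, the rectified linear unit preserves the magnitude of its input, which is part of why it learns richer features in the paper's experiments.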
Related Papers (5)
Loihi: A Neuromorphic Manycore Processor with On-Chip Learning
Michael Davies,Narayan Srinivasa,Tsung-Han Lin,Gautham N. Chinya,Cao Yongqiang,Sri Harsha Choday,Georgios D. Dimou,Prasad Joshi,Nabil Imam,Shweta Jain,Yuyun Liao,Chit-Kwan Lin,Andrew Lines,Ruokun Liu,Deepak A. Mathaikutty,Steven McCoy,Arnab Paul,Jonathan Tse,Guruguhanathan Venkataramanan,Yi-Hsin Weng,Andreas Wild,Yoon Seok Yang,Hong Wang +22 more
A million spiking-neuron integrated circuit with a scalable communication network and interface
Paul A. Merolla,John V. Arthur,Rodrigo Alvarez-Icaza,Andrew S. Cassidy,Jun Sawada,Filipp Akopyan,Bryan L. Jackson,Nabil Imam,Chen Guo,Yutaka Nakamura,Bernard Brezzo,Ivan Vo,Steven K. Esser,Rathinakumar Appuswamy,Brian Taba,Arnon Amir,Myron D. Flickner,William P. Risk,Rajit Manohar,Dharmendra S. Modha +19 more