Proceedings ArticleDOI

Weight discretization paradigm for optical neural networks

TLDR
In this paper a weight discretization paradigm is presented for back(ward error) propagation neural networks which can work with a very limited number of discretization levels.
Abstract
Neural networks are a primary candidate architecture for optical computing. One of the major problems in using neural networks for optical computers is that the information holders, the interconnection strengths (or weights), are normally real valued (continuous), whereas optics (light) is only capable of representing a few distinguishable intensity levels (discrete). In this paper a weight discretization paradigm is presented for back(ward error) propagation neural networks which can work with a very limited number of discretization levels. The number of interconnections in a (fully connected) neural network grows quadratically with the number of neurons of the network. Optics can handle a large number of interconnections because light beams do not interfere with each other; a vast number of light beams can therefore be used per unit of area. However, the number of different values that can be represented in a single light beam is very limited. A flexible, portable (machine-independent) neural network software package that is capable of weight discretization is presented. The development of the software and some experiments have been done on personal computers. The major part of the testing, which requires a lot of computation, has been done on a CRAY X-MP/24 supercomputer.
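The abstract does not spell out the discretization scheme itself, but the idea can be illustrated with a minimal sketch, assuming a simple variant: the weights used in the forward and backward pass are snapped to a small set of evenly spaced levels, while full-precision "shadow" weights accumulate the gradient updates. The level count, weight range, and XOR toy task below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of backprop training with discretized weights (an assumed
# scheme, not the paper's exact paradigm): the forward/backward pass sees only
# weights snapped to a few evenly spaced levels; continuous "shadow" weights
# receive the updates.
import numpy as np

def discretize(w, n_levels=7, w_max=3.0):
    """Clip weights to [-w_max, w_max] and snap each to the nearest of n_levels equally spaced levels."""
    levels = np.linspace(-w_max, w_max, n_levels)
    w = np.clip(w, -w_max, w_max)
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)
    return levels[idx]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR toy targets

# Continuous shadow weights; only their discretized copies are used in the network.
W1, b1 = rng.normal(0.0, 0.5, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0.0, 0.5, (4, 1)), np.zeros(1)
lr = 0.5

for epoch in range(5001):
    W1q, W2q = discretize(W1), discretize(W2)   # forward pass uses discrete weights only
    h = sigmoid(X @ W1q + b1)
    out = sigmoid(h @ W2q + b2)

    # Backprop of the squared error through the discretized weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2q.T) * h * (1 - h)

    # Updates land on the continuous shadow weights.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

    if epoch % 1000 == 0:
        print(f"epoch {epoch:5d}  mse = {np.mean((out - y) ** 2):.4f}")

print("outputs with 7-level weights:", out.ravel().round(2))
```

Restricting the weights to a handful of levels in this way stands in for the limited set of distinguishable intensity levels available to an optical implementation.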


Citations
Proceedings ArticleDOI

Fixed-point feedforward deep neural network design using weights +1, 0, and −1

TL;DR: The designed fixed-point networks with ternary weights (+1, 0, and −1) and a 3-bit signal show only negligible performance loss compared to their floating-point counterparts (a rough ternarization sketch is given after this list).
Proceedings Article

Backpropagation for energy-efficient neuromorphic computing

TL;DR: This work treats spikes and discrete synapses as continuous probabilities, which allows training the network using standard backpropagation and naturally maps to neuromorphic hardware by sampling the probabilities to create one or more networks, which are merged using ensemble averaging.
Journal ArticleDOI

Pruning and quantization for deep neural network acceleration: A survey

TL;DR: A survey of two types of network compression, pruning and quantization, is provided; it compares current techniques, analyzes their strengths and weaknesses, offers guidance for compressing networks, and discusses possible future compression techniques.
Journal ArticleDOI

A comprehensive survey on model compression and acceleration

TL;DR: A survey of various techniques suggested for compressing and accelerating ML and DL models is presented; the challenges of the existing techniques are discussed, and future research directions in the field are provided.
Proceedings Article

Extremely Low Bit Neural Network: Squeeze the Last Bit Out with ADMM

TL;DR: This paper focuses on compressing and accelerating deep models whose network weights are represented by very small numbers of bits, referred to as extremely low bit neural networks, and proposes to solve this problem using extragradient and iterative quantization algorithms that lead to considerably faster convergence compared to conventional optimization methods.
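The ternary-weight result cited earlier in this list shows how far weight discretization can be pushed. As a rough, hypothetical illustration (not the cited paper's procedure), the sketch below maps trained floating-point weights onto {−1, 0, +1} using a magnitude threshold; the threshold ratio is an assumed value.

```python
# Rough ternarization sketch (assumed scheme): weights below a fraction of the
# mean magnitude become 0, the rest keep only their sign.
import numpy as np

def ternarize(w, threshold_ratio=0.7):
    """Quantize to {-1, 0, +1}: zero out small weights, keep the sign of the rest."""
    delta = threshold_ratio * np.mean(np.abs(w))
    return np.sign(w) * (np.abs(w) > delta)

w = np.random.default_rng(1).normal(0.0, 1.0, (4, 4))
print(ternarize(w))
```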
References
Journal ArticleDOI

Neural networks and physical systems with emergent collective computational abilities

TL;DR: A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.
Journal ArticleDOI

Neurocomputing: picking the human brain

TL;DR: The operation of a neural network is described, its hardware realization is considered, and some applications of neural networks are examined.

Optical analog of two-dimensional neural networks and their application in recognition of radar targets

TL;DR: Optical analogs of a 2-D distribution of idealized neurons (a 2-D neural net), based on partitioning of the resulting 4-D connectivity matrix, are discussed, and super-resolved recognition from partial information that can be as low as 20% of the sinogram data is demonstrated.