Arash Ardakani

Researcher at McGill University

Publications -  31
Citations -  653

Arash Ardakani is an academic researcher at McGill University. He has contributed to research on the topics of stochastic computing and artificial neural networks, has an h-index of 11, and has co-authored 28 publications receiving 501 citations. His previous affiliations include Sharif University of Technology.

Papers
Journal ArticleDOI

VLSI Implementation of Deep Neural Network Using Integral Stochastic Computing

TL;DR: The proposed architecture uses integer stochastic streams and a modified finite-state-machine-based tanh function to improve performance and reduce latency compared to existing stochastic architectures for DNNs.
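
The modified FSM-based tanh itself is not spelled out in this summary. For reference, below is a minimal Python sketch of the classic FSM-based stochastic tanh (a saturating counter whose upper states emit 1s); `n_states`, the stream length, and the decoding step are illustrative choices, not the paper's.

```python
import numpy as np

def fsm_tanh(bits, n_states=8):
    """Saturating-counter FSM: each 1 moves the state up, each 0 moves it
    down; the output bit is 1 whenever the state is in the upper half.
    Approximates tanh(n_states/2 * x) in the bipolar encoding x = 2p - 1."""
    state = n_states // 2
    out = np.empty_like(bits)
    for i, b in enumerate(bits):
        state = min(state + 1, n_states - 1) if b else max(state - 1, 0)
        out[i] = 1 if state >= n_states // 2 else 0
    return out

rng = np.random.default_rng(0)
x = 0.5                                     # bipolar value in [-1, 1]
bits = (rng.random(100_000) < (x + 1) / 2).astype(np.int8)
y = 2 * fsm_tanh(bits).mean() - 1           # decode the bipolar output
print(y, np.tanh(8 / 2 * x))                # FSM estimate vs. tanh(N/2 * x)
```
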
Proceedings ArticleDOI

VLSI implementation of deep neural networks using integral stochastic computing

TL;DR: This paper proposes an integer form of stochastic computation, introduces some elementary circuits for it, and presents an efficient DNN implementation based on integral SC. It also considers a quasi-synchronous implementation that yields a 33% reduction in energy consumption relative to the binary-radix implementation with no compromise in performance.
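
As a rough illustration of the integer encoding described above (not the paper's circuits), an integral stochastic stream can be viewed as the element-wise sum of several binary streams, which makes addition exact. A minimal NumPy sketch, with hypothetical helper names:

```python
import numpy as np

rng = np.random.default_rng(1)

def integer_stream(x, m, length):
    """Encode x in [0, m] as an integer stochastic stream: the element-wise
    sum of m independent binary streams, each carrying probability x/m."""
    return (rng.random((m, length)) < x / m).sum(axis=0)

s1 = integer_stream(1.4, 2, 100_000)  # can encode values above 1, unlike binary SC
s2 = integer_stream(0.9, 2, 100_000)
s_sum = s1 + s2                       # addition is exact element-wise integer addition
print(s_sum.mean())                   # ~2.3 = 1.4 + 0.9
```
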
Journal ArticleDOI

An Architecture to Accelerate Convolution in Deep Neural Networks

TL;DR: This paper proposes an efficient computational method, inspired by the computational core of fully-connected neural networks, for processing the convolutional layers of state-of-the-art deep CNNs within strict latency requirements. The method is implemented and customized for VGG and VGG-based networks, which have shown state-of-the-art performance on various classification/recognition datasets.
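
The accelerator's exact dataflow is not given in this summary. As a generic sketch of the underlying idea, reusing a fully-connected (matrix-multiply) core to process convolutional layers, an im2col-style lowering in NumPy might look like this (function and variable names are illustrative):

```python
import numpy as np

def im2col_conv(x, w):
    """Lower a 2-D convolution to a single matrix multiply, the kind of
    fully-connected-style core reuse the abstract alludes to (generic
    illustration, not the paper's architecture)."""
    c, h, ww = x.shape
    k_out, _, kh, kw = w.shape
    oh, ow = h - kh + 1, ww - kw + 1
    cols = np.empty((c * kh * kw, oh * ow))
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = x[:, i:i + kh, j:j + kw].ravel()
    return (w.reshape(k_out, -1) @ cols).reshape(k_out, oh, ow)

x = np.random.rand(3, 8, 8)    # C x H x W input
w = np.random.rand(16, 3, 3, 3)  # K x C x kh x kw filters
print(im2col_conv(x, w).shape)   # (16, 6, 6)
```
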
Journal ArticleDOI

Fast and Efficient Convolutional Accelerator for Edge Computing

TL;DR: ZASCA achieves a performance efficiency of up to 94 percent over a set of state-of-the-art CNNs for image classification with dense representations, where performance efficiency is defined as the ratio of average runtime performance to peak performance.
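
With that definition, the efficiency figure is a simple ratio. A quick check with hypothetical throughput numbers chosen to land on the 94 percent figure:

```python
avg_gops, peak_gops = 169.2, 180.0  # hypothetical average vs. peak throughput (GOPS)
print(f"performance efficiency = {avg_gops / peak_gops:.0%}")  # -> 94%
```
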
Proceedings Article

Sparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks

TL;DR: In this article, the authors propose sparsely-connected networks that reduce the number of connections in fully-connected neural networks by up to 90% while improving accuracy on three popular datasets (MNIST, CIFAR-10, and SVHN).
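
The construction of the sparse connectivity pattern is not detailed in this summary. As a generic sketch of the idea, here is a fixed binary mask applied to a fully-connected layer's weights (NumPy, names illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def sparse_linear(x, w, mask):
    """Fully-connected layer whose weights are multiplied by a fixed binary
    mask, removing a chosen fraction of connections up front."""
    return x @ (w * mask)

n_in, n_out, drop = 784, 256, 0.9
w = 0.01 * rng.standard_normal((n_in, n_out))
mask = (rng.random((n_in, n_out)) >= drop).astype(w.dtype)  # keep ~10% of connections
y = sparse_linear(rng.standard_normal((1, n_in)), w, mask)
print(mask.mean(), y.shape)  # ~0.1 of connections remain, output (1, 256)
```
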