Author

Muhammad Abdullah Hanif

Bio: Muhammad Abdullah Hanif is an academic researcher from Vienna University of Technology. The author has contributed to research in topics: Computer science & Convolutional neural network. The author has an h-index of 18 and has co-authored 88 publications receiving 860 citations. Previous affiliations of Muhammad Abdullah Hanif include University of the Sciences & Brno University of Technology.


Papers
Proceedings ArticleDOI
19 Mar 2018
TL;DR: An overview of current and emerging trends in designing highly efficient, reliable, secure, and scalable machine learning architectures for IoT devices, along with a roadmap for addressing the highlighted challenges and thereby designing scalable, high-performance, and energy-efficient architectures for machine learning on the edge.
Abstract: The number of connected Internet of Things (IoT) devices is expected to reach over 20 billion by 2020. These range from basic sensor nodes that log and report data to ones capable of processing the incoming information and taking an action accordingly. Machine learning, and in particular deep learning, is the de facto paradigm for intelligently processing these immense volumes of data. However, the resource-inhibited environment of IoT devices, owing to their limited energy budget and low compute capabilities, renders them a challenging platform for the deployment of the desired data analytics. This paper provides an overview of the current and emerging trends in designing highly efficient, reliable, secure, and scalable machine learning architectures for such devices. The paper highlights the focal challenges and obstacles the community faces in achieving its desired goals, and it further presents a roadmap that can help in addressing these challenges and thereby designing scalable, high-performance, and energy-efficient architectures for performing machine learning on the edge.

98 citations

Proceedings ArticleDOI
11 Jun 2019
TL;DR: It is demonstrated that efficient approximations can be introduced into the computational path of DNN accelerators while completely avoiding retraining, and a simple weight updating scheme is proposed that compensates for the inaccuracy introduced by employing approximate multipliers.
Abstract: The state-of-the-art approaches employ approximate computing to reduce the energy consumption of DNN hardware. Approximate DNNs then require extensive retraining to recover from the accuracy loss caused by the use of approximate operations. However, retraining of complex DNNs does not scale well. In this paper, we demonstrate that efficient approximations can be introduced into the computational path of DNN accelerators while retraining is completely avoided. ALWANN provides highly optimized implementations of DNNs for custom low-power accelerators in which the number of computing units is lower than the number of DNN layers. First, a fully trained DNN (e.g., in TensorFlow) is converted to operate with 8-bit weights and 8-bit multipliers in convolutional layers. A suitable approximate multiplier is then selected for each computing element from a library of approximate multipliers in such a way that (i) one approximate multiplier serves several layers, and (ii) the overall classification error and energy consumption are minimized. The optimizations, including the multiplier selection problem, are solved by means of the multiobjective NSGA-II optimization algorithm. To completely avoid the computationally expensive retraining of the DNN, which is usually employed to improve classification accuracy, we propose a simple weight updating scheme that compensates for the inaccuracy introduced by employing approximate multipliers. The proposed approach is evaluated for two architectures of DNN accelerators with approximate multipliers from the open-source “EvoApprox” library, while executing three versions of ResNet on CIFAR-10. We report that the proposed approach saves 30% of the energy needed for multiplication in the convolutional layers of ResNet-50 while the accuracy is degraded by only 0.6% (0.9% for ResNet-14). The proposed technique and approximate layers are available as an open-source extension of TensorFlow at https://github.com/ehw-fit/tf-approximate.
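To make the retraining-free weight-update idea concrete, here is a minimal sketch of one way such compensation can be implemented. It assumes a stand-in truncation-based 8x8 approximate multiplier and unsigned 8-bit operands purely for illustration; the paper works with EvoApprox circuits, and its exact tuning procedure may differ.

```python
# Sketch of ALWANN-style weight tuning without retraining (illustrative only).
import numpy as np

def approx_mult(a, b):
    """Stand-in approximate multiplier: exact product with the 4 LSBs zeroed.
    (Assumption for illustration; not an actual EvoApprox design.)"""
    return (a * b) & ~0xF

ACTS = np.arange(256)  # all possible unsigned 8-bit activations

def compensated_weight(w):
    """Pick the 8-bit weight whose approximate products best match w*x on average."""
    target = w * ACTS
    errs = [np.abs(approx_mult(c, ACTS) - target).mean() for c in range(256)]
    return int(np.argmin(errs))

# Build the remapping table once, then apply it to every quantized weight tensor.
weight_map = np.array([compensated_weight(w) for w in range(256)])
layer_weights_q = np.random.randint(0, 256, size=(3, 3, 64))  # example 8-bit weights
tuned_weights = weight_map[layer_weights_q]
```

Because the compensation reduces to a per-weight table lookup applied once after quantization, it adds no cost at inference time.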

88 citations

Proceedings ArticleDOI
15 Jul 2019
TL;DR: A survey of current trends in the software- and hardware-level optimizations required for deep learning, along with a discussion of key open mid-term and long-term research challenges.
Abstract: In the Machine Learning era, Deep Neural Networks (DNNs) have taken the spotlight due to their unmatched performance in several applications, such as image processing, computer vision, and natural language processing. However, as DNNs grow in complexity, their associated energy consumption becomes a challenging problem. This challenge is heightened in edge computing, where the computing devices are resource-constrained and operate on a limited energy budget. Therefore, specialized optimizations for deep learning have to be performed at both the software and hardware levels. In this paper, we comprehensively survey the current trends of such optimizations and discuss key open mid-term and long-term research challenges.

81 citations

Proceedings ArticleDOI
09 Mar 2020
TL;DR: In this paper, an error resilience analysis of DNNs subjected to hardware faults (e.g., permanent faults) in the weight memory is performed and leveraged to propose a novel error mitigation technique that squashes high-intensity faulty activation values to alleviate their impact.
Abstract: Deep Neural Networks (DNNs) are widely being adopted for safety-critical applications, e.g., healthcare and autonomous driving. Inherently, they are considered to be highly error-tolerant. However, recent studies have shown that hardware faults that impact the parameters of a DNN (e.g., weights) can have drastic impacts on its classification accuracy. In this paper, we perform a comprehensive error resilience analysis of DNNs subjected to hardware faults (e.g., permanent faults) in the weight memory. The outcome of this analysis is leveraged to propose a novel error mitigation technique which squashes the high-intensity faulty activation values to alleviate their impact. We achieve this by replacing the unbounded activation functions with their clipped versions. We also present a method to systematically define the clipping values of the activation functions that result in increased resilience of the networks against faults. We evaluate our technique on the AlexNet and the VGG-16 DNNs trained for the CIFAR-10 dataset. The experimental results show that our mitigation technique significantly improves the resilience of the DNNs to faults. For example, the proposed technique offers, on average, a 68.92% improvement in the classification accuracy of the resilience-optimized VGG-16 model at a 1 × 10^-5 fault rate, when compared to the base network without any fault mitigation.
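The core mitigation is easy to picture in code. The sketch below uses a clipped ReLU that maps abnormally large activations to zero so that a bit-flipped weight cannot propagate a huge value downstream; the placeholder threshold heuristic is an assumption for illustration, whereas the paper defines a systematic, per-layer procedure for choosing the clipping values.

```python
# Illustrative clipped activation for fault mitigation (threshold heuristic assumed).
import numpy as np

def clipped_relu(x, threshold):
    """ReLU variant that squashes values above a layer-specific threshold to zero,
    so high-intensity (likely faulty) activations do not propagate."""
    y = np.maximum(x, 0.0)
    return np.where(y > threshold, 0.0, y)

def profile_threshold(fault_free_acts, margin=1.1):
    """Assumed heuristic: clip just above the largest activation observed on
    fault-free data (the paper derives the clipping values systematically)."""
    return margin * float(np.max(np.maximum(fault_free_acts, 0.0)))

acts = np.random.randn(1024) * 2.0      # sample of fault-free activations
t = profile_threshold(acts)
faulty = acts.copy()
faulty[0] = 1e6                         # emulate a high-intensity fault
print(clipped_relu(faulty, t)[0])       # -> 0.0, the fault is squashed
```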

71 citations

Proceedings ArticleDOI
02 Jul 2018
TL;DR: An overview of the challenges being faced in ensuring reliable and secure execution of DNNs is provided and several techniques for analyzing and mitigating the reliability and security threats in machine learning systems are presented.
Abstract: Machine learning is commonly used in almost all areas that involve advanced data analytics and intelligent control. Applications ranging from Natural Language Processing (NLP) to autonomous driving are based upon machine learning algorithms, and an increasing trend is observed in the use of Deep Neural Networks (DNNs) for such applications. While slight inaccuracy in applications like NLP does not have severe consequences, the same is not true for other safety-critical applications, like autonomous driving and smart healthcare, where a small error can lead to catastrophic effects. Apart from high-accuracy DNN algorithms, there is a significant need for robust machine learning systems and hardware architectures that can generate reliable and trustworthy results in the presence of hardware-level faults while also preserving security and privacy. This paper provides an overview of the challenges being faced in ensuring reliable and secure execution of DNNs. To address these challenges, we present several techniques for analyzing and mitigating the reliability and security threats in machine learning systems.

58 citations


Cited by
Journal ArticleDOI
TL;DR: The role of ML in IoT from the cloud down to embedded devices is reviewed and the state-of-the-art usages are categorized according to their application domain, input data type, exploited ML techniques, and where they belong in the cloud-to-things continuum.
Abstract: With the numerous Internet of Things (IoT) devices, cloud-centric data processing fails to meet the requirements of all IoT applications. The limited computation and communication capacity of the cloud necessitates edge computing, i.e., starting the IoT data processing at the edge and transforming the connected devices into intelligent devices. Machine learning (ML), the key means for information inference, should extend to the cloud-to-things continuum as well. This paper reviews the role of ML in IoT from the cloud down to embedded devices. Different usages of ML for application data processing and management tasks are studied. The state-of-the-art usages of ML in IoT are categorized according to their application domain, input data type, exploited ML techniques, and where they belong in the cloud-to-things continuum. The challenges and research trends toward efficient ML on the IoT edge are discussed. Moreover, the publications on “ML in IoT” are retrieved and analyzed systematically using ML classification techniques, and the growing topics and application domains are identified.

157 citations

Journal ArticleDOI
12 Aug 2020
TL;DR: A comprehensive survey and a comparative evaluation of recently developed approximate arithmetic circuits under different design constraints, synthesized and characterized under optimizations for performance and area.
Abstract: Approximate computing has emerged as a new paradigm for high-performance and energy-efficient design of circuits and systems. Given the many approximate arithmetic circuits that have been proposed, it has become critical to understand which design or approximation technique suits a specific application so as to improve performance and energy efficiency with a minimal loss in accuracy. This article aims to provide a comprehensive survey and a comparative evaluation of recently developed approximate arithmetic circuits under different design constraints. Specifically, approximate adders, multipliers, and dividers are synthesized and characterized under optimizations for performance and area. The error and circuit characteristics are then generalized for different classes of designs. The applications of these circuits in image processing and deep neural networks indicate that circuits with lower error rates or error biases perform better in simple computations, such as the sum of products, whereas more complex accumulative computations that involve multiple matrix multiplications and convolutions are vulnerable to single-sided errors that lead to a large error bias in the computed result. Such complex computations are more sensitive to errors in addition than to errors in multiplication, so a larger approximation can be tolerated in multipliers than in adders. The use of approximate arithmetic circuits can improve the quality of image processing and deep learning in addition to the benefits in performance and power consumption for these applications.
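The point about single-sided errors and error bias can be made concrete with a small numerical experiment. The sketch below compares a toy truncation-based approximate multiplier (which always underestimates) against a rounding-based one (roughly zero-mean error) over a long sum of products; both multipliers are assumptions for illustration, not circuits from the survey.

```python
# Illustration of error bias: single-sided errors accumulate, symmetric errors cancel.
import numpy as np

rng = np.random.default_rng(0)

def trunc_mult(a, b, cut=4):
    """Toy approximate multiplier that drops the `cut` least-significant result
    bits, so every error is negative (single-sided)."""
    return (a * b) >> cut << cut

def round_mult(a, b, cut=4):
    """Toy approximate multiplier that rounds the dropped bits, giving a
    roughly zero-mean error."""
    return ((a * b) + (1 << (cut - 1))) >> cut << cut

a = rng.integers(0, 256, size=10_000)
b = rng.integers(0, 256, size=10_000)
exact = np.sum(a * b)

print("accumulated error, truncating:", int(np.sum(trunc_mult(a, b)) - exact))
print("accumulated error, rounding:  ", int(np.sum(round_mult(a, b)) - exact))
# The per-product errors are comparable in magnitude, but the single-sided
# errors add up into a large bias while the symmetric ones largely cancel.
```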

143 citations

Posted Content
TL;DR: In this paper, a novel poisoning algorithm based on back-gradient optimization is proposed, which computes the gradient of interest through automatic differentiation while reversing the learning procedure to drastically reduce the attack complexity; the approach is able to target a wider class of learning algorithms trained with gradient-based procedures.
Abstract: A number of online services nowadays rely upon machine learning to extract valuable information from data collected in the wild. This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process. To date, these attacks have been devised only against a limited class of binary learning algorithms, due to the inherent complexity of the gradient-based procedure used to optimize the poisoning points (a.k.a. adversarial training examples). In this work, we first extend the definition of poisoning attacks to multiclass problems. We then propose a novel poisoning algorithm based on the idea of back-gradient optimization, i.e., to compute the gradient of interest through automatic differentiation, while also reversing the learning procedure to drastically reduce the attack complexity. Compared to current poisoning strategies, our approach is able to target a wider class of learning algorithms, trained with gradient-based procedures, including neural networks and deep learning architectures. We empirically evaluate its effectiveness on several application examples, including spam filtering, malware detection, and handwritten digit recognition. We finally show that, similarly to adversarial test examples, adversarial training examples can also be transferred across different learning algorithms.
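As a rough illustration of the bilevel structure behind such attacks, the sketch below poisons a tiny logistic-regression learner: a single attacker-controlled point is adjusted to maximize the validation loss of the retrained model. A finite-difference estimate stands in for the paper's back-gradient computation, and the data, step sizes, and learner are all assumptions for illustration.

```python
# Sketch of gradient-based data poisoning on a toy logistic-regression learner.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train(X, y, steps=200, lr=0.5):
    """Inner problem: fit logistic regression by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def val_loss(w, Xv, yv):
    """Outer objective the attacker wants to maximize."""
    p = sigmoid(Xv @ w)
    return -np.mean(yv * np.log(p + 1e-9) + (1 - yv) * np.log(1 - p + 1e-9))

# Two-blob toy data: clean training set plus a held-out validation set.
Xtr = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
ytr = np.array([0] * 50 + [1] * 50)
Xv = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
yv = np.array([0] * 50 + [1] * 50)

xp, yp = np.zeros(2), 1                 # attacker-controlled poison point

def poisoned_loss(xp):
    """Validation loss after retraining with the poison point included."""
    w = train(np.vstack([Xtr, xp]), np.append(ytr, yp))
    return val_loss(w, Xv, yv)

for _ in range(20):                     # ascend the outer loss w.r.t. xp
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = 1e-3
        g[i] = (poisoned_loss(xp + e) - poisoned_loss(xp - e)) / 2e-3
    xp = xp + 2.0 * g

print("clean-data val loss:   ", val_loss(train(Xtr, ytr), Xv, yv))
print("poisoned-data val loss:", poisoned_loss(xp))
```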

138 citations

Journal ArticleDOI
TL;DR: A comprehensive review of state-of-the-art architectures, tools, and methodologies in existing implementations of capsule networks, highlighting the successes, failures, and opportunities for further research so as to motivate researchers and industry players to exploit the full potential of this new field.

135 citations

Journal ArticleDOI
TL;DR: Different IoT-based machine learning mechanisms used in the mentioned fields, among others, are studied; the lessons learned are reported; and the assessments are examined in view of the role that machine learning techniques are expected to play in IoT networks.

129 citations