Ahmed Osman

Researcher at Heinrich Hertz Institute

Publications: 18
Citations: 270

Ahmed Osman is an academic researcher at the Heinrich Hertz Institute. He has contributed to research in topics including computer science and artificial neural networks, has an h-index of 7, and has co-authored 10 publications receiving 139 citations. Previous affiliations of Ahmed Osman include the Fraunhofer Society.

Papers
Proceedings ArticleDOI

Evaluating Recurrent Neural Network Explanations

TL;DR: Using the method that performed best in the authors' experiments, it is shown how specific linguistic phenomena such as negation in sentiment analysis are reflected in the relevance patterns, and how relevance visualization can help to understand the misclassification of individual samples.
Posted Content

DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks

TL;DR: A universal compression algorithm for DNNs based on applying a Context-based Adaptive Binary Arithmetic Coder (CABAC) to the DNN parameters, together with a novel quantization scheme that minimizes a rate-distortion function while simultaneously taking the impact of quantization on the DNN's performance into account.
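A rough illustration of the rate-distortion idea behind such a quantization scheme: each weight is snapped to the grid point that balances squared error against an estimated code length. This is a minimal sketch assuming a uniform grid, a histogram-based rate model and a hand-picked lambda, none of which are taken from the DeepCABAC paper itself.

```python
import numpy as np

def rd_quantize(weights, step=0.01, lam=0.1):
    """Assign each weight to a quantization grid point by minimizing a
    simple rate-distortion cost: squared error + lambda * estimated bits.
    Illustrative stand-in only; the grid, rate model and lambda are
    assumptions, not DeepCABAC's actual scheme."""
    w = weights.ravel()
    grid = np.arange(w.min(), w.max() + step, step)        # candidate levels
    # crude rate model: levels that many weights fall near cost fewer bits
    counts = np.histogram(w, bins=len(grid))[0] + 1
    bits = -np.log2(counts / counts.sum())                  # per-level code length
    # cost matrix: distortion + lambda * rate, pick the best level per weight
    cost = (w[:, None] - grid[None, :]) ** 2 + lam * bits[None, :]
    q = grid[np.argmin(cost, axis=1)]
    return q.reshape(weights.shape)

quantized = rd_quantize(np.random.randn(64, 32), step=0.05, lam=1e-3)
```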
Posted Content

Evaluating Recurrent Neural Network Explanations

TL;DR: In this article, the authors evaluate several methods that have been proposed to explain the predictions of recurrent neural networks (RNNs), in particular LSTMs, by assigning to each input variable, e.g. a word, a relevance score indicating to what extent it contributed to a particular prediction.
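One simple way to assign such word-level relevances is gradient × input; the sketch below uses a toy PyTorch LSTM classifier whose architecture, sizes and data are purely illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

# Toy LSTM classifier; architecture and sizes are illustrative only.
class ToyLSTM(nn.Module):
    def __init__(self, vocab=1000, emb=32, hid=64, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, classes)

    def forward(self, emb_input):
        h, _ = self.lstm(emb_input)
        return self.out(h[:, -1])           # prediction from the last time step

# Gradient x Input: one way to assign a relevance score per word
# (a sketch of the general idea, not necessarily the method the paper recommends).
model = ToyLSTM()
tokens = torch.randint(0, 1000, (1, 7))     # a dummy 7-word sentence
emb = model.embed(tokens).detach().requires_grad_(True)
score = model(emb)[0, 1]                    # logit of the target class
score.backward()
relevance = (emb.grad * emb).sum(dim=-1)    # one relevance value per word
print(relevance)
```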
Journal ArticleDOI

DRAU: Dual Recurrent Attention Units for Visual Question Answering

TL;DR: This paper proposes a recurrent attention mechanism for VQA which utilizes dual (textual and visual) Recurrent Attention Units (RAUs) and shows the effect of all possible combinations of recurrent and convolutional dual attention.
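For intuition, a generic question-guided attention module over image-region features is sketched below; DRAU's actual Recurrent Attention Units combine recurrence with attention in ways this minimal PyTorch example does not reproduce, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class VisualAttention(nn.Module):
    """Generic question-guided attention over image region features.
    Only a sketch of the dual-attention idea; not DRAU's RAU architecture."""
    def __init__(self, vis_dim=2048, q_dim=512, hid=512):
        super().__init__()
        self.proj_v = nn.Linear(vis_dim, hid)
        self.proj_q = nn.Linear(q_dim, hid)
        self.score = nn.Linear(hid, 1)

    def forward(self, regions, question):
        # regions: (batch, num_regions, vis_dim), question: (batch, q_dim)
        joint = torch.tanh(self.proj_v(regions) + self.proj_q(question).unsqueeze(1))
        weights = torch.softmax(self.score(joint), dim=1)   # one weight per region
        return (weights * regions).sum(dim=1)               # attended visual vector

att = VisualAttention()
v = torch.randn(2, 36, 2048)     # 36 region features per image (illustrative)
q = torch.randn(2, 512)          # question encoding (illustrative)
print(att(v, q).shape)           # torch.Size([2, 2048])
```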
Journal ArticleDOI

CLEVR-XAI: A benchmark dataset for the ground truth evaluation of neural network explanations

TL;DR: In this paper, the authors propose a ground-truth-based evaluation framework for explainable AI (XAI) methods built on the CLEVR visual question answering task, which provides a selective, controlled and realistic testbed for the evaluation of neural network explanations.
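A ground-truth-based evaluation can, for instance, score how much of an explanation's relevance falls inside an annotated object region. The sketch below illustrates that idea with made-up data; the paper's exact metric definitions may differ.

```python
import numpy as np

def relevance_mass(heatmap, gt_mask):
    """Fraction of positive relevance falling inside the ground-truth region.
    A simple ground-truth-based score in the spirit of such a benchmark;
    treat it as a sketch, not the paper's metric."""
    r = np.clip(heatmap, 0, None)           # keep positive relevance only
    return r[gt_mask].sum() / (r.sum() + 1e-12)

heatmap = np.random.rand(128, 128)          # explanation for one image (dummy)
gt_mask = np.zeros((128, 128), dtype=bool)  # ground-truth object region (dummy)
gt_mask[40:80, 40:80] = True
print(relevance_mass(heatmap, gt_mask))
```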