
Showing papers by "Aly A. Fahmy published in 2021"


Journal ArticleDOI
Omar Alfarghaly1, Rana Khaled1, Abeer El-Korany1, Maha Helal1, Aly A. Fahmy1 
TL;DR: This is the first work to condition a pre-trained transformer on visual and semantic features to generate medical reports, and to include semantic similarity metrics in the quantitative analysis of the generated reports.

55 citations


Journal ArticleDOI
TL;DR: DeepOnKHATT is an end-to-end AOHR based on bidirectional long short-term memory and the connectionist temporal classification that is capable of performing recognition at the sentence level in real-time and outperformed existing systems.
Abstract: The importance of online handwriting recognition technology has steadily increased in recent years. This importance stems from the rapid increase in the number of handheld devices with digital pens...

4 citations
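The recognizer above pairs a bidirectional LSTM with connectionist temporal classification (CTC), which maps per-frame predictions to a character sequence without frame-level alignment. As a generic illustration of the CTC decoding idea (not the DeepOnKHATT code; the blank symbol and the toy input are assumptions), greedy decoding collapses repeated symbols and then drops the blank token:

```python
# Greedy CTC "collapse": merge consecutive repeats, then remove blanks.
# This is the standard best-path decoding step, shown on a toy alphabet.

BLANK = "-"  # CTC blank symbol (an assumption for this sketch)

def ctc_collapse(frame_labels):
    """Collapse a sequence of per-frame argmax labels into a transcript."""
    out = []
    prev = None
    for symbol in frame_labels:
        # Emit a symbol only when it differs from the previous frame
        # and is not the blank; this undoes the frame-level stretching.
        if symbol != prev and symbol != BLANK:
            out.append(symbol)
        prev = symbol
    return "".join(out)

# Per-frame labels "hh-e-ll-lo-" collapse to "hello"; the blank between
# the two "l" runs is what lets CTC represent a genuine double letter.
decoded = ctc_collapse(list("hh-e-ll-lo-"))
```

In a full system these per-frame labels would come from the argmax of the BiLSTM's softmax outputs at each timestep.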


Book ChapterDOI
01 Jan 2021
TL;DR: In this chapter, medical imaging modalities and histopathology are explained, along with the most suitable medical image types and sizes for the classification and detection of medical diagnoses.
Abstract: Nowadays, machine learning (ML) is one of the most significant technologies of the new era, applied in almost every field of our lives. ML has a notable impact on medical care when dealing with medical images, which are used to investigate the internal parts of the human body. The type and size of medical images vary from one scan to another. Moreover, medical images differ from natural images: natural images support object detection and classification easily, even when resized to smaller dimensions, whereas resizing medical images is not an efficient approach because diagnosis depends on pixel-level detail. In this chapter, medical imaging modalities and histopathology are explained. Furthermore, the best medical image type and size for the classification and detection of medical diagnoses are discussed. Moreover, methods specific to medical images are considered, such as image compression, image format, image resizing, and other essential aspects. Finally, we also give a brief summary of deep learning algorithms that are used with medical images.

2 citations
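The chapter's warning that resizing medical images discards pixel-level information can be made concrete with a toy sketch (an illustration assumed for this summary, not taken from the chapter): a single-pixel high-intensity "lesion" loses most of its contrast under one round of 2x2 average-pooling downsizing.

```python
# Toy demonstration: average-pool a 4x4 "image" by non-overlapping
# 2x2 blocks and watch a one-pixel bright spot lose contrast.

def downsample_2x2(img):
    """Average-pool a 2-D list-of-lists over non-overlapping 2x2 blocks."""
    h, w = len(img), len(img[0])
    return [
        [(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4.0
         for c in range(0, w, 2)]
        for r in range(0, h, 2)
    ]

image = [[0.0] * 4 for _ in range(4)]
image[1][2] = 1.0          # one-pixel "lesion" at full resolution
small = downsample_2x2(image)
peak = max(max(row) for row in small)
# The lesion's contrast drops from 1.0 to 0.25 after a single 2x downsize;
# a few more rounds would make it indistinguishable from background noise.
```

This is why the chapter argues for choosing an appropriate acquisition size rather than naively resizing, as one would with natural images.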


Journal ArticleDOI
TL;DR: This article proposes self-training with unlabeled DA data and applies it in the context of named entity recognition (NER), POS tagging, and sarcasm detection (SRD) on several DA varieties.
Abstract: A reasonable amount of annotated data is required for fine-tuning pre-trained language models (PLM) on down-stream tasks. However, obtaining labeled examples for different language varieties can be costly. In this paper, we investigate the zero-shot performance on Dialectal Arabic (DA) when fine-tuning a PLM on modern standard Arabic (MSA) data only— identifying a significant performance drop when evaluating such models on DA. To remedy such performance drop, we propose self-training with unlabeled DA data and apply it in the context of named entity recognition (NER), part-of-speech (POS) tagging, and sarcasm detection (SRD) on several DA varieties. Our results demonstrate the effectiveness of self-training with unlabeled DA data: improving zero-shot MSA-to-DA transfer by as large as ~10% F₁ (NER), 2% accuracy (POS tagging), and 4.5% F₁ (SRD). We conduct an ablation experiment and show that the performance boost observed directly results from the unlabeled DA examples used for self-training. Our work opens up opportunities for leveraging the relatively abundant labeled MSA datasets to develop DA models for zero and low-resource dialects. We also report new state-of-the-art performance on all three tasks and open-source our fine-tuned models for the research community.

1 citation
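The self-training recipe in the paper above — train on labeled source-variety data, pseudo-label unlabeled target-variety data when the model is confident, then retrain on the union — can be sketched with a deliberately tiny stand-in model. The nearest-centroid classifier, the margin-based confidence, and the 1-D synthetic data are all assumptions for illustration; the paper itself fine-tunes a pre-trained language model on MSA and self-trains on Dialectal Arabic.

```python
# Minimal self-training sketch: fit on labeled source data, pseudo-label
# confident unlabeled target points, then refit on source + pseudo-labels.

def train_centroids(examples):
    """Fit a 1-D nearest-centroid classifier: label -> mean feature value."""
    sums, counts = {}, {}
    for x, y in examples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Return (label, margin); margin = gap between the two nearest centroids."""
    ranked = sorted(centroids, key=lambda y: abs(x - centroids[y]))
    best = ranked[0]
    margin = abs(x - centroids[ranked[1]]) - abs(x - centroids[best])
    return best, margin

def self_train(labeled_src, unlabeled_tgt, threshold=1.0):
    """One self-training round: pseudo-label confident target points, refit."""
    model = train_centroids(labeled_src)
    pseudo = []
    for x in unlabeled_tgt:
        label, margin = predict(model, x)
        if margin >= threshold:          # keep only confident pseudo-labels
            pseudo.append((x, label))
    return train_centroids(labeled_src + pseudo), pseudo

src = [(0.0, 0), (0.2, 0), (4.0, 1), (4.2, 1)]   # labeled "source" data
tgt = [1.0, 1.2, 5.0, 5.2]                        # shifted unlabeled "target"
adapted, pseudo = self_train(src, tgt)
# All four target points clear the margin, so the centroids shift from
# {0: 0.1, 1: 4.1} toward the target distribution: {0: 0.6, 1: 4.6}.
```

The confidence threshold plays the same filtering role as the pseudo-label selection step in the paper: only predictions the source-trained model is sure about are recycled as training data.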