
Ahmed Elnaggar

Researcher at Technische Universität München

Publications: 21
Citations: 1284

Ahmed Elnaggar is an academic researcher at Technische Universität München. He has contributed to research on transfer learning and deep learning, has an h-index of 10, and has co-authored 21 publications receiving 428 citations. His previous affiliations include the Modern Academy in Maadi.

Papers
Posted Content (DOI)

ProtTrans: Towards Cracking the Language of Life’s Code Through Self-Supervised Deep Learning and High Performance Computing

TL;DR: In this paper, the authors trained two auto-regressive language models (Transformer-XL and XLNet) on 80 billion amino acids from 200 million protein sequences (UniRef100), and one auto-encoder model on 393 billion amino acids from 2.1 billion protein sequences taken from the Big Fantastic Database (BFD).
Journal Article (DOI)

ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing

TL;DR: In this paper, the authors trained two auto-regressive models (Transformer-XL, XLNet) and four auto-encoder models (BERT, ALBERT, ELECTRA, T5) on data from UniRef and BFD containing up to 393 billion amino acids.
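The ProtTrans checkpoints from this line of work are publicly released through the Hugging Face transformers library, so a minimal embedding-extraction sketch looks roughly as follows. The model id (Rostlab/prot_t5_xl_half_uniref50-enc), the residue-spacing step, and the rare-amino-acid mapping are assumptions drawn from the public ProtTrans release rather than from this page.

```python
# Minimal sketch: per-residue embeddings from a ProtTrans encoder.
# Assumed checkpoint; the exact id may differ from the paper's release.
import re
import torch
from transformers import T5Tokenizer, T5EncoderModel

model_id = "Rostlab/prot_t5_xl_half_uniref50-enc"  # assumption
tokenizer = T5Tokenizer.from_pretrained(model_id, do_lower_case=False)
model = T5EncoderModel.from_pretrained(model_id).eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
# ProtTrans models expect space-separated residues, with rare amino
# acids (U, Z, O, B) mapped to the unknown residue X.
prepared = " ".join(re.sub(r"[UZOB]", "X", sequence))

inputs = tokenizer(prepared, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state

# One 1024-dimensional vector per residue (the trailing special
# token is sliced off).
embeddings = hidden[0, : len(sequence)]
print(embeddings.shape)  # torch.Size([33, 1024])
```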
Journal Article (DOI)

Modeling aspects of the language of life through transfer-learning protein sequences

TL;DR: Transfer learning succeeded in extracting information from unlabeled sequence databases relevant to various protein prediction tasks, modeling the language of life, namely the principles underlying protein sequences, better than any features suggested by textbooks and prediction methods.
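As an illustration of the transfer-learning recipe the paper describes (fixed language-model embeddings feeding a light supervised model), here is a sketch with placeholder data; the array shapes and the binary task are invented for the example, not taken from the paper.

```python
# Illustrative only: pooled protein embeddings as fixed features for a
# simple downstream classifier. All data below are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))   # mean-pooled per-protein embeddings (placeholder)
y = rng.integers(0, 2, size=200)   # hypothetical labels, e.g. membrane vs. soluble

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```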
Posted Content (DOI)

ProtTrans: Towards Cracking the Language of Life’s Code Through Self-Supervised Learning

TL;DR: In this paper, the authors trained two auto-regressive models (Transformer-XL, XLNet) and four auto-encoder models (BERT, ALBERT, ELECTRA, T5) on data from UniRef and BFD containing up to 393 billion amino acids.
Posted Content

CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing.

TL;DR: CodeTrans, as discussed by the authors, is an encoder-decoder transformer model for tasks in the software engineering domain; it explores the effectiveness of encoder-decoder transformers on six software engineering tasks, including thirteen sub-tasks.
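For context, the released CodeTrans checkpoints can be queried like any other seq2seq model in transformers; the sketch below generates a docstring-style summary for a Python function. The model id under the SEBIS organization is an assumption based on the public CodeTrans release and may not match the exact checkpoint name.

```python
# Hedged sketch: code documentation generation with a CodeTrans checkpoint.
# The model id is assumed from the public release; verify before use.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "SEBIS/code_trans_t5_base_code_documentation_generation_python"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```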