Khalil Mrini

Researcher at University of California, San Diego

Publications: 24
Citations: 171

Khalil Mrini is an academic researcher at the University of California, San Diego. He has contributed to research topics including Computer science and Sentence. He has an h-index of 4, having co-authored 18 publications that have received 93 citations. His previous affiliations include the École Polytechnique Fédérale de Lausanne.

Papers
Proceedings Article (DOI)

Rethinking Self-Attention: Towards Interpretability in Neural Parsing.

TL;DR: The Label Attention Layer is introduced: a new form of self-attention in which each attention head represents a label. The Label Attention heads learn relations between syntactic categories and provide pathways for analyzing errors.
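As a rough illustration only, not the authors' exact formulation: below is a minimal PyTorch sketch of label attention, assuming one head per label whose learned query vector (rather than a query projection of the input) attends over token keys and values. All names, shapes, and design choices here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAttention(nn.Module):
    """Sketch of a label-attention layer: one attention head per label,
    each driven by a learned query vector instead of a query projection."""

    def __init__(self, d_model: int, num_labels: int):
        super().__init__()
        # One learned query vector per label head.
        self.label_queries = nn.Parameter(torch.randn(num_labels, d_model))
        self.key_proj = nn.Linear(d_model, d_model)
        self.value_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, d_model) token representations
        keys = self.key_proj(x)      # (batch, seq_len, d_model)
        values = self.value_proj(x)  # (batch, seq_len, d_model)
        # Scores of each label head over the tokens: (batch, num_labels, seq_len)
        scores = torch.einsum("ld,bsd->bls", self.label_queries, keys)
        weights = F.softmax(scores / keys.size(-1) ** 0.5, dim=-1)
        # Each head aggregates a label-specific view of the sentence:
        # (batch, num_labels, d_model)
        head_outputs = torch.einsum("bls,bsd->bld", weights, values)
        # The per-label attention weights are what make the heads interpretable.
        return head_outputs, weights
```

Because every head is tied to a single label, inspecting its attention weights directly shows which tokens inform that label's predictions.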
Posted Content

Rethinking Self-Attention: An Interpretable Self-Attentive Encoder-Decoder Parser.

TL;DR: The Label Attention Layer is introduced: a new form of self-attention where attention heads represent labels in NLP models. The hypothesis is that model representations can benefit from label-specific information while predictions become easier to interpret.
Proceedings Article (DOI)

Bringing letters to life: handwriting with haptic-enabled tangible robots

TL;DR: Results show clear potential for the robot-assisted learning activities, with visible improvement in certain handwriting skills, most notably in forming the ductus of letters, discriminating a letter among others, and in average handwriting speed.
Proceedings Article (DOI)

Joint Summarization-Entailment Optimization for Consumer Health Question Understanding

TL;DR: A simple, data-augmented joint learning approach combining question summarization and Recognizing Question Entailment (RQE) in the medical domain is proposed; it improves both tasks across four biomedical datasets in accuracy, ROUGE-1, and human evaluation scores.
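A hedged, toy sketch of what such a joint objective might look like, assuming a shared encoder feeding a summarization head and an entailment head whose cross-entropy losses are summed with a weighting factor; all module names, shapes, and the weighting are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointModel(nn.Module):
    """Toy joint model (hypothetical): a shared encoder with two task heads,
    trained by summing the summarization and entailment losses."""

    def __init__(self, vocab_size=1000, d_model=64, num_entail_classes=2):
        super().__init__()
        self.encoder = nn.Embedding(vocab_size, d_model)   # stand-in encoder
        self.summ_head = nn.Linear(d_model, vocab_size)    # toy summarization head
        self.entail_head = nn.Linear(d_model, num_entail_classes)

    def forward(self, question_ids, summary_ids, entail_label, lam=0.5):
        # Shared question representation: (batch, d_model)
        enc = self.encoder(question_ids).mean(dim=1)
        # Toy summarization loss: predict the first summary token.
        summ_loss = F.cross_entropy(self.summ_head(enc), summary_ids[:, 0])
        # Entailment classification loss on the same shared representation.
        rqe_loss = F.cross_entropy(self.entail_head(enc), entail_label)
        # Joint objective: both tasks shape the shared encoder end to end.
        return summ_loss + lam * rqe_loss
```

The point of the sketch is the shared encoder: gradients from both losses flow into the same representation, which is how joint training lets each task regularize the other.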
Posted Content

Rethinking Self-Attention: Towards Interpretability in Neural Parsing.

TL;DR: This paper introduces the Label Attention Layer, a new form of self-attention in which attention heads represent labels. Model representations benefit from label-specific information, while predictions become easier to interpret.