
Pamela Shapiro

Researcher at Johns Hopkins University

Publications: 11
Citations: 224

Pamela Shapiro is an academic researcher from Johns Hopkins University. The author has contributed to research on the topics of machine translation and convolutional neural networks, has an h-index of 6, and has co-authored 11 publications receiving 179 citations.

Papers
Proceedings Article

Curriculum Learning for Domain Adaptation in Neural Machine Translation

TL;DR: This paper introduces a curriculum learning approach to adapt generic NMT models to a specific domain: samples are grouped by their similarity to the domain of interest, and each group is fed to the training algorithm on a particular schedule.
Posted Content

Curriculum Learning for Domain Adaptation in Neural Machine Translation

TL;DR: This work introduces a curriculum learning approach to adapt generic neural machine translation models to a specific domain, and it consistently outperforms both unadapted and adapted baselines in experiments with two distinct domains and two language pairs.
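
The core recipe is simple enough to sketch. Below is a minimal Python illustration, assuming a similarity score (e.g., from in-domain vs. general-domain language model cross-entropy) has already been computed for each training sample; the names and shard schedule are illustrative assumptions, not the paper's exact implementation:

import numpy as np

def make_curriculum(samples, scores, n_shards=4):
    """Sort samples by similarity to the target domain and split into shards."""
    order = np.argsort(-np.asarray(scores))          # most in-domain first
    return [[samples[i] for i in shard]
            for shard in np.array_split(order, n_shards)]

def curriculum_epochs(shards, epochs_per_phase=1):
    """Feed shards cumulatively: each phase adds the next, less-similar shard."""
    seen = []
    for shard in shards:
        seen.extend(shard)
        for _ in range(epochs_per_phase):
            np.random.shuffle(seen)
            yield list(seen)                         # one epoch of training data

Training proceeds phase by phase, so the model sees the most in-domain data first and the least similar data only in later phases.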
Proceedings Article

Hard Non-Monotonic Attention for Character-Level Transduction

TL;DR: This paper introduces an exact, polynomial-time algorithm for marginalizing over the exponential number of non-monotonic alignments between two strings, and shows that hard attention models can be viewed as neural reparameterizations of the classical IBM Model 1.
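
The tractability rests on an independence assumption worth spelling out: if each target position chooses its source position independently (the IBM Model 1 assumption), the sum over exponentially many hard alignments factorizes into a product of per-position sums. A minimal NumPy sketch of that marginalization, with illustrative shapes and names rather than the paper's actual model:

import numpy as np
from scipy.special import logsumexp

def log_marginal_likelihood(align_logits, emit_logprobs):
    """
    align_logits:  (T_y, T_x) unnormalized attention scores per target position
    emit_logprobs: (T_y, T_x) log p(y_j | attending to source position i)
    Returns log p(y|x) = sum_j log sum_i p(a_j = i) p(y_j | x_i),
    an exact marginal over all T_x ** T_y hard alignments.
    """
    align_logprobs = align_logits - logsumexp(align_logits, axis=1, keepdims=True)
    # logsumexp over source positions marginalizes the alignment at each j
    return logsumexp(align_logprobs + emit_logprobs, axis=1).sum()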
Proceedings Article

Morphological Word Embeddings for Arabic Neural Machine Translation in Low-Resource Settings

Pamela Shapiro, +1 more
TL;DR: It is found that word embeddings utilizing subword information consistently outperform standard word embeddings, both on a word similarity task and as initialization of the source word embeddings in a low-resource NMT system.
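
A minimal fastText-style sketch of such subword-aware embeddings, where a word's vector is the average of its character n-gram vectors; the hashing trick, dimensions, and random vectors below are illustrative stand-ins for trained parameters:

import numpy as np

DIM, BUCKETS = 100, 2_000_000
ngram_vecs = np.random.randn(BUCKETS, DIM) * 0.01    # stand-in for trained vectors

def char_ngrams(word, n_min=3, n_max=6):
    w = f"<{word}>"                                  # boundary markers
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def embed(word):
    idx = [hash(g) % BUCKETS for g in char_ngrams(word)]
    return ngram_vecs[idx].mean(axis=0)

# Morphologically related forms share n-grams, so their vectors stay close,
# which matters for a morphologically rich language like Arabic.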
Posted Content

BPE and CharCNNs for Translation of Morphology: A Cross-Lingual Comparison and Analysis

TL;DR: This work argues for a reconsideration of the charCNN based on cross-lingual improvements on low-resource data, and finds that in most cases using both BPE and a charCNN performs best, while for Hebrew a charCNN over words is best.
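
For concreteness, a charCNN in this setting composes a word (or subword) representation from its characters with convolutions and max-pooling. A minimal PyTorch sketch; the filter widths and sizes are illustrative assumptions, not the paper's configuration:

import torch
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, n_chars=128, char_dim=32, n_filters=64, widths=(3, 4, 5)):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(char_dim, n_filters, w, padding=w // 2) for w in widths)

    def forward(self, char_ids):                      # (batch, max_word_len)
        x = self.char_emb(char_ids).transpose(1, 2)   # (batch, char_dim, len)
        # max-pool each filter map over character positions, then concatenate
        pooled = [conv(x).max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)               # (batch, n_filters * len(widths))

The pooled vector can then initialize or replace the token embedding fed to the NMT encoder, whether the tokens are words or BPE units.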