Open Access
Journal ArticleDOI

Assessing the Quality of MT Systems for Hindi to English Translation

TL;DR
This paper evaluates the translation quality of different MT engines for Hindi-English translation (Hindi data is provided as input and English is obtained as output) using automatic metrics such as BLEU and METEOR.
Abstract
Evaluation plays a vital role in checking the quality of MT output. It is done either manually or automatically. Manual evaluation is time consuming and subjective, so automatic metrics are used most of the time. This paper evaluates the translation quality of different MT engines for Hindi-English translation (Hindi data is provided as input and English is obtained as output) using various automatic metrics such as BLEU and METEOR. A comparison of the automatic evaluation results with human rankings is also given.

General Terms: Machine Translation, Natural Language Processing.
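A comparison of automatic scores against human rankings is typically quantified with a rank correlation. Below is a minimal, hypothetical sketch using Spearman's rank correlation; the engine names and all values are illustrative placeholders, not data from the paper.

# Correlate automatic metric scores with human rankings of MT engines.
# All engine names and values below are made-up placeholders.
from scipy.stats import spearmanr

bleu_scores = {"engine_a": 0.31, "engine_b": 0.27, "engine_c": 0.22}  # higher = better
human_ranks = {"engine_a": 1, "engine_b": 3, "engine_c": 2}           # 1 = best

engines = sorted(bleu_scores)
# Negate ranks so that "better" points the same way in both lists.
rho, p_value = spearmanr([bleu_scores[e] for e in engines],
                         [-human_ranks[e] for e in engines])
print(f"Spearman correlation between BLEU and human ranking: {rho:.2f}")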



Citations
Journal ArticleDOI

A Review of Machine Translation Systems in India and different Translation Evaluation Methodologies

TL;DR: This paper reviews the work done on various Indian machine translation systems and the existing methods for evaluating MT system output.
Proceedings ArticleDOI

Reducing the Impact of Data Sparsity in Statistical Machine Translation

TL;DR: Two strategies for circumventing sparsity caused by the lack of large parallel corpora are explored: the use of distributed representations in a Recurrent Neural Network based language model with different morphological features, and the use of lexical resources such as WordNet to overcome the sparsity of content words.
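The summary names WordNet as a lexical resource for sparse content words. A minimal, hypothetical sketch of that idea, assuming NLTK's WordNet interface and a toy vocabulary, is to back off from an out-of-vocabulary word to an in-vocabulary synonym:

# Hypothetical sketch: back off from an OOV content word to an
# in-vocabulary WordNet synonym. The vocabulary is a toy placeholder.
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

vocab = {"large", "house", "quick"}  # hypothetical in-vocabulary words

def backoff_to_synonym(word: str) -> str:
    """Return an in-vocabulary synonym of `word`, or `word` unchanged."""
    if word in vocab:
        return word
    for synset in wn.synsets(word):
        for lemma in synset.lemma_names():
            if lemma in vocab:
                return lemma
    return word

print(backoff_to_synonym("big"))  # -> 'large', via a shared synset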
Journal ArticleDOI

Taylor-rider-based deep convolutional neural network for image forgery detection in 3D lighting environment

TL;DR: This paper proposes a Taylor-rider optimization algorithm based deep convolutional neural network (Taylor-ROA-based DeepCNN) for detecting spliced images, developed by integrating the Taylor series into the rider optimization algorithm (ROA) to optimally tune the DeepCNN.
Proceedings ArticleDOI

Exploring System Combination approaches for Indo-Aryan MT Systems

TL;DR: This work uses triangulation as a technique to improve translation quality in cases where the direct translation model does not perform satisfactorily, obtaining significant improvements in BLEU scores over the direct source-target models.
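Triangulation here refers to pivot-based combination of phrase tables: the source-target probability is estimated by marginalising over pivot-language phrases, p(t|s) = sum_p p(t|p) * p(p|s). A minimal sketch with toy placeholder tables (not the paper's models):

# Phrase-table triangulation through a pivot language (toy tables).
from collections import defaultdict

src_to_pivot = {"ghar": {"house": 0.7, "home": 0.3}}      # source -> pivot
pivot_to_tgt = {"house": {"haus": 0.9, "heim": 0.1},      # pivot -> target
                "home": {"heim": 0.8, "haus": 0.2}}

def triangulate(src_phrase):
    """Combine the two tables into a direct source -> target distribution."""
    scores = defaultdict(float)
    for pivot, p_ps in src_to_pivot.get(src_phrase, {}).items():
        for tgt, p_tp in pivot_to_tgt.get(pivot, {}).items():
            scores[tgt] += p_ps * p_tp
    return dict(scores)

print(triangulate("ghar"))  # {'haus': 0.69, 'heim': 0.31}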
Journal ArticleDOI

Assessment of Multi-Engine Machine Translation for English to Hindi Language MEMTEHiL: Using F&A and iBLEU Metrics

TL;DR: A Multi-Engine Machine Translation for English to Hindi Language (MEMTEHiL) framework has been designed and integrated by the authors as a translation solution for computer-science-domain e-content, enabling the use of well-tested machine translation approaches.
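The summary does not spell out how candidate outputs are combined. One common multi-engine strategy, shown below purely as an illustration and not claimed to be MEMTEHiL's method, is consensus selection: per sentence, pick the hypothesis most similar on average to the other engines' outputs.

# Consensus-based hypothesis selection (a generic multi-engine strategy,
# NOT necessarily MEMTEHiL's). Similarity is plain unigram F1 for brevity.

def unigram_f1(a, b):
    overlap = len(set(a) & set(b))
    if overlap == 0:
        return 0.0
    p, r = overlap / len(set(a)), overlap / len(set(b))
    return 2 * p * r / (p + r)

def consensus_pick(hypotheses):
    """Return the hypothesis most similar, on average, to all the others."""
    tokens = [h.split() for h in hypotheses]
    def avg_sim(i):
        sims = [unigram_f1(tokens[i], t) for j, t in enumerate(tokens) if j != i]
        return sum(sims) / len(sims)
    best = max(range(len(hypotheses)), key=avg_sim)
    return hypotheses[best]

outputs = ["the stack stores frames",        # engine 1 (toy outputs)
           "the stack stores call frames",   # engine 2
           "stack keeps the frames"]         # engine 3
print(consensus_pick(outputs))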
References
Proceedings ArticleDOI

Bleu: a Method for Automatic Evaluation of Machine Translation

TL;DR: This paper proposes a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run.
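As a concrete illustration of the scheme BLEU uses, here is a minimal from-scratch sketch: clipped (modified) n-gram precision combined with a brevity penalty. Real implementations such as sacreBLEU add smoothing and corpus-level aggregation; this is illustrative only.

# Minimal sentence-level BLEU sketch: modified n-gram precision with
# reference clipping, plus a brevity penalty. No smoothing.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    hyp, ref = hypothesis.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ng, ref_ng = ngrams(hyp, n), ngrams(ref, n)
        clipped = sum(min(c, ref_ng[g]) for g, c in hyp_ng.items())
        total = max(sum(hyp_ng.values()), 1)
        log_prec += math.log(max(clipped, 1e-9) / total) / max_n
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return brevity * math.exp(log_prec)

# BLEU-2 for a toy pair (unigram precision 5/6, bigram precision 3/5).
print(bleu("the cat sat on the mat", "the cat is on the mat", max_n=2))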
Proceedings Article

METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments

TL;DR: METEOR is described: an automatic metric for machine translation evaluation based on a generalized concept of unigram matching between the machine-produced translation and human-produced reference translations, which can be easily extended to include more advanced matching strategies.
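To make the matching scheme concrete, here is a hedged sketch of METEOR restricted to exact unigram matches (the full metric also matches stems and WordNet synonyms); the constants follow the original formulation: an F-mean weighted 9:1 toward recall and a fragmentation penalty of 0.5 * (chunks/matches)^3.

# Exact-match-only METEOR sketch; stemming/synonym modules omitted.
def meteor_sketch(hypothesis, reference):
    hyp, ref = hypothesis.split(), reference.split()
    alignment, used = [], set()
    for i, h in enumerate(hyp):           # greedy in-order exact matching
        for j, r in enumerate(ref):
            if j not in used and h == r:
                alignment.append((i, j))
                used.add(j)
                break
    m = len(alignment)
    if m == 0:
        return 0.0
    precision, recall = m / len(hyp), m / len(ref)
    f_mean = 10 * precision * recall / (recall + 9 * precision)
    chunks = 1                            # runs contiguous in both strings
    for (i1, j1), (i2, j2) in zip(alignment, alignment[1:]):
        if i2 != i1 + 1 or j2 != j1 + 1:
            chunks += 1
    penalty = 0.5 * (chunks / m) ** 3
    return f_mean * (1 - penalty)

print(meteor_sketch("the cat sat on the mat", "the cat is on the mat"))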
Proceedings ArticleDOI

Automatic evaluation of machine translation quality using n-gram co-occurrence statistics

TL;DR: DARPA commissioned NIST to develop an MT evaluation facility based on the IBM work; the resulting metric is now available from NIST and serves as the primary evaluation measure for TIDES MT research.
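The core idea of the NIST metric is that matched n-grams are weighted by information content estimated from reference counts: Info(w1..wn) = log2(count(w1..wn-1) / count(w1..wn)), so rarer n-grams contribute more. A hedged sketch of just that weighting (the full metric sums over n up to 5 and applies a brevity factor):

# Information weights from a toy reference corpus; rarer = more weight.
import math

ref_corpus = "the cat is on the mat . the dog is in the house .".split()

def count(ngram):
    n = len(ngram)
    return sum(1 for i in range(len(ref_corpus) - n + 1)
               if tuple(ref_corpus[i:i + n]) == ngram)

def info(ngram):
    denom = count(ngram)
    numer = count(ngram[:-1]) if len(ngram) > 1 else len(ref_corpus)
    return math.log2(numer / denom) if denom else 0.0

# A frequent word carries little information; a rare one carries more.
print(info(("the",)), info(("cat",)))  # ~1.81 vs ~3.81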
Proceedings ArticleDOI

Manual and Automatic Evaluation of Machine Translation between European Languages

TL;DR: This work evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back.
Book ChapterDOI

The significance of recall in automatic metrics for MT evaluation

TL;DR: This work shows that correlation with human judgments is highest when almost all of the weight is assigned to recall, and that stemming significantly benefits not just simpler unigram precision and recall based metrics, but also BLEU and NIST.