Open Access · Posted Content

Evaluation and Ranking of Machine Translated Output in Hindi Language using Precision and Recall Oriented Metrics

TLDR
The implementation results of different metrics applied to the Hindi language are presented, along with their comparisons, illustrating how effective these metrics are on languages like Hindi.
Abstract
Evaluation plays a crucial role in the development of machine translation (MT) systems. To judge the quality of an existing MT system, i.e. whether the translated output is of human translation quality or not, various automatic metrics exist. We present the implementation results of different metrics when used on the Hindi language, along with their comparisons, illustrating how effective these metrics are on free-word-order languages like Hindi.
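To make the precision/recall framing concrete, the sketch below computes clipped unigram precision and recall for a candidate translation against a single reference. Whitespace tokenization, the single-reference setup, and the transliterated Hindi example are simplifying assumptions for illustration, not the paper's implementation.

```python
from collections import Counter

def unigram_precision_recall(candidate, reference):
    """Clipped unigram precision and recall of a candidate translation
    against a single reference."""
    cand_tokens = candidate.split()
    ref_tokens = reference.split()
    ref_counts = Counter(ref_tokens)
    # Each candidate word is credited at most as often as it appears
    # in the reference ("clipping"), so repeating a word gains nothing.
    matches = sum(min(c, ref_counts[w])
                  for w, c in Counter(cand_tokens).items())
    precision = matches / len(cand_tokens) if cand_tokens else 0.0
    recall = matches / len(ref_tokens) if ref_tokens else 0.0
    return precision, recall

# Transliterated Hindi example: "raha" vs. "rahi" is a gender-agreement
# mismatch, so 4 of 5 tokens match.
p, r = unigram_precision_recall("vah ghar ja raha hai", "vah ghar ja rahi hai")
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.80 recall=0.80
```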


Citations
Proceedings Article · DOI

Evaluation of English to Arabic Machine Translation Systems using BLEU and GTM

TL;DR: The results of this study reveal that Golden Alwafi achieves the highest accuracy under BLEU, while Google Translator attains the highest accuracy under the GTM method.
Journal Article · DOI

A Review of Machine Translation Systems in India and different Translation Evaluation Methodologies

TL;DR: This paper reviews the work done on various Indian machine translation systems and the existing methods for evaluating MT systems' translated output.
Journal Article · DOI

Machine translation: a critical look at the performance of rule-based and statistical machine translation

TL;DR: The German translations of Mark Twain's The Awful German Language produced by Systran and Google Translate are critically evaluated, highlighting some of the linguistic challenges faced by each translation system.
Proceedings Article · DOI

Improving the quality of Machine Translation using rule based tense synthesizer for Hindi

TL;DR: This work proposes a rule-based tense synthesizer that recognises the subject, verb, and auxiliary verb, analyses the tense, then modifies the verb and auxiliary verb according to the subject and puts the sentence in the correct tense.
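As an illustration of the rule-based approach this citation describes, the following sketch selects a Hindi progressive marker and auxiliary that agree with the subject. The feature names and the tiny rule table are hypothetical stand-ins, not the paper's actual rules.

```python
# Hypothetical rule table: (subject agreement features, tense) ->
# (progressive marker, auxiliary). Coverage is illustrative only.
AUX_RULES = {
    ("3sg_masc", "present_continuous"): ("raha", "hai"),
    ("3sg_fem",  "present_continuous"): ("rahi", "hai"),
    ("3pl",      "present_continuous"): ("rahe", "hain"),
}

def synthesize_tense(subject_feats, tense, verb_stem):
    """Attach the marker and auxiliary that agree with the subject."""
    marker, aux = AUX_RULES[(subject_feats, tense)]
    return f"{verb_stem} {marker} {aux}"

print(synthesize_tense("3sg_fem", "present_continuous", "ja"))  # ja rahi hai
```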
References
Proceedings Article · DOI

Bleu: a Method for Automatic Evaluation of Machine Translation

TL;DR: This paper proposes a method for automatic machine translation evaluation that is quick, inexpensive, and language-independent, correlates highly with human evaluation, and has little marginal cost per run.
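A minimal sentence-level BLEU sketch follows, showing the two components the paper defines: clipped ("modified") n-gram precisions combined by a geometric mean, and a brevity penalty. Production implementations additionally smooth zero counts and aggregate statistics at the corpus level.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, references, max_n=4):
    """Geometric mean of clipped n-gram precisions times a brevity penalty."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        if not cand_counts:
            return 0.0  # candidate shorter than n; real BLEU smooths this
        # Clip each n-gram count by its maximum count in any one reference.
        max_ref = Counter()
        for ref in refs:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        if clipped == 0:
            return 0.0  # zero n-gram precision; real BLEU smooths this
        log_prec += math.log(clipped / sum(cand_counts.values())) / max_n
    # Brevity penalty against the reference closest in length.
    ref_len = min((len(r) for r in refs),
                  key=lambda l: (abs(l - len(cand)), l))
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / len(cand))
    return bp * math.exp(log_prec)
```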
Proceedings Article

METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments

TL;DR: METEOR is described, an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machine-produced translation and human-produced reference translations, and that can be easily extended to include more advanced matching strategies.
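A simplified METEOR-style scorer is sketched below. It assumes exact unigram matches only (the full metric adds stemming and synonym modules) and a greedy in-order alignment rather than METEOR's chunk-minimizing search; the parameter values are those published for the metric.

```python
def meteor_sketch(candidate, reference, alpha=0.9, beta=3.0, gamma=0.5):
    """Recall-weighted harmonic mean of unigram precision and recall,
    discounted by a fragmentation penalty over matched chunks."""
    cand, ref = candidate.split(), reference.split()
    # Greedy in-order alignment of exact matches.
    matches, used = [], set()
    for i, w in enumerate(cand):
        for j, r in enumerate(ref):
            if j not in used and w == r:
                matches.append((i, j))
                used.add(j)
                break
    m = len(matches)
    if m == 0:
        return 0.0
    precision, recall = m / len(cand), m / len(ref)
    # alpha = 0.9 puts most of the weight on recall.
    fmean = precision * recall / (alpha * precision + (1 - alpha) * recall)
    # A chunk is a run of matches contiguous in both strings; fewer,
    # longer chunks mean better word order and a smaller penalty.
    chunks = 1
    for (i1, j1), (i2, j2) in zip(matches, matches[1:]):
        if i2 != i1 + 1 or j2 != j1 + 1:
            chunks += 1
    penalty = gamma * (chunks / m) ** beta
    return fmean * (1 - penalty)
```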
Proceedings Article · DOI

Automatic evaluation of machine translation quality using n-gram co-occurrence statistics

TL;DR: NIST was commissioned to develop an MT evaluation facility based on the IBM work; this facility is now available from NIST and serves as the primary evaluation measure for TIDES MT research.
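The key departure from BLEU is that NIST weights each co-occurring n-gram by its information content, estimated from reference data, so rarer n-grams count for more. The sketch below computes those information weights under simplified whitespace tokenization.

```python
import math
from collections import Counter

def nist_info_weights(reference_corpus, max_n=5):
    """Return a function giving the NIST information weight of an n-gram:
    Info(w1..wn) = log2(count(w1..w_{n-1}) / count(w1..wn))."""
    counts = Counter()
    for sent in reference_corpus:
        toks = sent.split()
        for n in range(1, max_n + 1):
            for i in range(len(toks) - n + 1):
                counts[tuple(toks[i:i + n])] += 1
    total_unigrams = sum(c for g, c in counts.items() if len(g) == 1)
    def info(gram):
        denom = counts[gram]
        # For unigrams, the numerator is the total token count.
        numer = counts[gram[:-1]] if len(gram) > 1 else total_unigrams
        return math.log2(numer / denom) if denom else 0.0
    return info

info = nist_info_weights(["the cat sat", "the dog sat"])
print(info(("the",)), info(("the", "cat")))  # ~1.58 and 1.0 bits
```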
Proceedings Article · DOI

Manual and Automatic Evaluation of Machine Translation between European Languages

TL;DR: This work evaluates machine translation performance for six European language pairs from a shared task: translating French, German, and Spanish texts to English and back.
Book Chapter · DOI

The significance of recall in automatic metrics for MT evaluation

TL;DR: This work shows that correlation with human judgments is highest when almost all of the weight is assigned to recall, and that stemming is significantly beneficial not only to simpler unigram precision- and recall-based metrics but also to BLEU and NIST.
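A small numeric illustration of this finding: in a weighted harmonic mean F = P·R / (α·P + (1−α)·R), pushing α toward 1 makes the score track recall, so a fuller but less precise hypothesis overtakes a short, precise one. The precision/recall pairs below are invented for the demonstration.

```python
def weighted_f(p, r, alpha):
    """F = P*R / (alpha*P + (1-alpha)*R); alpha near 1 rewards recall."""
    return p * r / (alpha * p + (1 - alpha) * r) if p and r else 0.0

short_precise = (1.00, 0.40)   # (precision, recall)
long_complete = (0.70, 0.90)
for alpha in (0.1, 0.5, 0.9):
    print(alpha,
          round(weighted_f(*short_precise, alpha), 2),   # ~0.87, 0.57, 0.43
          round(weighted_f(*long_complete, alpha), 2))   # ~0.72, 0.79, 0.88
```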