Open Access Article

A Study of metrics for evaluation of Machine translation

K. Sourabh1, S. M. Aaqib2, V. Mansotra3

Section: Research Paper, Product Type: Journal Paper
Volume-06 , Issue-05 , Page no. 1-4, Jun-2018

Online published on Jun 30, 2018

Copyright © K. Sourabh, S. M. Aaqib, V. Mansotra. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


How to Cite this Paper


IEEE Style Citation: K. Sourabh, S. M Aaqib, V. Mansotra, “A Study of metrics for evaluation of Machine translation,” International Journal of Computer Sciences and Engineering, Vol.06, Issue.05, pp.1-4, 2018.

MLA Style Citation: K. Sourabh, S. M Aaqib, V. Mansotra "A Study of metrics for evaluation of Machine translation." International Journal of Computer Sciences and Engineering 06.05 (2018): 1-4.

APA Style Citation: K. Sourabh, S. M Aaqib, V. Mansotra, (2018). A Study of metrics for evaluation of Machine translation. International Journal of Computer Sciences and Engineering, 06(05), 1-4.

BibTex Style Citation:
@article{Sourabh_2018,
  author = {K. Sourabh and S. M. Aaqib and V. Mansotra},
  title = {A Study of metrics for evaluation of Machine translation},
  journal = {International Journal of Computer Sciences and Engineering},
  volume = {06},
  number = {05},
  month = jun,
  year = {2018},
  issn = {2347-2693},
  pages = {1-4},
  url = {https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=411},
  publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY - JOUR
UR - https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=411
TI - A Study of metrics for evaluation of Machine translation
T2 - International Journal of Computer Sciences and Engineering
AU - K. Sourabh, S. M Aaqib, V. Mansotra
PY - 2018
DA - 2018/06/30
PB - IJCSE, Indore, INDIA
SP - 1-4
IS - 05
VL - 06
SN - 2347-2693
ER -


Abstract

Machine translation has gained popularity over the years and has become one of the promising areas of research in computer science. Owing to the consistent growth of internet users across the world, information is now more versatile and dynamic, available in almost all widely spoken languages. From an Indian perspective, the importance of machine translation is obvious, because Hindi is widely used across India and throughout the world. Many initiatives have been taken to facilitate Indian users so that information may be accessed in Hindi by translating it from other languages. In this paper we study the available automatic metrics that evaluate translation quality and examine their correlation with human judgments.

Key-Words / Index Term

Machine Translation, Corpus, BLEU, NIST, METEOR, WER, TER, GTM
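As a small illustration of how one of the listed metrics works (a sketch for this page, not code from the paper): WER (word error rate) is the word-level Levenshtein distance between a hypothesis translation and a reference, normalized by the reference length.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: minimum number of word substitutions,
    insertions, and deletions needed to turn the hypothesis into
    the reference, divided by the number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, against the reference "the cat sat on the mat", the hypothesis "the cat sat on mat" has one deletion out of six reference words, giving a WER of 1/6. TER extends this idea by also allowing shifts of word blocks as a single edit.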

References

[1] Philipp Koehn, Christof Monz, "Manual and Automatic Evaluation of Machine Translation between European Languages", School of Informatics, University of Edinburgh; Department of Computer Science, Queen Mary, University of London. Proceedings of StatMT `06: Workshop on Statistical Machine Translation, pp. 102-121.
[2] Michael Denkowski and Alon Lavie, "Choosing the Right Evaluation for Machine Translation: an Examination of Annotator and Automatic Metric Performance on Human Judgment Tasks", Proceedings of the Ninth Biennial Conference of the Association for Machine Translation in the Americas. https://www.cs.cmu.edu/~mdenkows/pdf/mteval-amta-2010.pdf
[3] Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul, "A Study of Translation Edit Rate with Targeted Human Annotation", Proceedings of the Association for Machine Translation in the Americas, pp. 223-231.
[4] Aditi Kalyani, Hemant Kumud, Shashi Pal Singh, Ajai Kumar, "Assessing the Quality of MT Systems for Hindi to English Translation", International Journal of Computer Applications (0975-8887), Volume 89, No. 15, March 2014.
[5] Klakow, Dietrich; Jochen Peters (September 2002), "Testing the correlation of word error rate and perplexity", Speech Communication, 38 (1-2): 19-28. doi:10.1016/S0167-6393(01)00041-3. ISSN 0167-6393.
[6] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu, "BLEU: a Method for Automatic Evaluation of Machine Translation", Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, July 2002, pp. 311-318.
[7] Jason Brownlee, "A Gentle Introduction to Calculating the BLEU Score for Text in Python", November 20, 2017, in Natural Language Processing. Online: https://machinelearningmastery.com/calculate-bleu-score-for-text-python/
[8] Xingyi Song, Trevor Cohn, and Lucia Specia, "BLEU deconstructed: Designing a Better MT Evaluation Metric", Department of Computer Science, University of Sheffield. Proceedings of the 14th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing).
[9] Doddington, George (2002), "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics", pp. 138-145. doi:10.3115/1289189.1289273.
[10] Satanjeev Banerjee and Alon Lavie, "METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments", Language Technologies Institute, Carnegie Mellon University. Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization at the 43rd Annual Meeting of the Association for Computational Linguistics (ACL-2005), Ann Arbor, Michigan, June 2005.
[11] Ankush Gupta, Sriram Venkatapathy, and Rajeev Sangal, "METEOR-Hindi: Automatic MT Evaluation Metric for Hindi as a Target Language", Language Technologies Research Centre, IIIT-Hyderabad, Hyderabad, India. Proceedings of ICON-2010: 8th International Conference on Natural Language Processing, Macmillan Publishers, India.
[12] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick, "Microsoft COCO Captions: Data Collection and Evaluation Server", CoRR 2015, Vol. abs/1504.00325.