Evaluation code for various unsupervised automated metrics for Natural Language Generation.
Well tested & Multi-language evaluation framework for text summarization.
A neural network that generates captions for an image using a CNN and an RNN with beam search.
Code for the paper "Learning Semantic Sentence Embeddings using Pair-wise Discriminator" (COLING 2018).
A Python 3 library for evaluating captions with BLEU, METEOR, CIDEr, SPICE, ROUGE-L, and WMD scores. Forked from https://github.com/ruotianluo/coco-caption.
Machine Translation (MT) Evaluation Scripts
MAchine Translation Evaluation Online (MATEO)
Evaluation tools for image captioning, including BLEU, ROUGE-L, CIDEr, METEOR, and SPICE scores.
A simple, effective tool to calculate SacreBLEU, Token-BLEU, and BLEU with compound splitting for fairseq.
Implementation of the paper "BLEU: a Method for Automatic Evaluation of Machine Translation".
A data-driven query expansion approach for image captioning, implemented in C++.
Automatic text metrics (BLEU, ROUGE, METEOR, and more).
Corpus level and sentence level BLEU calculation for machine translation
Image caption generation is a task that combines computer vision and natural language processing: recognizing the context of an image and describing it in a natural language such as English.
BLEU Score in Rust