Evaluating Language Models

by Morgan Stevens

Meta and U.S.-based AI companies Abridge AI and Reka AI have created a dataset to improve multilingual language models. The dataset contains 900 multiple-choice questions that test reading comprehension on 488 passages spanning 122 languages. Researchers can use the dataset to evaluate language models' understanding of text in high-, medium-, and low-resource languages.
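As a rough illustration of how such an evaluation works, the sketch below scores a model's multiple-choice accuracy per language, which is how one would compare performance across high- and low-resource languages. This is a minimal sketch, not the authors' code: the field names (language, passage, question, options, answer) and the pick_answer callback are hypothetical placeholders for whatever schema and model interface a researcher actually uses.

```python
from collections import defaultdict
from typing import Callable

def accuracy_by_language(
    examples: list[dict],
    pick_answer: Callable[[str, str, list[str]], int],
) -> dict[str, float]:
    """Fraction of questions answered correctly, grouped by language.

    Field names below are illustrative assumptions, not the dataset's
    actual schema. pick_answer(passage, question, options) stands in for
    the model under test and returns the index of its chosen option.
    """
    totals: dict[str, int] = defaultdict(int)
    correct: dict[str, int] = defaultdict(int)
    for ex in examples:
        # Each example pairs a passage with one question and its options.
        lang = ex["language"]
        totals[lang] += 1
        if pick_answer(ex["passage"], ex["question"], ex["options"]) == ex["answer"]:
            correct[lang] += 1
    return {lang: correct[lang] / totals[lang] for lang in totals}
```

Sorting the resulting per-language accuracies makes gaps between high- and low-resource languages easy to spot.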

Get the data.

Image credit: Flickr user Abhi Sharma
