Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets


With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, Web-mined text datasets covering hundreds of languages. We manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: at least 15 corpora contain no usable text, and in a significant fraction fewer than 50% of sentences are of acceptable quality. In addition, many corpora are mislabeled or use nonstandard or ambiguous language codes. We demonstrate that these issues are easy to detect even for non-proficient speakers, and supplement the human audit with automatic analyses. Finally, we recommend techniques to evaluate and improve multilingual corpora and discuss the potential risks that come with low-quality data releases.
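The kind of per-corpus bookkeeping an audit like this implies can be sketched in a few lines: draw a reproducible random sample of sentences for manual review, then compute the fraction judged acceptable. This is a minimal, hypothetical illustration, not the paper's actual annotation scheme; the function names and the `"C"`/`"X"` labels are assumptions made for the example.

```python
import random


def sample_for_audit(sentences, n=100, seed=0):
    """Draw a reproducible random sample of sentences for manual review.

    A fixed seed keeps the audit sample stable across runs, so different
    annotators can review the same sentences.
    """
    rng = random.Random(seed)
    return rng.sample(sentences, min(n, len(sentences)))


def acceptable_fraction(labels):
    """Fraction of audited sentences labeled acceptable.

    Here "C" marks an acceptable (correct-language) sentence and any
    other label marks an unacceptable one -- a hypothetical scheme.
    """
    return sum(1 for label in labels if label == "C") / len(labels)


# Toy corpus standing in for one language-specific Web-crawled corpus.
corpus = [f"sentence {i}" for i in range(1000)]
sample = sample_for_audit(corpus, n=10)

# Labels an annotator might assign to the 10 sampled sentences.
labels = ["C", "C", "X", "C", "X", "C", "C", "C", "X", "C"]
print(acceptable_fraction(labels))  # 0.7
```

A corpus where this fraction falls below 0.5 would land in the "less than 50% acceptable quality" bucket the abstract describes.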

In Transactions of the Association for Computational Linguistics
Pedro Ortiz Suarez
Research Associate

I am a research associate in the Speech and Language Technology team at DFKI GmbH Berlin.