🚀 MARBERT Model
MARBERT is a large-scale pre-trained masked language model focused on both Dialectal Arabic (DA) and Modern Standard Arabic (MSA). It handles the many varieties of Arabic effectively and provides strong support for Arabic natural language processing tasks.
✨ Key Features
- Arabic variety coverage: supports both Dialectal Arabic (DA) and Modern Standard Arabic (MSA).
- Large-scale pre-training: pre-trained on roughly 128GB of text (15.6B tokens).
- Architecture adaptation: uses the same network architecture as ARBERT (BERT-base), but drops the next sentence prediction (NSP) objective to suit the short-text nature of tweets.
📚 Documentation
Model Overview
MARBERT is one of three models described in our ACL 2021 paper "ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic". Arabic has multiple varieties. To train MARBERT, we randomly sampled 1 billion Arabic tweets from a large in-house dataset of about 6 billion tweets. We only included tweets with at least 3 Arabic words (based on character string matching), regardless of whether the tweet also contained non-Arabic strings; that is, we did not remove non-Arabic content as long as the tweet met the 3-Arabic-word criterion. The resulting dataset makes up 128GB of text (15.6B tokens). We use the same network architecture as ARBERT (BERT-base), but without the next sentence prediction (NSP) objective, since tweets are short. See our repository for details on modifying the BERT code to remove NSP. For more information about MARBERT, please visit our GitHub repository.
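Since MARBERT is a standard BERT-style masked language model, it can be queried directly for masked-token prediction. The snippet below is a minimal sketch using the Hugging Face `transformers` fill-mask pipeline; the model identifier `UBC-NLP/MARBERT` and the example sentence are illustrative assumptions rather than part of this card.

```python
from transformers import pipeline

# Minimal sketch: load MARBERT for masked-token prediction.
# "UBC-NLP/MARBERT" is assumed to be the Hugging Face Hub model ID.
fill_mask = pipeline("fill-mask", model="UBC-NLP/MARBERT")

# Mask one token in an Arabic sentence and print the top candidate
# fills together with their scores.
for prediction in fill_mask("اللغة العربية [MASK]"):
    print(prediction["token_str"], round(prediction["score"], 4))
```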
Model Information

| Property | Details |
| --- | --- |
| Model type | Large-scale pre-trained masked language model |
| Training data | 1 billion Arabic tweets randomly sampled from a large in-house dataset of ~6 billion tweets, making up 128GB of text (15.6B tokens) |
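Because MARBERT shares the BERT-base architecture, it can also be fine-tuned as an encoder for downstream Arabic tasks such as sentiment analysis or dialect identification. The sketch below only shows how a classification head could be attached; the model ID, label count, and example tweets are assumptions for illustration, and the newly added head is randomly initialized until fine-tuned.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch: attach a (randomly initialized) classification head to MARBERT.
# num_labels=2 is an illustrative choice, e.g. positive/negative sentiment.
tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/MARBERT")
model = AutoModelForSequenceClassification.from_pretrained(
    "UBC-NLP/MARBERT", num_labels=2
)

# Tokenize a small batch of hypothetical Arabic tweets and run a forward pass.
batch = tokenizer(
    ["تجربة رائعة جدا", "خدمة سيئة للأسف"],
    padding=True,
    truncation=True,
    max_length=128,
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)
print(probs)  # Meaningful only after the head is fine-tuned on labeled data.
```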
BibTeX Citation
If you use our models (ARBERT, MARBERT, or MARBERTv2) in a scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
@inproceedings{abdul-mageed-etal-2021-arbert,
title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic",
author = "Abdul-Mageed, Muhammad and
Elmadany, AbdelRahim and
Nagoudi, El Moatez Billah",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.551",
doi = "10.18653/v1/2021.acl-long.551",
pages = "7088--7105",
abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large (~3.4x larger size). Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.",
}
🔗 Acknowledgements
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, the Canada Foundation for Innovation, ComputeCanada, and UBC ARC-Sockeye. We also thank the Google TensorFlow Research Cloud (TFRC) program for providing us with free TPU access.