🚀 SMaLL-100 Model
SMaLL-100 is a compact and fast massively multilingual machine translation model covering more than 10,000 language pairs. It achieves results competitive with M2M-100 while being much smaller and faster. The model was introduced in this paper (accepted at EMNLP 2022) and originally released in this repository.
🚀 Quick Start
The model architecture and configuration of SMaLL-100 are the same as the M2M-100 implementation, but its tokenizer has been modified to adjust the language codes. For now, you should therefore load the tokenizer locally from the tokenization_small100.py file.
Demo: https://huggingface.co/spaces/alirezamsh/small100
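If tokenization_small100.py is not yet available next to your script, one way to obtain it is to download it from the model repository with huggingface_hub. This is only a sketch, assuming the file is hosted in the alirezamsh/small100 repository and that huggingface_hub is installed:
# Download the custom tokenizer file into the current working directory
# (assumes tokenization_small100.py is present in the alirezamsh/small100 repo).
from huggingface_hub import hf_hub_download
hf_hub_download(
    repo_id="alirezamsh/small100",
    filename="tokenization_small100.py",
    local_dir=".",
)
# Afterwards, `from tokenization_small100 import SMALL100Tokenizer` works as shown below.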
⚠️ Important Note
SMALL100Tokenizer requires sentencepiece; make sure to install it with:
pip install sentencepiece
✨ Key Features
- Compact and fast: small model size with fast inference.
- Multilingual: covers more than 10,000 language pairs.
- Competitive quality: performance is competitive with M2M-100.
📦 Installation
Install sentencepiece:
pip install sentencepiece
💻 Usage Examples
Basic Usage
Supervised Training Example
from transformers import M2M100ForConditionalGeneration
from tokenization_small100 import SMALL100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100")
tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100", tgt_lang="fr")
src_text = "Life is like a box of chocolates."
tgt_text = "La vie est comme une boîte de chocolat."
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
loss = model(**model_inputs).loss # forward pass
Training data can be provided upon request.
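Building on the forward pass above, a single optimization step could look like the minimal sketch below; the optimizer choice (AdamW) and learning rate are illustrative assumptions, not settings reported for SMaLL-100:
import torch
# Minimal sketch of one training step; optimizer and learning rate are illustrative only.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
loss = model(**model_inputs).loss  # same forward pass as above
loss.backward()                    # compute gradients
optimizer.step()                   # update parameters
optimizer.zero_grad()              # reset gradients for the next step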
Generation Example
A beam size of 5 and a maximum target length of 256 are used for generation.
from transformers import M2M100ForConditionalGeneration
from tokenization_small100 import SMALL100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100")
tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100")
# translate Hindi to French
tokenizer.tgt_lang = "fr"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."
# translate Chinese to English
tokenizer.tgt_lang = "en"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
📚 Documentation
Evaluation
Please refer to the original repository for spBLEU computation.
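The original repository remains the reference for the exact spBLEU setup. As an informal alternative, sacrebleu's SentencePiece ("spm") tokenizer is often used to compute spBLEU-style scores; the snippet below is a sketch under that assumption and may differ from the official evaluation:
import sacrebleu
# Hypothetical hypothesis/reference pair, for illustration only.
hypotheses = ["La vie est comme une boîte de chocolat."]
references = [["La vie est comme une boîte de chocolats."]]
# spBLEU-style score using sacrebleu's SentencePiece tokenizer (requires sacrebleu >= 2.0).
score = sacrebleu.corpus_bleu(hypotheses, references, tokenize="spm")
print(score.score)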
Supported Languages
The model supports the following languages: Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Khmer (km), Kannada (kn), Korean (ko), Luxembourgish (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (oc), Oriya (or), Punjabi (pa), Polish (pl), Pashto (ps), Portuguese (pt), Romanian; Moldavian (ro), Russian (ru), Sindhi (sd), Sinhala (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)
📄 License
This project is licensed under the MIT License.
📖 Citation
If you use this model in your research, please cite the following works:
@inproceedings{mohammadshahi-etal-2022-small,
title = "{SM}a{LL}-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages",
author = "Mohammadshahi, Alireza and
Nikoulina, Vassilina and
Berard, Alexandre and
Brun, Caroline and
Henderson, James and
Besacier, Laurent",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.571",
pages = "8348--8359",
abstract = "In recent years, multilingual machine translation models have achieved promising performance on low-resource language pairs by sharing information between similar languages, thus enabling zero-shot translation. To overcome the {``}curse of multilinguality{''}, these models often opt for scaling up the number of parameters, which makes their use in resource-constrained environments challenging. We introduce SMaLL-100, a distilled version of the M2M-100(12B) model, a massively multilingual machine translation model covering 100 languages. We train SMaLL-100 with uniform sampling across all language pairs and therefore focus on preserving the performance of low-resource languages. We evaluate SMaLL-100 on different low-resource benchmarks: FLORES-101, Tatoeba, and TICO-19 and demonstrate that it outperforms previous massively multilingual models of comparable sizes (200-600M) while improving inference latency and memory usage. Additionally, our model achieves comparable results to M2M-100 (1.2B), while being 3.6x smaller and 4.3x faster at inference.",
}
@inproceedings{mohammadshahi-etal-2022-compressed,
title = "What Do Compressed Multilingual Machine Translation Models Forget?",
author = "Mohammadshahi, Alireza and
Nikoulina, Vassilina and
Berard, Alexandre and
Brun, Caroline and
Henderson, James and
Besacier, Laurent",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.317",
pages = "4308--4329",
abstract = "Recently, very large pre-trained models achieve state-of-the-art results in various natural language processing (NLP) tasks, but their size makes it more challenging to apply them in resource-constrained environments. Compression techniques allow to drastically reduce the size of the models and therefore their inference time with negligible impact on top-tier metrics. However, the general performance averaged across multiple tasks and/or languages may hide a drastic performance drop on under-represented features, which could result in the amplification of biases encoded by the models. In this work, we assess the impact of compression methods on Multilingual Neural Machine Translation models (MNMT) for various language groups, gender, and semantic biases by extensive analysis of compressed models on different machine translation benchmarks, i.e. FLORES-101, MT-Gender, and DiBiMT. We show that the performance of under-represented languages drops significantly, while the average BLEU metric only slightly decreases. Interestingly, the removal of noisy memorization with compression leads to a significant improvement for some medium-resource languages. Finally, we demonstrate that compression amplifies intrinsic gender and semantic biases, even in high-resource languages.",
}



