Tags:
Language: en
License: apache-2.0
Datasets:
ALBERT Base v1
This model was pretrained on English-language text with a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. Like all ALBERT models, this model is uncased: it makes no distinction between "english" and "English", for example.
Disclaimer: the team releasing ALBERT did not write a model card for this model, so this model card has been written by the Hugging Face team.
Model description
ALBERT is a Transformer-based model pretrained on a large corpus of English text in a self-supervised fashion. This means it was pretrained on raw text only, with no human labeling of any kind (which is why it can make use of large amounts of publicly available data), using an automatic process to generate inputs and labels from the text. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): 15% of the words in the input sentence are randomly masked, and the model has to predict the masked words. Unlike traditional recurrent neural networks (RNNs), which usually see words one after the other, or autoregressive models such as GPT, which internally mask future tokens, this lets the model learn a bidirectional representation of the sentence.
- Sentence order prediction (SOP): ALBERT is additionally pretrained to predict the ordering of two consecutive segments of text.
In this way, the model learns an internal representation of the English language that can be used to extract features useful for downstream tasks. For example, if you have a dataset of labeled sentences, you can train a standard classifier on the features produced by the ALBERT model.
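As a concrete illustration of this feature-based usage, here is a minimal sketch that feeds ALBERT's pooled sentence embeddings into a scikit-learn classifier. It is not part of the original card; the scikit-learn dependency and the tiny texts/labels placeholder data are assumptions for illustration only.

import torch
from sklearn.linear_model import LogisticRegression
from transformers import AlbertModel, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
model = AlbertModel.from_pretrained('albert-base-v1')
model.eval()

# Placeholder labeled data (replace with your own dataset)
texts = ["I loved this movie.", "This was a waste of time."]
labels = [1, 0]

# Extract one fixed-size feature vector per sentence (the pooled output)
with torch.no_grad():
    encoded = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    features = model(**encoded).pooler_output

# Train any standard classifier on the extracted features
classifier = LogisticRegression().fit(features.numpy(), labels)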
What makes ALBERT distinctive is that its Transformer layers share their weights, so all layers have identical parameters. This layer repetition keeps the memory footprint small, but the computational cost remains comparable to a BERT-style architecture with the same number of hidden layers, since the same number of (repeated) layers still has to be iterated through.
This is version 1 of the base model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and performs better on nearly all downstream tasks.
This model has the following configuration (see the sketch after the list for how to check these values):
- 12 repeating layers
- 128-dimensional word embeddings
- 768-dimensional hidden layers
- 12 attention heads
- 11M parameters
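These values can be read off the model's configuration object in transformers; the following is a minimal sketch (the attribute names are those of AlbertConfig, and the printed parameter count should land in the ballpark of the 11M figure above):

from transformers import AlbertModel

model = AlbertModel.from_pretrained('albert-base-v1')
config = model.config

print(config.num_hidden_layers)    # 12 repeating layers
print(config.embedding_size)       # 128-dimensional word embeddings
print(config.hidden_size)          # 768-dimensional hidden layers
print(config.num_attention_heads)  # 12 attention heads
print(config.num_hidden_groups)    # 1 -> the layer weights are shared

# Total number of parameters (roughly 11M thanks to weight sharing)
print(sum(p.numel() for p in model.parameters()))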
Intended uses & limitations
The raw model can be used for masked language modeling or sentence order prediction, but it is mostly intended to be fine-tuned on a downstream task. See the model hub for versions fine-tuned on a task that interests you.
Note: this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For text generation tasks, a model such as GPT2 is a better fit.
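For instance, a sequence classification fine-tune starts from AlbertForSequenceClassification. The sketch below only shows loading the model with a fresh classification head and running a forward pass, leaving out the dataset and training loop (e.g. the Trainer API); the num_labels=2 choice is an assumption for illustration.

from transformers import AlbertForSequenceClassification, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
# The classification head is randomly initialized and has to be fine-tuned
model = AlbertForSequenceClassification.from_pretrained('albert-base-v1', num_labels=2)

inputs = tokenizer("A sentence to classify.", return_tensors='pt')
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])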
How to use
You can use this model directly with a pipeline for masked language modeling:
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-base-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"▁modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"▁modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"▁model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"▁runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"▁lingerie"
}
]
Here is how to get the features of a given text in PyTorch:
from transformers import AlbertTokenizer, AlbertModel

# Load the pretrained tokenizer and model
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
model = AlbertModel.from_pretrained('albert-base-v1')

# Tokenize the text and run it through the model
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
and in TensorFlow:
from transformers import AlbertTokenizer, TFAlbertModel

# Load the pretrained tokenizer and TensorFlow model
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
model = TFAlbertModel.from_pretrained('albert-base-v1')

# Tokenize the text and run it through the model
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
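In both frameworks, output is a model-output object; assuming current transformers behavior, output.last_hidden_state holds one 768-dimensional vector per input token and output.pooler_output a single vector for the whole sequence. Continuing the PyTorch example:

print(output.last_hidden_state.shape)  # (batch_size, sequence_length, 768)
print(output.pooler_output.shape)      # (batch_size, 768)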
Limitations and bias
Even though the training data used for this model could be characterized as fairly neutral, the model can still produce biased predictions:
>>> unmasker("The man worked as a [MASK].")
[
{"sequence":"[CLS] the man worked as a chauffeur.[SEP]", ...},
{"sequence":"[CLS] the man worked as a janitor.[SEP]", ...},
{"sequence":"[CLS] the man worked as a shoemaker.[SEP]", ...},
{"sequence":"[CLS] the man worked as a blacksmith.[SEP]", ...},
{"sequence":"[CLS] the man worked as a lawyer.[SEP]", ...}
]
>>> unmasker("The woman worked as a [MASK].")
[
{"sequence":"[CLS] the woman worked as a receptionist.[SEP]", ...},
{"sequence":"[CLS] the woman worked as a janitor.[SEP]", ...},
{"sequence":"[CLS] the woman worked as a paramedic.[SEP]", ...},
{"sequence":"[CLS] the woman worked as a chauffeur.[SEP]", ...},
{"sequence":"[CLS] the woman worked as a waitress.[SEP]", ...}
]
This bias will also affect all fine-tuned versions of this model.
Training data
The ALBERT model was pretrained on BookCorpus, a corpus of unpublished books, and English Wikipedia (excluding lists, tables, and headers).
Training procedure
Preprocessing
The texts are lowercased and tokenized with SentencePiece, using a vocabulary size of 30,000. The inputs of the model then take the form:
[CLS] Sentence A [SEP] Sentence B [SEP]
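A quick way to see this format is to encode a sentence pair and decode it back (a minimal sketch; the exact decoded string may vary slightly with the tokenizer version):

from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.decode(encoded["input_ids"]))
# e.g. [CLS] sentence a[SEP] sentence b[SEP]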
Training
ALBERT follows the BERT training setup. The details of the masking procedure are as follows (a toy sketch follows this list):
- 15% of the tokens are masked
- in 80% of those cases, the masked token is replaced by [MASK]
- in 10% of the cases, it is replaced by a random token
- in the remaining 10% of the cases, the original token is kept
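The following toy sketch (not the original training code) applies this 80/10/10 rule to a list of token ids; mask_id and vocab_size would come from the tokenizer.

import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    masked = list(token_ids)
    labels = [-100] * len(token_ids)      # -100 marks positions that are not predicted
    for i, token in enumerate(token_ids):
        if random.random() < mlm_prob:    # select 15% of the tokens
            labels[i] = token             # the model must recover the original token
            roll = random.random()
            if roll < 0.8:                # 80%: replace with [MASK]
                masked[i] = mask_id
            elif roll < 0.9:              # 10%: replace with a random token
                masked[i] = random.randrange(vocab_size)
            # remaining 10%: keep the original token unchanged
    return masked, labels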
Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
|                | Average | SQuAD1.1  | SQuAD2.0  | MNLI | SST-2 | RACE |
|----------------|---------|-----------|-----------|------|-------|------|
| V2             |         |           |           |      |       |      |
| ALBERT-base    | 82.3    | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9  | 66.8 |
| ALBERT-large   | 85.7    | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9  | 75.2 |
| ALBERT-xlarge  | 87.9    | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4  | 80.7 |
| ALBERT-xxlarge | 90.9    | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8  | 86.8 |
| V1             |         |           |           |      |       |      |
| ALBERT-base    | 80.1    | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3  | 64.0 |
| ALBERT-large   | 82.4    | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7  | 68.5 |
| ALBERT-xlarge  | 85.5    | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4  | 74.8 |
| ALBERT-xxlarge | 91.0    | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9  | 86.5 |
BibTeX entry and citation info
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}