🚀 E5-base-unsupervised
This model is similar to e5-base but without supervised fine-tuning.
Text Embeddings by Weakly-Supervised Contrastive Pre-training.
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
The model has 12 layers and an embedding size of 768.
🚀 Quick Start
This model can be used to encode text; its usage is detailed below.
✨ Key Features
- Similar to e5-base, but without supervised fine-tuning.
- The model has 12 layers and an embedding size of 768.
💻 Usage Examples
Basic Usage
Below is an example that encodes queries and passages from the MS-MARCO passage ranking dataset:
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # Zero out padding positions, then average over the sequence dimension.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text should start with "query: " or "passage: ".
input_texts = ['query: how much protein should a female eat',
               'query: summit define',
               "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
               "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]

tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-base-unsupervised')
model = AutoModel.from_pretrained('intfloat/e5-base-unsupervised')

# Tokenize the input texts (inputs longer than 512 tokens are truncated).
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# Normalize embeddings so that dot products are cosine similarities.
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
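The `average_pool` step above is a mask-weighted mean: padding positions are zeroed out and the remaining vectors are averaged. The same arithmetic can be illustrated without torch, using plain Python lists (a simplification for readability; the real code operates on batched tensors):

```python
# Illustration of the masked mean pooling performed by average_pool,
# written with plain Python lists instead of torch tensors.

def masked_mean(hidden_states, attention_mask):
    """Average the hidden-state vectors of non-padding positions only."""
    dim = len(hidden_states[0])
    total = [0.0] * dim
    count = 0
    for vec, keep in zip(hidden_states, attention_mask):
        if keep:  # positions with mask 0 (padding) are ignored
            total = [t + v for t, v in zip(total, vec)]
            count += 1
    return [t / count for t in total]

# Two real tokens and one padding position:
hidden = [[1.0, 3.0], [3.0, 5.0], [100.0, 100.0]]
mask = [1, 1, 0]
print(masked_mean(hidden, mask))  # [2.0, 4.0] -- the padded vector is ignored
```

The padding vector, however large, contributes nothing to the result, which is why the tensor version masks before summing.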
Advanced Usage
Below is an example using the sentence_transformers library:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/e5-base-unsupervised')
input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
Package dependency:
```bash
pip install sentence_transformers~=2.2.2
```
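Because the embeddings are L2-normalized (`normalize_embeddings=True`), the cosine similarity between two embeddings reduces to a plain dot product. A minimal pure-Python check of this identity:

```python
# After L2 normalization, the dot product of two vectors equals their
# cosine similarity. Verified here on small hand-picked vectors.
import math

def l2_normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a = l2_normalize([3.0, 4.0])
b = l2_normalize([4.0, 3.0])

# Classic cosine formula: dot(u, v) / (|u| * |v|); both norms are 5 here.
cosine = dot([3.0, 4.0], [4.0, 3.0]) / (5.0 * 5.0)
print(abs(dot(a, b) - cosine) < 1e-9)  # True
```

This is why the basic-usage snippet can score query-passage pairs with a matrix product alone.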
📚 Documentation
Training Details
Please refer to our paper at https://arxiv.org/pdf/2212.03533.pdf.
Benchmark Evaluation
See unilm/e5 to reproduce this model's evaluation results on the BEIR and MTEB benchmarks.
FAQ
⚠️ Important Note
Below are answers to frequently asked questions about this model.
💡 Usage Tip
Before using the model, read the FAQ section carefully to avoid common mistakes.
1. Do I need to add the prefix "query: " or "passage: " to input texts?
Yes, this is how the model was trained; otherwise performance will degrade.
Here are some rules of thumb:
- For asymmetric tasks such as passage retrieval in open QA or ad-hoc information retrieval, use the "query: " and "passage: " prefixes accordingly.
- For symmetric tasks such as semantic similarity or paraphrase retrieval, use the "query: " prefix.
- If you want to use embeddings as features, e.g. for linear-probing classification or clustering, use the "query: " prefix.
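The rules above can be captured in a small helper. Note that `with_prefix` is a hypothetical convenience function sketched here for illustration, not part of the model's or library's API:

```python
# Hypothetical helper applying the E5 prefix conventions described above.

def with_prefix(texts, task="retrieval", role="query"):
    """Prefix texts per the E5 conventions.

    For asymmetric retrieval, documents get "passage: "; queries in any
    task, symmetric tasks, and feature extraction all use "query: ".
    """
    if task == "retrieval" and role == "passage":
        prefix = "passage: "
    else:
        prefix = "query: "
    return [prefix + t for t in texts]

print(with_prefix(["summit define"]))                 # ['query: summit define']
print(with_prefix(["Definition of summit for English Language Learners."],
                  role="passage"))                    # prefixed with 'passage: '
```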
2. Why do my reproduced results differ slightly from those reported in the model card?
Different versions of transformers and pytorch can cause small but non-zero performance differences.
Citation
If you find our paper or model helpful, please cite it as follows:
```bibtex
@article{wang2022text,
  title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2212.03533},
  year={2022}
}
```
Limitations
This model only works for English texts; long texts will be truncated to at most 512 tokens.
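The 512-token limit corresponds to the `max_length=512, truncation=True` tokenizer call shown earlier: everything past position 511 is silently dropped. A schematic sketch with made-up integer token ids (the real tokenizer produces subword ids):

```python
# Schematic illustration of length truncation; token ids are made up.
MAX_LENGTH = 512

def truncate(token_ids, max_length=MAX_LENGTH):
    """Keep only the first max_length tokens; the rest are discarded."""
    return token_ids[:max_length]

ids = list(range(600))     # pretend a long passage tokenized to 600 ids
print(len(truncate(ids)))  # 512
```

For documents longer than 512 tokens, content beyond that point contributes nothing to the embedding, so splitting long texts into chunks before encoding may be preferable.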
📄 License
This model is released under the MIT License.
Contributor: michaelfeil