🚀 imvladikon/sentence-transformers-alephbert [WIP]
This is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks such as clustering or semantic search. The current version was obtained by distilling the LaBSE model on a private corpus.
🚀 Quick Start
Installation
Using this model is straightforward once you have sentence-transformers installed:
pip install -U sentence-transformers
Usage Examples
Basic Usage
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

sentences = [
    "הם היו שמחים לראות את האירוע שהתקיים.",
    "לראות את האירוע שהתקיים היה מאוד משמח להם."
]

# Load the model and encode both sentences into 768-dimensional embeddings
model = SentenceTransformer('imvladikon/sentence-transformers-alephbert')
embeddings = model.encode(sentences)

# Cosine similarity between the two sentence embeddings
print(cos_sim(*tuple(embeddings)).item())
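Beyond pairwise similarity, the same embeddings can be used for semantic search over a corpus, as the introduction mentions. A minimal sketch using sentence_transformers.util.semantic_search; the corpus and query sentences below are illustrative placeholders, not part of the original card:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('imvladikon/sentence-transformers-alephbert')

# Hypothetical corpus: any collection of Hebrew sentences
corpus = [
    "הם היו שמחים לראות את האירוע שהתקיים.",
    "לראות את האירוע שהתקיים היה מאוד משמח להם."
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Hypothetical query sentence
query_embedding = model.encode("האירוע היה משמח.", convert_to_tensor=True)

# Retrieve the most similar corpus sentence by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(corpus[hits[0][0]['corpus_id']], hits[0][0]['score'])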
Advanced Usage
Without sentence-transformers, you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Mean pooling over token embeddings, weighted by the attention mask
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = [
    "הם היו שמחים לראות את האירוע שהתקיים.",
    "לראות את האירוע שהתקיים היה מאוד משמח להם."
]

tokenizer = AutoTokenizer.from_pretrained('imvladikon/sentence-transformers-alephbert')
model = AutoModel.from_pretrained('imvladikon/sentence-transformers-alephbert')

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)

# Pool the contextualized token embeddings into one vector per sentence
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

cos_sim = nn.CosineSimilarity(dim=0, eps=1e-6)
print(cos_sim(sentence_embeddings[0], sentence_embeddings[1]).item())
📚 Detailed Documentation
Evaluation Results
For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net
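For a quick local check, sentence-transformers also ships an EmbeddingSimilarityEvaluator. A minimal sketch, assuming a small hand-labeled set of Hebrew sentence pairs with gold similarity scores in [0, 1]; the pairs and scores below are illustrative placeholders:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer('imvladikon/sentence-transformers-alephbert')

# Hypothetical labeled pairs: (sentence1, sentence2) with a gold similarity score each
sentences1 = [
    "הם היו שמחים לראות את האירוע שהתקיים.",
    "הם היו שמחים לראות את האירוע שהתקיים.",
]
sentences2 = [
    "לראות את האירוע שהתקיים היה מאוד משמח להם.",
    "האירוע בוטל.",
]
scores = [0.9, 0.1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, scores)
# Reports how well the model's cosine similarities correlate with the gold scores
print(evaluator(model))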
Training Parameters
The model was trained with the following parameters:
DataLoader:
torch.utils.data.dataloader.DataLoader of length 44999 with the following parameters:
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
Loss Function:
sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss with the following parameters:
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
Parameters of the fit() Method:
{
    "epochs": 10,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 44999,
    "weight_decay": 0.01
}
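These settings can be wired together with the sentence-transformers training API. A minimal sketch of how the parameters above fit together; the training pairs are illustrative placeholders (the private distillation corpus is not published) and the starting checkpoint is assumed for illustration only:

import torch
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer('imvladikon/sentence-transformers-alephbert')

# Hypothetical positive pairs; the actual private corpus is not available
train_examples = [
    InputExample(texts=["הם היו שמחים לראות את האירוע שהתקיים.",
                        "לראות את האירוע שהתקיים היה מאוד משמח להם."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)

# In-batch negatives ranking loss; cos_sim is the default similarity_fct
train_loss = MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    scheduler='WarmupLinear',
    warmup_steps=44999,
    optimizer_class=torch.optim.AdamW,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)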
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
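The same architecture can be assembled explicitly from sentence-transformers modules. A minimal sketch mirroring the configuration above; loading the published model id into the Transformer module here is an assumption for illustration:

from sentence_transformers import SentenceTransformer, models

# Transformer module: BertModel backbone, max sequence length 512, no lowercasing
word_embedding_model = models.Transformer(
    'imvladikon/sentence-transformers-alephbert',
    max_seq_length=512,
    do_lower_case=False,
)

# Pooling module: mean over the 768-dimensional token embeddings
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
    pooling_mode_cls_token=False,
    pooling_mode_max_tokens=False,
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
print(model)  # should mirror the architecture listed above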
Citing & Authors
@misc{seker2021alephberta,
    title={AlephBERT: A Hebrew Large Pre-Trained Language Model to Start-off your Hebrew NLP Application With},
    author={Amit Seker and Elron Bandel and Dan Bareket and Idan Brusilovsky and Refael Shaked Greenfeld and Reut Tsarfaty},
    year={2021},
    eprint={2104.04052},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
@misc{reimers2019sentencebert,
    title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
    author={Nils Reimers and Iryna Gurevych},
    year={2019},
    eprint={1908.10084},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}