# bert-base-multilingual-uncased for multilingual QA
## Overview
**Language model:** bert-base-multilingual-uncased
**Downstream task:** Extractive QA
**Training data:** XQuAD
**Testing data:** XQuAD
## Hyperparameters
```python
batch_size = 48
n_epochs = 6
max_seq_len = 384
doc_stride = 128
learning_rate = 3e-5
```
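With `max_seq_len = 384` and `doc_stride = 128`, contexts longer than the sequence budget are split into overlapping windows whose starts are 128 tokens apart, so every answer span falls entirely inside at least one window. A minimal stdlib sketch of that windowing (the function name is illustrative; the real tokenizer also reserves part of the 384-token budget for the question and special tokens):

```python
def sliding_windows(tokens, max_len=384, stride=128):
    """Split a token sequence into overlapping windows of at most
    max_len tokens, each starting `stride` tokens after the previous one.
    Simplified sketch of the doc_stride mechanism used for long contexts."""
    windows = []
    start = 0
    while True:
        windows.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # this window already reaches the end of the context
        start += stride
    return windows

# e.g. a 500-token context yields two windows, [0:384] and [128:500],
# which overlap on tokens 128..383
```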
## Performance
Evaluated on the XQuAD held-out test set:
```
"exact_match": 64.6067415730337,
"f1": 79.52043478874286,
"test_samples": 2384
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "alon-albalak/bert-base-multilingual-xquad"

# a) Get predictions with a pipeline
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people switch between frameworks easily.'
}
res = nlp(QA_input)

# b) Load the model & tokenizer directly
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import QAInferencer

model_name = "alon-albalak/bert-base-multilingual-xquad"

# a) Get predictions
nlp = QAInferencer.load(model_name)
QA_input = [{"questions": ["Why is model conversion important?"],
             "text": "The option to convert models between FARM and transformers gives freedom to the user and lets people switch between frameworks easily."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)

# b) Load the model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In Haystack
```python
# import paths vary by Haystack version; adjust to your installation
reader = FARMReader(model_name_or_path="alon-albalak/bert-base-multilingual-xquad")
# or
reader = TransformersReader(model="alon-albalak/bert-base-multilingual-xquad", tokenizer="alon-albalak/bert-base-multilingual-xquad")
```
The FARM and Haystack usage instructions are adapted from https://huggingface.co/deepset/xlm-roberta-large-squad2