---
license: bsd-3-clause
language:
- en
metrics:
- f1
- precision
- recall
library_name: transformers
pipeline_tag: text-classification
tags:
- science
- scholarly
datasets:
- TimSchopf/nlp_taxonomy_data
---
NLP Taxonomy Classifier
This is a fine-tuned BERT-based language model that classifies NLP-related research papers according to concepts from the NLP taxonomy.
It is a multi-label classifier that can predict concepts from all levels of the NLP taxonomy.
If the model identifies a lower-level concept, it also predicts the hypernyms of that concept in the NLP taxonomy.
The model was fine-tuned on a weakly labeled dataset of 178,521 scientific papers from the ACL Anthology, the arXiv cs.CL category, and Scopus.
Before fine-tuning, the model was initialized with the weights of allenai/specter2_base.
📄 Paper: Exploring the Landscape of Natural Language Processing Research (RANLP 2023)
💻 GitHub: https://github.com/sebischair/Exploring-NLP-Research
💾 Data: https://huggingface.co/datasets/TimSchopf/nlp_taxonomy_data
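As a quick orientation before the usage examples below, the full label space of the classifier can be inspected from the model config once the checkpoint is loaded. This is only a minimal sketch; it relies on nothing beyond the standard transformers config and the id2label mapping that the prediction helper further down also uses:

from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('TimSchopf/nlp_taxonomy_classifier')
# id2label maps every index of the classification head to a concept name from the NLP taxonomy
print(len(model.config.id2label), 'taxonomy concepts')
print(list(model.config.id2label.values())[:5])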
NLP Taxonomy
A machine-readable version of the NLP taxonomy is available as an OWL file in our code repository: https://github.com/sebischair/Exploring-NLP-Research/blob/main/NLP-Taxonomy.owl
In our NLP-KG work, we extended this taxonomy to a large-scale hierarchy of NLP fields of study and provide a machine-readable version as an OWL file: https://github.com/NLP-Knowledge-Graph/NLP-KG-WebApp
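The OWL file can be explored with standard RDF tooling. The snippet below is only an illustrative sketch using rdflib, not code from the repository; it assumes the file has been downloaded locally and is serialized as RDF/XML:

from rdflib import Graph

g = Graph()
# assumes NLP-Taxonomy.owl from the repository above is in the working directory
g.parse('NLP-Taxonomy.owl', format='xml')  # assuming an RDF/XML serialization
print(len(g), 'triples in the taxonomy graph')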
How to use the fine-tuned model
Get predictions by loading the model directly
from typing import List
import torch
from torch.utils.data import DataLoader
from transformers import BertForSequenceClassification, AutoTokenizer
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('TimSchopf/nlp_taxonomy_classifier')
model = BertForSequenceClassification.from_pretrained('TimSchopf/nlp_taxonomy_classifier')
# prepare data
papers = [{'title': 'Attention Is All You Need', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.'},
{'title': 'SimCSE: Simple Contrastive Learning of Sentence Embeddings', 'abstract': 'This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearmans correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available.'}]
# concatenate title and abstract with [SEP] token
title_abs = [d['title'] + tokenizer.sep_token + (d.get('abstract') or '') for d in papers]
def predict_nlp_concepts(model, tokenizer, texts: List[str], batch_size=8, device=None, shuffle_data=False):
    """
    Helper function for predicting NLP concepts of scientific papers
    """
    # tokenize the data
    def tokenize_dataset(sentences, tokenizer):
        sentences_num = len(sentences)
        dataset = []
        for i in range(sentences_num):
            sentence = tokenizer(sentences[i], padding="max_length", truncation=True, return_tensors='pt', max_length=model.config.max_position_embeddings)
            # get input_ids, token_type_ids and attention_mask
            input_ids = sentence['input_ids'][0]
            token_type_ids = sentence['token_type_ids'][0]
            attention_mask = sentence['attention_mask'][0]
            dataset.append((input_ids, token_type_ids, attention_mask))
        return dataset

    tokenized_data = tokenize_dataset(sentences=texts, tokenizer=tokenizer)

    # build the input tensors for the model
    input_ids = torch.stack([x[0] for x in tokenized_data])
    token_type_ids = torch.stack([x[1] for x in tokenized_data])
    attention_mask_ids = torch.stack([x[2].to(torch.float) for x in tokenized_data])

    # wrap the inputs in a DataLoader
    input_dataset = []
    for i in range(len(input_ids)):
        data = {}
        data['input_ids'] = input_ids[i]
        data['token_type_ids'] = token_type_ids[i]
        data['attention_mask'] = attention_mask_ids[i]
        input_dataset.append(data)
    dataloader = DataLoader(input_dataset, shuffle=shuffle_data, batch_size=batch_size)

    # predict the data
    if not device:
        device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
    model.to(device)
    model.eval()
    y_pred = torch.tensor([]).to(device)
    for batch in dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        input_ids_batch = batch['input_ids']
        token_type_ids_batch = batch['token_type_ids']
        mask_ids_batch = batch['attention_mask']
        with torch.no_grad():
            outputs = model(input_ids=input_ids_batch, attention_mask=mask_ids_batch, token_type_ids=token_type_ids_batch)
        logits = outputs.logits
        # multi-label prediction: sigmoid followed by a 0.5 threshold
        predictions = torch.round(torch.sigmoid(logits))
        y_pred = torch.cat([y_pred, predictions])

    # map predicted indices to class names
    prediction_indices_list = []
    for prediction in y_pred:
        prediction_indices_list.append((prediction == torch.max(prediction)).nonzero(as_tuple=True)[0])
    prediction_class_names_list = []
    for prediction_indices in prediction_indices_list:
        prediction_class_names = []
        for prediction_idx in prediction_indices:
            prediction_class_names.append(model.config.id2label[int(prediction_idx)])
        prediction_class_names_list.append(prediction_class_names)
    return y_pred, prediction_class_names_list
# predict concepts of NLP papers
numerical_predictions, class_name_predictions = predict_nlp_concepts(model=model, tokenizer=tokenizer, texts=title_abs)
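The helper returns the raw multi-hot predictions together with the corresponding concept names. As a small illustration (not part of the original snippet), the results could be printed per paper like this:

for paper, concepts in zip(papers, class_name_predictions):
    print(paper['title'], '->', concepts)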
Get predictions with a pipeline
from transformers import pipeline
pipe = pipeline("text-classification", model="TimSchopf/nlp_taxonomy_classifier")
# prepare data
papers = [{'title': 'Attention Is All You Need', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.'},
{'title': 'SimCSE: Simple Contrastive Learning of Sentence Embeddings', 'abstract': 'This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearmans correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available.'}]
# concatenate title and abstract with [SEP] token
title_abs = [d['title'] + pipe.tokenizer.sep_token + (d.get('abstract') or '') for d in papers]
pipe(title_abs, return_all_scores=True)
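Because this is a multi-label classifier, the pipeline returns a score for every taxonomy concept. A small sketch (assuming the usual list-of-dicts output format of the text-classification pipeline) for keeping only the concepts above a 0.5 threshold:

results = pipe(title_abs, return_all_scores=True)
for paper, scores in zip(papers, results):
    predicted = [s['label'] for s in scores if s['score'] >= 0.5]
    print(paper['title'], '->', predicted)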
Evaluation results
The model was evaluated on a manually labeled test set of 828 EMNLP 2022 papers. The evaluation results below for classifying papers according to the NLP taxonomy are averaged over three different training runs. Since the class distribution is highly imbalanced, we report micro scores.
- F1: 93.21
- Recall: 93.99
- Precision: 92.46
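For reference, micro-averaged scores of this kind can be computed with scikit-learn from binary multi-label matrices; the arrays below are toy placeholders for illustration only, not the evaluation data:

import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score

# toy binary label matrices of shape (num_papers, num_labels)
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 1]])

print('micro F1:', f1_score(y_true, y_pred, average='micro'))
print('micro precision:', precision_score(y_true, y_pred, average='micro'))
print('micro recall:', recall_score(y_true, y_pred, average='micro'))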
License
BSD 3-Clause License
Citation information
When citing our work in academic papers and theses, please use the following BibTeX entry:
@inproceedings{schopf-etal-2023-exploring,
    title = "Exploring the Landscape of Natural Language Processing Research",
    author = "Schopf, Tim  and
      Arabi, Karim  and
      Matthes, Florian",
    editor = "Mitkov, Ruslan  and
      Angelova, Galia",
    booktitle = "Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing",
    month = sep,
    year = "2023",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd., Shoumen, Bulgaria",
    url = "https://aclanthology.org/2023.ranlp-1.111",
    pages = "1034--1045",
    abstract = "As an efficient approach to understand, generate, and process natural language texts, research in natural language processing (NLP) has exhibited a rapid spread and wide adoption in recent years. Given the increasing research work in this area, several NLP-related approaches have been surveyed in the research community. However, a comprehensive study that categorizes established topics, identifies trends, and outlines areas for future research remains absent. Contributing to closing this gap, we have systematically classified and analyzed research papers in the ACL Anthology. As a result, we present a structured overview of the research landscape, provide a taxonomy of fields of study in NLP, analyze recent developments in NLP, summarize our findings, and highlight directions for future work.",
}