Chinese RoBERTa-Base Models for Text Classification
Model description
This set contains 5 Chinese RoBERTa-Base classification models fine-tuned with the UER-py framework, which is introduced in this paper. In addition, the models can also be fine-tuned with TencentPretrain, Tencent's pre-training framework, which inherits UER-py to support models with over one billion parameters and extends it to a multimodal pre-training framework (see the paper for details).
You can download the 5 Chinese RoBERTa-Base classification models either from the UER-py Modelzoo page or from the HuggingFace model hub.
How to use
You can use the models directly with a pipeline for text classification (taking the Chinanews classification model as an example):
>>> from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
>>> model = AutoModelForSequenceClassification.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> text_classification = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> text_classification("北京上个月召开了两会")
[{'label': '中国大陆政治', 'score': 0.7211663722991943}]
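If you prefer to work with the model outputs directly rather than through the pipeline, a minimal sketch along the following lines should give the same prediction; the label names come from the checkpoint's id2label mapping, and nothing below is specific to this card beyond the model name already shown above:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = 'uer/roberta-base-finetuned-chinanews-chinese'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Tokenize the input and run a forward pass without gradient tracking.
inputs = tokenizer("北京上个月召开了两会", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Turn the logits into probabilities and look up the predicted label name.
probs = torch.softmax(logits, dim=-1)
pred_id = probs.argmax(dim=-1).item()
print(model.config.id2label[pred_id], probs[0, pred_id].item())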
Training data
5 Chinese text classification datasets are used:
- The JD full, JD binary, and Dianping datasets consist of user reviews with different sentiment polarities.
- The Ifeng and Chinanews datasets consist of the first paragraphs of news articles belonging to different topic classes.
They are collected by the Glyph project; more details are discussed in the corresponding paper.
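UER-py's classification scripts read tab-separated files with a header row containing label and text_a columns; assuming the Glyph-derived files used below follow that layout (an assumption, not something this card states), a quick way to inspect the class balance of the Chinanews training split is:

import csv
from collections import Counter

# Assumption: the file follows UER-py's classification TSV layout,
# i.e. a header row with "label" and "text_a" columns.
counts = Counter()
with open("datasets/glyph/chinanews/train.tsv", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        counts[row["label"]] += 1

for label, n in counts.most_common():
    print(label, n)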
Training procedure
The models are fine-tuned with UER-py on Tencent Cloud. Starting from the pre-trained model chinese_roberta_L-12_H-768, we fine-tune for 3 epochs with a sequence length of 512. At the end of each epoch, the model is saved when the best performance on the development set is achieved. The same hyper-parameters are used for all 5 models.
Taking the Chinanews classification model as an example:
python3 finetune/run_classifier.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
--vocab_path models/google_zh_vocab.txt \
--train_path datasets/glyph/chinanews/train.tsv \
--dev_path datasets/glyph/chinanews/dev.tsv \
--output_model_path models/chinanews_classifier_model.bin \
--learning_rate 3e-5 --epochs_num 3 --batch_size 32 --seq_length 512
Finally, we convert the fine-tuned model into Huggingface's format:
python3 scripts/convert_bert_text_classification_from_uer_to_huggingface.py --input_model_path models/chinanews_classifier_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 12
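As a quick sanity check after conversion, the converted weights should load back into transformers. The sketch below assumes that config.json, the vocabulary file, and the converted pytorch_model.bin have all been placed in the same local directory; the directory name is only an example:

from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Assumption: config.json, vocab.txt and the converted pytorch_model.bin
# sit together in this local directory.
local_dir = "./chinanews_classifier_huggingface"
model = AutoModelForSequenceClassification.from_pretrained(local_dir)
tokenizer = AutoTokenizer.from_pretrained(local_dir)

text_classification = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(text_classification("北京上个月召开了两会"))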
Citation
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{zhang2017encoding,
title={Which encoding is the best for text classification in chinese, english, japanese and korean?},
author={Zhang, Xiang and LeCun, Yann},
journal={arXiv preprint arXiv:1708.02657},
year={2017}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
@article{zhao2023tencentpretrain,
title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities},
author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others},
journal={ACL 2023},
pages={217},
year={2023}
}