🚀 Question Answering Model for Event Extraction
This is a question answering (QA) model used as a component of an event extraction system. The related research was published at ACL 2021 in the paper "Zero-shot Event Extraction via Transfer Learning: Challenges and Insights". The model uses the pretrained roberta-large architecture and is fine-tuned on the QAMR dataset.
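In the paper's zero-shot formulation, each argument role of an event is cast as a natural-language question posed against the input sentence, and a QA model returns the answer span for that role. Below is a minimal sketch of that querying loop; the role-to-question templates and the `qa_model` callable are illustrative assumptions, not the authors' exact templates or code:

```python
# Sketch: event-argument extraction phrased as QA queries (zero-shot).
# The role-to-question templates below are illustrative assumptions;
# the paper's exact question templates may differ.
ATTACK_ROLE_QUESTIONS = {
    "Victim": "Who was killed?",
    "Attacker": "Who was the attacker?",
    "Place": "Where did the attack take place?",
}

def build_queries(context, role_questions=ATTACK_ROLE_QUESTIONS):
    """Pair each role question with the shared context sentence."""
    return [
        {"role": role, "question": question, "context": context}
        for role, question in role_questions.items()
    ]

def extract_arguments(qa_model, context, role_questions=ATTACK_ROLE_QUESTIONS):
    """Run every role question through a QA model (any callable taking
    question= and context= and returning an answer string) and collect
    a role -> answer-span mapping."""
    return {
        q["role"]: qa_model(question=q["question"], context=q["context"])
        for q in build_queries(context, role_questions)
    }
```

Because the questions are plain text, a new event ontology only requires writing new templates, not new annotation, which is the point of the transfer-learning setup.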
🚀 Quick Start
Model demo
To see the model in action, enter a question and a context in the text boxes next to the "Hosted inference API" panel.
Example:
- Question:
Who was killed?
- Context:
A car bomb exploded Thursday in a crowded outdoor market in the heart of Jerusalem, killing at least two people, police said.
- Answer:
people
Model usage
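The model can also be queried programmatically through the Hugging Face `transformers` question-answering pipeline (requires `pip install transformers torch`). The card does not state the model's Hub repository id, so `model_id` below is a placeholder to be replaced with the actual repo id; this is a sketch, not the authors' official usage snippet:

```python
def load_qa_pipeline(model_id):
    """Load a question-answering pipeline for this model.
    model_id is a placeholder: substitute the model's actual Hub repo id."""
    from transformers import pipeline  # imported lazily; needs `pip install transformers`
    return pipeline("question-answering", model=model_id)

def answer(qa, question, context):
    """Ask one question against a context; return the answer span and its score.
    `qa` is any callable with the pipeline's question=/context= interface
    that returns a dict with "answer" and "score" keys."""
    result = qa(question=question, context=context)
    return result["answer"], result["score"]
```

For example, calling `answer(qa, "Who was killed?", "A car bomb exploded Thursday in a crowded outdoor market in the heart of Jerusalem, killing at least two people, police said.")` mirrors the hosted-API demo above.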
📚 Detailed Documentation
BibTeX entry and citation info
@inproceedings{lyu-etal-2021-zero,
title = "Zero-shot Event Extraction via Transfer Learning: {C}hallenges and Insights",
author = "Lyu, Qing and
Zhang, Hongming and
Sulem, Elior and
Roth, Dan",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-short.42",
doi = "10.18653/v1/2021.acl-short.42",
pages = "322--332",
abstract = "Event extraction has long been a challenging task, addressed mostly with supervised methods that require expensive annotation and are not extensible to new event ontologies. In this work, we explore the possibility of zero-shot event extraction by formulating it as a set of Textual Entailment (TE) and/or Question Answering (QA) queries (e.g. {``}A city was attacked{''} entails {``}There is an attack{''}), exploiting pretrained TE/QA models for direct transfer. On ACE-2005 and ERE, our system achieves acceptable results, yet there is still a large gap from supervised approaches, showing that current QA and TE technologies fail in transferring to a different domain. To investigate the reasons behind the gap, we analyze the remaining key challenges, their respective impact, and possible improvement directions.",
}