BioKnowPrompt: Incorporating imprecise knowledge into prompt-tuning verbalizer with biomedical text for relation extraction

Qing Li, Yichen Wang, Tao You, Yantao Lu

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)

Abstract

Tuning pre-trained language models (PLMs) with task-specific prompts has achieved great success across domains. Cloze-style language prompts stimulate the versatile knowledge of PLMs and directly bridge the gap between pre-training tasks and various downstream tasks. Large unlabelled corpora in the biomedical domain have been created in the last decade (e.g., PubMed, PMC, MIMIC, and ScienceDirect). In this paper, we introduce BioKnowPrompt, a prompt-tuning PLM model that incorporates imprecise knowledge into the verbalizer for relation extraction from biomedical text. In particular, we use learnable entity words and learnable relation words to infuse entity and relation information into prompt construction, and we use biomedical domain knowledge constraints to synergistically improve their representation. By fine-tuning PLMs with additional prompts, we can further stimulate the rich knowledge distributed in PLMs to better serve downstream tasks such as relation extraction. BioKnowPrompt shows significant potential in few-shot learning, outperforming previous models and achieving state-of-the-art results on 5 datasets.
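To illustrate the cloze-style prompting and verbalizer idea the abstract describes, here is a minimal sketch in plain Python. The template wording, relation labels, and verbalizer word lists are illustrative assumptions, not the paper's actual configuration: a cloze template wraps the two entities around a [MASK] slot, the PLM fills the mask, and the verbalizer maps the predicted word back to a relation label.

```python
# Illustrative cloze-style prompt + verbalizer for relation extraction.
# NOTE: template, labels, and word lists are hypothetical examples,
# not BioKnowPrompt's actual learnable words or constraints.

# Cloze template: the PLM is asked to fill [MASK] between the entities.
TEMPLATE = "{head} [MASK] {tail}."

# Verbalizer: maps relation labels to candidate words at the [MASK] slot.
VERBALIZER = {
    "treats": ["treats", "cures"],
    "causes": ["causes", "induces"],
    "no_relation": ["unrelated"],
}

def build_prompt(sentence: str, head: str, tail: str) -> str:
    """Append the cloze template to the input sentence."""
    return sentence + " " + TEMPLATE.format(head=head, tail=tail)

def verbalize(predicted_word: str) -> str:
    """Map the word the PLM emits at [MASK] back to a relation label."""
    for label, words in VERBALIZER.items():
        if predicted_word in words:
            return label
    return "no_relation"

prompt = build_prompt("Aspirin reduces fever in patients.", "Aspirin", "fever")
# prompt == "Aspirin reduces fever in patients. Aspirin [MASK] fever."
```

In BioKnowPrompt, the verbalizer words are learnable rather than hand-picked, and biomedical knowledge constraints shape their representations; this sketch only shows the discrete mapping step.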

Original language: English
Pages (from-to): 346-358
Number of pages: 13
Journal: Information Sciences
Volume: 617
DOI
Publication status: Published - Dec 2022
