Personalized dialogue content generation based on deep learning

Hao Wang, Bin Guo, Shao Yang Hao, Qiu Yun Zhang, Zhi Wen Yu

Research output: Contribution to journal › Article › peer-review


Abstract

Dialogue systems are an important research direction in the field of human-machine interaction, and research on open-domain chatbots has attracted much attention. Existing chatbots suffer from three main problems. First, they cannot effectively capture context, which leads to a lack of logical cohesion in the dialogue. Second, most existing chatbots lack specific personalized characteristics, which makes the chat monotonous and can make the dialogue content self-contradictory. Third, they tend to generate meaningless replies such as "I don't know" or "I'm sorry", which greatly reduces users' interest in chatting. In this work, a Transformer-based encoder-decoder framework was used to build both a general dialogue model and a personalized dialogue model. By encoding the dialogue history and personalized feature information, the model can effectively capture both the context and the persona, support a multi-round dialogue process, and generate personalized dialogue content. Experimental results showed that the Transformer-based dialogue model outperformed the baseline models on the perplexity and F1-score metrics. Combined with human evaluation, we conclude that the dialogue model is capable of carrying out multi-round dialogues with high content diversity that are consistent with the given personalized characteristics.
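As a rough illustration of the architecture described above, the sketch below shows how persona text and dialogue history could be concatenated into one source sequence for a Transformer encoder-decoder, with the decoder generating the reply tokens. This is a minimal sketch inferred from the abstract, not the authors' released code; the class name PersonaDialogueModel, the model sizes, and the tokenization are assumptions for illustration.

```python
# Minimal sketch (assumed, not the authors' implementation) of a persona-conditioned
# Transformer encoder-decoder dialogue model in PyTorch.
import torch
import torch.nn as nn

class PersonaDialogueModel(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, persona_ids, history_ids, reply_ids):
        # Encode persona tokens and dialogue-history tokens as a single source
        # sequence, so the decoder can attend to both the context and the
        # personalized feature information when generating the reply.
        src = self.embed(torch.cat([persona_ids, history_ids], dim=1))
        tgt = self.embed(reply_ids)
        # Causal mask so each reply position only attends to earlier positions.
        tgt_mask = self.transformer.generate_square_subsequent_mask(reply_ids.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=tgt_mask)
        return self.out(hidden)  # per-token vocabulary logits for the reply

# Hypothetical usage: persona_ids, history_ids, reply_ids are batches of token IDs.
# model = PersonaDialogueModel(vocab_size=30000)
# logits = model(persona_ids, history_ids, reply_ids)
```

Under such a setup, the perplexity metric mentioned in the abstract corresponds to the exponential of the mean per-token cross-entropy between the predicted logits and the reference reply.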

Original language: English
Pages (from-to): 210-216
Number of pages: 7
Journal: Journal of Graphics
Volume: 41
Issue number: 2
DOIs
State: Published - 2020

Keywords

  • chatbot
  • context aware
  • deep learning
  • dialogue system
  • personalization
