Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization

Haoran Li, Junnan Zhu, Jiajun Zhang, Chengqing Zong

Abstract: In this paper, we investigate the sentence summarization task, which produces a summary from a source sentence. Neural sequence-to-sequence models have achieved considerable success on this task, but most existing approaches focus only on improving the informativeness of the summary and ignore its correctness, i.e., the requirement that the summary contain no information unrelated to the source sentence. We argue that correctness is an essential requirement for summarization systems. Considering that a correct summary is semantically entailed by the source sentence, we incorporate entailment knowledge into abstractive summarization models. We propose an entailment-aware encoder trained under a multi-task framework (i.e., summarization generation and entailment recognition) and an entailment-aware decoder trained by entailment Reward Augmented Maximum Likelihood (RAML). Experimental results demonstrate that our models significantly outperform baselines in both informativeness and correctness.

Anthology ID: C18-1121
Volume: Proceedings of the 27th International Conference on Computational Linguistics
Month: August
Year: 2018
Address: Santa Fe, New Mexico, USA
Editors: Emily M. Bender, Leon Derczynski, Pierre Isabelle
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 1430–1441
Data: DUC 2004
Bibkey: li-etal-2018-ensure

Cite (ACL): Haoran Li, Junnan Zhu, Jiajun Zhang, and Chengqing Zong. 2018. Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1430–1441, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Cite (Informal): Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization (Li et al., COLING 2018)
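The RAML training mentioned in the abstract weights sampled candidate outputs by an exponentiated reward, so candidates that score higher under a reward function contribute more to the likelihood objective. The sketch below illustrates that weighting scheme only; the reward values, the temperature, and the candidate set are illustrative assumptions, not the paper's actual entailment scorer or sampling procedure.

```python
import math

def raml_weights(rewards, tau=0.5):
    """Exponentiated-payoff distribution: q(y|x) proportional to exp(r(y)/tau).

    In the paper's setting `rewards` would come from an entailment scorer;
    here they are plain floats. `tau` is an assumed temperature value.
    """
    scaled = [r / tau for r in rewards]
    m = max(scaled)  # subtract the max before exponentiating, for stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def raml_loss(log_probs, rewards, tau=0.5):
    """RAML objective for one source: -sum_i q_i * log p(y_i | x)."""
    weights = raml_weights(rewards, tau)
    return -sum(w * lp for w, lp in zip(weights, log_probs))

# Toy example: three candidate summaries with hypothetical entailment
# rewards and model log-probabilities for one source sentence.
rewards = [0.9, 0.4, 0.1]        # higher = better entailed by the source
log_probs = [-2.0, -3.5, -5.0]   # log p(y|x) from the summarization model
loss = raml_loss(log_probs, rewards, tau=0.5)
```

Compared with plain maximum likelihood on a single reference, this soft weighting lets partially correct candidates still carry gradient signal, with the temperature controlling how sharply the distribution concentrates on the best-entailed candidates.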