Generating Formality-tuned Summaries Using Token-based Rewards

SIGNLL Conference on Computational Natural Language Learning (CoNLL)

Publication date: November 4, 2019

Kushal Chawla, Balaji Vasan Srinivasan, Niyati Chhaya

Abstractive text summarization aims to generate human-like summaries by understanding and paraphrasing the given input content. Recent efforts based on sequence-to-sequence networks allow only the generation of a single summary. However, it is often desirable to accommodate the psycho-linguistic preferences of the intended audience while generating summaries. In this work, we present a reinforcement learning based approach to generate formality-tailored summaries for an input article. Our novel input-dependent reward function aids in training the model with stylistic feedback on sampled and ground-truth summaries together. Once trained, the same model can generate formal and informal summary variants. Our automated and qualitative evaluations show the viability of the proposed framework.
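The idea of a token-based formality reward driving a self-critical policy-gradient update can be sketched as follows. This is a minimal illustration, not the paper's implementation: the formal/informal lexicons, the reward shape, and all function names are assumptions introduced here for clarity.

```python
# Hedged sketch: a token-level formality score, a reward that measures
# how well a summary matches a requested formality level, and the
# self-critical advantage (sampled reward minus greedy-baseline reward)
# that would scale an RL loss. Lexicons below are toy placeholders.

FORMAL = {"therefore", "moreover", "consequently", "regarding"}
INFORMAL = {"gonna", "kinda", "btw", "lol"}

def formality_score(tokens):
    """Fraction of style-marked tokens that are formal (0.5 if none)."""
    formal = sum(t in FORMAL for t in tokens)
    informal = sum(t in INFORMAL for t in tokens)
    marked = formal + informal
    return 0.5 if marked == 0 else formal / marked

def reward(tokens, target_formality):
    """Higher when the summary's formality matches the requested level.

    target_formality is 1.0 for a formal variant, 0.0 for informal.
    """
    return 1.0 - abs(formality_score(tokens) - target_formality)

def self_critical_advantage(sampled, greedy, target_formality):
    """Weight for the policy-gradient loss: the sampled summary is
    reinforced only when it beats the greedy decoding baseline."""
    return reward(sampled, target_formality) - reward(greedy, target_formality)
```

In this framing, conditioning the same trained model on `target_formality = 1.0` or `0.0` is what would yield the formal and informal summary variants the abstract describes.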
