A Comparison Study of Human Evaluated Automated Highlighting Systems

Publication date: December 2, 2018

Sasha Spala, Franck Dernoncourt, Walter Chang, Carl Dockhorn

Automatic text highlighting aims to identify the portions of a document that are most important to a reader. In this paper, we explore the use of existing extractive summarization models for automatically generating highlights, a task that has not previously been addressed from this perspective. Evaluation studies typically rely on automated metrics because they are cheap to compute and scale well; however, these metrics are not designed to assess automated highlighting. We therefore focus on human evaluation in this work. Our comparison of multiple summarization models applied to automated highlighting, accompanied by human evaluation, provides an approximate upper bound on the quality achievable by future highlighting models.