Automatic text highlighting aims to identify the key portions of a text that are most important to a reader. In this paper, we present a web-based framework designed to crowdsource, efficiently and at scale, two independent but related tasks: collecting highlight annotations, and comparing the performance of automated highlighting systems. The first task is necessary to understand human preferences and to train supervised automated highlighting systems. The second task yields a more accurate and fine-grained evaluation than existing automated performance metrics.