Publications

Developing A Framework to Support Human Evaluation of Bias in Generated Free Response Text

Conference on Human Factors in Computing Systems (CHI 2025) - Workshop on Human-centered Evaluation and Auditing of Language Models

Publication date: April 27, 2025

Jennifer Healey, Laurie Byrum, Md. Nadeem Akhtar, Surabhi Bhargava, Moumita Sinha

LLM evaluation is challenging even in the case of base models. In real-world deployments, evaluation is further complicated by the interplay of task-specific prompts and experiential context. At scale, bias evaluation is often based on short-context, fixed-choice benchmarks that can be rapidly evaluated; however, these can lose validity when the LLMs' deployed context differs. Large-scale human evaluation is often seen as intractable and too costly. Here we present our journey toward developing a semi-automated bias evaluation framework for free-text responses that has human insights at its core. We discuss how we developed an operational definition of bias that helped us automate our pipeline, as well as a methodology for classifying bias beyond multiple choice. We additionally comment on how human evaluation helped us uncover problematic templates in a bias benchmark.