Personalizing with Human Cognitive Biases

User Modeling, Adaptation and Personalization (UMAP '19), Theory, Opinion and Reflection (TOR) track

Published June 9, 2019

Georgios Theocharous, Jennifer Healey, Sridhar Mahadevan, Michelle Saad

Human cognitive biases are numerous and well established. Due to inherent limitations in our knowledge of the world and to computational constraints, our judgments and decisions do not rigidly adhere to the principle of maximizing expected utility. We frequently employ cognitive shortcuts that ignore relevant information, and we make errors in how we store and retrieve items from memory. Human decisions are additionally influenced by moral, emotional, and cultural factors. People often perceive value in ways that diverge from well-established decision-theoretic frameworks, yet much of the work on personalization does not capture these cognitive biases. Our central hypothesis is that a new generation of recommendation systems can be designed by explicitly modeling human cognitive biases such as contrast, decoy, distinction, and framing. We are just now beginning to see explicit non-linear models of human risk perception being incorporated into machine learning algorithms, and we believe this trend will accelerate in the near future. In this paper we review today's recommendation systems, analyze their limitations, and argue that future recommendation systems should incorporate explicit models of human cognitive bias.
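As one concrete illustration of the "non-linear models of human risk perception" the abstract refers to, the sketch below implements the classic prospect-theory value function of Kahneman and Tversky, which a bias-aware recommender could use in place of a linear utility when scoring risky items. This is not code from the paper itself; the function form and the parameter values (alpha = beta = 0.88, lambda = 2.25) are the standard estimates from the prospect-theory literature, used here purely for illustration.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of an outcome x relative to a reference point of 0.

    Gains are valued concavely (risk aversion), losses convexly, and
    losses are weighted more heavily than equal gains (loss aversion, lam > 1).
    Parameter values are the standard Tversky-Kahneman (1992) estimates.
    """
    if x >= 0:
        return x ** alpha          # concave over gains
    return -lam * ((-x) ** beta)   # convex and steeper over losses


# A linear-utility recommender treats a +100/-100 gamble as neutral;
# a prospect-theory scorer penalizes it because the loss looms larger.
gain = prospect_value(100.0)
loss = prospect_value(-100.0)
print(gain, loss, gain + loss)  # the sum is negative: loss aversion
```

The key behavioral properties, diminishing sensitivity in both directions and an asymmetry between gains and losses, fall directly out of this simple parametric form, which is why such models are attractive as drop-in replacements for linear value terms in ranking objectives.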