What Happens When a Robot Lies to You? Investigating Aspects of Prosocial Intelligent Agent Deception Towards Humans

Author(s)
Rogers, Kantwon Lamount
Associated Organization(s)
School of Interactive Computing
Abstract
People across many societies are explicitly taught some form of the adage “honesty is the best policy” — but is that a lie? Telling the truth is not always helpful, and lying is not always harmful. In truth, everyone lies. We lie to help ourselves, and we lie to help others. We lie in both serious and inconsequential situations. Lying is a foundational part of how people interact with each other, and accepted members of society successfully navigate the highly nuanced norms of social deception. Robots and artificially intelligent (AI) systems are increasingly being placed within our societies, and in some contexts they are expected to interact with humans socially. People must trust that robots are functionally competent to complete tasks while also being socially competent to understand social conventions that may favor particular strategies over others. If people often successfully choose lying as the best policy in certain situations, it follows that a robot designed to learn from humans and exhibit social competency may replicate expected lying behavior as it becomes fully integrated into social settings. In this thesis I explore robots that lie to benefit others, and how deception influences people’s interactions with, and perceptions of, robots. My work examines how managing expectations, the influence of agent design and presence, and the aftermath of deception shape human responses, while also exploring how people interact with autonomous deceptive agents over time.
Date
2024-12-10
Resource Type
Text
Resource Subtype
Dissertation