Challenging the Benefit of Anthropomorphism on Human-AI Collaboration with AI Voice Agents

Author(s)
Scott-Sharoni, Sidney Tammie
Abstract
Contrary to conventional wisdom, theoretical frameworks, and current trends in AI design, simple robotic-sounding agents may support human collaboration better than more complex anthropomorphic (human-like) agents. This dissertation tested two extremes of anthropomorphism and social intelligence in an AI voice agent across four studies, each examining a different type of social influence. The results uncovered a consistent discrepancy between participants' subjective ratings of the agent and their social behavior toward it. In the trivia task in Study 1, participants conformed less when they perceived the AI agent as more anthropomorphic, despite rating the more anthropomorphic agent as more likable. In the moral judgment task in Study 2, participants conformed less to the anthropomorphic agent than to the robotic agent regardless of the agent's morality, which again contrasted with the subjective ratings. In the prisoner's dilemma task in Study 3, participants cooperated less with the anthropomorphic agent, applying human social behaviors to the AI (e.g., retaliating even at the cost of lowering their own game score) that did not appear in interactions with the robotic agent. In the automated vehicle task in Study 4, compliance varied by agent type, agent driving style, and driving scenario, even though the anthropomorphic agent was consistently preferred. Evidently, implementing human qualities in an AI agent does not guarantee greater conformity, cooperation, or compliance with the agent. A possible theoretical explanation, drawn from these four studies, is that automation bias amplifies the effects predicted by the Computers Are Social Actors theory, leading people to hold higher subconscious expectations of social performance for an anthropomorphic AI agent in interactive tasks than for a nonanthropomorphic agent or other humans.
Developers should consider the desired human behavior, contextual factors, the technology's performance, and the type of social influence before applying human-like features to AI technology.
Date
2025-12
Resource Type
Text
Resource Subtype
Dissertation (PhD)