Socially Persuading Prosocial Behavior in Humans Using Automation: A Design Framework and Theoretical Model

Author(s)
Scott-Sharoni, Sidney
Abstract
Investigating methods to improve prosocial behavior has been a recent topic of interest for researchers studying human-automation and human-robot interaction. However, scientists have yet to uncover how and why certain features of technology enhance and encourage prosocial behavior in humans. By studying and facilitating prosocial actions between people, agents, and robots, not only will society benefit, but individual psychological, social, and physical well-being will also improve. The following preliminary examination paper reviews literature on automation, agent, and robot interaction with humans, focusing on their existence as social actors. Decades of work built on the media equation (Reeves & Nass, 1996) suggest that humans transfer social expectations, norms, and biases to non-human agents. This social existence affords non-human agents new possibilities to shape and persuade humans to increase prosocial behavior. To contextualize prosocial behavior in human-to-automation (H-A) interactions, definitions, motivations, and benefits of prosocial behavior within human-human (H-H) and H-A contexts are discussed. The review highlights how rarely researchers have compared the two domains. Additionally, it provides the first encompassing and comprehensive definition of H-A prosocial behavior, including the core components necessary for its study. Throughout the review, there is an emphasis on understanding how social influences that are paramount to H-H prosocial behavior transfer to H-A contexts. While researchers assume, based on the media equation (Reeves & Nass, 1996), that social influences remain consistent across domains, the review discusses differences in H-A social influence. The theoretical role of social influence in prosocial behavior is detailed, as understanding conformity and persuasion is necessary to build agents and robots that encourage prosocial behavior in humans.
Models of human-robot and human-agent social influence are examined, with an exploration of how the Robot Social Influence model (Erel et al., 2024) can explain findings in prosocial behavior and persuasive social computing. The review presents and justifies a novel design framework and theoretical model that examines how and why specific characteristics of robots and virtual agents can promote prosocial behavior in human users. The framework, Robots and Agents as Persuasive Prosocial Actors (RAPPA), combines principles from persuasive social computing and social influence with findings from the limited body of H-A prosocial behavior research. The theoretical model argues that anthropomorphism, social intelligence, and adaptiveness increase a human's relatability, or sense of belonging, to the technology, which in turn strengthens the automation's influence on the human. This social influence can then persuade humans to behave prosocially. The model rests on multiple theories asserting that close group identity increases both social influence and prosocial behavior. Definitions and empirical evidence for each element of the RAPPA framework are provided, along with recommendations for its implementation in agent and robot design. The framework is then connected back to theories of human behavior such as the theory of planned behavior (Ajzen, 1991) and social learning theory (Bandura, 1971). RAPPA serves to enhance the understanding of how H-A prosocial behavior develops and to provide scientists with a valuable reference for future work. The paper culminates in a series of applications and research topics that encourage researchers to apply the framework to the study of in-vehicle agents.
Date
2024
Resource Type
Text
Resource Subtype
Paper
Rights Statement
Unless otherwise noted, all materials are protected under U.S. Copyright Law and all rights are reserved