Bridging Realms: Advancing Multi-Robot Systems Through Trust, Shared Mental Models, and User-Centric Interfaces

Author(s)
Schroepfer, Pete Car
Associated Organization(s)
School of Interactive Computing
Abstract
The idea of inanimate objects coming to life can be traced back to ancient myths and legends across various cultures. In Greek mythology, Hephaestus, the god of fire and craftsmanship, created automata to assist him in his workshop. Jewish folklore gives us the Golem, a creature formed from inanimate material and brought to life through mystical rituals. While not rooted in magic, the concept of the robot is closely related to these mythical ideas: robots are composed of inanimate parts that, when assembled, can serve a variety of functions within society. Like their mythical predecessors, an essential quality of robots is that they do not exist in a vacuum; they are tangible, embodied computers that coexist with humans and must, therefore, interact with them. This interaction requires not only technological compatibility but also attention to ethical, social, and psychological dimensions. Robots are often designed to perform tasks that are too dangerous, repetitive, or complex for humans, which positions them as both tools and companions in various capacities. As we continue to integrate robots into everyday life, the challenge lies in ensuring they enhance societal welfare without infringing on privacy or autonomy or causing economic displacement. Not only must robots integrate safely, but humans must also accept and learn to understand them. Because interacting with robots is a relatively novel experience for most individuals, a range of biases and preconceived notions can skew the mental models that humans develop about these systems.
Key among these biases are anthropomorphism, in which individuals ascribe human traits to robots, engendering either unrealistic expectations or undue apprehension; automation bias, an over-reliance on automated systems that may lead users to overlook system malfunctions or superior human judgment; and the novelty effect, in which initial fascination with robots distorts perceptions of their utility or efficacy. Understanding and addressing these biases is crucial because they strongly influence the integration of robots into settings such as workplaces, households, and public areas, shaping policy formation, design decisions, and interaction protocols. These biases and complications arise not only in direct interactions but also with teleoperated or tele-supervised robots, where the physical separation between the human operator and the robotic unit introduces additional layers of complexity. In teleoperation, issues such as latency, reduced situational awareness, and depersonalization can widen the psychological distance between users and robotic actions. This detachment can produce further biases, such as an out-of-sight, out-of-mind effect, in which operators exhibit less caution or ethical consideration because the robot's environment feels remote. Moreover, the abstraction of control in teleoperation can induce a dissociation effect, in which operators feel less personally accountable for the robot's actions, potentially leading to ethical lapses or diminished empathy for affected parties. Adding to the complexity, certain tasks cannot be handled by a single robot, which requires understanding how the dynamics of trust and mental model formation shift when dealing with teams of robots, particularly heterogeneous robot teams.
In such scenarios, the interplay between different types of robots, each with unique capabilities and roles, introduces a layer of complexity in human-robot interaction that mirrors human team dynamics while also presenting unique challenges. Issues such as the allocation of trust among different robots, coordination efficiency, and the integration of varied robotic capabilities into a cohesive unit must all be considered. Researchers must explore how these factors influence human operators' expectations and operational strategies. Additionally, the presence of diverse robots working in concert may shift humans' mental models, as perceptions of each robot's reliability and of the team's overall effectiveness vary. This requires a sophisticated understanding of collective behavior in robotic systems and its impact on human trust and dependency, both critical for the successful deployment of robot teams in complex environments. As the backdrop for the research presented in this dissertation, we specifically examine the deployment of a heterogeneous robot team tasked with performing ship or container inspections. A primary goal of the project underpinning much of this research was user acceptance of a robot system that would ultimately enhance worker safety by moving manual inspection tasks away from direct human involvement. In this setting, workers apply their domain knowledge more effectively by overseeing a teleoperated team of autonomous robots. These robots carry out safety inspections of ships and storage containers, reducing the exposure of human inspectors to potential hazards and improving the overall efficiency of the inspection process. This shift not only leverages technological advancements to safeguard human workers but also enriches the role of human expertise by focusing it where it is most impactful.
Much of the research presented here focuses on the components of this project related to Human-Robot Interaction (HRI) and Human-Computer Interaction (HCI). It examines how users who traditionally performed measurements manually would adapt their roles within this automated system. As noted above, this shift presents multiple challenges from a user-centered design perspective in ensuring acceptance and proper usage. The dissertation begins with an in-depth examination of the current state of trust within the context of HRI. It then explores how shared mental models and trust dynamics within a heterogeneous robot team can shape system design. Subsequent sections detail a study of how heterogeneous robot teams might influence both mental model development and trust dynamics. Drawing on common heuristics in HCI, we then present a novel localization method that not only improves localization accuracy but also increases consistency between user expectations and the user interface representations by constraining motion to a predefined mesh. Following this, we introduce a system designed to integrate these concepts within a complex, multi-stakeholder task environment. Finally, we examine important theoretical considerations, including learning models, skill acquisition models, and cognitive load theory, for training operators to progress from novices to experts, and present a proof-of-concept system demonstrating how one might incorporate cognitive load measurements into a dynamic training system.
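The mesh-constraint idea mentioned above can be illustrated with a minimal sketch: a noisy pose estimate is snapped to the closest point on a known surface mesh, so the reported position never drifts off the structure the operator expects to see in the interface. This is only an illustrative sketch of the constraint step under assumed names (the functions below and the brute-force nearest-triangle search are not the method developed in the dissertation):

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c), via barycentric regions."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                  # vertex region a
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                  # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab          # edge region ab
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                  # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac          # edge region ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return b + w * (c - b)                    # edge region bc
    denom = 1.0 / (va + vb + vc)
    v, w = vb * denom, vc * denom
    return a + v * ab + w * ac                    # interior: project onto face

def constrain_to_mesh(p, triangles):
    """Snap a raw pose estimate p to the nearest point on the mesh surface."""
    candidates = [closest_point_on_triangle(p, *tri) for tri in triangles]
    return min(candidates, key=lambda q: np.linalg.norm(q - p))
```

For example, an estimate hovering above a floor triangle is projected straight down onto it, while an estimate past the mesh boundary clamps to the nearest edge or vertex. A practical system would replace the linear scan with a spatial index (e.g., a BVH) over the mesh.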
Date
2025-01-21
Resource Type
Text
Resource Subtype
Dissertation