Organizational Unit: School of Psychology

Publication Search Results

  • Item
    Perceived Relational Risk and Perceived Situational Risk: Scale Development
    (Georgia Institute of Technology, 2020-11-05) Stuck, Rachel E.
    Interactions with technology are a significant part of daily life, both at home and at work. Understanding how to support successful human-technology interaction is essential for Engineering Psychology. Perceived relational and situational risk are key to understanding interactions with technologies, including adoption, trust, and use. However, perceived risk was only recently separated into these two distinct types: relational and situational. In addition, prior measures of perceived risk focus on hazards, not interactions with technology or automation. The goal of this dissertation was to develop and validate scales of perceived relational risk and perceived situational risk. These scales built on previous work exploring perceived risk and incorporated scale items related to affect, probability, severity, and domains. Evaluations of internal reliability, construct validity, and test-retest reliability were conducted for both scales. The items for both scales had excellent internal reliability, acceptable test-retest reliability, and support for construct validity. After the items' validity was established, items were selected to create the final scales. These scales allow future researchers to rigorously and accurately study how perceived relational risk and perceived situational risk interact with trust, each other, and technology use.
  • Item
    Development and validation of the situational trust scale for automated driving (STS-AD)
    (Georgia Institute of Technology, 2020-05-26) Holthausen, Brittany Elise
    Trust in automation is currently operationalized with general measures that are either self-report or behavioral in nature. However, a recent review of the literature suggests that there should be a more specific approach to trust in automation, as different types of trust are influenced by different factors (Hoff & Bashir, 2015). This work is the development and validation of a measure of situational trust for the automated driving context: The Situational Trust Scale – Automated Driving (STS-AD). The first validation study showed that situational trust is a separable construct from general trust in automation and that it can capture a range of responses, as seen in the difference between scores after watching a near-automation-failure video and non-failure videos. The second study aimed to test the STS-AD in a mid-fidelity driving simulator. Participants drove two routes: low automation (automated lane keeping only) and high automation (adaptive cruise control with automated lane keeping). The results of the second study provided further support for situational trust as a distinct construct, provided insight into the factorial structure of the scale, and pointed towards a distinction between advanced driver assistance systems (ADAS) and automated driving systems (ADS). The STS-AD will revolutionize the way that trust in automation is conceptualized and operationalized. This measure opens the door to a more nuanced approach to trust in automation measurement that will inform not only how drivers interact with automated systems, but also how we understand human-automation interaction as a whole.
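
A minimal sketch of the reliability checks named in the first item above (internal reliability via Cronbach's alpha and test-retest reliability via a correlation of summed scores), assuming a hypothetical 10-item, 7-point scale and simulated respondents; none of the data, item counts, or parameters come from the dissertation, and its actual analyses may differ.

```python
# Sketch of internal reliability (Cronbach's alpha) and test-retest reliability
# for a Likert-type scale. All data below are simulated, not the dissertation's.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(4, 1, size=(100, 1))                               # hypothetical latent risk perception
time1 = np.clip(np.rint(latent + rng.normal(0, 1, (100, 10))), 1, 7)   # 10 items, 7-point responses
time2 = np.clip(np.rint(latent + rng.normal(0, 1, (100, 10))), 1, 7)   # simulated retest administration

alpha = cronbach_alpha(time1)
retest_r = np.corrcoef(time1.sum(axis=1), time2.sum(axis=1))[0, 1]
print(f"Cronbach's alpha: {alpha:.2f}, test-retest r: {retest_r:.2f}")
```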
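
A similar sketch for the within-subjects simulator comparison described in the second item (low- vs. high-automation routes), assuming mean STS-AD scores per driver and a paired t-test; the sample size, score distributions, and choice of test are illustrative assumptions, not the dissertation's reported analysis.

```python
# Sketch of a within-subjects comparison of mean scale scores across the two
# simulator routes (low vs. high automation), using a paired t-test.
# Sample size and score distributions are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_drivers = 40                                          # hypothetical sample size
low_auto = rng.normal(4.0, 0.8, n_drivers)              # mean scale score, lane keeping only
high_auto = low_auto + rng.normal(0.4, 0.6, n_drivers)  # same drivers, ACC + lane keeping

t_stat, p_value = stats.ttest_rel(high_auto, low_auto)  # paired (repeated-measures) t-test
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```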