Title:
Model Blindness: Investigating a model-based route-recommender system’s impact on decision making

dc.contributor.advisor Thomas, Rick P.
dc.contributor.author Parmar, Sweta
dc.contributor.committeeMember Feigh, Karen M.
dc.contributor.committeeMember Varma, Sashank
dc.contributor.committeeMember Whitaker, Elizabeth T.
dc.contributor.committeeMember Gorman, Jamie C.
dc.contributor.department Psychology
dc.date.accessioned 2023-01-10T16:22:44Z
dc.date.available 2023-01-10T16:22:44Z
dc.date.created 2022-12
dc.date.issued 2022-12-14
dc.date.submitted December 2022
dc.date.updated 2023-01-10T16:22:44Z
dc.description.abstract Model-Based Decision Support Systems (MDSS) are prominent in many professional domains of high consequence, such as aeronautics, emergency management, military command and control, healthcare, nuclear operations, intelligence analysis, and maritime operations. An MDSS generally uses a simplified model of the task and the operator to impose structure on the decision-making situation and to provide information cues to the operator that are useful for the decision-making task. Models are simplifications, can be misspecified, and have errors. Adoption and use of these errorful models can lead to impoverished decision-making by users. I term this impoverished state of the decision-maker model blindness. Two experiments were conducted to investigate the consequences of model blindness for human decision-making and performance, and how those consequences can be mitigated via an explainable AI (XAI) intervention. The experiments implemented a simulated route-recommender system as an MDSS with a true data-generating model (an unobservable world model). In Experiment 1, the model used to generate the recommended routes was misspecified to varying degrees relative to the true data-generating model, imposing model blindness on users. In Experiment 2, the same route-recommender system was employed with a mitigation technique intended to overcome the impact of model misspecifications on decision-making. Overall, the results of both experiments provide little support for performance degradation due to model blindness imposed by misspecified systems. The XAI intervention provided valuable insights into how participants adjusted their decision-making to account for bias in the system and deviated from choosing the model-recommended alternatives. The participants' decision strategies revealed that they could understand model limitations from feedback and explanations and could adapt their strategies to account for those misspecifications. The results provide strong support for evaluating the role of decision strategies in the model blindness confluence model. These results help establish a need for carefully evaluating model blindness during the development, implementation, and usage stages of an MDSS.
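As a minimal illustrative sketch of the misspecification idea described in the abstract: a route recommender scores candidate routes with a model that omits a factor present in the true data-generating model, so its top recommendation can diverge from the truly best route. The route features, weights, and the omitted hazard term below are assumptions for illustration only, not the dissertation's actual task model.

# Hypothetical sketch of model blindness in a route recommender.
# All values and feature names are illustrative assumptions.

# Candidate routes described by (distance_km, congestion, hazard) features.
routes = {
    "A": (10.0, 0.2, 0.8),
    "B": (12.0, 0.1, 0.1),
    "C": (11.0, 0.6, 0.2),
}

def true_cost(distance_km, congestion, hazard):
    # Unobservable world model: cost depends on all three factors.
    return 1.0 * distance_km + 5.0 * congestion + 8.0 * hazard

def recommender_cost(distance_km, congestion, hazard):
    # Misspecified MDSS model: the hazard term is missing entirely.
    return 1.0 * distance_km + 5.0 * congestion

best_true = min(routes, key=lambda r: true_cost(*routes[r]))
recommended = min(routes, key=lambda r: recommender_cost(*routes[r]))

print("Truly best route:", best_true)    # B, because hazard matters
print("MDSS recommends: ", recommended)  # A, because the model is blind to hazard

In this toy setup, a user who relies solely on the recommendation inherits the model's blind spot, which is the kind of impoverished decision state the dissertation investigates.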
dc.description.degree Ph.D.
dc.format.mimetype application/pdf
dc.identifier.uri http://hdl.handle.net/1853/70118
dc.language.iso en_US
dc.publisher Georgia Institute of Technology
dc.subject Model blindness
dc.subject Model misspecifications
dc.subject Decision making
dc.subject Decision support systems
dc.subject Explainable AI
dc.subject Performance Degradation
dc.subject Trust
dc.subject Reliance
dc.title Model Blindness: Investigating a model-based route-recommender system’s impact on decision making
dc.type Text
dc.type.genre Dissertation
dspace.entity.type Publication
local.contributor.advisor Thomas, Rick P.
local.contributor.corporatename College of Sciences
local.contributor.corporatename School of Psychology
relation.isAdvisorOfPublication 44e4bb42-7dc6-4dd1-80a5-2532238057b1
relation.isOrgUnitOfPublication 85042be6-2d68-4e07-b384-e1f908fae48a
relation.isOrgUnitOfPublication 768a3cd1-8d73-4d47-b418-0fc859ce897d
thesis.degree.level Doctoral
Files
Original bundle: PARMAR-DISSERTATION-2022.pdf (4.28 MB, Adobe Portable Document Format)
License bundle: LICENSE.txt (3.86 KB, Plain Text)