Organizational Unit:
School of Psychology


Publication Search Results

  • Item
    Exploring the Robustness of the Surprisingly Popular Signal
    (Georgia Institute of Technology, 2023-04-18) Sukernek, Justin
    A large portion of the decision-making literature is concerned with forecasting the future, often using the wisdom of the crowd as the basis for successful forecasts. However, crowd wisdom can be limited when the consensus is incorrect. Bayesian truth serum (BTS) and the Surprisingly Popular (SP) algorithm, two novel methodologies in this space, offer solutions to this limitation by leveraging social sensing to calculate the 'surprisingly popular signal' at the respondent and question level, respectively. In this dissertation, I present three experiments that compare these three approaches (crowd consensus, BTS, and SP) across forecasting, consumer decision-making, and general knowledge. In all three experiments, SP yielded the highest accuracy when utilizing a subsample of the most knowledgeable participants, a finding consistent with the existing literature. Experiment 2 incorporated social influence, uncovering a positive effect of disagreement on BTS scores and bidirectional effects of social influence on respondents' perceptions of how others would answer. Furthermore, two of the experiments provide evidence of BTS's ability to identify subsamples of participants that increase SP's accuracy, performing a similar function to domain knowledge. Finally, a process-based simulation of knowledge and social influence on SP and BTS was conducted, corroborating the empirical findings. Overall, the results provide promising evidence of SP's effectiveness across all task contexts, as well as some evidence for a potential new application of BTS.
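    As a compact illustration of the SP selection rule described above, the sketch below (Python; variable names are illustrative, not taken from the dissertation) selects the answer whose actual endorsement rate most exceeds its average predicted endorsement rate:

```python
import numpy as np

def surprisingly_popular(votes, predictions):
    """Select the answer whose actual popularity most exceeds its
    predicted popularity (the 'surprisingly popular' signal).

    votes       : (n,) array of answer indices, one per respondent
    predictions : (n, k) array; row i is respondent i's predicted
                  fraction of the crowd endorsing each of k answers
    """
    k = predictions.shape[1]
    actual = np.bincount(votes, minlength=k) / len(votes)  # observed endorsement rates
    predicted = predictions.mean(axis=0)                   # mean predicted endorsement rates
    return int(np.argmax(actual - predicted))              # most surprisingly popular answer

# Toy example: 70% vote for answer 0, but respondents predict it will
# get ~85% support, so answer 1 (endorsed more than predicted) wins.
votes = np.array([0] * 7 + [1] * 3)
preds = np.tile([0.85, 0.15], (10, 1))
print(surprisingly_popular(votes, preds))  # -> 1
```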
  • Item
    Model Blindness: Investigating a model-based route-recommender system’s impact on decision making
    (Georgia Institute of Technology, 2022-12-14) Parmar, Sweta
    Model-Based Decision Support Systems (MDSS) are prominent in many professional domains of high consequence, such as aeronautics, emergency management, military command and control, healthcare, nuclear operations, intelligence analysis, and maritime operations. An MDSS generally uses a simplified model of the task and the operator to impose structure on the decision-making situation and to provide information cues that are useful for the decision-making task. Models are simplifications, can be misspecified, and have errors. Adoption and use of these errorful models can lead to impoverished decision-making by users. I term this impoverished state of the decision-maker model blindness. A series of two experiments was conducted to investigate the consequences of model blindness for human decision-making and performance, and how those consequences can be mitigated via an explainable AI (XAI) intervention. The experiments implemented a simulated route-recommender system as an MDSS with a true data-generating model (an unobservable world model). In Experiment 1, the true model generating the recommended routes was misspecified to different degrees to impose model blindness on users. In Experiment 2, the same route-recommender system was employed with a mitigation technique to overcome the impact of model misspecification on decision-making. Overall, the results of both experiments provide little support for performance degradation due to model blindness imposed by misspecified systems. The XAI intervention provided valuable insights into how participants adjusted their decision-making to account for bias in the system and deviated from choosing the model-recommended alternatives. The participants' decision strategies revealed that they could understand model limitations from feedback and explanations and could adapt their strategies to account for those misspecifications. The results provide strong support for evaluating the role of decision strategies in the model blindness confluence model. These results help establish a need for carefully evaluating model blindness during the development, implementation, and usage stages of MDSS.
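    As a toy illustration of the kind of misspecification described above (the cost model and weights here are hypothetical, not the dissertation's actual task), the following sketch shows how a recommender that underweights a hazard term can recommend a route that diverges from the truly best one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each candidate route has a distance and a hazard level.
# The true (unobservable) data-generating model weights both; a misspecified
# recommender model underweights hazard.
n_routes = 5
distance = rng.uniform(10, 30, n_routes)
hazard = rng.uniform(0, 1, n_routes)

true_cost = distance + 40 * hazard    # true world model
model_cost = distance + 10 * hazard   # misspecified model (hazard underweighted)

recommended = int(np.argmin(model_cost))  # route the MDSS shows the operator
best = int(np.argmin(true_cost))          # route an omniscient aid would show
print(recommended, best, recommended == best)
```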
  • Item
    Examining Social Influence's Effect on Decision-Making and Bayesian Truth Serum
    (Georgia Institute of Technology, 2022-04-28) Sukernek, Justin
    Decision-making, whether individual or in groups, can be subject to revision based on social influence, often pulling one's opinions toward the apparent consensus (Mason, Conrey, & Smith, 2007). Social influence has been shown to damage the effectiveness of the wisdom of the crowd, suggesting that perhaps the crowd is wise, but only when its members do not interact with each other (Lorenz, Rauhut, Schweitzer, & Helbing, 2011). An interesting, unexplored approach is to study the effect of social influence on the Bayesian truth serum (BTS), a multi-faceted measure of judgment ability. In its pure application, the truth serum is both a measure of judgment and a way to increase truth-telling and information quality, but it is currently unclear whether social influence has a positive or negative effect on the serum's effectiveness (Frank, Cebrian, Pickard, & Rahwan, 2017). I conduct a multi-experiment study to further elucidate the possible adverse effects of social influence and to test the Bayesian truth serum's robustness when combined with the influence of others' opinions. In combination, the five experiments show evidence of social influence disinforming participants; this disinformation effect appears to be detrimental to the Bayesian truth serum. The experiments also cast doubt on the Bayesian truth serum's predictive ability in several different task contexts. Additionally, one experiment finds evidence that disagreeing with social influence improves reasoning ability. Overall, this study contributes to the social influence, disinformation, and BTS literatures.
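    For readers unfamiliar with the scoring rule, a minimal sketch of the standard BTS score (Prelec, 2004) follows; implementation details such as the smoothing constant and the alpha weight are illustrative assumptions:

```python
import numpy as np

def bts_scores(votes, predictions, alpha=1.0, eps=1e-6):
    """Bayesian truth serum scores (Prelec, 2004): the information score
    rewards answers that are more common than collectively predicted; the
    prediction score rewards accurate forecasts of endorsement frequencies.

    votes       : (n,) answer indices, one per respondent
    predictions : (n, k) predicted endorsement frequencies per respondent
    """
    n, k = predictions.shape
    xbar = np.bincount(votes, minlength=k) / n + eps       # actual answer frequencies
    ybar = np.exp(np.log(predictions + eps).mean(axis=0))  # geometric mean of predictions
    info = np.log(xbar[votes] / ybar[votes])               # information score per respondent
    pred = (xbar * np.log((predictions + eps) / xbar)).sum(axis=1)  # prediction score
    return info + alpha * pred
```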
  • Item
    An ERP Study of the Neural Correlates Underlying Hypothesis Generation and Working Memory
    (Georgia Institute of Technology, 2020-05) Farooq, Shereen
    Hypothesis generation is the process by which individuals formulate explanations for data found in their environment; evaluating the accuracy of each generated hypothesis is known as a probability judgment. Previous research in decision making has linked hypothesis generation to working memory. This experiment aimed to measure the neural correlates underlying working memory during hypothesis generation in a decision-making task. EEG was used to measure neural activity, and the signals of interest were the P300 and the contralateral delay activity (CDA). Participants were trained to learn a number of cause-effect relationships between stimuli. Later, participants were asked to make judgments about which causes may have been responsible for an observed effect by remembering the locations of relevant causes in a briefly displayed visual array. The results demonstrate that probability judgments were negatively correlated with the number of relevant hypotheses. The results also show that peak P300 amplitude did not differ significantly between the 'Effect' cues, though it was greatest for Cue 4, which had a total of three relevant hypotheses associated with it. This work can be used to better understand how working memory underlies everyday decision making.
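    A minimal sketch of how peak P300 amplitude is typically extracted from trial-averaged epochs follows; the 300-500 ms window and single-channel input are conventional assumptions, not parameters reported in the abstract:

```python
import numpy as np

def peak_p300(epochs, times, window=(0.300, 0.500)):
    """Peak P300 amplitude from averaged epochs.

    epochs : (n_trials, n_samples) single-channel EEG segments (e.g., Pz),
             baseline-corrected and time-locked to cue onset
    times  : (n_samples,) epoch time axis in seconds
    """
    erp = epochs.mean(axis=0)                        # trial-averaged ERP
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].max()                           # maximum positivity in the window

# Toy usage with simulated data
times = np.linspace(-0.2, 0.8, 501)
epochs = np.random.default_rng(1).normal(0, 1, (40, times.size))
print(peak_p300(epochs, times))
```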
  • Item
    Effects of probabilistic flight-route risk estimates for enhanced decisions (FRREED) on aeronautical weather-hazard decision-making
    (Georgia Institute of Technology, 2020-03-31) Parmar, Sweta
    A tool commonly used to aid pilots' navigational decisions to avoid weather hazards is Next Generation Radar (NEXRAD), which provides information about geographically referenced precipitation. However, this tool is limited because, when pilots use NEXRAD, they have to infer the uncertainty in the meteorological information, both for understanding current hazards and for extrapolating the impact of future conditions. Recent advancements in meteorological modeling afford the possibility of providing uncertainty information concerning hazardous weather for the current flight. Although probabilistic weather products do not exist in today's cockpit, it is critical to evaluate how operators might use or misuse such products when incorporating uncertainty information into their decision-making. In addition, it is important to study how accurate a probabilistic decision aid needs to be for effective use by operators. Although systematic biases plague professionals' use of uncertainty information, there is evidence that presenting forecast uncertainty can improve weather-related decision-making. The current study investigates a simulated probabilistic component of a decision aid that renders flight-path risk as the probability that the route will come within a 20 nmi radius (the FAA-recommended safety distance) of hazardous weather within the next 45 minutes of flight. The study evaluates four NEXRAD displays integrated with Flight-Route Risk Estimates for Enhanced Decisions (FRREED) providing varying levels of support. The 'no' support condition has no FRREED (the NEXRAD-only condition). The 'baseline' support condition employs a FRREED whose accuracy is consistent with current capability in meteorological modeling. The 'moderate' support condition employs a FRREED whose accuracy is likely at the top of what is achievable in meteorology in the near future. The 'high' support display provides a level of support that is likely unachievable in an aviation weather decision-making context without significant technological innovation. The results indicate that operators did rely on the FRREED to improve their performance over the no-support (NEXRAD-only) condition. Operator performance improved in terms of both calibration and resolution as the aids increased in accuracy. I discuss the implications of these findings for the safe introduction of probabilistic decision aids in future general aviation cockpits.
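    Calibration and resolution, as used above, are standard components of the Murphy decomposition of the Brier score for probability forecasts; a minimal sketch of that decomposition follows (equal-width probability bins are an illustrative choice):

```python
import numpy as np

def murphy_decomposition(forecasts, outcomes, n_bins=10):
    """Murphy (1973) decomposition of the Brier score into reliability
    (calibration; lower is better), resolution (higher is better), and
    uncertainty, using equal-width probability bins.

    forecasts : (n,) forecast probabilities in [0, 1]
    outcomes  : (n,) binary outcomes (e.g., 1 = hazard encountered)
    """
    n = len(forecasts)
    base_rate = outcomes.mean()
    bins = np.clip((forecasts * n_bins).astype(int), 0, n_bins - 1)
    reliability = resolution = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            nk = mask.sum()
            fk = forecasts[mask].mean()   # mean forecast within the bin
            ok = outcomes[mask].mean()    # observed frequency within the bin
            reliability += nk * (fk - ok) ** 2
            resolution += nk * (ok - base_rate) ** 2
    uncertainty = base_rate * (1 - base_rate)
    return reliability / n, resolution / n, uncertainty
```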
  • Item
    Hypothesis-guided testing behavior: The role of generation, meta-cognition, and search
    (Georgia Institute of Technology, 2020-03-13) Illingworth, David Anthony
    Hypothesis testing is the act of acquiring information to challenge or promote a decision-maker's beliefs (i.e., hypotheses) in diagnostic tasks. To date, theorists have conceptualized this behavior as a consequence of implementing one of many possible heuristics for selecting tests, each tailored to optimize some task-relevant goal (e.g., reducing the likelihood of an erroneous diagnosis). Heuristics can account for a number of observed testing phenomena (e.g., pseudo-diagnostic search) but have difficulty explaining more nuanced testing behavior, such as decisions to terminate data acquisition. Moreover, current theory has yet to address how updating a decision-maker's beliefs influences test preference, as hypothesis testing is often studied independently of other events inherent to hypothesis evaluation. The current work examined the role of belief in testing and search termination by evaluating a novel extension of the HyGene architecture (Thomas, Dougherty, Sprenger, & Harbison, 2008) built as a cognitive process account of hypothesis testing. Experiments 1 and 2 found limited support for hypothesis-driven valuation, as participants showed minimal sensitivity to the diagnostic value of information depositories. Experiment 3 revealed a relation between belief and foraging duration such that lower confidence early in a trial predicted more test exploitation. Model fitting indicated that participants implemented a conservative threshold when determining the value of continued testing. Experiment 4 revealed cost sensitivity in testing behavior, as well as an experience-driven contrast effect: participants who experienced high costs early in the experiment generally engaged in less testing than those who experienced low costs. The current work provides mild support for the predictions of the HyGene architecture, but it clearly demonstrates a role for metacognitive self-assessment in decisions to terminate search and highlights how access costs interact with prior cost experience when people assess the value of continued testing.
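    The confidence-threshold stopping result described above can be illustrated with a toy Bayesian stopping rule; this sketch is not the HyGene model itself, just a minimal simulation (with illustrative parameters) showing how a more conservative threshold lengthens information foraging:

```python
import numpy as np

def forage_until_confident(likelihoods, prior, threshold=0.9, max_tests=20, rng=None):
    """Toy sequential-testing simulation: keep acquiring test results and
    updating beliefs by Bayes' rule until posterior confidence in the
    leading hypothesis crosses a stopping threshold. A higher (more
    conservative) threshold yields longer foraging.

    likelihoods : (n_hyp, n_outcomes) P(outcome | hypothesis) for one test
    prior       : (n_hyp,) prior belief over hypotheses
    """
    rng = rng or np.random.default_rng()
    true_h = rng.choice(len(prior), p=prior)   # sample the true hypothesis
    belief = prior.copy()
    for t in range(1, max_tests + 1):
        outcome = rng.choice(likelihoods.shape[1], p=likelihoods[true_h])
        belief = belief * likelihoods[:, outcome]   # Bayesian update
        belief /= belief.sum()
        if belief.max() >= threshold:               # confident enough: stop testing
            return t, int(np.argmax(belief))
    return max_tests, int(np.argmax(belief))
```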