Investigating the Alignment of AI Evaluation Processes with Human-Centered Design Principles and National Security Imperatives

Author(s)
Venkatesh, Kavya
Advisor(s)
Crooks, Courtney
Abstract
The intersection of national security imperatives and human-centered design (HCD) principles is critical for developing trustworthy artificial intelligence (AI) systems. The adoption of AI technologies in national security has accelerated, yet their alignment with HCD principles remains a significant challenge. Transparency, fairness, and trust in AI systems are necessary to ensure ethical and effective use, especially when these systems impact high-stakes decision-making. This study aims to investigate how integrating HCD principles can improve the transparency, fairness, and ethical alignment of AI systems within the national security domain. By examining AI systems from both a technical and human-centered perspective, this research seeks to contribute to the development of more reliable and trustworthy AI solutions. Prior studies, such as those by Ozmen Garibay et al. (2023), have emphasized the need for such integration, but gaps remain in how these systems align with specific security and ethical considerations. This thesis will explore these gaps by analyzing existing literature, reviewing AI systems currently in use, and applying thematic analysis to evaluate the alignment between HCD principles and national security requirements.
Resource Type
Text
Resource Subtype
Undergraduate Research Option Thesis