Relationship Extraction Via Language Models for Normativity Analysis

Author(s)
Yang, Ian
Abstract
There is a potentially infinite number of aspects to account for when describing a particular state of the world. While AI systems exist that can model these world states and how they change over time, producing data in a format those systems recognize is extremely tedious and generally requires human annotation. Recent advances in natural language processing (NLP) show that large language models can effectively extract information from source texts given carefully engineered prompts; obtaining information about a source is necessary to accurately describe any particular status quo. I propose an architecture that uses these language models to extract relational triples between objects in a specific format of source text: stories. Ordered sequentially, these triples encode character actions, states, and behaviors. I demonstrate that structured information extraction with LLMs eases the data bottleneck imposed by reliance on human annotation. I then use LLMs to evaluate these character action sequences in the context of human social norms, to assess the ability of large language models to reason about normativity.
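As an illustration of the kind of structured output described above, a minimal Python sketch might parse `(subject, relation, object)` lines from a prompted model's response into an ordered sequence of triples. The line-per-triple format and the sample story content here are assumptions for illustration; the abstract does not specify the thesis's actual prompt or output schema.

```python
import re

def parse_triples(response: str) -> list[tuple[str, str, str]]:
    """Parse lines like "(subject, relation, object)" from an LLM response.

    The parenthesized, comma-separated format is a hypothetical schema,
    not necessarily the one used in the thesis.
    """
    triples = []
    for line in response.splitlines():
        m = re.match(r"\(([^,]+),\s*([^,]+),\s*([^)]+)\)", line.strip())
        if m:
            triples.append(tuple(part.strip() for part in m.groups()))
    return triples

# Hypothetical model output for a short story, ordered as events occur:
sample = """(Alice, picks_up, lantern)
(Alice, enters, cave)
(Bob, warns, Alice)"""

print(parse_triples(sample))
```

Sequencing the triples in story order is what lets a downstream system treat them as a trace of character actions and states, which can then be judged against social norms.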
Resource Type
Text
Resource Subtype
Undergraduate Research Option Thesis