Title:
Constructing and evaluating executable models of collective behavior

Author(s)
Hrolenok, Brian Paul
Advisor(s)
Balch, Tucker
Abstract
Multiagent simulation (MAS) can be a valuable tool for biologists and ethologists studying collective animal behavior. However, constructing models for simulation is often a time-consuming manual task. Current state-of-the-art multitarget tracking algorithms can now provide high-accuracy, high-density tracking data for groups of research animals. Techniques from machine learning should be able to leverage the wealth of information such data provide to automatically find good models of collective behavior that can be executed in simulation. However, models trained with traditional single-step loss functions can produce behaviors that are qualitatively dissimilar to the target behavior, while Expectation Maximization (EM) methods computed over full trajectories are prone to local optima. These problems are compounded when multiple agents interact, as in the collective behaviors that are the focus of this dissertation. Two specific categories of collective behavior, stochastic behaviors and stateful behaviors, illustrate the need for new learning techniques and evaluation criteria. Stochastic behaviors can be captured by modeling the distribution of behavior, while models with behavioral state can capture more complex behaviors that switch between multiple low-level modes. The schooling behavior of fish and the foraging behavior of ants provide examples through which new models and learning methods are explored, and this exploration leads naturally to Behavioral Divergence, a novel quantitative evaluation framework based on the statistical similarity between observed behaviors.
This dissertation describes methods for building and learning executable models, analyzes the trade-offs between their strengths and weaknesses, introduces Behavioral Divergence, a novel quantitative evaluation framework that complements existing approaches, and experimentally compares Behavioral Divergence with predictive performance.
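The abstract does not give the definition of Behavioral Divergence, but it describes it as a measure of statistical similarity between behaviors. A minimal illustrative sketch of that idea, under the assumption that behaviors are summarized as distributions over a behavioral feature (e.g. swimming speed) and compared with a standard divergence such as KL divergence; the function names and the synthetic "observed" and "simulated" data below are hypothetical stand-ins, not the dissertation's actual method:

```python
import numpy as np

def behavior_histogram(features, bins, value_range):
    """Smoothed, normalized histogram of a behavioral feature (e.g. speed)."""
    hist, _ = np.histogram(features, bins=bins, range=value_range)
    p = hist.astype(float) + 1e-9  # smooth so no bin has zero probability
    return p / p.sum()

def kl_divergence(p, q):
    """D_KL(p || q) between two discrete distributions over the same bins."""
    return float(np.sum(p * np.log(p / q)))

# Hypothetical stand-ins for tracked vs. simulated fish speeds
rng = np.random.default_rng(0)
observed_speeds = rng.normal(1.0, 0.20, size=1000)
simulated_speeds = rng.normal(1.1, 0.25, size=1000)

p = behavior_histogram(observed_speeds, bins=20, value_range=(0.0, 2.0))
q = behavior_histogram(simulated_speeds, bins=20, value_range=(0.0, 2.0))
divergence = kl_divergence(p, q)  # 0 iff the two distributions match exactly
```

A lower divergence indicates that the simulated behavior's feature distribution more closely matches the observed one, which is the kind of distribution-level comparison a single-step predictive loss does not capture.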
Date Issued
2018-10-19
Resource Type
Text
Resource Subtype
Dissertation