General and interpretable models for inferring dynamical computation in biological neural networks
Author(s)
Sedler, Andrew R.
Abstract
The sequential autoencoder (SAE) has proven to be a powerful deep architecture for modeling large-scale electrophysiological recordings, with a variety of applications in neuroscience and neural engineering. In particular, latent factor analysis via dynamical systems (LFADS) explicitly models latent dynamics and inputs to infer firing rates with state-of-the-art accuracy. However, effectively fitting such models requires time-consuming and resource-intensive hyperparameter tuning with reference to supervisory information (e.g., behavioral data) for each new dataset. Additionally, the conditions under which the dynamics learned by the SAE faithfully reflect those of the underlying biological system are unclear and underexplored, limiting the insights that can be gained from internal components of the model.
The objectives of this thesis are to simplify training of deep, dynamics-based neural population models on binned spiking activity and to improve the interpretability of the latent dynamics they learn. The first aim of this research was to develop a framework for robust training of SAEs on neural data. In Chapter 2, we present a novel regularization strategy and an efficient hyperparameter tuning approach that allowed us to reliably obtain high-performing models on a wide variety of datasets. The second aim was to evaluate and improve the interpretability of the latent dynamics learned by SAEs. In Chapter 3, we show that widely used recurrent neural networks (RNNs) struggle to accurately recover dynamics from synthetic datasets, and that neural ordinary differential equations resolve many of these issues. The last aim was to address several practical challenges of training and applying SAEs in neuroscience. To that end, we describe two open-source projects: a simple, modular, and extensible implementation of LFADS (Chapter 4) and a deployment framework that makes it easier to leverage managed infrastructure for large-scale training (Chapter 5).
Date
2023-04-30
Resource Type
Text
Resource Subtype
Dissertation