Title
Learning distributed representations in the human brain

Author(s)
Schapiro, Anna
Abstract
The remarkable success of neural network models in machine learning has relied on the use of distributed representations — activity patterns that overlap across related inputs. Under what conditions does the brain also rely on distributed representations for learning? There are benefits and costs to this form of representation: it allows rapid, efficient learning and generalization, but is highly susceptible to interference. We recently developed a neural network model of the hippocampus that proposes that one subregion (CA1) may employ this form of representation, complementing known pattern-separated representations in other subregions. This provides an exciting domain to test ideas about learning with distributed representations, as the hippocampus learns much more quickly than the neocortical areas that have often been proposed to contain these representations. I will present modeling and empirical work that provide support for the idea that parts of the hippocampus do indeed learn using distributed representations. I will also present ideas about how hippocampal and neocortical areas may interact during sleep to further transform these representations over time.
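The contrast the abstract draws between distributed codes (overlapping activity patterns for related inputs) and pattern-separated codes can be made concrete with a small sketch. The snippet below is an illustration only, not the hippocampal model discussed in the talk: it compares how similar the activity patterns evoked by two related inputs remain under a dense code versus a sparse, winner-take-all-like code. All layer sizes, parameters, and names are assumptions chosen for illustration.

```python
# Illustrative sketch only (not the CA1 model from the talk): contrast a
# dense "distributed" code, where related inputs evoke overlapping activity
# patterns, with a sparse "pattern-separated" code, where only the most
# active units fire. All sizes and values are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, k_winners = 100, 1000, 20

def cosine(a, b):
    """Overlap between two activity patterns (cosine similarity)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two related inputs: the second is a small perturbation of the first.
x1 = rng.normal(size=n_in)
x2 = x1 + 0.3 * rng.normal(size=n_in)

# Random projection to a hidden layer.
W = rng.normal(size=(n_hidden, n_in)) / np.sqrt(n_in)
h1, h2 = W @ x1, W @ x2

# Distributed code: dense nonlinearity, so related inputs stay similar.
dist1, dist2 = np.tanh(h1), np.tanh(h2)

# Pattern-separated code: keep only the k most active units
# (winner-take-all-like sparsification), which decorrelates similar inputs.
def sparsify(h, k):
    out = np.zeros_like(h)
    winners = np.argsort(h)[-k:]
    out[winners] = h[winners]
    return out

sep1, sep2 = sparsify(h1, k_winners), sparsify(h2, k_winners)

print(f"overlap, distributed code:       {cosine(dist1, dist2):.2f}")
print(f"overlap, pattern-separated code: {cosine(sep1, sep2):.2f}")
```

Running the script typically shows a much higher overlap for the dense code than for the sparse one, which is the trade-off the abstract describes: overlapping codes support generalization but invite interference, while separated codes protect memories from one another.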
Date Issued
2020-11-19
Extent
62:18 minutes
Resource Type
Moving Image
Resource Subtype
Lecture