Title:
Integrating New Knowledge into a Neural Network without Catastrophic Interference: Computational and Theoretical Investigations in a Hierarchically Structured Environment

Author(s)
McClelland, James L.
Abstract
According to complementary learning systems theory, integrating new memories into a multi-layer neural network without interfering with what is already known depends on interleaving presentation of the new memories with ongoing presentations of items previously learned. This putative dependence is both costly for machine learning and biologically implausible for real brains, which are unlikely to have sufficient time for such massive interleaving, even during sleep. We use deep linear neural networks in hierarchically structured environments previously analyzed by Saxe, McClelland, and Ganguli to gain new insights into how integration of new knowledge might be made more efficient. The content of this type of environment can be described by the singular value decomposition (SVD) of the environment's input-output covariance matrix, in which each successive dimension corresponds to a categorical split in the hierarchical environment. Prior work showed that deep linear networks are sufficient to learn the content of the environment, and that they do so in a stage-like way, with the strength of each dimension rising from near zero to its maximum after a delay inversely proportional to the dimension's strength, as demonstrated by Saxe et al., capturing patterns previously observed in deeper non-linear neural networks by Rogers and McClelland (2004). Several observations then become accessible when we consider learning a new item not previously encountered in the micro-environment. (1) The item can be examined in terms of its projection onto the existing structure and the degree to which it adds a new categorical split. (2) To the extent that the item projects onto existing structure, including it in the training corpus leads to rapid adjustment of the representation of the categories involved, while effectively no adjustment occurs to categories onto which the new item does not project at all. (3) Learning a new split, however, is slow, and its learning dynamics show the same delayed rise to maximum, with a delay that depends on the strength of the new dimension. These observations motivate the development of ideas about how the new information might be acquired efficiently, combining interleaved learning with other strategies.
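
To make the SVD analysis concrete, the following minimal sketch (not from the lecture; the four-item environment, the feature assignments, the made-up "pine" item, and all parameter values are illustrative assumptions) shows how dimension strengths fall out of the input-output covariance matrix, how a two-layer linear network learns each dimension in order of strength, and how a new item decomposes into projections onto existing dimensions plus a residual that would require a new split.

import numpy as np

# Hypothetical 4-item, 7-feature hierarchical micro-environment.
# Higher-level splits control more features, so they have larger
# singular values. Rows = features, columns = items
# (items: oak, rose, canary, salmon).
Y = np.array([
    [1, 1, 1, 1],   # shared by all items (e.g., "is living")
    [1, 1, 0, 0],   # plant branch        (e.g., "has roots")
    [0, 0, 1, 1],   # animal branch       (e.g., "can move")
    [1, 0, 0, 0],   # oak only
    [0, 1, 0, 0],   # rose only
    [0, 0, 1, 0],   # canary only
    [0, 0, 0, 1],   # salmon only
], dtype=float)
X = np.eye(4)                        # one-hot item inputs

# Input-output covariance; each SVD dimension is one categorical
# split, and its singular value is that dimension's strength.
Sigma = Y @ X.T / X.shape[1]
U, S, Vt = np.linalg.svd(Sigma, full_matrices=False)
print("dimension strengths:", np.round(S, 3))  # strongest split first

# Two-layer deep linear network trained by full-batch gradient
# descent on the residual input-output map (inputs are whitened
# here, since X is one-hot).
rng = np.random.default_rng(0)
hidden, lr, steps = 16, 0.05, 3000
W1 = rng.normal(scale=1e-3, size=(hidden, 4))
W2 = rng.normal(scale=1e-3, size=(7, hidden))

traj = []
for _ in range(steps):
    E = Sigma - W2 @ W1              # residual map still to learn
    W1 += lr * (W2.T @ E)
    W2 += lr * (E @ W1.T)
    # strength of each environmental dimension in the learned map:
    traj.append(np.diag(U.T @ W2 @ W1 @ Vt.T))
traj = np.array(traj)                # each column rises sigmoidally,
                                     # stronger dimensions first

# Observation (1): a new item's feature vector decomposes into a
# projection onto existing dimensions plus a residual demanding a
# new categorical split. "pine" is a made-up item: a living plant
# with roots but none of the existing item-specific features.
y_pine = np.array([1, 1, 0, 0, 0, 0, 0], dtype=float)
loadings = U.T @ y_pine              # overlap with existing structure
residual = y_pine - U @ loadings     # part requiring a new dimension
print("loadings:", np.round(loadings, 3),
      "| new-split content:", round(float(residual @ residual), 3))

In this construction the shared and plant/animal dimensions have distinct strengths (roughly 0.66 and 0.43, versus 0.25 for the two within-branch splits), which is what produces the staged learning order; "pine" loads on the shared and plant dimensions, not at all on the animal-side split, and leaves a nonzero residual corresponding to the slowly learned new split.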
Date Issued
2019-04-15
Extent
63:03 minutes
Resource Type
Moving Image
Resource Subtype
Lecture