Title:
Solving the Flickering Problem in Modern Convolutional Neural Networks

Author(s)
Sundaramoorthi, Ganesh
Abstract
Deep learning has revolutionized the AI field. Nevertheless, much progress is still needed before deep learning can be deployed in safety-critical applications such as autonomous aircraft, because current deep learning systems are not robust to real-world nuisances (e.g., viewpoint, illumination, partial occlusion). In this talk, we take a step toward constructing robust deep learning systems by addressing the vulnerability of state-of-the-art Convolutional Neural Network (CNN) classifiers and detectors to small perturbations, including shifts of the image or camera. While various forms of specially engineered "adversarial perturbations" that fool deep learning systems have been well documented, modern CNNs can, surprisingly, change their classification probability by up to 30% even for a simple one-pixel shift of the image. This lack of translational stability appears to be a partial cause of the "flickering" seen in state-of-the-art object detectors applied to video. In this talk, we introduce this phenomenon, propose a solution, prove it analytically, validate it empirically, and explain why existing CNNs exhibit it.
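One mechanism commonly linked to this shift sensitivity is aliasing from strided downsampling (strided convolution or pooling). The effect can be illustrated with a minimal NumPy sketch; this is a generic illustration of the downsampling issue, not the specific analysis or solution presented in the talk:

```python
import numpy as np

# Strided subsampling is not shift-equivariant: downsampling a
# high-frequency signal and its one-sample shift can give
# completely different outputs (aliasing).
x = np.array([0., 1., 0., 1., 0., 1., 0., 1.])  # high-frequency signal
x_shifted = np.roll(x, 1)                       # shift by one "pixel"

down = x[::2]                # stride-2 subsample -> [0. 0. 0. 0.]
down_shifted = x_shifted[::2]  # -> [1. 1. 1. 1.]

print(down)          # [0. 0. 0. 0.]
print(down_shifted)  # [1. 1. 1. 1.]
```

A one-pixel shift of the input changes every retained sample, so any features computed downstream of such a layer can change abruptly under small translations, which is consistent with the instability described in the abstract.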
Date Issued
2020-02-12
Extent
49:16 minutes
Resource Type
Moving Image
Resource Subtype
Lecture