Title:
Using Artificial-Intelligence-Driven Deep Neural Networks to Uncover Principles of Brain Representation and Organization

dc.contributor.author Yamins, Daniel
dc.contributor.corporatename Georgia Institute of Technology. Neural Engineering Center en_US
dc.contributor.corporatename Stanford University. Dept. of Psychology en_US
dc.date.accessioned 2017-10-19T16:05:40Z
dc.date.available 2017-10-19T16:05:40Z
dc.date.issued 2017-10-11
dc.description Presented on October 11, 2017 at 3:30 p.m. in the Parker H. Petit Institute for Bioengineering and Biosciences, room 1128. en_US
dc.description Dan Yamins is a computational neuroscientist at Stanford University, where he is an assistant professor of Psychology and Computer Science and a faculty scholar at the Stanford Neurosciences Institute. He works on science and technology challenges at the intersection of neuroscience, artificial intelligence, psychology, and large-scale data analysis. en_US
dc.description Runtime: 74:32 minutes en_US
dc.description.abstract Human behavior is founded on the ability to identify meaningful entities in the complex, noisy data streams that constantly bombard the senses. For example, in vision, retinal input is transformed into rich object-based scenes; in audition, sound waves are transformed into words and sentences. In this talk, I will describe my work using computational models to help uncover how sensory cortex accomplishes these enormous computational feats. The core observation underlying my work is that optimizing neural networks to solve challenging real-world artificial intelligence (AI) tasks can yield predictive models of the cortical neurons that support these tasks. I will first describe how we leveraged recent advances in AI to train a neural network that approaches human-level performance on a challenging visual object recognition task. Critically, even though this network was not explicitly fit to neural data, it is nonetheless predictive of neural response patterns of neurons in multiple areas of the visual pathway, including higher cortical areas that have long resisted modeling attempts. Intriguingly, an analogous approach turns out to be helpful for studying audition, where we recently found that neural networks optimized for word recognition and speaker identification tasks naturally predict responses in human auditory cortex to a wide spectrum of natural sound stimuli, and help differentiate poorly understood non-primary auditory cortical regions. Together, these findings suggest the beginnings of a general approach to understanding sensory processing in the brain. I'll give an overview of these results, explain how they fit into the historical trajectory of AI and computational neuroscience, and discuss future questions of great interest that may benefit from a similar approach. en_US
dc.format.extent 74:32 minutes
dc.identifier.uri http://hdl.handle.net/1853/58841
dc.language.iso en_US en_US
dc.relation.ispartofseries Kavli Brain Forum
dc.subject Artificial intelligence en_US
dc.subject Auditory cortex en_US
dc.subject Computational neuroscience en_US
dc.subject Visual cortex en_US
dc.title Using Artificial-Intelligence-Driven Deep Neural Networks to Uncover Principles of Brain Representation and Organization en_US
dc.type Moving Image
dc.type.genre Lecture
dspace.entity.type Publication
local.contributor.corporatename Neural Engineering Center
local.relation.ispartofseries Kavli Brain Forum
relation.isOrgUnitOfPublication c2e26044-257b-4ef6-8634-100dd836a06c
relation.isSeriesOfPublication edd29ba5-8370-407b-93c7-2d9c1bcae0cb
Files
Original bundle (3 files)
Name: yamins.mp4; Size: 595.91 MB; Format: MP4 Video file; Description: Download video
Name: yamins_videostream.html; Size: 985 B; Format: Hypertext Markup Language; Description: Streaming video
Name: transcription.txt; Size: 75.05 KB; Format: Plain Text; Description: Transcription
License bundle (1 file)
Name: license.txt; Size: 3.13 KB; Format: Item-specific license agreed upon to submission