Series
Kavli Brain Forum

Series Type
Event Series
Description
Associated Organization(s)
Organizational Unit

Publication Search Results

  • Item
    Networks Thinking Themselves
    ( 2018-11-28) Bassett, Danielle
    Bassett's group studies biological, physical, and social systems by using and developing tools from network science and complex systems theory. Our broad goal is to isolate problems at the intersection of basic science, engineering, and clinical medicine that can be tackled using systems-level approaches. Recent examples include predicting the extent of learning from human brain networks, resolving the evolution of the neuronal synapse via genetic interaction networks, determining bulk material properties from mesoscale force networks, and isolating individual drivers of collective social behavior during evacuations. In these contexts, we seek to develop new mathematical methods for the principled characterization of temporally dynamic, spatially embedded, and multiscale networked systems, with the goal of predicting system behavior and designing perturbations to effect a specific outcome. A current focal interest of the group lies in network neuroscience. We develop analytic tools to probe the hard-wired pathways and transient communication patterns inside the brain in an effort to identify organizational principles, to develop novel diagnostics of disease, and to design personalized therapeutics for rehabilitation and treatment of brain injury, neurological disease, and psychiatric disorders. (An illustrative sketch of this kind of network analysis follows the listing below.)
  • Item
    Finding Language (and Language Learning) in the Brain
    ( 2018-08-29) Fyshe, Alona
    Understanding a native language is nearly effortless for fluent adults. But learning a new language takes dedication and hard work. In this talk, I will describe an experiment during which adult participants learned a new (artificial) language through a reinforcement learning paradigm while we collected EEG (electroencephalography) data. We found that 1) we could detect a reward positivity (an EEG signal correlated with a participant receiving positive feedback) when participants correctly identified a symbol's meaning, and 2) the reward positivity diminished for subsequent correct trials. Using a machine learning approach, we found that 3) we could detect neural correlates of word meaning as the mapping from native to new language is learned; and 4) the localization of the neural representations is heavily distributed throughout the brain. Together, this is evidence that learning can be detected in the brain using EEG, and that the contents of a newly learned concept can be detected. (A sketch of this kind of decoding analysis follows the listing below.)
  • Item
    Revealing nonlinear computation by analyzing choices
    ( 2017-12-06) Pitkow, Xaq
    The sensory data about most natural task-relevant variables is confounded by task-irrelevant sensory variations, called nuisance variables. To be useful, the sensory signals that encode the relevant variables must be untangled from the nuisance variables through nonlinear recoding transformations before the brain can use or decode them to drive behaviors. The information to be untangled is represented in the cortex by the activity of many neurons, forming a nonlinear population code. Here we provide a new theory about these nonlinear codes and their relationship to nuisance variables. This theory obeys fundamental mathematical limitations on information content that are inherited from the sensory periphery, producing redundant codes when there are many more cortical neurons than sensory neurons. The theory predicts a simple relationship between fluctuating neural activity and behavioral choices if the brain uses its nonlinear population codes optimally. When primates discriminate between rotations of natural images, neural responses in visual cortex follow this predicted pattern. (A simplified simulation sketch of this choice-related prediction follows the listing below.)
  • Item
    Using Artificial-Intelligence-Driven Deep Neural Networks to Uncover Principles of Brain Representation and Organization
    ( 2017-10-11) Yamins, Daniel
    Human behavior is founded on the ability to identify meaningful entities in complex noisy data streams that constantly bombard the senses. For example, in vision, retinal input is transformed into rich object-based scenes; in audition, sound waves are transformed into words and sentences. In this talk, I will describe my work using computational models to help uncover how sensory cortex accomplishes these enormous computational feats. The core observation underlying my work is that optimizing neural networks to solve challenging real-world artificial intelligence (AI) tasks can yield predictive models of the cortical neurons that support these tasks. I will first describe how we leveraged recent advances in AI to train a neural network that approaches human-level performance on a challenging visual object recognition task. Critically, even though this network was not explicitly fit to neural data, it is nonetheless predictive of neural response patterns of neurons in multiple areas of the visual pathway, including higher cortical areas that have long resisted modeling attempts. Intriguingly, an analogous approach turns out to be helpful for studying audition, where we recently found that neural networks optimized for word recognition and speaker identification tasks naturally predict responses in human auditory cortex to a wide spectrum of natural sound stimuli, and help differentiate poorly understood non-primary auditory cortical regions. Together, these findings suggest the beginnings of a general approach to understanding sensory processing in the brain. I'll give an overview of these results, explain how they fit into the historical trajectory of AI and computational neuroscience, and discuss future questions of great interest that may benefit from a similar approach. (A sketch of this kind of encoding-model fit follows below.)
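
To make the network-characterization step in the Bassett abstract concrete, here is a minimal sketch, assuming simulated regional time series in place of real imaging data and using generic networkx routines rather than the group's actual methods:

```python
# A sketch of a simple functional-connectivity pipeline: correlate regional
# signals, keep the strongest edges, and summarize community structure.
# The data, threshold, and parameters below are illustrative placeholders.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
n_regions, n_timepoints = 30, 500
timeseries = rng.standard_normal((n_regions, n_timepoints))  # stand-in for imaging signals

# Functional connectivity: pairwise correlation between regional time series.
fc = np.corrcoef(timeseries)
np.fill_diagonal(fc, 0.0)

# Keep only the strongest positive edges (an arbitrary 10% density choice).
threshold = np.quantile(fc[fc > 0], 0.90)
adjacency = np.where(fc >= threshold, fc, 0.0)

# Build a weighted graph, restrict to its largest connected component,
# and detect communities by modularity maximization.
G = nx.from_numpy_array(adjacency)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
communities = greedy_modularity_communities(G, weight="weight")
print(f"{G.number_of_nodes()} nodes, {G.number_of_edges()} edges, "
      f"{len(communities)} communities")
```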
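
For the decoding result in the Fyshe abstract, here is a minimal sketch of a cross-validated classification analysis, assuming simulated stand-ins for the EEG features and word-meaning labels; the study's actual features, labels, and classifier are not specified in the abstract:

```python
# A sketch of cross-validated decoding: can a classifier recover which meaning
# a trial's EEG features correspond to, better than chance? Features and labels
# here are simulated placeholders, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_features, n_meanings = 200, 64, 4      # e.g. channel-by-time features
X = rng.standard_normal((n_trials, n_features))    # stand-in for EEG trial features
y = rng.integers(0, n_meanings, size=n_trials)     # stand-in for word-meaning labels

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)          # 5-fold cross-validated accuracy
print(f"decoding accuracy: {scores.mean():.2f} (chance ~ {1 / n_meanings:.2f})")
```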
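
For the choice-related prediction in the Pitkow abstract, here is a hedged simulation sketch under strongly simplified assumptions (a linear code and linear readout rather than the nonlinear population code analyzed in the talk):

```python
# A simplified simulation of choice-related activity: a population with
# correlated variability encodes a weak binary stimulus, a linear readout
# produces the "choice", and each neuron's trial-to-trial fluctuation is
# correlated with that choice at fixed stimulus.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_trials = 50, 2000
tuning = rng.standard_normal(n_neurons)            # each neuron's stimulus preference
stimulus = rng.choice([-1.0, 1.0], size=n_trials)  # binary stimulus per trial

# Weak signal plus shared (correlated) and private noise, so the readout errs.
shared = rng.standard_normal(n_trials)
responses = (0.1 * np.outer(stimulus, tuning)
             + 0.5 * np.outer(shared, tuning)
             + rng.standard_normal((n_trials, n_neurons)))

# Linear readout with weights proportional to tuning gives the trial-by-trial choice.
choice = np.sign(responses @ tuning)

# Choice correlation per neuron, computed at fixed stimulus to remove the signal.
fixed = stimulus == 1.0
cc = np.array([np.corrcoef(responses[fixed, k], choice[fixed])[0, 1]
               for k in range(n_neurons)])
print("mean |choice correlation|:", np.round(np.abs(cc).mean(), 3))
```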
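
For the encoding-model approach in the Yamins abstract, here is a minimal sketch, assuming a random placeholder feature matrix in place of activations from a task-optimized network and a simulated neuron; the actual models and fitting procedures are not given in the abstract:

```python
# A sketch of an encoding-model fit: regularized linear regression from model
# features to a neuron's responses, scored on held-out images. The feature
# matrix is a random placeholder for activations from a task-trained network,
# and the "neuron" is simulated.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_images, n_features = 500, 256
model_features = rng.standard_normal((n_images, n_features))  # stand-in for network activations

# Simulated neuron: a noisy linear readout of those features.
true_weights = 0.1 * rng.standard_normal(n_features)
neural_responses = model_features @ true_weights + 0.5 * rng.standard_normal(n_images)

X_train, X_test, y_train, y_test = train_test_split(
    model_features, neural_responses, test_size=0.2, random_state=0)

encoder = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)
print(f"held-out R^2 of the encoding model: {encoder.score(X_test, y_test):.2f}")
```

A held-out score of this kind is the usual figure of merit when comparing candidate feature sets as models of a neural population.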