Title:
Thompson Sampling for learning in online decision making

Author(s)
Agrawal, Shipra
Abstract
Modern online marketplaces feed themselves. They rely on historical data to optimize content and user interactions, and the data generated from these interactions is fed back into the system and used to optimize future interactions. As this cycle continues, good performance requires algorithms capable of learning actively through sequential interactions, systematically experimenting to improve future performance, and balancing this experimentation with the desire to make decisions with the most immediate benefit. Thompson Sampling is a surprisingly simple and flexible Bayesian heuristic for handling this exploration-exploitation tradeoff in online decision problems. While the basic algorithmic technique can be traced back to 1933, the last five years have seen unprecedented growth in theoretical understanding of, as well as commercial interest in, this method. In this talk, I will discuss our work on the design and analysis of Thompson Sampling based algorithms for several classes of multi-armed bandit, online assortment selection, and reinforcement learning problems. We demonstrate that natural versions of the Thompson Sampling heuristic achieve near-optimal theoretical performance bounds for these problems, along with attractive empirical performance. This talk is based on joint work with Vashist Avadhanula, Navin Goyal, Vineet Goyal, Randy Jia, and Assaf Zeevi.
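To make the exploration-exploitation idea in the abstract concrete, here is a minimal, illustrative sketch (not the speaker's code) of Thompson Sampling for a Bernoulli multi-armed bandit: each arm keeps a Beta posterior over its unknown success probability, each round we sample from every posterior and pull the arm whose sample is highest, then update that arm's posterior with the observed reward. The arm probabilities, round count, and function name below are hypothetical choices for the example.

```python
import random

def thompson_sampling(true_probs, n_rounds, seed=0):
    """Beta-Bernoulli Thompson Sampling; returns pull counts per arm."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [1] * n_arms  # Beta(1, 1) uniform prior for each arm
    failures = [1] * n_arms
    pulls = [0] * n_arms
    for _ in range(n_rounds):
        # Sample a plausible success rate for each arm from its posterior.
        samples = [rng.betavariate(successes[i], failures[i])
                   for i in range(n_arms)]
        # Exploit the arm that looks best under this random sample;
        # uncertain arms still win occasionally, which drives exploration.
        arm = max(range(n_arms), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

# Hypothetical 3-arm instance: the algorithm should concentrate
# its pulls on the best arm (true success probability 0.8).
pulls = thompson_sampling([0.2, 0.5, 0.8], n_rounds=2000)
```

Over time the posteriors of clearly inferior arms rarely produce the highest sample, so experimentation on them tapers off automatically, which is the self-balancing behavior the abstract refers to.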
Date Issued
2019-09-23
Extent
56:16 minutes
Resource Type
Moving Image
Resource Subtype
Lecture