Title:
Evaluating visual conversational agents via cooperative human-AI games

dc.contributor.advisor Parikh, Devi
dc.contributor.advisor Batra, Dhruv
dc.contributor.advisor Lee, Stefan
dc.contributor.author Chattopadhyay, Prithvijit
dc.contributor.department Computer Science
dc.date.accessioned 2019-05-29T14:04:43Z
dc.date.available 2019-05-29T14:04:43Z
dc.date.created 2019-05
dc.date.issued 2019-04-26
dc.date.submitted May 2019
dc.date.updated 2019-05-29T14:04:44Z
dc.description.abstract As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI not just in isolation but also in terms of how it translates into helping humans perform tasks, i.e., the performance of human-AI teams. This thesis introduces a cooperative game, GuessWhich, to measure human-AI team performance in the specific context where the AI is a visual conversational agent. GuessWhich involves live interaction between a human and the AI. The AI, which we call Alice, is shown an image that the human cannot see. After a brief description of the image, the human questions Alice about this secret image in order to identify it within a fixed pool of images. We measure the performance of the human-Alice team by the number of guesses the human needs to correctly identify the secret image after a fixed number of dialog rounds with Alice. We compare the performance of human-Alice teams for two versions of Alice. Our human studies reveal a counter-intuitive trend: although the AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate into improved human-AI performance. Because this implies a mismatch between benchmarking AI in isolation and benchmarking it in the context of human-AI teams, this thesis further motivates the need to additionally evaluate AI in the latter setting, so that progress in AI can be effectively leveraged for efficient human-AI teams.
dc.description.degree M.S.
dc.format.mimetype application/pdf
dc.identifier.uri http://hdl.handle.net/1853/61308
dc.language.iso en_US
dc.publisher Georgia Institute of Technology
dc.subject Visual conversational agents
dc.subject Visual dialog
dc.subject Human-AI teams
dc.subject Reinforcement learning
dc.subject Machine learning
dc.subject Computer vision
dc.subject Artificial intelligence
dc.title Evaluating visual conversational agents via cooperative human-AI games
dc.type Text
dc.type.genre Thesis
dspace.entity.type Publication
local.contributor.advisor Parikh, Devi
local.contributor.advisor Batra, Dhruv
local.contributor.corporatename College of Computing
local.contributor.corporatename School of Computer Science
relation.isAdvisorOfPublication 2b8bc15b-448f-472b-8992-ca9862368cad
relation.isAdvisorOfPublication bbee09e1-a4fa-4d99-9dfd-b0605fea0f11
relation.isOrgUnitOfPublication c8892b3c-8db6-4b7b-a33a-1b67f7db2021
relation.isOrgUnitOfPublication 6b42174a-e0e1-40e3-a581-47bed0470a1e
thesis.degree.level Masters
Files
Original bundle
Name: CHATTOPADHYAY-THESIS-2019.pdf
Size: 7.78 MB
Format: Adobe Portable Document Format
License bundle
Name: LICENSE.txt
Size: 3.88 KB
Format: Plain Text