Person:
Hoffman, Judy

Publication Search Results

Now showing 1 - 3 of 3
  • Item
    Understanding and Mitigating Bias in Vision Systems
    (Georgia Institute of Technology, 2021-10-06) Hoffman, Judy
    As visual recognition models are developed across diverse applications, we need the ability to reliably deploy our systems in a variety of environments. At the same time, visual models tend to be trained and evaluated on a static set of curated and annotated data that represents only a subset of the world. In this talk, I will cover techniques for transferring information between different visual environments and across different semantic tasks, thereby enabling recognition models to generalize to previously unseen worlds, such as from simulated to real-world driving imagery. Finally, I'll touch on the pervasiveness of dataset bias and how this bias can adversely affect underrepresented subpopulations. (An illustrative sketch of this kind of cross-environment transfer appears after this list.)
  • Item
    A Discussion on Fairness in Machine Learning with Georgia Tech Faculty
    (2019-11-06) Cummings, Rachel ; Desai, Deven ; Gupta, Swati ; Hoffman, Judy
    Fairness in machine learning and artificial intelligence is a hot and important topic in tech today. Join Georgia Tech faculty members Judy Hoffman, Rachel Cummings, Deven Desai, and Swati Gupta for a panel discussion on their work regarding fairness and their motivations behind it. Sponsored by the Machine Learning Center at Georgia Tech.
  • Item
    Weakly Supervised Learning of Object Segmentations from Web-Scale Video
    (Georgia Institute of Technology, 2012-10) Hartmann, Glenn ; Grundmann, Matthias ; Hoffman, Judy ; Tsai, David ; Kwatra, Vivek ; Madani, Omid ; Vijayanarasimhan, Sudheendra ; Essa, Irfan ; Rehg, James M. ; Sukthankar, Rahul
    We propose to learn pixel-level segmentations of objects from weakly labeled (tagged) internet videos. Specifically, given a large collection of raw YouTube content, along with potentially noisy tags, our goal is to automatically generate spatio-temporal masks for each object, such as "dog", without employing any pre-trained object detectors. We formulate this problem as learning weakly supervised classifiers for a set of independent spatio-temporal segments. The object seeds obtained using segment-level classifiers are further refined using graph cuts to generate high-precision object masks. Our results, obtained by training on a dataset of 20,000 YouTube videos weakly tagged into 15 classes, demonstrate automatic extraction of pixel-level object masks. Evaluated against a ground-truthed subset of 50,000 frames with pixel-level annotations, we confirm that our proposed methods can learn good object masks just by watching YouTube. (An illustrative sketch of the segment-level weak supervision step appears after this list.)
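
For illustration only: the talk abstract above ("Understanding and Mitigating Bias in Vision Systems") mentions transferring recognition models between visual environments, such as from simulated to real driving imagery. The sketch below shows one common, generic way to approach this, domain-adversarial feature alignment with a gradient-reversal layer; it is not the speaker's specific method, and all architectures, sizes, and names are assumptions made for the example.

```python
# Minimal domain-adversarial alignment sketch (illustrative, not the talk's method).
# A feature extractor is trained so a domain classifier cannot distinguish
# source (e.g., simulated) from target (e.g., real) features, while a label
# head is trained on labeled source data only.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lam on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

# Toy networks; sizes are arbitrary assumptions for the example.
features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
label_head = nn.Linear(256, 10)    # task classifier, trained on labeled source data
domain_head = nn.Linear(256, 2)    # domain classifier: source vs. target
params = list(features.parameters()) + list(label_head.parameters()) + list(domain_head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(src_x, src_y, tgt_x, lam=0.1):
    """One update: supervised loss on source plus adversarial domain-confusion loss."""
    f_src, f_tgt = features(src_x), features(tgt_x)
    task_loss = ce(label_head(f_src), src_y)
    dom_feats = torch.cat([GradReverse.apply(f_src, lam), GradReverse.apply(f_tgt, lam)])
    dom_labels = torch.cat([torch.zeros(len(src_x), dtype=torch.long),
                            torch.ones(len(tgt_x), dtype=torch.long)])
    dom_loss = ce(domain_head(dom_feats), dom_labels)
    opt.zero_grad()
    (task_loss + dom_loss).backward()
    opt.step()
    return task_loss.item(), dom_loss.item()

# Usage with random stand-ins for simulated (source) and real (target) images.
src_x = torch.randn(8, 3, 32, 32)
src_y = torch.randint(0, 10, (8,))
tgt_x = torch.randn(8, 3, 32, 32)
print(train_step(src_x, src_y, tgt_x))
```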
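
Also for illustration: the abstract of "Weakly Supervised Learning of Object Segmentations from Web-Scale Video" describes training weakly supervised classifiers over spatio-temporal segments and keeping confident segments as object seeds. The sketch below mimics that weak-labeling step with a per-class logistic regression over toy segment features; feature extraction and the graph-cut refinement stage are omitted, and every name here (segment_features, video_tags, and so on) is hypothetical rather than taken from the paper.

```python
# Sketch of segment-level weak supervision (assumed setup, not the paper's code):
# video-level tags are propagated to spatio-temporal segments, a per-class
# classifier is trained on segment features, and high-scoring segments become
# object "seeds" that a later graph-cut stage would refine into pixel masks.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_segment_classifier(segment_features, segment_video_ids, video_tags, target_class):
    """Train a weakly supervised classifier for one class (e.g., "dog").

    segment_features : (n_segments, d) array of per-segment descriptors
    segment_video_ids: (n_segments,) array mapping each segment to its video
    video_tags       : dict video_id -> set of (noisy) tags
    target_class     : tag to learn, e.g., "dog"
    """
    # Weak labels: a segment is a positive candidate if its video carries the tag.
    y = np.array([target_class in video_tags[v] for v in segment_video_ids], dtype=int)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(segment_features, y)
    return clf

def select_object_seeds(clf, segment_features, top_k=10):
    """Score all segments and keep the most confident ones as object seeds."""
    scores = clf.predict_proba(segment_features)[:, 1]
    return np.argsort(scores)[::-1][:top_k]

# Toy usage with random segment descriptors from 20 hypothetical videos.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 16))
vids = rng.integers(0, 20, size=200)
tags = {v: ({"dog"} if v % 2 == 0 else {"cat"}) for v in range(20)}
clf = train_segment_classifier(feats, vids, tags, "dog")
print(select_object_seeds(clf, feats, top_k=5))
```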