Person:
Chau, Duen Horng


Publication Search Results

  • Item
    ML@GT Lab presents LAB LIGHTNING TALKS 2020
    (2020-12-04) AlRegib, Ghassan ; Chau, Duen Horng ; Chava, Sudheer ; Cohen, Morris B. ; Davenport, Mark A. ; Desai, Deven ; Dovrolis, Constantine ; Essa, Irfan ; Gupta, Swati ; Huo, Xiaoming ; Kira, Zsolt ; Li, Jing ; Maguluri, Siva Theja ; Pananjady, Ashwin ; Prakash, B. Aditya ; Riedl, Mark O. ; Romberg, Justin ; Xie, Yao ; Zhang, Xiuwei
    Labs affiliated with the Machine Learning Center at Georgia Tech (ML@GT) will have the opportunity to share their research interests, work, and the unique aspects of their labs, in three minutes or less, with interested graduate students, Georgia Tech faculty, and members of the public. Participating labs include: Yao’s Group – Yao Xie, H. Milton Stewart School of Industrial and Systems Engineering (ISyE); Huo Lab – Xiaoming Huo, ISyE; LF Radio Lab – Morris Cohen, School of Electrical and Computer Engineering (ECE); Polo Club of Data Science – Polo Chau, School of Computational Science and Engineering (CSE); Network Science – Constantine Dovrolis, School of Computer Science; CLAWS – Srijan Kumar, CSE; Control, Optimization, Algorithms, and Randomness (COAR) Lab – Siva Theja Maguluri, ISyE; Entertainment Intelligence Lab and Human Centered AI Lab – Mark Riedl, School of Interactive Computing (IC); Social and Language Technologies (SALT) Lab – Diyi Yang, IC; FATHOM Research Group – Swati Gupta, ISyE; Zhang's CompBio Lab – Xiuwei Zhang, CSE; Statistical Machine Learning – Ashwin Pananjady, ISyE and ECE; AdityaLab – B. Aditya Prakash, CSE; OLIVES – Ghassan AlRegib, ECE; Robotics Perception and Learning (RIPL) – Zsolt Kira, IC; Eye-Team – Irfan Essa, IC; and Mark Davenport, ECE.
  • Item
    Towards Secure and Interpretable AI: Scalable Methods, Interactive Visualizations, and Practical Tools
    (Georgia Institute of Technology, 2019-08-29) Chau, Duen Horng
    We have witnessed tremendous growth in Artificial Intelligence (AI) and machine learning (ML) in recent years. However, research shows that AI and ML models are often vulnerable to adversarial attacks, and their predictions can be difficult to understand, evaluate, and ultimately act upon. Discovering real-world vulnerabilities of deep neural networks, and countermeasures that mitigate such threats, has become essential to the successful deployment of AI in security settings. We present our joint work with Intel, which includes the first targeted physical adversarial attack (ShapeShifter) that fools state-of-the-art object detectors; a fast defense (SHIELD) that removes digital adversarial noise through stochastic data compression; and interactive systems (ADAGIO and MLsploit) that further democratize the study of adversarial machine learning and facilitate real-time experimentation for deep learning practitioners. We also present how scalable interactive visualization can amplify people’s ability to understand and interact with large-scale data and complex models. We sample from projects where interactive visualization has provided key leaps of insight: increased model interpretability (Gamut, with Microsoft Research); explorability of models trained on millions of instances (ActiVis, deployed at Facebook); greater accessibility of state-of-the-art AI for non-experts (GAN Lab, open-sourced with Google Brain, which went viral); and our latest work, Summit, an interactive system that scalably summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions. We conclude by highlighting the next visual analytics research frontiers in AI.
  • Item
    Visual Data Analytics: A Short Tutorial
    (2019-08-08) Chau, Duen Horng
  • Item
    Energy and Data Science Academia Talks
    (Georgia Institute of Technology, 2016-09-06) Chau, Duen Horng ; Qiu, Judy
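
The abstract above describes SHIELD as a defense that removes digital adversarial noise through stochastic data compression. As a rough illustration of that general idea only (not the actual SHIELD implementation, which applies JPEG compression at randomized quality levels), the sketch below quantizes each image patch to a randomly chosen bit depth; the function name and parameters are hypothetical, chosen for this example.

```python
import numpy as np

def stochastic_quantize(image, patch=8, bit_depths=(2, 3, 4, 5), seed=None):
    """Toy preprocessing defense in the spirit of compression-based defenses:
    each patch of the image is quantized to a randomly chosen bit depth,
    which tends to destroy small adversarial perturbations. Illustrative
    sketch only; the real SHIELD uses randomized JPEG compression."""
    rng = np.random.default_rng(seed)
    out = image.astype(np.float64).copy()
    h, w = image.shape[:2]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            bits = rng.choice(bit_depths)      # random bit depth per patch
            levels = 2 ** bits - 1             # number of quantization steps
            block = out[y:y + patch, x:x + patch]
            # snap pixel values in [0, 255] to the coarse quantization grid
            out[y:y + patch, x:x + patch] = (
                np.round(block / 255.0 * levels) / levels * 255.0
            )
    return out.astype(image.dtype)
```

Because the transform is randomized, an attacker cannot precisely anticipate the preprocessing when crafting a perturbation, which is the intuition behind stochastic defenses of this kind.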