Series
ML@GT Seminar Series

Series Type
Event Series

Publication Search Results

Now showing 1 - 10 of 52
  • Item
    The Seeing Eye Robot: Developing a Human-Aware Artificial Collaborator
    (2021-10-27) Mirsky, Reuth
    Automated care systems are becoming more tangible than ever: recent breakthroughs in robotics and machine learning can be used to address the need for automated care created by the increasing aging population. However, such systems require overcoming several technological, ethical, and social challenges. One inspirational manifestation of these challenges can be observed in the training of seeing-eye dogs for visually impaired people. A seeing-eye dog is not just trained to obey its owner, but also to “intelligently disobey”: if it is given an unsafe command from its handler, it is taught to disobey it or even insist on a different course of action. This paper proposes the challenge of building a seeing-eye robot, as a thought-provoking use-case that helps identify the challenges to be faced when creating behaviors for robot assistants in general. Through this challenge, this paper delineates the prerequisites that an automated care system will need to have in order to perform intelligent disobedience and to serve as a true agent for its handler.
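    At its core, the "intelligently disobey" behavior is a safety filter between handler commands and executed actions. A minimal Python sketch (all names here, such as is_safe, are hypothetical illustrations, not components from the talk):

        # Hypothetical "intelligent disobedience" filter: obey the handler's
        # command only when it passes a safety check; otherwise insist on a
        # safer course of action rather than blindly complying.
        from dataclasses import dataclass

        @dataclass
        class Command:
            action: str    # e.g. "cross_street"
            context: dict  # perceived state, e.g. {"traffic": "oncoming"}

        def is_safe(cmd: Command) -> bool:
            # Stand-in for perception and risk assessment in a real system.
            return cmd.context.get("traffic") != "oncoming"

        def act(cmd: Command) -> str:
            if is_safe(cmd):
                return cmd.action
            return "wait_and_signal_handler"  # hypothetical safer action

        print(act(Command("cross_street", {"traffic": "oncoming"})))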
  • Item
    Generalized Energy-Based Models
    (2021-10-13) Gretton, Arthur
    Arthur Gretton will describe Generalized Energy-Based Models (GEBMs) for generative modeling. These models combine two trained components: a base distribution (generally an implicit model, as in a Generative Adversarial Network), which can learn the support of data with low intrinsic dimension in a high-dimensional space; and an energy function, to refine the probability mass on the learned support. The energy function and the base jointly constitute the final model, unlike GANs, which retain only the base distribution (the "generator"). Furthermore, unlike classical energy-based models, the GEBM energy is defined even when the support of the model and that of the data do not overlap. Samples from the trained model can be obtained via Langevin diffusion-based methods (MALA, ULA, HMC). Empirically, GEBM samples on image-generation tasks are of better quality than those from the learned generator alone, indicating that, all else being equal, a GEBM will outperform a GAN of the same complexity.
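    Concretely, sampling from such a model can be sketched as Langevin dynamics run in the latent space of the base, with the energy reweighting mass on the learned support. A minimal ULA-style sketch in PyTorch, where generator and energy are assumed pretrained modules (hypothetical placeholders, not the paper's code):

        import torch

        def ula_sample(generator, energy, z0, step=1e-2, n_steps=100):
            # Unadjusted Langevin in latent space:
            # z <- z - step * grad_z E(g(z)) + sqrt(2 * step) * noise
            z = z0.clone().requires_grad_(True)
            for _ in range(n_steps):
                e = energy(generator(z)).sum()
                (grad,) = torch.autograd.grad(e, z)
                with torch.no_grad():
                    z = z - step * grad + (2 * step) ** 0.5 * torch.randn_like(z)
                z.requires_grad_(True)
            return generator(z).detach()  # samples stay on the learned support

        # Usage (shapes illustrative): samples = ula_sample(G, E, torch.randn(16, 128))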
  • Item
    Structured Prediction - Beyond Support Vector Machine and Cross Entropy
    (2021-09-29) Bach, Francis
    Many classification tasks in machine learning lie beyond the classical binary and multi-class classification settings. In those tasks, the output elements are structured objects made of interdependent parts, such as sequences in natural language processing, images in computer vision, permutations in ranking or matching problems, etc. The structured prediction setting has two key properties that make it radically different from multi-class classification: the exponential growth of the size of the output space with the number of its parts, and the cost-sensitive nature of the learning task, as prediction mistakes are not equally costly. In this talk, I will present recent work on the design of loss functions that combine numerical efficiency and statistical consistency (joint work with Alessandro Rudi, Alex Nowak-Vila, Vivien Cabannes).
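    To see what cost-sensitivity means operationally: given an estimated posterior over outputs and a loss matrix, the consistent prediction minimizes expected loss rather than taking the argmax. A toy sketch (illustrative only; it enumerates the output space, which the talk's surrogate losses are designed to avoid):

        import numpy as np

        def decode(posterior, loss):
            # Expected loss of predicting y, for each y: E_{y'~p}[L(y, y')]
            return int(np.argmin(loss @ posterior))

        p = np.array([0.35, 0.30, 0.35])       # estimated p(y | x)
        L = np.array([[0.0, 1.0, 2.0],         # mistakes are not equally
                      [1.0, 0.0, 1.0],         # costly: confusing "distant"
                      [2.0, 1.0, 0.0]])        # outputs costs more
        print(np.argmax(p), decode(p, L))      # argmax picks 0; decoding picks 1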
  • Item
    Towards a Theory of Representation Learning for Reinforcement Learning
    (2021-09-15) Agarwal, Alekh
    Provably sample-efficient reinforcement learning from rich observational inputs remains a key open challenge. While impressive recent advances allow sample-efficient exploration and learning with linear models, the handling of more general non-linear models remains limited. In this talk, we study reinforcement learning using linear models where the features underlying the linear model are learned, rather than specified a priori. While the broader question of representation learning for useful embeddings of complex data has seen tremendous progress, doing so in reinforcement learning presents additional challenges: good representations cannot be discovered without adequate exploration, but effective exploration is challenging in the absence of good representations. Concretely, we study this question in the context of low-rank MDPs [Jiang et al., 2017, Jin et al., 2019, Yang and Wang, 2019], where the features underlying a state-action pair are not assumed to be known, unlike most prior works. We develop two styles of methods, model-based and model-free. For the model-based method, we learn an approximate factorization of the transition model, plan within the model to obtain a fresh exploratory policy, and then update our factorization with additional data. In the model-free technique, we learn features so that quantities such as value functions at subsequent states can be predicted linearly in those features. In both approaches, we address the intricate coupling between exploration and representation learning, and provide sample complexity guarantees. More details can be found at https://arxiv.org/abs/2006.10814 and https://arxiv.org/abs/2102.07035. [Based on joint work with Jinglin Chen, Nan Jiang, Sham Kakade, Akshay Krishnamurthy, Aditya Modi and Wen Sun]
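    The low-rank assumption can be made concrete in a few lines: the transition kernel factorizes as P(s'|s,a) = <phi(s,a), mu(s')>, so the conditional expectation of any value function is exactly linear in phi(s,a), which is the property the model-free approach exploits. A toy sketch (a hypothetical instance, not the cited papers' code):

        import numpy as np

        rng = np.random.default_rng(0)
        d, S, A = 3, 50, 4                            # d << S
        phi = rng.dirichlet(np.ones(d), size=(S, A))  # (S, A, d) mixture weights
        M = rng.dirichlet(np.ones(S), size=d)         # (d, S): rows are mu_k(.)
        P = phi @ M                                   # valid kernel P(s' | s, a)

        V = rng.standard_normal(S)            # any function of the next state
        w = M @ V                             # d-dimensional weight vector
        assert np.allclose(P @ V, phi @ w)    # E[V(s') | s, a] is linear in phi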
  • Item
    Learning Locomotion: From Simulation to Real World
    (2021-09-01) Tan, Jie
    Deep Reinforcement Learning (DRL) holds the promise of designing complex robotic controllers automatically. In this talk, I will discuss two different approaches to apply deep reinforcement learning to learn locomotion controllers for legged robots. The first approach is through sim-to-real transfer. Due to safety concerns and limited data, most of the training is conducted in simulation. However, controllers learned in simulation usually perform poorly on real robots. I will present a set of techniques to overcome this sim-to-real gap. The second approach is to directly learn in the real world. Due to the complexity and diversity of the real environments, building a simulation that can faithfully model the real world is not always feasible. Having the ability to learn on the fly and adapt quickly in real-world scenarios is crucial for large-scale deployment of robots. I will discuss the challenges of training legged robots in the real world and various ways to address these challenges.
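    One standard ingredient in sim-to-real transfer is domain randomization (the talk covers a broader set of techniques): resample the simulator's physical parameters every episode so the learned controller is robust to the unavoidable mismatch with real-world dynamics. A toy sketch with stand-in environment and update:

        import random

        def randomized_params():
            # Resampled each episode so the policy cannot overfit to one
            # (inevitably imperfect) model of reality.
            return {
                "ground_friction": random.uniform(0.5, 1.25),
                "body_mass_scale": random.uniform(0.8, 1.2),
                "motor_latency_s": random.uniform(0.0, 0.04),
            }

        def train(n_episodes=1000):
            gain = 1.0                      # stand-in for neural policy weights
            for _ in range(n_episodes):
                p = randomized_params()
                # Stand-in for a DRL update under the sampled dynamics; a real
                # pipeline would roll the policy out in the randomized simulator.
                gain -= 0.01 * (gain - p["ground_friction"])
            return gain

        print(train())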
  • Item
    Generative models based on point processes for financial time series simulation
    (2021-04-07) Wei, Qi
    In this seminar, I will talk about generative models based on point processes for financial time series simulation. Specifically, we focus on the recently developed state-dependent Hawkes (sdHawkes) process for modeling limit order book dynamics [Morariu-Patrichi and Pakkanen, 2018]. The sdHawkes model consists of a Hawkes process and a state process with Markov transitions. The Hawkes and state processes are fully coupled, which enables the point process to capture self- and cross-excitation as well as the interaction between events and states. We will go through the model formulation of sdHawkes, its simulation, its maximum likelihood estimation, and, more importantly, its application to high-frequency data modeling, where it captures the interactions between the order flow and the current state of the market.
    Reference: Morariu-Patrichi, Maxime, and Mikko S. Pakkanen. "State-dependent Hawkes processes and their application to limit order book modelling." arXiv preprint arXiv:1809.08060 (2018).
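    The coupling can be illustrated with a stripped-down univariate caricature: an exponential-kernel Hawkes intensity whose baseline depends on a discrete state, simulated by Ogata's thinning method. (A toy sketch only; the sdHawkes model and its estimation are richer than this.)

        import math, random

        def intensity(t, events, state, mu=(0.5, 1.5), alpha=0.8, beta=2.0):
            # State-dependent baseline plus self-excitation from past events.
            return mu[state] + sum(alpha * math.exp(-beta * (t - s)) for s in events)

        def simulate(horizon=10.0):
            t, state, events = 0.0, 0, []
            while True:
                # Valid thinning bound: with a fixed state, the intensity
                # only decays between events.
                lam_bar = intensity(t, events, state)
                t += random.expovariate(lam_bar)
                if t >= horizon:
                    return events
                if random.random() <= intensity(t, events, state) / lam_bar:
                    events.append(t)
                    state = 1 - state  # events and state are fully coupled

        print(len(simulate()))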
  • Item
    You can lead a horse to water...: Representing vs. Using Features in Neural NLP
    (2021-03-24) Pavlick, Ellie
    A wave of recent work has sought to understand how pretrained language models work. Such analyses have resulted in two seemingly contradictory sets of results. On one hand, work based on "probing classifiers" generally suggests that SOTA language models contain rich information about linguistic structure (e.g., parts of speech, syntax, semantic roles). On the other hand, work which measures performance on linguistic "challenge sets" shows that models consistently fail to use this information when making predictions. In this talk, I will present a series of results that attempt to bridge this gap. Our recent experiments suggest that the disconnect is not due to catastrophic forgetting nor is it (entirely) explained by insufficient training data. Rather, it is best explained in terms of how "accessible" features are to the model following pretraining, where "accessibility" can be quantified using an information-theoretic interpretation of probing classifiers.
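    The probing-classifier methodology referenced above is simple to state in code: freeze the model's representations and measure how well a lightweight classifier can read a linguistic feature out of them. A sketch with random vectors standing in for pretrained activations (hypothetical data; accessibility analyses additionally account for probe complexity):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        reps = rng.standard_normal((1000, 768))   # frozen "layer activations"
        tags = rng.integers(0, 5, size=1000)      # e.g. part-of-speech labels

        probe = LogisticRegression(max_iter=1000).fit(reps[:800], tags[:800])
        # ~0.2 (chance) for random features; high accuracy on real pretrained
        # features is the evidence that the information is represented at all.
        print("probe accuracy:", probe.score(reps[800:], tags[800:]))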
  • Item
    Compressed computation of good policies in large MDPs
    (2021-03-10) Szepesvari, Csaba
    Markov decision processes (MDPs) are a minimalist framework for capturing tasks that require long-term planning and feedback under noisy dynamics. Yet this minimalism comes at a price: MDPs lack structure, and so planning and learning in MDPs with the typically enormous state and action spaces is strongly intractable; no algorithm can avoid Bellman's curse of dimensionality in the worst case. However, as recognized already by Bellman and his co-workers at the advent of our field, for many problems of practical interest, the optimal value function of an MDP is well approximated using just a few basis functions, such as those standardly used in numerical calculations. As knowing the optimal value function is essentially equivalent to knowing how to act optimally, one hopes that this observation can be turned into efficient algorithms, as there are only a few coefficients to compute. If this is possible, we can think of the resulting algorithms as performing computations with a compressed form of the value functions. While many such algorithms have been proposed as early as the 1960s, until recently not much has been known about whether, and when, these compressed computations are possible. In this talk, I will discuss a few recent results (some positive, some negative) concerning these compressed computations and conclude with some open problems. As we shall see, still today, there are more open questions than satisfactorily answered ones.
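    To make "compressed computation" concrete: with d basis functions collected in a matrix Phi, evaluating a fixed policy reduces to solving for d coefficients via the projected Bellman equation, instead of computing the value at every state. A toy LSTD-style sketch (illustrative only; the talk's results concern when such computations can succeed):

        import numpy as np

        rng = np.random.default_rng(0)
        S, d, gamma = 200, 5, 0.9
        P = rng.random((S, S)); P /= P.sum(axis=1, keepdims=True)  # fixed policy
        r = rng.random(S)
        Phi = rng.standard_normal((S, d))        # a few basis functions, d << S

        # Projected Bellman equation: Phi^T (Phi - gamma P Phi) theta = Phi^T r
        theta = np.linalg.solve(Phi.T @ (Phi - gamma * P @ Phi), Phi.T @ r)
        V_hat = Phi @ theta                      # d numbers stand in for V
        V = np.linalg.solve(np.eye(S) - gamma * P, r)
        # Large for random features; the premise is that a good basis makes
        # this small while the computation stays d-dimensional.
        print("max error:", np.max(np.abs(V_hat - V)))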
  • Item
    Learning Tree Models in Noise: Exact Asymptotics and Robust Algorithms
    (2021-02-10) Tan, Vincent Y. F.
    We consider the classical problem of learning tree-structured graphical models, with the twist that the observations are corrupted by independent noise. For the case in which the noise is identically distributed, we derive the exact asymptotics using probabilistic tools from the theory of strong large deviations. Our results strictly improve those of Bresler and Karzand (2020) and Nikolakakis et al. (2019), and demonstrate close agreement with experimental results for sample sizes as small as the hundreds. When the noise is non-identically distributed, Katiyar et al. (2020) showed that although the exact tree structure cannot be recovered, one can recover a "partial" tree structure, that is, one belonging to the equivalence class containing the true tree. We propose Symmetrized Geometric Averaging (SGA), a statistically robust algorithm for partial tree recovery. We provide error exponent analyses and extensive numerical results on a variety of trees to show that the sample complexity of SGA is significantly better than that of the algorithm of Katiyar et al. (2020). SGA can be readily extended to Gaussian models and is shown via numerical experiments to be similarly superior.
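    For context, the classical noiseless baseline behind results of this type is the Chow-Liu algorithm: estimate pairwise mutual informations and return a maximum-weight spanning tree. A minimal binary-variable sketch (the talk's SGA algorithm, which handles noise, is not shown here):

        import numpy as np
        import networkx as nx

        def mutual_info(x, y):
            joint = np.histogram2d(x, y, bins=2)[0] / len(x)
            prod = joint.sum(1, keepdims=True) @ joint.sum(0, keepdims=True)
            m = joint > 0
            return float((joint[m] * np.log(joint[m] / prod[m])).sum())

        def chow_liu(samples):                 # samples: (n, p) binary array
            p = samples.shape[1]
            G = nx.Graph()
            for i in range(p):
                for j in range(i + 1, p):
                    G.add_edge(i, j, weight=mutual_info(samples[:, i], samples[:, j]))
            return sorted(nx.maximum_spanning_tree(G).edges())

        rng = np.random.default_rng(0)          # Markov chain 0 - 1 - 2
        x0 = rng.integers(0, 2, 5000)
        x1 = x0 ^ (rng.random(5000) < 0.2)      # noisy copy of x0
        x2 = x1 ^ (rng.random(5000) < 0.2)      # noisy copy of x1
        print(chow_liu(np.column_stack([x0, x1, x2])))  # expect [(0, 1), (1, 2)]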
  • Item
    Interpretable latent space and inverse problem in deep generative models
    (2021-01-27) Zhou, Bolei
    Recent progress in deep generative models such as Generative Adversarial Networks (GANs) has enabled the synthesis of photo-realistic images, such as faces and scenes. However, what has been learned in the deep generative representation, and why diverse realistic images can be synthesized, remain much less explored. In this talk, I will present our recent line of work from GenForce (https://genforce.github.io/) on interpreting and utilizing the latent space of GANs. Identifying these semantics not only allows us to better understand the inner workings of deep generative models but also facilitates versatile image editing. I will also briefly talk about the inverse problem (how to invert a given image into its latent code) and the fairness of the generative model.
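    The latent-space manipulation described above can be sketched in a few lines: identify a direction in latent space that correlates with a semantic attribute, then move codes along it before re-rendering (InterFaceGAN-style; the actual GenForce methods fit a classifier boundary rather than the class-mean difference used in this toy version):

        import numpy as np

        def semantic_direction(latents, labels):
            # Direction separating codes with/without an attribute (e.g. "age").
            d = latents[labels == 1].mean(0) - latents[labels == 0].mean(0)
            return d / np.linalg.norm(d)

        def edit(z, direction, strength=3.0):
            return z + strength * direction  # re-render the edited code with G

        rng = np.random.default_rng(0)
        codes = rng.standard_normal((100, 512))   # stand-in latent codes
        labels = rng.integers(0, 2, 100)          # stand-in attribute labels
        z_edited = edit(rng.standard_normal(512), semantic_direction(codes, labels))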
    Recent progress in deep generative models such as Generative Adversarial Networks (GANs) has enabled synthesizing photo-realistic images, such as faces and scenes. However, it remains much less explored on what has been learned in the deep generative representation and why diverse realistic images can be synthesized. In this talk, I will present our recent series work from GenForce (https://genforce.github.io/) on interpreting and utilizing latent space of the GANs. Identifying these semantics not only allows us to better understand the inner working of the deep generative models but also facilitates versatile image editings. I will also briefly talk about the inverse problem (how to invert a given image into the latent code) and the fairness of the generative model.