Organizational Unit:
GVU Center


Publication Search Results

Now showing 1 - 5 of 5
  • Item
    Computational Models of Human-Like Skill and Concept Formation
    (Georgia Institute of Technology, 2023-04-13) MacLellan, Christopher J.
    The AI community has made significant strides in developing artificial systems with human-level proficiency across various tasks. However, the learning processes in most of these systems differ vastly from human learning and are often substantially less efficient and flexible. For instance, training large language models demands massive amounts of data and power, and updating them with new information remains challenging. In contrast, humans employ highly efficient incremental learning processes to continually update their knowledge, enabling them to acquire new knowledge from minimal examples and without overwriting prior learning. In this talk, I will discuss some of the key learning capabilities humans exhibit and present three research vignettes from my lab that explore the development of computational systems with these capabilities. The first two vignettes explore computational models of skill learning from worked examples, correctness feedback, and verbal instruction. The third vignette investigates computational models of concept formation from natural language corpora. In conclusion, I will discuss future research directions and a broader vision for how cognitive science and cognitive systems research can lead to new AI advancements.
  • Item
    Visualization of Exception Handling Constructs to Support Program Understanding
    (Georgia Institute of Technology, 2009) Shah, Hina; Görg, Carsten; Harrold, Mary Jean
    This paper presents a new visualization technique for supporting the understanding of exception-handling constructs in Java programs. To understand the requirements for such a visualization, we surveyed a group of software developers and used the results of that survey to guide the creation of the visualizations. The technique presents the exception-handling information using three views: the quantitative view, the flow view, and the contextual view. The quantitative view provides a high-level overview of the throw-catch interactions in the program, along with their relative numbers, at the package, class, and method levels. The flow view shows the type-throw-catch interactions, illustrating which exception types reach particular throw statements, which catch statements handle particular throw statements, and which throw statements are not caught in the program. The contextual view shows, for particular type-throw-catch interactions, the packages, classes, and methods that contribute to that exception-handling construct. We implemented our technique in an Eclipse plugin called EnHanCe and conducted a usability and utility study with participants in industry.
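As a rough illustration of the kind of aggregation a quantitative view like this performs, the sketch below counts throw-catch interactions at the package, class, and method levels. The record format, names, and sample data are invented for illustration only; they are not EnHanCe's data model.

```python
# Toy sketch of the aggregation behind a "quantitative view" of exception
# handling: counting throw-catch interactions at package, class, and method
# granularity. Record format and sample data are illustrative placeholders.
from collections import Counter

# Each record: (throwing method, exception type, catching method),
# with methods written as package.Class.method.
interactions = [
    ("com.app.io.Reader.read", "IOException", "com.app.ui.Main.run"),
    ("com.app.io.Reader.read", "IOException", "com.app.io.Retry.attempt"),
    ("com.app.net.Client.send", "SocketException", "com.app.ui.Main.run"),
]

def rollup(qualified_name: str, level: str) -> str:
    """Truncate a package.Class.method name to the requested granularity."""
    parts = qualified_name.split(".")
    return {"package": ".".join(parts[:-2]),
            "class": ".".join(parts[:-1]),
            "method": qualified_name}[level]

for level in ("package", "class", "method"):
    counts = Counter((rollup(src, level), rollup(dst, level))
                     for src, _, dst in interactions)
    print(level, dict(counts))
```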
  • Item
    Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing
    (Georgia Institute of Technology, 2005) Dellaert, Frank
    Solving the SLAM problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. We investigate smoothing approaches as a viable alternative to extended Kalman filter-based solutions to the problem. In particular, we look at approaches that factorize either the associated information matrix or the measurement matrix into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact, can be used in either batch or incremental mode, are better equipped to deal with non-linear process and measurement models, and yield the entire robot trajectory at lower cost. In addition, column ordering heuristics automatically exploit, in an indirect but dramatic way, the locality inherent in the geographic nature of the SLAM problem. In this paper, we present the theory underlying these methods, an interpretation of factorization in terms of the graphical model associated with the SLAM problem, and simulation results that underscore the potential of these methods for use in practice.
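The central idea above, factoring the linearized problem into square-root (triangular) form instead of building and inverting the information matrix, can be sketched in a few lines. The sketch below is a toy illustration using a QR factorization; the matrix sizes, right-hand side, and names are placeholders, not the paper's formulation or variable ordering.

```python
# Minimal sketch of square-root information smoothing: solve the linearized
# SLAM least-squares problem ||A x - b||^2 by factoring the measurement
# Jacobian A into triangular (square-root) form via QR. Sizes and data are
# illustrative only.
import numpy as np

def solve_linearized_slam(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Return x minimizing ||A x - b||^2 using a QR factorization of A.

    The triangular factor R is a square root of the information matrix,
    since A^T A = R^T R; back-substitution on R recovers the full update
    without ever forming A^T A explicitly.
    """
    Q, R = np.linalg.qr(A)          # A = Q R, with R upper triangular
    d = Q.T @ b                     # rotate the right-hand side
    return np.linalg.solve(R, d)    # back-substitution on the triangular R

# Toy example: 3 unknowns (e.g. poses/landmarks), 5 linearized measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
x = solve_linearized_slam(A, b)
# Same solution as the normal equations (A^T A) x = A^T b, but better conditioned.
assert np.allclose(A.T @ A @ x, A.T @ b)
```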
  • Item
    Dirichlet Process based Bayesian Partition Models for Robot Topological Mapping
    (Georgia Institute of Technology, 2004) Ranganathan, Ananth; Dellaert, Frank
    Robotic mapping involves finding a solution to the correspondence problem. A general-purpose solution to this problem remains unavailable due to the combinatorial nature of the state space. We present a framework for computing the posterior distribution over the space of topological maps that solves the correspondence problem in the context of topological mapping. Since exact inference in this space is intractable, we present two sampling algorithms that compute sample-based representations of the posterior. Both algorithms are built on a Bayesian product partition model derived from the mixture of Dirichlet processes model. Robot experiments demonstrate the applicability of the algorithms.
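For intuition about the partition prior underlying such a model, the sketch below draws a random partition from the Chinese restaurant process, the distribution over partitions induced by a Dirichlet process; in topological mapping, the items would be measurements and the clusters landmarks or places. The concentration parameter and the plain prior draw are illustrative assumptions, not the paper's posterior sampling algorithms.

```python
# Illustrative draw of a partition from the Chinese restaurant process (CRP),
# the partition distribution induced by a Dirichlet process. This is a toy
# prior sample, not the posterior samplers described in the paper.
import random

def sample_crp_partition(n_items: int, alpha: float, seed: int = 0) -> list[int]:
    """Draw a random partition of n_items under a CRP with concentration alpha."""
    rng = random.Random(seed)
    assignments: list[int] = []
    cluster_sizes: list[int] = []
    for i in range(n_items):
        # Existing cluster k is chosen with probability size_k / (i + alpha),
        # a brand-new cluster with probability alpha / (i + alpha).
        weights = cluster_sizes + [alpha]
        r = rng.random() * (i + alpha)
        cumulative = 0.0
        for k, w in enumerate(weights):
            cumulative += w
            if r <= cumulative:
                break
        if k == len(cluster_sizes):      # open a new cluster (new place)
            cluster_sizes.append(1)
        else:                            # join existing cluster k
            cluster_sizes[k] += 1
        assignments.append(k)
    return assignments

print(sample_crp_partition(10, alpha=1.0))  # e.g. [0, 0, 0, 1, 0, 2, ...]
```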
  • Item
    An MCMC-based Particle Filter for Tracking Multiple Interacting Targets
    (Georgia Institute of Technology, 2003) Khan, Zia; Balch, Tucker; Dellaert, Frank
    We describe a Markov chain Monte Carlo based particle filter that effectively deals with interacting targets, i.e., targets that are influenced by the proximity and/or behavior of other targets. Such interactions cause problems for traditional approaches to the data association problem. In response, we developed a joint tracker that includes a more sophisticated motion model to maintain the identity of targets throughout an interaction, drastically reducing tracker failures. The paper presents two main contributions: (1) we show how a Markov random field (MRF) motion prior, built on the fly at each time step, can substantially improve tracking when targets interact, and (2) we show how this can be done efficiently using Markov chain Monte Carlo (MCMC) sampling. We prove that incorporating an MRF to model interactions is equivalent to adding an additional interaction factor to the importance weights in a joint particle filter. Since a joint particle filter suffers from exponential complexity in the number of tracked targets, we replace the traditional importance sampling step in the particle filter with an MCMC sampling step. The resulting filter deals efficiently and effectively with complicated interactions when targets approach each other. We present both qualitative and quantitative results to substantiate the claims made in the paper, including a large-scale experiment on a video sequence more than 10,000 frames in length.
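To make the MCMC sampling step concrete, the sketch below runs a single-target Metropolis-Hastings sweep in which the acceptance score combines a per-target measurement likelihood with pairwise MRF interaction potentials. The Gaussian likelihood, proposal noise, and exclusion potential are stand-ins chosen for illustration, not the paper's appearance or motion models.

```python
# Minimal sketch of the single-target Metropolis-Hastings move inside an
# MCMC-based joint particle filter with an MRF interaction prior. Likelihood,
# proposal noise, and the exclusion potential are illustrative stand-ins.
import numpy as np

def interaction(xi: np.ndarray, xj: np.ndarray) -> float:
    """Pairwise MRF potential discouraging two targets from overlapping."""
    return float(1.0 - np.exp(-np.sum((xi - xj) ** 2) / 2.0))

def likelihood(x: np.ndarray, z: np.ndarray) -> float:
    """Toy Gaussian measurement likelihood for a single target."""
    return float(np.exp(-np.sum((x - z) ** 2) / 2.0))

def mcmc_sweep(state: np.ndarray, obs: np.ndarray, rng, sigma: float = 0.5) -> np.ndarray:
    """One MH sweep: perturb each target in turn and accept or reject the move."""
    n_targets = state.shape[0]
    for i in range(n_targets):
        proposal = state[i] + sigma * rng.standard_normal(2)

        def score(x_i: np.ndarray) -> float:
            pair = np.prod([interaction(x_i, state[j])
                            for j in range(n_targets) if j != i] or [1.0])
            return likelihood(x_i, obs[i]) * pair

        # Symmetric proposal, so the acceptance ratio is just the score ratio.
        if rng.random() < score(proposal) / max(score(state[i]), 1e-12):
            state[i] = proposal
    return state

rng = np.random.default_rng(1)
targets = rng.standard_normal((3, 2))              # 3 targets in the plane
observations = targets + 0.1 * rng.standard_normal((3, 2))
for _ in range(100):                               # sampling sweeps at one time step
    targets = mcmc_sweep(targets, observations, rng)
```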