Organizational Unit:
GVU Center


Publication Search Results

  • Item
    An Introduction to Healthcare AI
    (Georgia Institute of Technology, 2024-02-22) Braunstein, Mark
    Healthcare and AI have an intertwined history dating back at least to the 1960s, when the first 'cognitive chatbot' acting as a psychotherapist was introduced at MIT. Today, of course, there is enormous interest in and excitement about the potential roles of the latest AI technologies in patient care. There is a parallel concern about the risks. Will human physicians be replaced by intelligent agents? Short of that, how might such agents benefit patient care? What role will they play for patients? We'll explore these questions in a far-ranging talk that includes a number of real-world examples of how AI technologies are already being deployed in hopes of benefiting physicians and their patients.
  • Item
    Democratizing Robot Learning and Teaming
    (Georgia Institute of Technology, 2023-09-14) Gombolay, Matthew
    New advances in robotics and autonomy offer a promise of revitalizing final assembly manufacturing, assisting in personalized at-home healthcare, and even scaling the power of earth-bound scientists for robotic space exploration. Yet, in real-world applications, autonomy is often run in the O-F-F mode because researchers fail to understand the human in human-in-the-loop systems. In this talk, I will share exciting research we are conducting at the nexus of human factors engineering and cognitive robotics to inform the design of human-robot interaction. In my talk, I will focus on our recent work on 1) enabling machines to learn skills from and model heterogeneous, suboptimal human decision-makers, 2) “white-boxing” that knowledge through explainable Artificial Intelligence (XAI) techniques, and 3) scaling to coordinated control of stochastic human-robot teams. The goal of this research is to inform the design of autonomous teammates so that users want to turn – and benefit from turning – to the O-N mode.
  • Item
    Computational Models of Human-Like Skill and Concept Formation
    (Georgia Institute of Technology, 2023-04-13) MacLellan, Christopher J.
    The AI community has made significant strides in developing artificial systems with human-level proficiency across various tasks. However, the learning processes in most systems differ vastly from human learning, often being substantially less efficient and flexible. For instance, training large language models demands massive amounts of data and power, and updating them with new information remains challenging. In contrast, humans employ highly efficient incremental learning processes to continually update their knowledge, enabling them to acquire new knowledge with minimal examples and without overwriting prior learning. In this talk, I will discuss some of the key learning capabilities humans exhibit and present three research vignettes from my lab that explore the development of computational systems with these capabilities. The first two vignettes explore computational models of skill learning from worked examples, correctness feedback, and verbal instruction. The third vignette investigates computational models of concept formation from natural language corpora. In conclusion, I will discuss future research directions and a broader vision for how cognitive science and cognitive systems research can lead to new AI advancements.
  • Item
    Deep Thinking about Deepfake Videos: Understanding and Bolstering Humans’ Ability to Detect Deepfakes
    (Georgia Institute of Technology, 2023-03-16) Tidler, Zachary
    “Deepfakes” are videos in which the (usually human) subject of a video has been digitally altered to appear to do or say something that they never actually did or said. Sometimes these manipulations produce innocuous novelties (e.g., testing what it would look like if Will Smith had been cast as “Neo” in the film The Matrix), but far more dangerous use cases have been observed (e.g., producing fake footage of Ukrainian President Volodymyr Zelenskyy in which he instructs Ukrainian military forces to surrender on the battlefield). Generating the knowledge and tools necessary to defend against the potential harms these videos could cause is likely to rely on contributions from a broad coalition of disciplines, many of which are represented in the GVU. In this week’s Brown Bag presentation, we will offer some real-time demonstrations of deepfake technology and present findings from our work, which has largely focused on investigating the psychological factors influencing deepfake detection.
  • Item
    Forms of Accountability at the Intersection of Science and Design: Implications from Ecologies of Care Studies in PTSD and Diabetes
    (Georgia Institute of Technology, 2022-10-20) Arriaga, Rosa I.
    Computing holds the promise of alleviating the negative impacts of mental illness and chronic disorders by scaling human effort and best medical practices over time and space. In the US, one in five adults experiences mental illness, and four in ten have two or more chronic diseases. The urgent need to manage these conditions calls for robust and reliable technology that is useful and usable by patients and their caregivers. It calls for accountability at the intersection of science and design. In this talk, I will demonstrate how human-centered computing can leverage the generalizability of theoretical frameworks to design and build computational systems for Post-Traumatic Stress Disorder (PTSD) and Diabetes. I will discuss unique challenges in each clinical domain and will present theory-driven technology interventions that address them. I will also explore how these interventions can lead to improved health and wellness in diverse populations.
  • Item
    Visualization of Exception Handling Constructs to Support Program Understanding
    (Georgia Institute of Technology, 2009) Shah, Hina; Görg, Carsten; Harrold, Mary Jean
    This paper presents a new visualization technique for supporting the understanding of exception-handling constructs in Java programs. To understand the requirements for such a visualization, we surveyed a group of software developers, and used the results of that survey to guide the creation of the visualizations. The technique presents the exception-handling information using three views: the quantitative view, the flow view, and the contextual view. The quantitative view provides a high-level view that shows the throw-catch interactions in the program, along with relative numbers of these interactions, at the package level, the class level, and the method level. The flow view shows the type-throw-catch interactions, illustrating information such as which exception types reach particular throw statements, which catch statements handle particular throw statements, and which throw statements are not caught in the program. The contextual view shows, for particular type-throw-catch interactions, the packages, classes, and methods that contribute to that exception-handling construct. We implemented our technique in an Eclipse plugin called EnHanCe and conducted a usability and utility study with participants in industry.
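    The abstract above describes throw-catch interactions: which exception types reach particular throw statements, which catch statements handle them, and which throws escape uncaught. As a rough illustration (not taken from the paper; class and method names here are hypothetical), the following Java sketch shows the two basic cases such a flow view would distinguish, a throw handled by a local catch versus a throw that propagates out of its method to the caller:

    ```java
    // Hypothetical example of exception-handling constructs like those the
    // paper's flow view visualizes. Not code from the paper.
    public class ExceptionFlows {
        static String parseOrDefault(String s) {
            try {
                // Integer.parseInt may throw NumberFormatException
                return "parsed:" + Integer.parseInt(s);
            } catch (NumberFormatException e) {
                // throw-catch pair resolved inside this method:
                // the flow view would link parseInt's throw to this catch
                return "default";
            }
        }

        static int divide(int a, int b) {
            // An ArithmeticException thrown here is NOT caught locally;
            // it would appear as a throw escaping this method to its callers
            return a / b;
        }

        public static void main(String[] args) {
            System.out.println(parseOrDefault("42"));   // no exception raised
            System.out.println(parseOrDefault("oops")); // catch handles the throw
            try {
                divide(1, 0);
            } catch (ArithmeticException e) {
                // handled only at the call site, not inside divide()
                System.out.println("caught at caller");
            }
        }
    }
    ```

    The contextual view described in the abstract would then situate each of these interactions within its enclosing method, class, and package.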