Series
GVU Brown Bag Seminars

Series Type
Event Series
Publication Search Results

Now showing 1 - 8 of 8
  • Item
    Democratizing Robot Learning and Teaming
    (Georgia Institute of Technology, 2023-09-14) Gombolay, Matthew
    New advances in robotics and autonomy offer the promise of revitalizing final assembly manufacturing, assisting in personalized at-home healthcare, and even scaling the power of earth-bound scientists for robotic space exploration. Yet, in real-world applications, autonomy is often run in the OFF mode because researchers fail to understand the human in human-in-the-loop systems. In this talk, I will share exciting research we are conducting at the nexus of human factors engineering and cognitive robotics to inform the design of human-robot interaction. I will focus on our recent work on 1) enabling machines to learn skills from and model heterogeneous, suboptimal human decision-makers, 2) “white-boxing” that knowledge through explainable Artificial Intelligence (XAI) techniques, and 3) scaling to coordinated control of stochastic human-robot teams. The goal of this research is to inform the design of autonomous teammates so that users want to turn – and benefit from turning – to the ON mode.
  • Item
    Shape Machine: From software to practice
    (Georgia Institute of Technology, 2023-09-07) Economou, Athanassios
    What would it mean if we could select any part (shape) of a CAD model and use it to find (⌘F) all its geometrical instances in the model (or in other CAD models, for that matter) – same size, larger, smaller, rotated, reflected, or otherwise transformed? What would it mean if we could edit this part and use it to replace (⌘R) all its geometrical instances in the model? Why is it that the Find and Replace (⌘F/⌘R) operations so essential in Word or Excel have yet to be implemented in CAD? And what would happen if we could seamlessly use these shape-based Find and Replace (⌘F/⌘R) operations in a logical processing framework with states, loops, jumps, and conditionals to literally write programming code by drawing shapes? How would this affect our current view of computation, and what would it mean for design? The talk discusses the current state of the Shape Machine, a shape-rewrite computational system that features shape-based Find and Replace (⌘F/⌘R) operations for lines and arcs in 2D vector graphics and a logical processing framework with familiar control flow constructs (looping and branching), allowing users to write programming code by drawing shapes. Shape Machine is being developed at the Shape Computation Lab at the Georgia Institute of Technology and is currently integrated within Rhinoceros, a NURBS-based 2D/3D CAD system. Several applications drawn from architectural design, industrial design, game design, circuit design, mathematics, and other fields showcase the potential impact of this new technology in various domains. (A toy code sketch of a shape-rewrite step appears after this listing.)
  • Item
    Considering People and Technology
    (Georgia Institute of Technology, 2023-08-24) Best, Michael L.
  • Item
    Computational Models of Human-Like Skill and Concept Formation
    (Georgia Institute of Technology, 2023-04-13) MacLellan, Christopher J.
    The AI community has made significant strides in developing artificial systems with human-level proficiency across various tasks. However, the learning processes in most systems differ vastly from human learning, often being substantially less efficient and flexible. For instance, training large language models demands massive amounts of data and power, and updating them with new information remains challenging. In contrast, humans employ highly efficient incremental learning processes to continually update their knowledge, enabling them to acquire new knowledge with minimal examples and without overwriting prior learning. In this talk, I will discuss some of the key learning capabilities humans exhibit and present three research vignettes from my lab that explore the development of computational systems with these capabilities. The first two vignettes explore computational models of skill learning from worked examples, correctness feedback, and verbal instruction. The third vignette investigates computational models of concept formation from natural language corpora. In conclusion, I will discuss future research directions and a broader vision for how cognitive science and cognitive systems research can lead to new AI advancements.
  • Item
    Deep Thinking about Deepfake Videos: Understanding and Bolstering Humans’ Ability to Detect Deepfakes
    (Georgia Institute of Technology, 2023-03-16) Tidler, Zachary
    “Deepfakes” are videos in which the (usually human) subject of a video has been digitally altered to appear to do or say something that they never actually did or said. Sometimes these manipulations produce innocuous novelties (e.g., testing what it would look like if Will Smith had been cast as “Neo” in the film The Matrix), but far more dangerous use cases have been observed (e.g., fake footage of Ukrainian President Volodymyr Zelenskyy in which he instructs Ukrainian military forces to surrender on the battlefield). Generating the knowledge and tools necessary to defend against the potential harms these videos could cause is likely to rely on contributions from a broad coalition of disciplines, many of which are represented in the GVU. In this week’s Brown Bag presentation, we will offer some real-time demonstrations of deepfake technology and present findings from our work, which has largely focused on the psychological factors influencing deepfake detection.
  • Item
    Patriarchy and Health: Designing Technologies for Men to Improve Women’s Health in Pakistan
    (Georgia Institute of Technology, 2023-03-09) Naseem, Mustafa
    This talk will address the design challenges and opportunities in creating health technologies for men to improve the health of women in religiously conservative, patriarchal, and low-income societies. In this talk, I will share findings from the deployment of a speech-based service called Super Abbu (Super Dad) designed to connect expectant fathers to doctors and to each other. Over a period of 71 days, the service reached upwards of 20,000 users who spent almost 400,000 minutes on the platform. Through a critical examination of cultural and societal factors, such as traditional gender roles, stigma towards sexual health information-seeking, and limited access to resources, I will highlight key considerations for designing effective and culturally sensitive health technologies for this population. The goal of this talk is to provide insights and recommendations for designers, researchers, and practitioners to create health technologies that are inclusive, accessible, and effective for users, regardless of their cultural, social, and economic backgrounds.
  • Item
    Multi-Party Human-Robot Interaction: Towards Robots with Increased Social Context Awareness
    (Georgia Institute of Technology, 2023-02-09) Vázquez, Marynel
    Many real-world applications require that robots handle the complexity of multi-party social encounters, e.g., delivery robots may need to navigate through crowds, robots in manufacturing settings may need to coordinate their actions with those of human coworkers, and robots in educational environments may help multiple people practice and improve their skills. How can we enable robots to effectively take part in these social interactions? At first glance, multi-party interactions may be seen as a trivial generalization of one-on-one human-robot interactions, suggesting that they require no special consideration. Unfortunately, this approach is limited in practice because it ignores higher-order effects, like group factors, that often drive human behavior in multi-party Human-Robot Interaction (HRI). In this talk, I will describe two research directions that we believe are important to advance multi-party HRI. The first direction focuses on understanding group dynamics and social group phenomena from an experimental perspective. The second focuses on leveraging graph state abstractions and structured data-driven methods for reasoning about social contexts, which include individual, interpersonal, and group-level factors relevant to human-robot interactions. As part of this talk, I will also describe our recent efforts to scale HRI data collection for early system development and testing via online interactive surveys. We have begun to explore this idea in the context of social robot navigation but, thanks to advances in game development engines, it could easily be applied to other HRI application domains.
  • Item
    Raise Your Hand, An Electrical Engineer’s First Effort at Interactive Multimedia Digital Art
    (Georgia Institute of Technology, 2023-01-26) Weitnauer, Mary Ann
    The Raise Your Hand exhibit was developed almost exclusively by undergraduates and shown in the Ferst Center for the Arts lobby for two weeks in November 2022. Cameras and computer vision software sensed each participant's pose. Cues including raising one's hand, facial expression, and torso tilt simultaneously triggered effects in the video, music, and mechatronics in each of three concatenated sections. Focus groups and online surveys collected participants' reactions to the exhibit; the results were discussed in the talk. (A minimal sketch of mapping sensed pose keypoints to exhibit events appears after this listing.)
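
To make the Find and Replace idea in the Shape Machine abstract above concrete, below is a minimal, hypothetical sketch of a shape-rewrite step in Python. It is not the Shape Machine API: shapes are reduced to sets of line segments, matching is translation-only, and the pattern and its embedded copy must share the same segment decomposition, whereas Shape Machine matches parts of lines and arcs under the full set of similarity transforms.

```python
# Toy shape-rewrite step (hypothetical code, NOT the Shape Machine API).
# A shape is a set of line segments; matching is translation-only.
from typing import FrozenSet, Optional, Tuple

Point = Tuple[float, float]
Segment = Tuple[Point, Point]      # (start, end) in the plane
Shape = FrozenSet[Segment]

def translate(shape: Shape, dx: float, dy: float) -> Shape:
    """Move every segment of a shape by (dx, dy)."""
    return frozenset((((x1 + dx, y1 + dy), (x2 + dx, y2 + dy))
                      for (x1, y1), (x2, y2) in shape))

def find(design: Shape, pattern: Shape) -> Optional[Tuple[float, float]]:
    """Return a translation that embeds `pattern` inside `design`, if any."""
    px, py = next(iter(pattern))[0]            # anchor: one pattern endpoint
    for (ax, ay), _ in design:                 # try every design endpoint as anchor
        dx, dy = ax - px, ay - py
        if translate(pattern, dx, dy) <= design:   # subshape test (set inclusion)
            return dx, dy
    return None

def replace(design: Shape, lhs: Shape, rhs: Shape) -> Shape:
    """Apply the rewrite rule lhs -> rhs once, wherever lhs is found."""
    offset = find(design, lhs)
    if offset is None:
        return design
    dx, dy = offset
    return (design - translate(lhs, dx, dy)) | translate(rhs, dx, dy)

# Usage: rewrite a unit square's top edge into a gable (a simple "house" rule).
square = frozenset({((0, 0), (1, 0)), ((1, 0), (1, 1)),
                    ((1, 1), (0, 1)), ((0, 1), (0, 0))})
top    = frozenset({((1, 1), (0, 1))})
gable  = frozenset({((1, 1), (0.5, 1.5)), ((0.5, 1.5), (0, 1))})
house  = replace(square, top, gable)
```

A real shape-rewrite engine would first canonicalize the drawing into maximal lines and arcs and then search over rotations, reflections, and scalings before applying a rule, which is what makes shape matching genuinely harder than text search.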
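
The Raise Your Hand abstract above describes pose-sensed cues driving effects in video, music, and mechatronics. The sketch below is a minimal, framework-agnostic illustration of turning pose keypoints into discrete events; the keypoint names, thresholds, and event names are assumptions for illustration, not the exhibit's actual implementation.

```python
# Hypothetical mapping from sensed pose keypoints to exhibit events.
# Keypoints are in image coordinates, so a smaller y value means higher up.
from typing import Dict, List, Tuple

Keypoints = Dict[str, Tuple[float, float]]   # name -> (x, y) in pixels

def detect_events(kp: Keypoints, tilt_threshold: float = 0.15) -> List[str]:
    events: List[str] = []

    # Hand raised: either wrist is above its shoulder.
    if (kp["left_wrist"][1] < kp["left_shoulder"][1]
            or kp["right_wrist"][1] < kp["right_shoulder"][1]):
        events.append("hand_raised")

    # Torso tilt: horizontal offset between shoulder and hip midpoints,
    # normalized by torso height.
    shoulder_mid = [(kp["left_shoulder"][i] + kp["right_shoulder"][i]) / 2 for i in (0, 1)]
    hip_mid = [(kp["left_hip"][i] + kp["right_hip"][i]) / 2 for i in (0, 1)]
    torso_height = abs(hip_mid[1] - shoulder_mid[1]) or 1.0
    tilt = (shoulder_mid[0] - hip_mid[0]) / torso_height
    if tilt > tilt_threshold:
        events.append("tilt_right")
    elif tilt < -tilt_threshold:
        events.append("tilt_left")

    return events

# Example frame: right wrist raised above the shoulder, torso upright.
frame = {
    "left_wrist": (200, 420), "right_wrist": (440, 180),
    "left_shoulder": (250, 300), "right_shoulder": (400, 300),
    "left_hip": (270, 500), "right_hip": (380, 500),
}
print(detect_events(frame))   # ['hand_raised']
```

In an installation, events like these would be published each frame to the video, audio, and mechatronics controllers, which is one plausible way to produce the simultaneous effects the abstract describes.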