Organizational Unit:
George W. Woodruff School of Mechanical Engineering

Publication Search Results

Now showing 1 - 10 of 21
  • Item
    Interactive Multi-Modal Robot Programming
    (Georgia Institute of Technology, 2005) Paredis, Christiaan J. J. ; Khosla, Pradeep K. ; Iba, Soshi
    As robots enter the human environment and come in contact with inexperienced users, they need to be able to interact with users in a multi-modal fashion; keyboard and mouse are no longer acceptable as the only input modalities. This paper introduces a novel approach for programming robots interactively through a multi-modal interface. The key characteristic of this approach is that the user can provide feedback interactively at any time, during both the programming and the execution phase. The framework takes a three-step approach to the problem: multi-modal recognition, intention interpretation, and prioritized task execution. The multi-modal recognition module translates hand gestures and spontaneous speech into a structured symbolic data stream without abstracting away the user's intent. The intention interpretation module selects the appropriate primitives to generate a task based on the user's input, the system's current state, and robot sensor data. Finally, the prioritized task execution module selects and executes skill primitives based on the system's current state, sensor inputs, and prior tasks. The framework is demonstrated by interactively controlling and programming a vacuum-cleaning robot. The demonstrations are used to exemplify the interactive programming and plan recognition aspects of the research.
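    The three-step pipeline described above lends itself to a modular implementation. Below is a minimal Python sketch of how recognition, intention interpretation, and prioritized execution could be chained; all class names, primitives, and priorities are hypothetical illustrations, not the paper's actual implementation.
```python
from dataclasses import dataclass, field

@dataclass
class SymbolicEvent:
    """Structured symbol emitted by the multi-modal recognition stage."""
    modality: str          # "gesture" or "speech"
    symbol: str            # e.g. "point_at", "stop"
    data: dict = field(default_factory=dict)

def interpret(events, system_state, sensor_data):
    """Intention interpretation: map symbolic events to prioritized task primitives.

    system_state and sensor_data are shown for shape only; a real
    interpreter would condition on them, as the abstract describes.
    """
    tasks = []
    for e in events:
        if e.symbol == "point_at":
            tasks.append(("move_to", e.data.get("xy"), 1))   # (primitive, args, priority)
        elif e.symbol == "stop":
            tasks.append(("halt", None, 10))                 # user feedback preempts tasks
    return tasks

class PrioritizedExecutor:
    """Prioritized task execution: run the highest-priority pending primitive."""
    def __init__(self):
        self.queue = []
    def submit(self, tasks):
        self.queue.extend(tasks)
        self.queue.sort(key=lambda t: -t[2])     # highest priority first
    def step(self, robot):
        if self.queue:
            primitive, args, _ = self.queue.pop(0)
            robot.run(primitive, args)

events = [SymbolicEvent("gesture", "point_at", {"xy": (1.0, 2.0)}),
          SymbolicEvent("speech", "stop")]
executor = PrioritizedExecutor()
executor.submit(interpret(events, system_state=None, sensor_data=None))
# executor.step(robot) would now run "halt" first, then "move_to"
```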
  • Item
    Intention Aware Interactive Multi-Modal Robot Programming
    (Georgia Institute of Technology, 2003-10) Paredis, Christiaan J. J. ; Khosla, Pradeep K. ; Iba, Soshi
    As robots enter the human environment, there is an increasing need for novice users to be able to program robots with ease. A successful robot programming system should be intuitive, interactive, and intention aware. Intuitiveness refers to the use of intuitive user interfaces such as speech and hand gestures. Interactivity refers to the system's ability to let the user interact with the robot preemptively and take control of it at any time. Intention awareness refers to the system's ability to recognize and adapt to user intent. This paper focuses on the intention awareness problem for an interactive multi-modal robot programming system. In our framework, user intent takes the form of a robot program, which in our context is a sequence of commands with parameters. To solve the intention recognition and adaptation problem, the system converts robot programs into a set of Markov chains. The system can then deduce the most likely program the user intends to execute from a given observation sequence, and it adapts this program based on additional interaction. The system is implemented on a mobile vacuum-cleaning robot with a user who wears sensor gloves, inductive position sensors, and a microphone.
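    As a concrete illustration of the Markov-chain formulation, the sketch below scores an observed command sequence against each candidate program's transition probabilities and returns the most likely one. The program names and probabilities are invented for illustration, not taken from the paper.
```python
import math

# Each candidate program as a first-order Markov chain over commands:
# (previous command, next command) -> transition probability.
programs = {
    "vacuum_room": {("start", "move"): 0.9, ("move", "vacuum"): 0.8,
                    ("vacuum", "move"): 0.7, ("move", "dock"): 0.2},
    "go_dock":     {("start", "move"): 0.9, ("move", "dock"): 0.9},
}

def log_likelihood(chain, observations, eps=1e-6):
    """Sum of log transition probabilities along the observed sequence."""
    seq = ["start"] + list(observations)
    return sum(math.log(chain.get((a, b), eps))
               for a, b in zip(seq, seq[1:]))

def most_likely_program(observations):
    """Deduce the program the user most likely intends to execute."""
    return max(programs, key=lambda p: log_likelihood(programs[p], observations))

print(most_likely_program(["move", "vacuum"]))   # -> "vacuum_room"
```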
  • Item
    Interactive Multi-Modal Robot Programming
    (Georgia Institute of Technology, 2002-05) Iba, Soshi ; Khosla, Pradeep K. ; Paredis, Christiaan J. J.
    As robots enter the human environment and come in contact with inexperienced users, they need to be able to interact with users in a multi-modal fashion; keyboard and mouse are no longer acceptable as the only input modalities. This paper introduces a novel approach to programming a robot interactively through a multi-modal interface. The key characteristic of this approach is that the user can provide feedback interactively at any time, during both the programming and the execution phase. The framework takes a three-step approach to the problem: multi-modal recognition, intention interpretation, and prioritized task execution. The multi-modal recognition module translates hand gestures and spontaneous speech into a structured symbolic data stream without abstracting away the user's intent. The intention interpretation module selects the appropriate primitives to generate a task based on the user's input, the system's current state, and robot sensor data. Finally, the prioritized task execution module selects and executes skill primitives based on the system's current state, sensor inputs, and prior tasks. The framework is demonstrated by interactively controlling and programming a vacuum-cleaning robot.
  • Item
    Behavioral Model Composition in Simulation-Based Design
    (Georgia Institute of Technology, 2002-04) Sinha, Rajarishi ; Paredis, Christiaan J. J. ; Khosla, Pradeep K.
    We present a simulation and design framework for simultaneously designing and modeling electromechanical systems. By instantiating component objects and connecting them to each other via ports, a designer can configure complex systems. This configuration information is then used to automatically generate a corresponding system-level simulation model. The building block of our framework is the component object, which encapsulates design data, behavioral models, and their inter-relationships. Component objects are composed into systems by connecting their ports. However, when a system configuration is converted into a simulation model, the models of the individual component objects do not capture the physical phenomena at the component interfaces: the interactions. To obtain an accurate composition, the interaction dynamics must also be captured in behavioral models. When two ports are connected, there is an intended interaction between the two components; in this paper, we introduce the concept of an interaction model that captures the dynamics of that interaction. For the composition of component objects to work, an interaction model must be introduced between each pair of connected behavioral models. We illustrate these ideas using an example.
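    The role of the interaction model can be sketched in a few lines of Python: connecting two ports does not merge the component models directly, but instantiates a separate interaction model between them. The API below is a hypothetical illustration, not the authors' framework.
```python
class Port:
    def __init__(self, component, name):
        self.component, self.name = component, name

class Component:
    """Encapsulates design data and a behavioral model, exposed via ports."""
    def __init__(self, name, behavioral_model):
        self.name, self.model = name, behavioral_model
        self.ports = {}
    def add_port(self, name):
        self.ports[name] = Port(self, name)
        return self.ports[name]

class InteractionModel:
    """Captures the physics at the interface, e.g. friction or contact."""
    def __init__(self, kind, port_a, port_b):
        self.kind, self.ends = kind, (port_a, port_b)

def connect(port_a, port_b, interaction_kind="rigid_joint"):
    # The system-level simulation model is the set of component models
    # plus one interaction model per port connection.
    return InteractionModel(interaction_kind, port_a, port_b)

motor = Component("motor", behavioral_model="dc_motor_ode")
shaft = Component("shaft", behavioral_model="rigid_body_ode")
link = connect(motor.add_port("out"), shaft.add_port("in"), "rotational_coupling")
```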
  • Item
    Millibots: The Development of a Framework and Algorithms for a Distributed Heterogeneous Robot Team
    (Georgia Institute of Technology, 2002) Paredis, Christiaan J. J. ; Khosla, Pradeep K. ; Grabowski, Robert ; Navarro-Serment, Luis E.
  • Item
    Composable Models for Simulation-Based Design
    (Georgia Institute of Technology, 2001) Paredis, Christiaan J. J. ; Diaz-Calderon, Antonio ; Sinha, Rajarishi ; Khosla, Pradeep K.
    This article introduces the concept of combining both form (CAD models) and behavior (simulation models) of mechatronic system components into component objects. By connecting these component objects to each other through their ports, designers can create both a system level design description and a virtual prototype of the system. This virtual prototype, in turn, can provide immediate feedback about design decisions by evaluating whether the functional requirements are met in simulation. To achieve the composition of behavioral models, we introduce a port-based modeling paradigm. The port-based models are reconfigurable, so that the same physical component can be simulated at multiple levels of detail without having to modify the system-level model description. This allows the virtual prototype to evolve during the design process and to achieve the accuracy required for the simulation experiments at each design stage. To maintain the consistency between the form and behavior of component objects, we introduce parametric relations between these two descriptions. In addition, we develop algorithms that determine the type and parameter values of the lower pair interaction models; these models depend on the form of both components that are interacting. This article presents the initial results of our approach. The discussion is limited to high-level system models consisting of components and lumped component interactions described by differential algebraic equations. Expanding these concepts to finite element models and distributed interactions is left for future research. Our composable simulation and design environment has been implemented as a distributed system in Java and C++, enabling multiple users to collaborate on the design of a single system. Our current implementation has been applied to a variety of systems ranging from consumer electronics to electrical train systems. We illustrate its functionality and use with a design scenario.
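    The reconfigurability of the port-based models can be illustrated with a small sketch in which one component carries behavioral models at several levels of detail, selectable without touching the system-level description. The component name and model descriptions below are invented for illustration.
```python
class ReconfigurableComponent:
    """Port-based component whose behavioral model can be swapped per experiment."""
    def __init__(self, name, models):
        self.name = name
        self.models = models          # fidelity level -> behavioral model
        self.active = min(models)     # default: coarsest level of detail

    def set_fidelity(self, level):
        if level not in self.models:
            raise ValueError(f"no model at fidelity {level} for {self.name}")
        self.active = level

    def equations(self):
        # The system-level description stays fixed; only this lookup changes.
        return self.models[self.active]

gearbox = ReconfigurableComponent("gearbox", {
    1: "ideal ratio: w_out = w_in / N",
    2: "ratio + viscous friction losses",
    3: "ratio + friction + backlash + shaft compliance",
})
gearbox.set_fidelity(2)   # refine as the design matures; ports are unchanged
```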
  • Item
    Integration of Mechanical CAD and Behavioral Modeling
    (Georgia Institute of Technology, 2000-10) Sinha, Rajarishi ; Paredis, Christiaan J. J. ; Khosla, Pradeep K.
    This article introduces the concept of combining both form (CAD models) and behavior (simulation models) of mechatronic system components into component objects. By composing these component objects, designers automatically create a virtual prototype of the system they are designing. This virtual prototype, in turn, can provide immediate feedback about design decisions by evaluating whether the functional requirements are met in simulation. To achieve the composition of behavioral models, we introduce a port-based modeling paradigm where systems consist of component objects and interactions between component objects. To maintain the consistency between the form and behavior of component objects, we introduce parametric relations between these two descriptions. In addition, we develop algorithms that determine the type and parameter values of the interaction models; these models depend on the form of both components that are interacting. The composable simulation environment has been implemented as a distributed system in Java and C++, enabling multiple users to collaborate on the design of a single system.
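    A parametric relation between form and behavior can be sketched as follows: the behavioral parameters (mass, inertia) are derived from the CAD geometry, so editing the form automatically keeps the behavior consistent. The classes and material constant below are illustrative assumptions, not the article's implementation.
```python
import math

STEEL_DENSITY = 7850.0  # kg/m^3 (assumed material)

class CylinderForm:
    """Simplified stand-in for a CAD model of a shaft."""
    def __init__(self, radius, length):
        self.radius, self.length = radius, length
    @property
    def volume(self):
        return math.pi * self.radius ** 2 * self.length

class ShaftBehavior:
    """Behavioral parameters derived from the form via parametric relations."""
    def __init__(self, form, density=STEEL_DENSITY):
        self.form, self.density = form, density
    @property
    def mass(self):
        return self.density * self.form.volume
    @property
    def inertia_about_axis(self):
        return 0.5 * self.mass * self.form.radius ** 2   # solid cylinder

shaft = ShaftBehavior(CylinderForm(radius=0.01, length=0.3))
shaft.form.radius = 0.02            # edit the CAD side...
print(shaft.mass)                   # ...the behavioral side stays consistent
```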
  • Item
    Heterogeneous Teams of Modular Robots for Mapping and Exploration
    (Georgia Institute of Technology, 2000) Grabowski, Robert ; Navarro-Serment, Luis E. ; Paredis, Christiaan J. J. ; Khosla, Pradeep K.
    In this article, we present the design of a team of heterogeneous, centimeter-scale robots that collaborate to map and explore unknown environments. The robots, called Millibots, are configured from modular components that include sonar and IR sensors, camera, communication, computation, and mobility modules. Robots with different configurations use their special capabilities collaboratively to accomplish a given task. For mapping and exploration with multiple robots, it is critical to know the relative positions of each robot with respect to the others. We have developed a novel localization system that uses sonar-based distance measurements to determine the positions of all the robots in the group. With their positions known, we use a Bayesian occupancy-grid mapping algorithm to combine the sensor data from multiple robots with different sensing modalities. Finally, we present the results of several mapping experiments conducted by a user-guided team of five robots operating in a room containing multiple obstacles.
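    Fusing range measurements into an occupancy grid is commonly done with a Bayesian update in log-odds form, in the spirit of the mapping algorithm described above; the sketch below uses an invented sensor model, not the Millibots' calibrated one.
```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4     # log-odds increments (assumed sensor model)

def update_grid(log_odds, hit_cell, free_cells):
    """Fuse one range beam: traversed cells get freer, the hit cell more occupied."""
    for c in free_cells:
        log_odds[c] += L_FREE
    log_odds[hit_cell] += L_OCC
    return log_odds

grid = np.zeros((50, 50))      # log-odds 0 corresponds to probability 0.5
grid = update_grid(grid,
                   hit_cell=(25, 30),
                   free_cells=[(25, y) for y in range(25, 30)])
prob = 1.0 - 1.0 / (1.0 + np.exp(grid))   # recover occupancy probabilities
```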
  • Item
    An Architecture for Gesture-Based Control of Mobile Robots
    (Georgia Institute of Technology, 1999-10) Iba, Soshi ; Vande Weghe, J. Michael ; Paredis, Christiaan J. J. ; Khosla, Pradeep K.
    Gestures provide a rich and intuitive form of interaction for controlling robots. This paper presents an approach for controlling a mobile robot with hand gestures. The system uses Hidden Markov Models (HMMs) to spot and recognize gestures captured with a data glove. To spot gestures from a sequence of hand positions that may include non-gestures, we have introduced a "wait state" in the HMM. The system is currently capable of spotting six gestures reliably. These gestures are mapped to robot commands under two different modes of operation: local and global control. In the local control module, the gestures are interpreted in the robot's local frame of reference, allowing the user to accelerate, decelerate, and turn. In the global control module, the gestures are interpreted in the world frame, allowing the robot to move to the location at which the user is pointing.
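    The effect of the "wait state" can be illustrated with a simplified spotting scheme: a broad "wait" model competes with the gesture models, so non-gesture motion is absorbed rather than being forced into the nearest gesture class. The per-gesture Gaussian scores below are stand-ins for the paper's trained HMMs.
```python
import math

def gaussian_loglik(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def score_sequence(frames, mean, var):
    return sum(gaussian_loglik(f, mean, var) for f in frames)

# One (mean, variance) stand-in per gesture; the broad "wait" model
# absorbs anything that is not a deliberate gesture.
models = {"turn_left": (-1.0, 0.2), "turn_right": (1.0, 0.2),
          "stop": (0.0, 0.05), "wait": (0.0, 4.0)}

def spot(frames):
    """Report a gesture only when some gesture model beats the wait model."""
    best = max(models, key=lambda m: score_sequence(frames, *models[m]))
    return None if best == "wait" else best

print(spot([0.9, 1.1, 1.0]))    # -> "turn_right"
print(spot([3.0, -2.5, 0.2]))   # -> None (absorbed by the wait model)
```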
  • Item
    RAVE: A Real and Virtual Environment for Multiple Mobile Robot Systems
    (Georgia Institute of Technology, 1999-10) Dixon, Kevin ; Dolan, John ; Huang, Wesley ; Paredis, Christiaan J. J. ; Khosla, Pradeep K.
    Research on collaborative behavior in multiple mobile-robot systems requires a great deal of low-level infrastructure. To facilitate our ongoing research into multi-robot systems, we have developed RAVE, a software framework that provides a Real And Virtual Environment for running and managing multiple heterogeneous mobile-robot systems. This framework simplifies the implementation and development of collaborative robotic systems by providing the following capabilities: the ability to run systems off-line in simulation, user interfaces for observing and commanding simulated and real robots, transparent transfer of simulated robot programs to real robots, the ability to have simulated robots interact with real robots, and the ability to place virtual sensors on real robots to augment or experiment with their performance.
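    The "virtual sensors on real robots" capability can be sketched as follows: the real robot's pose is fed into a simulated world model, and the virtual sensor computes readings from that model as if the hardware existed. The API below is an invented illustration, not RAVE's actual interface.
```python
import math

class VirtualRangeSensor:
    """Synthesizes range readings from simulated geometry at the real robot's pose."""
    def __init__(self, world_obstacles, max_range=5.0):
        self.obstacles = world_obstacles   # simulated point obstacles: (x, y)
        self.max_range = max_range

    def read(self, robot_x, robot_y, heading, fov=0.3):
        """Range to the nearest simulated obstacle within the sensor cone."""
        best = self.max_range
        for ox, oy in self.obstacles:
            d = math.hypot(ox - robot_x, oy - robot_y)
            bearing = math.atan2(oy - robot_y, ox - robot_x) - heading
            # normalize the bearing to (-pi, pi] before the cone test
            if abs(math.atan2(math.sin(bearing), math.cos(bearing))) < fov:
                best = min(best, d)
        return best

sensor = VirtualRangeSensor([(2.0, 0.1), (0.0, 3.0)])
print(sensor.read(0.0, 0.0, heading=0.0))   # "sees" the obstacle near (2.0, 0.1)
```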