Interactive Multi-Modal Robot Programming
Author(s)
Iba, Soshi
Khosla, Pradeep K.
Paredis, Christiaan J. J.
Abstract
As robots enter the human environment and come into contact with inexperienced users, they need to be able to interact with users in a multi-modal fashion; keyboard and mouse are no longer acceptable as the only input modalities. This paper introduces a novel approach to programming a robot interactively through a multi-modal interface. The key characteristic of this approach is that the user can provide feedback interactively at any time, during both the programming and the execution phases. The framework takes a three-step approach to the problem: multi-modal recognition, intention interpretation, and prioritized task execution. The multi-modal recognition module translates hand gestures and spontaneous speech into a structured symbolic data stream without abstracting away the user's intent. The intention interpretation module selects the appropriate primitives to generate a task based on the user's input, the system's current state, and robot sensor data. Finally, the prioritized task execution module selects and executes skill primitives based on the system's current state, sensor inputs, and prior tasks. The framework is demonstrated by interactively controlling and programming a vacuum-cleaning robot.
Date
2002-05
Resource Type
Text
Resource Subtype
Paper
Proceedings