(Georgia Institute of Technology, 2012-10)
Dantam, Neil; Essa, Irfan; Stilman, Mike
We demonstrate the automatic transfer of an
assembly task from human to robot. This work extends efforts
showing the utility of linguistic models in verifiable robot
control policies by performing real visual analysis of
human demonstrations to automatically extract a policy for the
task. This method tokenizes each human demonstration into a
sequence of object connection symbols, then transforms the set
of sequences from all demonstrations into an automaton, which
represents the task-language for assembling a desired object.
Finally, we combine this assembly automaton with a kinematic
model of a robot arm to reproduce the demonstrated task.
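The pipeline described above first tokenizes each demonstration into a sequence of object-connection symbols, then folds the set of sequences into an automaton accepting the task language. A minimal sketch of that second step is a prefix-tree acceptor built from the tokenized sequences; this is illustrative only (not the authors' implementation), and symbol names such as `connect(A,B)` are hypothetical placeholders for the extracted connection symbols:

```python
# Illustrative sketch: fold a set of tokenized demonstration sequences
# into a prefix-tree acceptor (a simple finite automaton).
# Symbol strings like "connect(A,B)" are hypothetical placeholders.

def build_automaton(demonstrations):
    """States are ints (0 is the start); transitions map
    (state, symbol) -> next state; accepting states mark
    completed demonstrations."""
    transitions = {}
    accepting = set()
    next_state = 1
    for seq in demonstrations:
        state = 0
        for symbol in seq:
            key = (state, symbol)
            if key not in transitions:
                transitions[key] = next_state
                next_state += 1
            state = transitions[key]
        accepting.add(state)
    return transitions, accepting

def accepts(transitions, accepting, seq):
    """Check whether a symbol sequence is in the task language."""
    state = 0
    for symbol in seq:
        key = (state, symbol)
        if key not in transitions:
            return False
        state = transitions[key]
    return state in accepting

# Two demonstrations of the same assembly, with parts joined
# in different orders.
demos = [
    ["connect(A,B)", "connect(B,C)"],
    ["connect(B,C)", "connect(A,B)"],
]
trans, acc = build_automaton(demos)
```

A production system would typically minimize the resulting automaton so that demonstrations sharing suffixes merge into common states, yielding a compact task-language representation to pair with the arm's kinematic model.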