Kaiser et al. (2003) applied unification over typed feature structures to analyze multimodal input consisting of speech, 3D gestures, and head direction in augmented and virtual reality. Notably, the system went beyond gestures referring to objects and also handled gestures describing how actions should be performed. Among other things, it was able to interpret multimodal rotation commands such as “Turn the table <rotation gesture> clockwise.”, where the gesture specified both the object to be manipulated and the direction of rotation.
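To make the fusion mechanism concrete, the following minimal Python sketch unifies a speech-derived feature structure with a gesture-derived one. The dict-based representation, the feature names (action, direction, object, and so on), and the example values are illustrative assumptions; the type hierarchy of full typed feature structures is omitted, and this is not Kaiser et al.'s actual implementation.

    # Minimal sketch of unification-based multimodal fusion.
    # Feature structures are plain nested dicts (an assumption
    # made for illustration, not Kaiser et al.'s representation).

    FAIL = object()  # sentinel marking unification failure

    def unify(a, b):
        """Recursively unify two feature structures.

        Atomic values unify only if equal; dicts unify feature by
        feature, with a feature missing on one side treated as
        unconstrained.
        """
        if not isinstance(a, dict) or not isinstance(b, dict):
            return a if a == b else FAIL
        result = dict(a)
        for key, b_val in b.items():
            if key in result:
                merged = unify(result[key], b_val)
                if merged is FAIL:
                    return FAIL
                result[key] = merged
            else:
                result[key] = b_val
        return result

    # Speech "Turn the table ... clockwise": fixes the action type,
    # the rotation direction, and the object's category.
    speech = {
        "action": {"type": "rotate", "direction": "clockwise"},
        "object": {"category": "table"},
    }

    # Rotation gesture: redundantly conveys the direction and
    # resolves which concrete object is meant.
    gesture = {
        "action": {"direction": "clockwise"},
        "object": {"id": "table-3"},
    }

    command = unify(speech, gesture)
    print(command)
    # {'action': {'type': 'rotate', 'direction': 'clockwise'},
    #  'object': {'category': 'table', 'id': 'table-3'}}

Because unification fails when values clash, the same mechanism also filters out inconsistent interpretations: if the gesture trajectory were read as counterclockwise while the speech said “clockwise”, unify would return FAIL and the fused command would be rejected rather than executed.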
Another popular approach that was ...