2 Task Perception

Two perception interfaces commonly used to capture the relevant motions during the demonstrations are presented in the book: optical tracking systems and vision cameras. Optical trackers deliver temporal sequences of positions and orientations of the scene objects. Vision cameras acquire sequences of images of the scene, so representing the task requires reducing the dimensionality of the acquired data by extracting relevant image features. The trajectories of the relevant objects during the demonstrations are then commonly computed from a subset of the extracted image features.
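As an illustration of the dimensionality reduction step, the sketch below computes an object's image-plane trajectory as the per-frame centroid of a set of tracked feature points. The function name and the centroid-based reduction are illustrative assumptions, not the specific method used in the book.

```python
import numpy as np

def object_trajectory(feature_tracks):
    """Reduce per-frame image features to a single 2-D object position.

    feature_tracks: array of shape (T, K, 2) holding the pixel
    coordinates of K tracked features over T frames.
    Returns an array of shape (T, 2): the feature centroid per frame,
    a simple proxy for the object's image-plane trajectory.
    """
    tracks = np.asarray(feature_tracks, dtype=float)
    return tracks.mean(axis=1)

# Three features on one object, observed over two frames;
# the object shifts one pixel along x between frames.
tracks = [
    [[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]],
    [[1.0, 0.0], [3.0, 0.0], [2.0, 2.0]],
]
trajectory = object_trajectory(tracks)  # shape (2, 2)
```

In practice the reduction can be any mapping from features to object pose (e.g., model-based pose estimation); the centroid merely shows how a high-dimensional image sequence collapses to a low-dimensional trajectory.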

2.1 Optical Tracking Systems

An optical tracking system, Optotrak Certus® from NDI (Waterloo, Canada) (Optotrak Certus, 2012), is shown in Figure 2.1a. The system employs a set of infrared optical markers to capture the demonstrated motions (Figure 2.1b). The markers are attached at strategic locations on the target objects, tools, or the demonstrator’s body. From the markers’ locations with respect to a fixed reference frame, the poses of predefined rigid bodies are inferred at a set of discrete time instants. For the tasks considered here, the Optotrak system is set to acquire the positions of the optical markers at a fixed sampling period of 10 milliseconds. According to the manufacturer’s data sheets, the accuracy of the Optotrak system, at a distance of 2 meters from the position sensors, is characterized by the root‐mean‐square ...
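To sketch how a rigid body's pose can be inferred from marker positions, the example below uses the SVD-based Kabsch method to fit a rotation and translation that map a calibrated body-frame marker layout onto the tracker's measurements. This is a generic least-squares formulation, not necessarily the algorithm used internally by the Optotrak software; the function name and argument layout are assumptions.

```python
import numpy as np

def rigid_body_pose(body_pts, world_pts):
    """Estimate the pose (R, t) of a rigid body from marker positions.

    body_pts:  (N, 3) marker coordinates in the body frame
               (the calibrated marker layout), N >= 3, non-collinear.
    world_pts: (N, 3) the same markers as measured by the tracker.
    Returns rotation R (3x3) and translation t (3,) such that
    world_pts ≈ body_pts @ R.T + t, in the least-squares sense.
    """
    b = np.asarray(body_pts, dtype=float)
    w = np.asarray(world_pts, dtype=float)
    bc, wc = b.mean(axis=0), w.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (b - bc).T @ (w - wc)
    U, _, Vt = np.linalg.svd(H)
    # Guard against an improper rotation (reflection).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = wc - R @ bc
    return R, t
```

Evaluating this fit at each 10 ms sample then yields the temporal sequence of rigid-body poses described above.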