6 Task Execution
This chapter focuses on improving the robustness of the task execution step through vision‐based robot control. A framework is presented that performs all steps of the learning process in the image space of a vision sensor. The steps of task perception, task planning, and task execution are formulated with the goals of improved overall system performance and enhanced robustness to modeling and measurement errors. The constraints of the vision‐based controller are incorporated into the trajectory learning process to guarantee that the generated plan is feasible for task execution. Task planning is therefore solved as a constrained optimization problem, with the objective of optimizing a set of trajectories of scene features in the image space of a vision camera.
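To make the idea concrete, the planning step described above can be sketched as a small constrained optimization: feature-point trajectories in the image plane are the decision variables, the objective rewards smooth motion, and the controller's constraints (features must stay inside the image, per-step displacement is bounded) are imposed explicitly. This is a minimal illustrative sketch, not the book's actual formulation; the image size, step limit, and single-feature setup are all assumptions made for brevity.

```python
# Illustrative sketch: plan the image-space trajectory of one scene feature
# as a constrained optimization problem. All numeric limits are assumed.
import numpy as np
from scipy.optimize import minimize

T = 20                             # number of intermediate waypoints
start = np.array([50.0, 60.0])     # initial pixel coordinates (assumed)
goal = np.array([400.0, 300.0])    # target pixel coordinates (assumed)
W, H = 640, 480                    # image bounds: feature must stay visible
v_max = 25.0                       # max per-step pixel motion (controller limit)

def objective(x):
    # Minimize the sum of squared per-step displacements -> smooth trajectory.
    p = x.reshape(T, 2)
    d = np.diff(np.vstack([start, p, goal]), axis=0)
    return np.sum(d ** 2)

def speed_constraint(x):
    # Inequality constraint (>= 0): each step stays within the velocity bound.
    p = x.reshape(T, 2)
    d = np.diff(np.vstack([start, p, goal]), axis=0)
    return v_max - np.linalg.norm(d, axis=1)

# Initialize with a straight line between start and goal in the image plane.
x0 = np.linspace(start, goal, T + 2)[1:-1].ravel()
bounds = [(0, W), (0, H)] * T      # keep the feature inside the image
res = minimize(objective, x0, bounds=bounds,
               constraints={"type": "ineq", "fun": speed_constraint})
traj = res.x.reshape(T, 2)         # optimized image-space waypoints
```

In the chapter's framework the objective and constraints are richer (multiple features, camera field-of-view, and controller dynamics), but the structure is the same: a feasible image-space plan is the output, which the visual-feedback controller then tracks during execution.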
6.1 Background and Related Work
The developed approach for robot programming by demonstration (PbD) is based on the following postulations: (i) perception of the demonstrations is performed with vision cameras; and (ii) execution of the learned strategies is conducted using visual feedback from the scene.
The perception sensors most often used in PbD systems are electromagnetic (Dillmann, 2004), inertial (Calinon, 2009), and optical marker‐based sensors (Vakanski et al., 2010). However, attaching sensors to workpieces, tools, or other objects for demonstration purposes is impractical, tiresome, and, for some tasks, impossible. This work concentrates on using vision sensors for perception of demonstrated ...