are presented. One is a nonlinear model-based controller and the other is a linearized version
of the nonlinear controller. The nonlinear controller is applied to a direct drive robot for
which nonlinear dynamics cannot be neglected. The linearized controller is used for a general
six-degree-of-freedom geared robot.
A stability analysis of the observer-based control system is presented, and the effectiveness of the
observer is verified by experiments on a two-link direct drive robot and a PUMA 560.
Visual information on tasks and environments is essential for robots to execute flexible and
autonomous tasks. A typical task of autonomous manipulation is to track a moving object
with a robot hand based on the information from a vision sensor. To carry out this task, the
vision sensor must be incorporated in the feedback loop. Figure 2.1 shows an example of a
visual feedback task. A camera is mounted on the robot hand and it captures images of the
object. An image is considered as a two-dimensional array of gray-level signals, typically
512 × 512 pixels in size. If the gray levels of all pixels were used directly as the measurement
signal, the system would be unmanageable: the measurement vector would have 262,144
elements, each nonlinearly correlated with its neighbors. Therefore,
preprocessing of the raw image is necessary. Usually, image features of the object are
extracted by preprocessing. A few studies based on stochastic models of the
two-dimensional observation can be found (e.g., [1]), but most visual servoing schemes use
image features as the observation. An image feature is any structural feature that can
be extracted from an image (e.g., an edge or a corner). Typically, an image feature will
correspond to the projection of a physical feature of the object [27]. The robot is controlled
on the basis of the image features, and further image processing (e.g., image understanding
or recognition) is omitted.
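To make the dimensionality reduction concrete, the following sketch extracts a simple image feature, the centroid of bright pixels, from a synthetic 512 × 512 gray-level image. This is only an illustrative example (the function name, threshold, and synthetic blob are assumptions, not from the text): the roughly 260,000 raw measurements are collapsed into a two-element feature vector suitable for a feedback loop.

```python
import numpy as np

def centroid_feature(image, threshold=128):
    """Reduce a gray-level image to a 2-D (row, col) centroid feature.

    Hypothetical preprocessing step: pixels at or above `threshold` are
    treated as belonging to the object, and their mean coordinates form
    the feature vector. Returns None if no pixel exceeds the threshold.
    """
    mask = image >= threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return np.array([rows.mean(), cols.mean()])

# Synthetic 512 x 512 image with one bright rectangular "object".
img = np.zeros((512, 512), dtype=np.uint8)
img[100:120, 200:240] = 255
feature = centroid_feature(img)  # approximately [109.5, 219.5]
```

In a visual feedback loop, the controller would act on the error between such a feature vector and its desired value rather than on the raw pixel array.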
There are two approaches to visual feedback control: position based and feature based [46].
With position-based schemes, the object's position and orientation relative to the camera are
computed using photogrammetric [12, 47, 49], stereo [1, 38, 39], or "depth from motion"
techniques [28, 35]. Because the object's position is available as the output of the image
processing stage, a conventional position controller can be used to control the manipulator.
However, a geometric model of the object is required and the camera-robot
[Figure: Visual tracking of an object by a robot.]
