FIGURE 5.2
A block diagram showing the planning and control structure.
The planning and control structure employed in this chapter is shown in Figure 5.2. The sensory data are fed back to a planner that generates a desired trajectory in real time for the robot, instead of being fed directly back to the controller as in most previous work in robotics. Figure 5.3 shows how control is implemented using information from the vision and force-torque sensors.

FIGURE 5.3
Block diagram of the control system with vision and force-torque sensor.
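To make this planner-in-the-loop structure concrete, here is a minimal numerical sketch (all names, gains, and dynamics are illustrative, not from the chapter): sensor measurements are fed to a planner that regenerates the reference each cycle, and a low-level PD controller tracks that reference.

```python
import numpy as np

def sense_part_position(t):
    """Stand-in for the vision system: a part on a rotating conveyor."""
    omega, r = 0.5, 0.3  # conveyor speed (rad/s) and part radius (m)
    return r * np.array([np.cos(omega * t), np.sin(omega * t)])

def plan(x_part, x_ee, horizon=1.0):
    """Trivial real-time planner: steer the end effector toward the part."""
    v_d = (x_part - x_ee) / horizon  # desired velocity toward the part
    return x_part, v_d

# Double-integrator "robot" in the plane with a PD tracking controller.
x, v = np.zeros(2), np.zeros(2)
kp, kd, dt = 25.0, 10.0, 0.01
for k in range(500):
    t = k * dt
    x_d, v_d = plan(sense_part_position(t), x)  # planner runs inside the loop
    u = kp * (x_d - x) + kd * (v_d - v)         # controller tracks the reference
    v += u * dt
    x += v * dt

print("tracking error after 5 s:", np.linalg.norm(sense_part_position(5.0) - x))
```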
2 GRASPING
2.1 Experimental Setup
FIGURE 5.4
A typical manufacturing work cell.

We consider a manufacturing work cell as shown in Figure 5.4. The work cell is equipped with a rotating conveyor (fitted with encoders that measure the rotation angle), a robotic manipulator, and a computer vision system with a single charge-coupled device (CCD)
camera. The precise relative positions of the camera, robot, and conveyor are assumed to be
unknown. In spite of the lack of calibration data, the objective is to compute the
instantaneous position and orientation of a part placed on the turntable with respect to the
coordinate system attached to the base frame of the robot. A second objective is to feed this
information to a motion planner, which operates in either a time base or an event base. The
planner computes the relevant position, velocity, and acceleration profile that the end effector
needs to follow in order to achieve the desired task, which in our experiment is to pick up a
part from the rotating conveyor. The following assumptions are made about our work cell.
A1. The precise position and orientation of the camera with respect to the robot
coordinate frame are unknown. In addition, the precise position and orientation of the
conveyor with respect to the robot coordinate frame are unknown.
A2. The plane of the conveyor and the XY plane of the base frame of the robot are assumed to be parallel.
A3. The part has a known simple shape. In particular, we assume that observing feature
points placed on the top surface of the part enables one to determine the orientation of the
part.
A4. The entire work cell is in the field of view of the camera. The center of the conveyor and
a reference point on the conveyor are also assumed to be observed by the camera.
A5. The intrinsic parameters (the focal length, etc.) of the camera are known.
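As an illustration of the geometry these assumptions permit (a hypothetical sketch with made-up numbers, not the chapter's code): by A2 the conveyor plane and the robot's XY plane are parallel, so a part pose expressed in a frame attached to the conveyor maps into the robot base frame through a rotation about the vertical axis by the encoder angle, plus a fixed offset that is unknown by A1 and must be estimated by the vision system.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the vertical axis (sufficient here because of A2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def part_pose_in_base(p_part_conv, phi_part_conv, theta_enc,
                      p_center_base, psi_conv_base):
    """Map a part pose from the conveyor frame to the robot base frame.

    p_part_conv   -- part position in the conveyor frame at encoder angle 0
    phi_part_conv -- part yaw in the conveyor frame
    theta_enc     -- conveyor rotation angle read from the encoder
    p_center_base -- conveyor center in the base frame (unknown by A1;
                     to be estimated by the vision system)
    psi_conv_base -- fixed yaw offset between the two frames (also unknown
                     a priori and to be estimated)
    """
    R = rot_z(psi_conv_base + theta_enc)
    p_base = p_center_base + R @ p_part_conv
    phi_base = psi_conv_base + theta_enc + phi_part_conv
    return p_base, phi_base

# Example with made-up numbers.
p, phi = part_pose_in_base(
    p_part_conv=np.array([0.25, 0.0, 0.0]),    # part 25 cm from the center
    phi_part_conv=0.3,
    theta_enc=np.pi / 4,                       # encoder reading
    p_center_base=np.array([0.8, 0.1, 0.05]),  # placeholder vision estimate
    psi_conv_base=0.1,                         # placeholder estimate
)
print("part position in base frame:", p, "yaw:", phi)
```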
The technical contents of this section are now summarized. Because the camera has not been deliberately placed at any specific known position in the work cell, we propose a virtual rotation algorithm that virtually rotates the camera into a vertical position with respect to the disc conveyor. This is summarized in Section 2.2, where we also describe how the position and orientation of the part are computed. This is first done assuming that the height of the part is negligible compared with its dimensions. Subsequently, we consider parts with feature points that lie a certain distance above the disc conveyor.
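As a rough illustration of what "virtually rotating" the camera can mean (a standard rotation-homography construction under assumption A5; Section 2.2 gives the chapter's actual algorithm): for a pure rotation R of the camera with known intrinsics K, image points rewarp as x' ~ K R K^{-1} x independently of scene depth, so a tilted view of the conveyor can be remapped to the view of a virtual, vertically mounted camera.

```python
import numpy as np

def rotation_homography(K, R):
    """Pixel map induced by a pure camera rotation: x' ~ K R K^{-1} x.

    Depth-independent, so it can rewarp the tilted view of the conveyor
    into the view of a virtual, vertically mounted camera.
    """
    return K @ R @ np.linalg.inv(K)

def warp_point(H, u, v):
    """Apply the homography to a pixel and dehomogenize."""
    x = H @ np.array([u, v, 1.0])
    return x[:2] / x[2]

# Made-up example: intrinsics known by A5, and a 20-degree tilt about the
# camera x-axis separating the actual view from the virtual vertical view.
f, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f, 0.0, cx],
              [0.0, f, cy],
              [0.0, 0.0, 1.0]])
t = np.deg2rad(20.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(t), -np.sin(t)],
              [0.0, np.sin(t),  np.cos(t)]])

H = rotation_homography(K, R)
print("rewarped pixel:", warp_point(H, 400.0, 300.0))
```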
