The navigation stack needs to know the position of the sensors, wheels, and joints.
To manage these relationships, ROS provides the Transform Frames (tf) library, which maintains a transform tree. You could compute the transforms by hand, but with many frames the bookkeeping quickly becomes complicated and error-prone. Using tf, we can add more sensors and parts to the robot, and tf will handle all the relations between frames for us.
If we mount the laser 10 cm behind and 20 cm above the origin of the
base_link frame, we need to add a new frame to the transform tree with these offsets.
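As a sketch, such a fixed frame can be attached to the tree with the stock `static_transform_publisher` tool from the tf package; the child frame name `base_laser` here is an assumption, not mandated by the text:

```xml
<!-- Launch file fragment (hypothetical frame name base_laser).
     Argument order: x y z yaw pitch roll parent_frame child_frame period_ms.
     The laser sits 10 cm behind (-x) and 20 cm above (+z) base_link. -->
<node pkg="tf" type="static_transform_publisher" name="laser_broadcaster"
      args="-0.10 0 0.20 0 0 0 base_link base_laser 100" />
```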
Once the frame is inserted, we can easily query the position of the laser relative to
base_link or to the wheels. The only thing ...
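To see what tf automates, the by-hand computation for this single frame can be sketched as follows (plain Python with NumPy, no ROS; the offsets match the laser example above, and the function name is illustrative only):

```python
import numpy as np

# Homogeneous transform from base_link to the laser frame:
# the laser sits 10 cm behind (-x) and 20 cm above (+z) the
# base_link origin, with no rotation (identity rotation block).
T_base_laser = np.array([
    [1.0, 0.0, 0.0, -0.10],
    [0.0, 1.0, 0.0,  0.00],
    [0.0, 0.0, 1.0,  0.20],
    [0.0, 0.0, 0.0,  1.00],
])

def laser_to_base(point):
    """Express a point measured in the laser frame in base_link coordinates."""
    p = np.array([point[0], point[1], point[2], 1.0])
    return (T_base_laser @ p)[:3]

# An obstacle seen 1 m straight ahead of the laser appears 0.9 m ahead
# of and 0.2 m above the base_link origin.
print(laser_to_base((1.0, 0.0, 0.0)))  # -> [0.9  0.   0.2]
```

With one fixed frame this is trivial; the point of tf is that it chains such transforms across the whole tree and interpolates them in time, which is exactly the part that gets messy by hand.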