I want to run practical experiments for visual SLAM, and I am looking for a simple way to obtain ground truth other than using a laser range finder or another synchronized camera.
If you have access to the environment, place some easy-to-identify extra landmarks (bar codes, signs, numbers) along the robot's path,
e.g. put bar codes at precisely known locations, so that detecting them gives you an approximation of the "ground truth" values.
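A minimal sketch of that idea, using ArUco markers as a convenient stand-in for the bar codes and OpenCV's `cv2.aruco` module (OpenCV >= 4.7); the marker IDs, sizes and world coordinates below are made-up examples, and the camera is assumed to be calibrated:

```python
import cv2
import numpy as np

# World coordinates of each marker's four corners (top-left, top-right,
# bottom-right, bottom-left), measured by hand when the markers were placed.
# These numbers are hypothetical; one entry per marker in the environment.
MARKER_WORLD_CORNERS = {
    7: np.array([[1.00, 2.00, 0.50], [1.15, 2.00, 0.50],
                 [1.15, 2.00, 0.35], [1.00, 2.00, 0.35]], dtype=np.float32),
}

def ground_truth_pose(frame, camera_matrix, dist_coeffs):
    """Return the world-to-camera transform (rvec, tvec) from the markers,
    or None if no known marker is visible. Invert it to get the camera
    pose in world coordinates."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detector = cv2.aruco.ArucoDetector(
        cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
        cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return None

    # Collect 3D-2D correspondences from every marker with a surveyed location.
    obj_pts, img_pts = [], []
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        if int(marker_id) in MARKER_WORLD_CORNERS:
            obj_pts.append(MARKER_WORLD_CORNERS[int(marker_id)])
            img_pts.append(marker_corners.reshape(4, 2))
    if not obj_pts:
        return None

    ok, rvec, tvec = cv2.solvePnP(
        np.concatenate(obj_pts).astype(np.float32),
        np.concatenate(img_pts).astype(np.float32),
        camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```

Logging this pose whenever a marker is in view gives you a sparse but independent trajectory to compare your SLAM estimate against.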
If you are concerned that your visual SLAM is influenced by the extra landmarks (bar codes), just erase them and substitute random grey patterns before passing the camera signal to the visual SLAM algorithm.
Perhaps you can detect and filter out the bar codes by printing them in an unusual colour that does not appear in the original scenery (I have to confess, I have never tried that).
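A rough sketch of that masking step, reusing an ArUco detector like the one constructed in the previous snippet (again an assumption, not a requirement): detect the markers and paint random grey noise over them before the frame reaches the SLAM front end.

```python
import cv2
import numpy as np

def mask_markers(frame, detector, margin=4):
    """Return a copy of the frame with detected markers painted over."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    out = frame.copy()
    if ids is None:
        return out
    for marker_corners in corners:
        poly = marker_corners.reshape(4, 2).astype(np.int32)
        # Filled mask of the marker region, slightly dilated so the marker
        # border does not survive as a strong feature.
        mask = np.zeros(frame.shape[:2], dtype=np.uint8)
        cv2.fillConvexPoly(mask, poly, 255)
        mask = cv2.dilate(mask, np.ones((2 * margin + 1, 2 * margin + 1), np.uint8))
        # Replace the masked pixels with random grey values.
        noise = np.random.randint(60, 180, frame.shape[:2], dtype=np.uint8)
        noise = cv2.cvtColor(noise, cv2.COLOR_GRAY2BGR)
        out[mask > 0] = noise[mask > 0]
    return out
```

Whether grey noise is better than, say, inpainting the region depends on how sensitive your feature detector is to the artificial texture; that is worth checking on a few frames first.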
Thanks a lot for your answer, but I think it is difficult to carry out, because we do not have the 3D positions of the features at all times; we only have estimates.