Given two images of the same scene, I need to know the photometric transformations between these two images. The images are taken by cameras mounted on a robot.
It is not straightforward to obtain accurate results here.
The first task is to establish a proper geometric calibration model that relates the cameras' location and orientation in space to their optical characteristics and the resulting visibility of the scene. Once such a calibration is established, a triangulation approach can be used to establish a valid photometric relationship among corresponding image points (pixels) of the two images.
Proper, accurate calibration is the real challenge in this problem and is worth investigating.
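As a minimal sketch of the second step (not taken from the answer above): once calibration and triangulation have given you corresponding pixel pairs between the two images, a simple global photometric model such as gain/bias, I2 ≈ a·I1 + b, can be fitted by least squares. The function name `estimate_gain_bias` and the synthetic intensities are my own illustrative assumptions.

```python
import numpy as np

def estimate_gain_bias(I1, I2):
    """Fit I2 ≈ a * I1 + b by least squares, where I1 and I2 are
    1-D arrays of intensities sampled at corresponding points found
    via the calibrated geometry (hypothetical helper, for illustration)."""
    A = np.stack([I1, np.ones_like(I1)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, I2, rcond=None)
    return a, b

# Synthetic check: image 2 is a contrast-scaled, brightened copy of image 1.
rng = np.random.default_rng(0)
I1 = rng.uniform(0, 255, size=1000)
I2 = 1.2 * I1 + 10.0
a, b = estimate_gain_bias(I1, I2)
print(a, b)  # recovers roughly a ≈ 1.2, b ≈ 10.0 on this noiseless data
```

More complex photometric changes (vignetting, per-channel response curves) would need a richer model, but the same idea applies: fit the model over the geometrically matched pixel pairs.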
Hi, I remember adding my answer to this question yesterday, but I cannot see it published, so I am posting it again.
In order to get accurate results, the first task is to establish a proper calibration that accounts for the cameras' location and orientation in space, the camera optics and field of view, and the imaged scene. Once such a calibration is established, the photometric-transformation problem is reduced and simplified so that it can be solved via triangulation (trigonometry). This also helps to extract valid 3D information from the image scenes.
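To illustrate the triangulation step mentioned above, here is a minimal linear (DLT) triangulation sketch, assuming the calibration has already produced the 3×4 projection matrices of the two cameras. The toy camera matrices and the function name `triangulate` are my own assumptions, not from the answer.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point that projects
    to pixel x1 under camera matrix P1 and to pixel x2 under P2.
    P1, P2 are 3x4 projection matrices obtained from calibration."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]               # null vector of A (homogeneous 3D point)
    return X[:3] / X[3]      # dehomogenize

# Toy setup: identity-pose camera and a second camera shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])

# Project the point into both cameras, then triangulate it back.
h1 = P1 @ np.append(X_true, 1.0); x1 = h1[:2] / h1[2]
h2 = P2 @ np.append(X_true, 1.0); x2 = h2[:2] / h2[2]
X = triangulate(P1, P2, x1, x2)
print(X)  # recovers X_true on this exact, noise-free data
```

With real images the projections are noisy, so the recovered point minimizes an algebraic error; a nonlinear refinement step is commonly added on top of this linear solution.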
I had the same problem with displaying answers, but it is now resolved. Thank you. I am trying to understand your answer and will get back to you with further questions.