Vyacheslav - we often take images and video from UAVs and AUVs in the early stages of computer vision design to use as test images. What we most often see are issues with lighting, reflections, turbidity in the water for AUVs, and weather and air-quality issues in the operating environment that may or may not show up in test images, so capturing imagery in the real operating environment is much better than relying on lab test images alone. The cameras can be well characterized in the lab, but characterizing the operating environment allows for more intelligent filtering, enhancement, threshold selection, and selection of the algorithms used for segmentation and recognition. Further, it may show that high-resolution images (e.g. 1080p or higher) are not really needed and that low-resolution 480p with high frame rates is better; it can also help with frame-rate requirements and optics selection. Just our experience - hope that helps.
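One lightweight way to put that environment characterization into practice is to compute a few scene statistics from field imagery and let them drive threshold selection. This is a minimal sketch only: the specific metrics, the dark-channel haze proxy, and the contrast cut-off value are illustrative assumptions, not a fixed recipe.

```python
import cv2
import numpy as np

def characterize_frame(path):
    """Compute simple scene statistics from a field image to guide
    filtering and threshold selection (illustrative metrics only)."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    mean_lum = gray.mean()   # overall brightness
    contrast = gray.std()    # global contrast (drops in haze/turbidity)
    # Dark-channel minimum as a rough haze/turbidity proxy:
    dark = cv2.erode(img.min(axis=2), np.ones((15, 15), np.uint8))
    haze_proxy = dark.mean()

    return mean_lum, contrast, haze_proxy

def pick_threshold(gray, contrast):
    """In low-contrast (turbid/hazy) scenes a global Otsu threshold
    often fails, so fall back to a local adaptive threshold."""
    if contrast > 30:  # assumed cut-off; tune per environment
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    else:
        binary = cv2.adaptiveThreshold(gray, 255,
                                       cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY, 31, 2)
    return binary
```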
If you are interested in values of luminance - that is, if the pixel channel intensity has to mean something in your application - consider that images are not calibrated and are not a faithful reproduction of the real luminance (or radiance) of the scene. Moreover, as the dynamic range increases, the optical noise increases as well. And multiple exposures do not solve the problem; on the contrary, they add more noise.
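To make that point concrete: typical 8-bit pixel values are gamma-encoded, so even before any radiometric calibration they are not proportional to scene radiance. Here is a minimal sketch of undoing the standard sRGB transfer curve; note that it assumes the camera actually wrote sRGB, which is itself an assumption, and true calibration would need the camera's measured response.

```python
import numpy as np

def srgb_to_linear(srgb_u8):
    """Invert the standard sRGB transfer curve. The result is linear
    in the encoded signal, but still NOT calibrated radiance: sensor
    response, exposure, and white balance remain unaccounted for."""
    c = srgb_u8.astype(np.float64) / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

# Example: a pixel at half the 8-bit range carries far less than
# half the linear light.
print(srgb_to_linear(np.array([128])))  # ~0.216, not 0.5
```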
Generally, I use test images with known or simulated values (lighting etc.) to formulate mathematical models for analysing real images. So basically test images are the first step in experimenting with and testing new theories and analyses that could potentially be applied to real images. For example, in the case of improving visibility in underwater images, you can add a known set of attenuations to a couple of natural images (i.e. your test images) and then examine how well different enhancement algorithms perform on them, before choosing one to apply to the actual underwater images (i.e. the real images). Hope this is of some help.
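As a rough illustration of that simulation step, you can apply wavelength-dependent Beer-Lambert attenuation to a clean test image and then score each enhancement algorithm against the known original. This is a sketch only: the coefficients are made-up placeholders (real values depend on water type and range), and the model below keeps just the direct attenuation term, ignoring back-scatter.

```python
import numpy as np

def simulate_underwater(img_rgb, depth_m, beta=(0.35, 0.07, 0.05)):
    """Attenuate each channel as exp(-beta * depth) (Beer-Lambert).
    beta = (R, G, B) coefficients per metre; red is absorbed fastest
    in seawater, hence the largest red coefficient. Values are
    illustrative placeholders; back-scatter is ignored."""
    img = img_rgb.astype(np.float64) / 255.0
    t = np.exp(-np.asarray(beta) * depth_m)   # per-channel transmission
    degraded = img * t                        # direct-signal attenuation
    return (degraded * 255).clip(0, 255).astype(np.uint8)

def psnr(reference, restored):
    """Score a restoration against the known clean test image."""
    mse = np.mean((reference.astype(np.float64)
                   - restored.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)
```

Because the clean image is known, each candidate enhancement algorithm can be ranked by how closely its output recovers it (e.g. by PSNR as above) before committing to one for the real underwater footage.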
Good question, Vyacheslav. Most of the time the model changes continuously between iterations of testing on the test and real images until a particular model is perfected. You may solve one issue on the test images with a particular model, but its interaction with another element in the real images may cause a different result. You then have to return to your test images, simulate the new scenario, and rerun the various tests to refine your existing model, then examine whether it works on the real images. Is this of any help?
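That iterate-simulate-validate loop might look something like this in outline. It is purely schematic: the candidate list, the evaluate_on function, and the acceptance score are all hypothetical placeholders standing in for whatever models and metrics a given project uses.

```python
def refine(candidates, simulated_set, real_set, evaluate_on,
           target_score=0.9, max_rounds=10):
    """Iterate between simulated test images and real images:
    tune on the simulated set, then check whether the winner
    still holds up on real data (schematic workflow only)."""
    best = candidates[0]
    for round_num in range(max_rounds):
        # Pick the model that performs best on the simulated scenarios.
        best = max(candidates, key=lambda m: evaluate_on(m, simulated_set))
        if evaluate_on(best, real_set) >= target_score:
            return best  # model holds up on real images; stop iterating
        # Otherwise, extend the simulation to cover the failure mode
        # observed on real data, and test again (left abstract here).
        simulated_set = simulated_set + [("new_scenario", round_num)]
    return best
```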