Say we have microscope images obtained at various known magnifications, and I'd like the model to be able to tell that a big feature in a high-magnification image is the same as a small feature in a low-magnification image. In addition, after training on 50X and 100X magnification images, I'd like the model to also produce good predictions on a 70X magnification image. Intuitively, I'd think that feeding the model the concept of 'pixel size' is the straightforward method. Is there any way to realise this? Or is it a readily available parameter in image-related machine learning models? Or perhaps there is a better way to achieve the task?
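
To make the idea concrete, below is a minimal sketch of one possible way to inject 'pixel size' into the pipeline: resample every image to a common physical scale (µm per pixel) before it reaches the model, so a feature of a given physical size occupies roughly the same number of pixels whether it came from a 50X, 70X, or 100X image. The calibration values, target scale, and function names here are made up for illustration and would need to be replaced with the microscope's actual calibration; it assumes Pillow and NumPy are available.

```python
# Sketch: resample images of known magnification to a common um-per-pixel scale.
# Calibration values below are hypothetical placeholders.

import numpy as np
from PIL import Image

# Hypothetical calibration: physical pixel size (um/pixel) at each magnification.
PIXEL_SIZE_UM = {50: 0.20, 70: 0.143, 100: 0.10}

TARGET_UM_PER_PIXEL = 0.20  # common scale every image is resampled to


def to_common_scale(img: Image.Image, magnification: int) -> Image.Image:
    """Resample an image so that one pixel corresponds to TARGET_UM_PER_PIXEL."""
    um_per_pixel = PIXEL_SIZE_UM[magnification]
    scale = um_per_pixel / TARGET_UM_PER_PIXEL
    new_size = (round(img.width * scale), round(img.height * scale))
    return img.resize(new_size, resample=Image.BILINEAR)


if __name__ == "__main__":
    # Example: a 100X image is shrunk 2x so its features match a 50X image pixel-for-pixel.
    img_100x = Image.fromarray(np.random.randint(0, 255, (512, 512), dtype=np.uint8))
    print(to_common_scale(img_100x, 100).size)  # -> (256, 256)
```

An alternative (or complement) is to pass the pixel size itself as an extra scalar input alongside the image, but physically rescaling the inputs is the simpler way to let a 70X image be handled consistently with the 50X and 100X training data.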
