I have run some experiments on this issue with the ImageNet dataset. Using the original image size (256x256 or 512x512), the models achieve good accuracy. However, using ImageNet downscaled to 32x32, the accuracy is very poor. So the image size can indeed impact the final accuracy.
It is important to mention that, in these experiments, I trained a model from scratch for each ImageNet version.
I think the low-resolution images contain more noise (e.g., artifacts introduced by rescaling) and lack the fine details that would help discriminate between the classes.
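For reference, here is a rough sketch of the kind of setup I mean, not my exact training code; the dataset path, model choice, and hyperparameters are placeholders:

```python
# Sketch: training from scratch at full vs. 32x32 resolution (illustrative
# only; the dataset path, model, and hyperparameters are placeholders).
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

def make_loader(root, size):
    # Downscaling to `size` is where high-frequency detail is lost.
    tf = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),
    ])
    ds = datasets.ImageFolder(root, transform=tf)
    return torch.utils.data.DataLoader(ds, batch_size=64, shuffle=True)

# One model trained from scratch per resolution, as described above.
for size in (256, 32):
    loader = make_loader("path/to/imagenet/train", size)  # placeholder path
    model = models.resnet18(num_classes=1000)             # fresh weights each run
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```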
Well, I think it all depends on the task you want to accomplish. Recognition can be achieved with different kinds of local descriptors such as SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features), which are quite robust to scale and rotation. You can also use supervised approaches, either classic machine learning methods or the more recent deep learning approach; there, everything depends on how you train your system (which kind of images, at what resolution, and so on).
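As a quick illustration of that scale robustness, here is a minimal OpenCV sketch that matches SIFT descriptors between an image and a half-size copy of itself (the image path is a placeholder; in recent opencv-python versions SIFT lives in the main package):

```python
# Sketch: SIFT keypoints survive moderate rescaling (illustrative; the
# image path below is a placeholder).
import cv2

img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
small = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)

sift = cv2.SIFT_create()
kp_full, desc_full = sift.detectAndCompute(img, None)
kp_small, desc_small = sift.detectAndCompute(small, None)

# Many descriptors should still match across scales, which is what makes
# SIFT useful for recognition under rescaling.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(desc_full, desc_small, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
print(f"{len(good)} matches between full-size and half-size image")
```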
I agree with the previous two members' answers to your question. Low-resolution images give only an approximation of the image contents. If the information useful for recognition lies in high-frequency gray-level variations, then using low-resolution images for classification will not help. So the size of the image has a definite impact on classification.
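To make the high-frequency point concrete, a small sketch (an assumed setup; the image path is a placeholder): round-trip an image through 32x32 and back, and the residual is exactly the gray-level detail that a 32x32 input cannot carry.

```python
# Sketch: measuring the detail discarded by downscaling (illustrative;
# the image path is a placeholder).
import cv2
import numpy as np

img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)
h, w = img.shape

# Downscale to 32x32, then upscale back to the original size. Anything
# that differs from the original is information the 32x32 version lost.
small = cv2.resize(img, (32, 32), interpolation=cv2.INTER_AREA)
back = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)

lost = img - back
print("RMS gray-level detail lost:", np.sqrt((lost ** 2).mean()))
```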