Unlike a camera/digital image sensor, the eye is equipped with approx. 120 million rods (grayscale vision) and approx. 7 million cones (colour vision) per eye. The rods are responsible for vision under low-light conditions. Furthermore, the 'signal post-processing' generates a bandpass function via the so-called lateral inhibition of the receptive field. Humans are therefore very good at detecting edges, even at night. Cones are also called LMS receptors because they detect long, medium, and short wavelengths in the visible spectrum (red, green, blue). As we are equipped with 'only' 7 million such receptors, colour vision is comparatively low-resolution. The 'signal processing' is handled in the lateral geniculate nucleus (LGN) and the visual cortex (VC).
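The edge-enhancing effect of lateral inhibition can be sketched with a toy one-dimensional model (the inhibition strength, neighbourhood radius, and signal values below are illustrative assumptions, not physiological values):

```python
# Toy model of lateral inhibition: each "cell" responds to its own input
# minus a fraction of the average of its neighbours (a centre-surround
# receptive field). On a step edge the flat regions are suppressed and the
# response overshoots at the edge, i.e. the network acts as a band-pass
# edge enhancer (the classic Mach-band effect).

def lateral_inhibition(signal, inhibition=0.5, radius=2):
    out = []
    n = len(signal)
    for i in range(n):
        neighbours = [signal[j]
                      for j in range(max(0, i - radius), min(n, i + radius + 1))
                      if j != i]
        surround = sum(neighbours) / len(neighbours)
        out.append(signal[i] - inhibition * surround)
    return out

# A step edge: dark region followed by a bright region.
step = [10] * 8 + [20] * 8
response = lateral_inhibition(step)
```

Running this, the response dips just before the edge and overshoots just after it, which is exactly the band-pass behaviour described above.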
I agree with the comments raised above. Additionally, I think there are subjective evaluation methods that can be used to complement objective evaluation metrics. For example, if you want to assess the quality of a processed image, you may use objective measures that score the processed image by comparing it with the original. You may then ask observers to evaluate the images subjectively and compare their ratings with the results obtained using your quality metric(s). These issues are treated in some of our publications, which may be found at http://www.medinfo.cs.ucy.ac.cy/. We also have some accompanying software available for download with which subjective evaluation of the original and processed images can be made (see, for example):
C. P. Loizou et al., “Despeckle filtering software toolbox for ultrasound imaging of the common carotid artery,” Computer Methods and Programs in Biomedicine, vol. 114, pp. 109–124, January 2014.
C. P. Loizou and C. S. Pattichis, Despeckle Filtering Algorithms and Software for Ultrasound Imaging, Morgan & Claypool Publishers, CA, USA, 2008.
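As an illustration of the objective side of such an evaluation, here is a minimal sketch of one common full-reference metric, PSNR, assuming 8-bit grayscale images stored as flat lists of equal length (this is a generic example, not the toolbox's actual code):

```python
import math

def psnr(original, processed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length images."""
    mse = sum((a - b) ** 2 for a, b in zip(original, processed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Tiny illustrative "images" (flat pixel lists).
original = [100, 120, 130, 140]
noisy = [102, 118, 133, 139]
print(round(psnr(original, noisy), 2))  # prints 41.6
```

Observer (subjective) scores collected for the same image pairs can then be correlated against such objective values, as described above.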
The most straightforward way of doing this is to compare the number of rods and cones with the number of photodetectors in a CCD (or CMOS) camera sensor.
However, the human eye is far more complex than that. First, you have to take into account that a camera sensor contains only one kind of photodetector, each of the same size, at a uniform density across the whole sensor.
Neither of these things is true in the eye, where different receptors with different densities in different parts of the eye, and with different perception areas, are combined.
But the most important question is how these signals are processed to create the image.
In the eye, the output of the cone and rod cells is processed in a complex hierarchy, with specific processes activated when light stimulates specific patterns in particular ways: detecting orientations, edges, and directional movements, and integrating information from both eyes in different ways. This information is further combined with signals from other regions of the nervous system, such as those that measure the position and orientation of the body, and with other sensory information. So images as we perceive them are the result of complicated integration and analysis processes, quite unlike what we know as a digital image.
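A crude computational analogue of such orientation-selective processing can be sketched with two small oriented difference filters; this is purely illustrative and not a model of the actual cortical circuitry:

```python
# Two "cells" with oriented receptive fields: one kernel responds to
# vertical edges, the other to horizontal edges. Convolving both over the
# same image patch shows how different detectors fire for different
# orientations.

def convolve_at(img, kernel, r, c):
    """Apply a k x k kernel to the image patch whose top-left corner is (r, c)."""
    k = len(kernel)
    return sum(img[r + i][c + j] * kernel[i][j]
               for i in range(k) for j in range(k))

vertical_edge_detector = [[-1, 0, 1],
                          [-1, 0, 1],
                          [-1, 0, 1]]
horizontal_edge_detector = [[-1, -1, -1],
                            [ 0,  0,  0],
                            [ 1,  1,  1]]

# Image with a vertical edge: left half dark, right half bright.
img = [[0, 0, 1, 1] for _ in range(4)]

v = convolve_at(img, vertical_edge_detector, 0, 0)    # strong response
h = convolve_at(img, horizontal_edge_detector, 0, 0)  # no response
```

Here the vertically tuned detector fires on the vertical edge while the horizontally tuned one stays silent, analogous to the orientation selectivity mentioned above.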