The entropy or average information of an image can be determined approximately from the histogram of the image. The histogram shows the different grey level probabilities in the image. The entropy is useful, for example, for automatic image focusing: as the state of focusing of an image varies, so does its entropy. We present a method for measuring the entropy quite quickly and with reasonable accuracy. Our method is fast for two reasons: we have derived empirically a simple approximation formula for the entropy of images, and we were able to utilize an existing TV optical nonlinear component which performed the analogue calculation of our formula at TV speed. Our method can be applied to real-time focusing of two- or three-dimensional images in a TV system, for example in microscopy.
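As a concrete illustration of the histogram-based estimate the abstract describes, here is a minimal Python sketch of entropy as a focus score over a stack of frames. The function names are mine, this is not the paper's analogue TV method, and whether best focus corresponds to an entropy maximum or minimum depends on the scene; the argmax below is an assumption for illustration.

```python
import numpy as np

def histogram_entropy(image, levels=256):
    # Estimate Shannon entropy (bits) from the grey-level histogram.
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()        # grey-level probabilities
    p = p[p > 0]                 # drop empty bins (0 log 0 := 0)
    return -np.sum(p * np.log2(p))

def best_focus_index(frames):
    # Score every frame of a focus sweep and pick the extremal one.
    entropies = [histogram_entropy(f) for f in frames]
    return int(np.argmax(entropies))
```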
I am attaching a research paper; hopefully it will help.
The information entropy, being an excellent measure of the unpredictability of an image encryption algorithm, is computed via:

H(m) = -\sum_{i=0}^{255} p(m_i) \log_2 p(m_i)

where p(m_i) is the probability of occurrence of the symbol m_i, and m_i is a grey value of the image in the range 0-255. For a cipher image, the entropy value should ideally be 8. Therefore, any efficient image encryption algorithm should produce a cipher image with an entropy close to 8 (Belazi, Abd El-Latif, Rhouma, & Belghith, 2015; Mannai, Bechikh, Hermassi, Rhouma, & Belghith, 2015).
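A direct implementation of this formula might look as follows; the uniformly random array is only a stand-in for a real cipher image:

```python
import numpy as np

def image_entropy_bits(img):
    # H(m) = -sum_i p(m_i) * log2 p(m_i) over the 256 grey levels.
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]                         # 0 log 0 is taken as 0
    return -(p * np.log2(p)).sum()

# Stand-in for a cipher image: for an ideal cipher every grey level is
# equally likely, so the entropy approaches the maximum of 8 bits.
cipher = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)
print(image_entropy_bits(cipher))        # approximately 7.999
```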
No. The entropy of an image does not depend on the image size but on the variability of the pixel values. For example, two binary sources S1 = {0,1,1} and S2 = {0,0,1,1,1,1}, which have the same probability distribution [P(1) = 2/3, P(0) = 1/3], have the same entropy even though they differ in length.
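Plugging the shared distribution into Shannon's formula makes this concrete:

H(S_1) = H(S_2) = -\tfrac{1}{3}\log_2\tfrac{1}{3} - \tfrac{2}{3}\log_2\tfrac{2}{3} \approx 0.918 bits per symbol.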
However, the complexity of your image or data source depends on both its length and its entropy; this combined notion is captured by the Kolmogorov complexity.
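One rough way to see this numerically is to use compressed length as a computable upper-bound proxy for Kolmogorov complexity; the sequences below are illustrative, built to match S1 and S2 in distribution but not in length:

```python
import zlib, random

random.seed(0)
# Two sources with the same symbol distribution P(0) = 1/3, P(1) = 2/3
# but different lengths, analogous to S1 and S2 above:
s1 = bytes(random.choice((0, 1, 1)) for _ in range(3000))
s2 = bytes(random.choice((0, 1, 1)) for _ in range(6000))

# Equal per-symbol entropy, yet the longer source needs a longer description:
print(len(zlib.compress(s1)), len(zlib.compress(s2)))
```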
As is known, given a discrete random variable with a known probability distribution, the entropy is defined on that distribution and is independent of the values the variable takes. Therefore, more than one entropy can be defined for an image, depending on which probability distribution is taken into account.
Frequently, a probability distribution is defined on the pixels of the image and the entropy of that distribution is calculated, for example for image segmentation, as you can see in: (Segmentation of Images Based on Pixel Entropy) https://knepublishing.com/index.php/KnE-Engineering/article/view/1456/3516 .
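A slow but dependency-free sketch of this idea is a per-pixel local entropy map; the window size and the thresholding rule below are assumptions for illustration, and the cited paper's exact procedure may differ:

```python
import numpy as np

def local_entropy(img, win=9, levels=256):
    # Entropy of the grey-level distribution in a win x win window
    # around each pixel; high-entropy regions mark textured areas.
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            p = np.bincount(patch.ravel(), minlength=levels) / patch.size
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

# A simple entropy-based segmentation: label pixels above the mean entropy.
# mask = local_entropy(img) > local_entropy(img).mean()
```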
On the other hand, if the Voronoi tessellation of an image and its associated Delaunay triangulation are computed (Spatial Tessellations: Concepts and Applications of Voronoi Diagrams, http://library1.org/_ads/6AA6C35A5537282954D6A74C6D60B770), which is a powerful tool of computational geometry for the investigation of spatial patterns, then there are diverse characteristics of interest in the resulting polygons and triangles, such as the number of edges, the edge lengths and areas of the polygons, their perimeters, and the maximum, minimum, and average angles of the Delaunay triangles. Each of these characteristics can be associated with a probability distribution on the obtained discretization, and an entropy can be calculated for each distribution.
The interpretation of the entropy value depends on the chosen characteristic. For example, see: (Characterization of Self-Assembled 2D Patterns with Voronoi Entropy) http://www.leonid-dombrovsky.com/New%20Page%202.files/Entropy-2018(Voronoi%20entropy_Review).pdf
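As a sketch, the Voronoi entropy of a planar point pattern can be computed with scipy; the helper name is mine, and the comparison value of about 1.71 for a fully random pattern is the one reported in the literature cited above:

```python
import numpy as np
from scipy.spatial import Voronoi
from collections import Counter

def voronoi_entropy(points):
    # S = -sum_n P_n ln P_n, where P_n is the fraction of bounded
    # Voronoi cells having n edges.
    vor = Voronoi(points)
    sides = [len(vor.regions[r]) for r in vor.point_region
             if vor.regions[r] and -1 not in vor.regions[r]]
    counts = Counter(sides)
    total = sum(counts.values())
    return -sum((c / total) * np.log(c / total) for c in counts.values())

pts = np.random.rand(500, 2)      # random planar point pattern
print(voronoi_entropy(pts))       # close to 1.71 for a Poisson pattern
```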
Theoretically, the entropy of the pixels of an image does not depend on the size of the image; it depends on the level of randomness of those pixels. In practice, entropy estimation is an active research problem: there is no unbiased estimator, there are numerous estimators, the criteria for choosing among them are not precise, and both the distributions of these estimators and the value of the entropy estimate do depend on the size of the image.
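A small simulation illustrates this size dependence for the common plug-in (maximum-likelihood) estimator, together with the Miller-Madow bias correction, which is only one of the many estimators referred to above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Uniform 8-bit source: the true entropy is exactly 8 bits.
for n in (10**3, 10**4, 10**5, 10**6):          # "image sizes" in pixels
    img = rng.integers(0, 256, size=n)
    p = np.bincount(img, minlength=256) / n
    p = p[p > 0]
    h_ml = -(p * np.log2(p)).sum()              # plug-in estimate (biased low)
    h_mm = h_ml + (len(p) - 1) / (2 * n * np.log(2))  # Miller-Madow correction
    print(n, round(h_ml, 4), round(h_mm, 4))    # both approach 8 as n grows
```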