A digital image is a collection of sensor values arranged in a grid. Each sensor has a field of view in the physical world and generates a value based on the phenomenon it measures.
This phenomenon can be visible light (conventional digital images), infrared light (infrared images), photons from positron emission (PET scans), secondary electrons (scanning electron microscopy), or scattered microwaves (synthetic aperture radar, SAR).
In conventional image processing, these values are represented as a matrix. For inverse problems, we flatten this matrix into a vector and then solve using the normal equations or regularization-based methods.
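As a minimal sketch of the vectorize-then-regularize idea, the following treats a small 1D signal as a stand-in for a flattened image, blurs it with an assumed local-averaging operator `A` (all names and values here are illustrative, not from the text), and recovers it via the Tikhonov-regularized normal equations (A^T A + λI) x = A^T b:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50
x_true = np.zeros(n)
x_true[20:30] = 1.0  # a simple box signal standing in for a flattened image

# Assumed forward operator: each row of A averages a pixel and its neighbors.
A = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - 1), min(n, i + 2)):
        A[i, j] = 1.0 / 3.0

b = A @ x_true + 0.01 * rng.standard_normal(n)  # noisy blurred observation

# Tikhonov-regularized normal equations: (A^T A + lam * I) x = A^T b
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

The regularization term `lam * np.eye(n)` keeps the system well conditioned even when `A.T @ A` is nearly singular, which is the usual situation for blur operators.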
Some PDE-based methods process images with kernels via 2D convolution. Sometimes we instead treat an image as a point cloud and construct a graph from some distance measure, then study the Laplacians (as in diffusion maps) or the Hamiltonian operators of these graphs. The eigenvalues of these operators lead to the notions of heat kernel signatures and wave kernel signatures.
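The point-cloud-to-graph pipeline can be sketched end to end. In this illustrative example (the tiny image, the Gaussian affinity, and the intensity scaling are all assumptions, not from the text), each pixel becomes a 3D point (x, y, intensity), a Gaussian-weighted graph is built from pairwise distances, and the heat kernel signature HKS(i, t) = Σ_k exp(−λ_k t) φ_k(i)² is computed from the eigenpairs of the graph Laplacian:

```python
import numpy as np

img = np.array([[0.0, 0.1, 0.9],
                [0.1, 0.2, 1.0],
                [0.0, 0.1, 0.8]])

h, w = img.shape
ys, xs = np.mgrid[0:h, 0:w]
# Each pixel becomes a 3D point: grid position plus (scaled) intensity.
pts = np.stack([xs.ravel(), ys.ravel(), 5.0 * img.ravel()], axis=1)

# Pairwise squared distances and a Gaussian affinity (weight) matrix.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = D - W.
L = np.diag(W.sum(1)) - W
lams, phis = np.linalg.eigh(L)  # eigenpairs of the symmetric Laplacian

# Heat kernel signature at each pixel for a few diffusion times t:
#   HKS(i, t) = sum_k exp(-lam_k * t) * phi_k(i)**2
ts = np.array([0.1, 1.0, 10.0])
hks = (np.exp(-lams[None, :, None] * ts[None, None, :])
       * (phis[:, :, None] ** 2)).sum(axis=1)  # shape (n_pixels, n_times)
```

Small diffusion times probe local graph structure around each pixel, while large times capture global structure, which is what makes the HKS a multiscale descriptor.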
So we have these alternative views of an image, each chosen to suit the application.
What is the most fundamental view of an image that can bring all these different notions together?
Does such a notion exist in the literature?