When I(x,y) represents a pixel value, (x,y) denotes the location of that pixel in the image. If you are working with an object in an image, you can use x and y as Cartesian coordinates to describe its position. In MATLAB, pixel locations correspond to the fourth quadrant of the X-Y graph, since row indices increase downward. If you are working with an n-band satellite image or a map-based image, a proper mapping between geographical coordinates and pixel locations will give each pixel location a geographical meaning.
@Durai Arun, firstly thanks for the quick reply. Well, I am not working on an object in an image. In your answer you wrote "an image by its position we can use x and y as Cartesian coordinates for its location description" — how is it possible that a pixel location can be the same as lat/long geographical coordinates? Let me describe the scenario I am trying to solve again:
1. If there is any 2-dimensional JPEG image and I know the coordinates from where the image was taken, can I calculate the geographical lat/long region it covers? Here, "region" means the range from the upper-left to the bottom-right of the image.
2. How can I calculate the distance of an object from the camera when I only have a 2-dimensional image?
3. I am not working with an n-band satellite image or a map-based image.
4. If you find any references relevant to my context, please share them.
1. Let us say you have a geographical image (an aerial image) of Bangalore city, with the city's centre at the centre of the image, the northern part of the city covered by the upper rows and the southern part by the lower rows, at a resolution of 1024*748. We import that image as a 2-dimensional array I, which will have 1024 rows and 748 columns. Let us assign m as the number of rows and n as the number of columns. To visit each pixel, we take the pixel location as x for the row and y for the column. I would like to show the difference between a pixel value and a pixel location.
For example, let the following array be an image A with resolution 3*3:
A = [48, 49, 15;
     78, 86, 49;
     11, 16, 46]
The pixel value at pixel location x=2, y=2, i.e. A(2,2), is 86.
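The same example can be reproduced in a few lines of Python with NumPy; note that NumPy indexing is 0-based, so MATLAB's A(2,2) becomes A[1, 1]:

```python
import numpy as np

# The same 3*3 "image" as above
A = np.array([[48, 49, 15],
              [78, 86, 49],
              [11, 16, 46]])

# MATLAB indexing is 1-based, NumPy is 0-based:
# the pixel at location x=2, y=2 (MATLAB A(2,2)) is A[1, 1] here.
print(A[1, 1])  # prints 86
```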
That is how the resolution of an image plays a role in locating a pixel. To conclude: the pixel value is different from the pixel location. In your case, the area represented by each pixel and the vertical height from which the image was captured play vital roles. Coming back to the aerial image of Bangalore city: each pixel in that image holds the visual information of a particular area of the city, and the pixel location corresponds to the range of geographical latitudes and longitudes comprising that area. The greater the optical distance, the larger the area each pixel covers.
In another case, if the optical distance and location are the same but the resolution increases, say to 1920*1080, then more pixels represent the same location than at 1024*748; i.e. a larger resolution means a finer representation. In this context, I am referring to latitudes and longitudes as the locations of pixels in the image.
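The "area represented by a pixel" idea can be made concrete with the standard ground-sample-distance formula for a nadir-looking pinhole camera. This is only a sketch; the altitude, pixel pitch, and focal length below are made-up illustration values, not figures from the question:

```python
def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """Ground distance (metres) covered by one pixel for a
    nadir-looking pinhole camera.

    Higher altitude -> each pixel covers more ground;
    longer focal length (more zoom) -> each pixel covers less ground.
    """
    return altitude_m * pixel_pitch_m / focal_length_m

# Hypothetical numbers: 1000 m altitude, 5 micrometre pixels, 50 mm lens
gsd = ground_sample_distance(1000.0, 5e-6, 0.05)
print(gsd)  # ~0.1 m of ground per pixel
```

This also shows why resolution matters: at the same altitude and lens, halving the pixel pitch (a finer sensor) halves the ground area per pixel.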
Just think of both as two different 2D arrays, one representing latitudes and longitudes and the other representing the very same information in the image's rows and columns. All you need is to map one onto the other to do any operation, such as measuring the distance from one point to another or the area of a region; for that you need additional information, such as what I have mentioned above.
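If the four corner coordinates of a north-up image are known, the mapping between the two arrays is just linear interpolation. This is only a sketch: the Bangalore-area corner coordinates below are illustrative guesses, not real georeferencing data, and the mapping ignores lens and map-projection distortion:

```python
def pixel_to_latlon(row, col, n_rows, n_cols,
                    top_lat, left_lon, bottom_lat, right_lon):
    """Linearly map a pixel location (row, col) to (lat, lon).

    Assumes a north-up image whose corner coordinates are known,
    with no lens or map-projection distortion.
    """
    lat = top_lat + row / (n_rows - 1) * (bottom_lat - top_lat)
    lon = left_lon + col / (n_cols - 1) * (right_lon - left_lon)
    return lat, lon

# Illustrative bounding box around Bangalore, 1024 rows x 748 columns
print(pixel_to_latlon(0, 0, 1024, 748, 13.10, 77.45, 12.85, 77.75))
# top-left pixel -> (13.1, 77.45)
```

The bottom-right pixel (row 1023, column 747) maps to the other corner, and every pixel in between gets a proportional latitude and longitude.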
2. If we keep a tennis ball near a camera, keep a football far away from the camera, and take a photo, the tennis ball will appear larger than the football, but the picture alone does not tell us that. Since we have prior knowledge of the sizes of both balls, we conclude that the tennis ball is near and the football is far away. The same applies here: without a prior reference, it is hard to calculate the optical distance from which the image was captured. If you are not convinced, you can look into the literature on calculating optical distance from the size of an object in an image.
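The prior-knowledge argument above is exactly what the pinhole-camera model formalises: if the real size of the object and the camera's focal length (expressed in pixels) are known, the distance follows from the object's size in the image. The ball size and focal length below are assumed illustration values:

```python
def distance_from_size(focal_length_px, real_size_m, size_in_pixels):
    """Pinhole-camera estimate of object distance (metres).

    An object of known real size spanning `size_in_pixels` in the image
    lies at approximately focal_length_px * real_size_m / size_in_pixels.
    """
    return focal_length_px * real_size_m / size_in_pixels

# A tennis ball is ~0.067 m across; if it spans 50 px in an image
# taken with a (hypothetical) 1000 px focal length:
print(distance_from_size(1000, 0.067, 50))  # ~1.34 m away
```

This is why, without the "0.067 m" prior, the problem is underdetermined: a small ball nearby and a large ball far away can span exactly the same number of pixels.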
Thank you for your well-explained reply. But my point is still not clear; since I am working with non-aerial images, my questions remain the same.
1. Have you heard of robotic vision, and how it converts 2-dimensional images into more meaningful parameters of the real world?
2. In the real world, prior information is not available, so this will be difficult to implement.
I would still like to stay in touch with you; it would be better if we discussed this in real time. Please mail me your contact number at [email protected] along with a time when you are comfortable.