Learning OpenCV: Computer Vision with the OpenCV Library
This is a good book about computer vision. It shows how to implement not only color histogram computation and comparison but also many other computer vision algorithms using OpenCV and C/C++, and it also explains the principles behind color histogram computation and comparison.
You can do this easily in Matlab. Read a color image; it has three separate channels (red, green, and blue), and you can then compute and plot a histogram for each channel separately, for example by running imhist on each channel.
As for histogram comparison, usual choices include the L1 distance, histogram intersection, chi-square, the Hellinger distance, etc. For building the histogram, both OpenCV and Matlab are excellent tools. If you have to use C or C++, OpenCV is the better choice; otherwise, Matlab is usually quicker to code in.
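A minimal sketch of both steps with OpenCV's C++ API (assuming OpenCV 3 or later; the file names and the choice of 8 bins per channel are just placeholders, not prescribed above):

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        cv::Mat img1 = cv::imread("a.png");   // placeholder file names
        cv::Mat img2 = cv::imread("b.png");
        if (img1.empty() || img2.empty()) return 1;

        // 3-D BGR histogram: 8 bins per channel over the full 0..255 range.
        int histSize[]      = {8, 8, 8};
        float range[]       = {0, 256};
        const float* ranges[] = {range, range, range};
        int channels[]      = {0, 1, 2};

        cv::Mat hist1, hist2;
        cv::calcHist(&img1, 1, channels, cv::Mat(), hist1, 3, histSize, ranges);
        cv::calcHist(&img2, 1, channels, cv::Mat(), hist2, 3, histSize, ranges);

        // L1-normalize so images of different sizes are comparable.
        cv::normalize(hist1, hist1, 1.0, 0.0, cv::NORM_L1);
        cv::normalize(hist2, hist2, 1.0, 0.0, cv::NORM_L1);

        // Some of the distances mentioned above (HISTCMP_BHATTACHARYYA is
        // OpenCV's Hellinger-related measure).
        double intersect = cv::compareHist(hist1, hist2, cv::HISTCMP_INTERSECT);
        double chiSq     = cv::compareHist(hist1, hist2, cv::HISTCMP_CHISQR);
        double hellinger = cv::compareHist(hist1, hist2, cv::HISTCMP_BHATTACHARYYA);
        std::cout << intersect << " " << chiSq << " " << hellinger << std::endl;
        return 0;
    }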
If you want to extract a color histogram from an RGB image, you should first convert it to the HSV color space ( http://en.wikipedia.org/wiki/HSL_and_HSV ) to isolate the color information in one channel (H). In addition, you might want to weigh the contribution of each pixel to your color histogram by its saturation (S channel) and its value (V channel), since pixels with either low saturation or low value will have more or less random color (hue) values.
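One simple way to do this weighting (my own choice of formula and function name; the answer above only says to weigh by saturation and value) is to weight each pixel by the product of its normalized S and V, so that gray and dark pixels contribute little. A rough C++/OpenCV sketch:

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>

    // Build a hue histogram where each pixel's contribution is weighted by S * V.
    std::vector<float> weightedHueHistogram(const cv::Mat& bgr, int bins = 36) {
        cv::Mat hsv;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);    // H in [0,180), S and V in [0,255]

        std::vector<float> hist(bins, 0.0f);
        for (int y = 0; y < hsv.rows; ++y) {
            for (int x = 0; x < hsv.cols; ++x) {
                cv::Vec3b p = hsv.at<cv::Vec3b>(y, x);
                float h = p[0] / 180.0f;                       // hue normalized to [0,1)
                float w = (p[1] / 255.0f) * (p[2] / 255.0f);   // down-weight gray/dark pixels
                int bin = std::min(bins - 1, static_cast<int>(h * bins));
                hist[bin] += w;
            }
        }
        // L1-normalize so the histogram is independent of image size.
        float sum = 0.0f;
        for (float v : hist) sum += v;
        if (sum > 0.0f) for (float& v : hist) v /= sum;
        return hist;
    }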
To compare color histograms you could use a simple norm like L1, but a cross-bin distance like the earth mover's distance (EMD) will probably work much better (see the sketch after the reference below). A nice paper in this regard is:
Sameer Shirdhonkar and David W. Jacobs. Approximate earth mover's distance in linear time. In Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2008), 24-26 June 2008, Anchorage, Alaska, USA.
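OpenCV ships an exact EMD solver (cv::EMD), not the linear-time approximation from the paper, but it is handy for small histograms. A minimal sketch, assuming OpenCV 3+ and two 1-D hue histograms of equal length such as the ones built above (the function name is just illustrative):

    #include <opencv2/opencv.hpp>
    #include <vector>

    float emdBetweenHueHistograms(const std::vector<float>& h1, const std::vector<float>& h2) {
        int bins = static_cast<int>(h1.size());
        // Each signature row is (weight, bin coordinate); cv::EMD expects CV_32F.
        cv::Mat sig1(bins, 2, CV_32F), sig2(bins, 2, CV_32F);
        for (int i = 0; i < bins; ++i) {
            sig1.at<float>(i, 0) = h1[i];
            sig1.at<float>(i, 1) = static_cast<float>(i);
            sig2.at<float>(i, 0) = h2[i];
            sig2.at<float>(i, 1) = static_cast<float>(i);
        }
        // Ground distance between bins; lower EMD means more similar histograms.
        return cv::EMD(sig1, sig2, cv::DIST_L2);
    }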
In general you cannot determine the exact wavelength that caused a color in an image, regardless of the image format. You cannot even determine whether it was one specific wavelength or a combination of wavelengths; the latter is the case in most situations.
The situation is a bit better if you know the exact type of sensor your camera uses. Let's say you use an ICX424AL sensor from Sony. The corresponding datasheet contains a spectral sensitivity diagram.
It describes the sensitivity of the sensor to specific wavelengths of light. But if you get a high pixel value, you can't tell if it was, e.g., a weak signal at 500nm or a very strong signal at 900nm. Both signals could lead to the same pixel value.
The situation gets better still if you place color filters in front of the sensor. These color filters act as a bandpass filter for a specific subrange of wavelengths. But the typical RGB filters used in a Bayer pattern still cover a pretty wide range of wavelengths. If you know the exact properties of your color filters in addition to the sensitivity of your sensor, you may come close to recovering the original wavelengths of the signal.
If you have a specific application that needs information about a certain, well-defined wavelength, you can get color filters with a very narrow band that pass only the wavelength you want to detect. Here is a nice example of such a filter: