I only partially agree with the previous answers. A color is defined by one specific wavelength or a mixture of wavelengths. There is a spectrum of colors that consist of only a single wavelength, like a specific kind of Red, Blue, Yellow, Green, etc. By mixing these wavelengths at specific ratios we get colors like White, Brown, and Pink. This is the physical view of colors.
However, our eyes have receptors for 3 different colors/wavelengths. The combination of the neural signal strengths from these three receptors is interpreted by our brain as a specific color. Simply speaking, our cone cells can receive Blue, Green, and Red light. If you see something that is pure Yellow, the Red and Green cones are stimulated. If you instead stimulate the cones with the right combination of a pure Red and a pure Green wavelength, your brain will also perceive this as Yellow. Your eye cannot tell the difference.
Given this understanding of how the eye works, the previous answers are correct. This is why we use RGB for additive colors and CMYK for subtractive colors. It is simply the natural result of how we perceive physics (through our eyes). The alternative would be to describe each color (when we want to store it) as a combination of an infinite number of wavelengths, which is not feasible at all. Why use an infinite number of values when 3 numbers in RGB will do the trick for our eyes?
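A quick sketch of the additive-mixing point above, assuming 8-bit RGB channels: a stored "pure yellow" pixel and a red-plus-green mixture end up with identical channel values, which is exactly why neither the eye nor the file format can tell them apart.

```python
# Additive mixing in 8-bit RGB (a minimal illustration, not a colorimetric model).
red   = (255, 0, 0)
green = (0, 255, 0)

# Additive mix: add the channels component-wise, clamped to 255.
mixed = tuple(min(r + g, 255) for r, g in zip(red, green))

yellow = (255, 255, 0)
print(mixed == yellow)  # True: the stored values are identical
```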
Color is the most important feature of a color image, and when you process a color image to extract information, you must split it according to its type of representation (e.g. RGB, HSV, etc.) to process the pixel intensity values.
There are many different color systems and spaces (e.g. RGB, CMYK, YIQ, HSV, HSL and even more complex ones). Color cannot be described as a single float value, as opposed to intensity, which is just a single number. The multi-dimensional color (+maybe transparency) information is usually represented as a vector which is attached to each pixel (or voxel). There are other image modalities where further or different high-dimensional data is attached, e.g. diffusion weighted MR images where you have a set of directions and intensities at each voxel.
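The "vector attached to each pixel" idea above can be sketched with numpy (values here are made up for illustration): an RGB image is just an H x W x 3 array, and adding transparency simply makes the per-pixel vector longer.

```python
import numpy as np

# An RGB image: a 3-component vector attached to every pixel (8-bit values).
h, w = 2, 3
img = np.zeros((h, w, 3), dtype=np.uint8)

img[0, 0] = (255, 0, 0)      # one red pixel
img[1, 2] = (255, 255, 255)  # one white pixel

# An RGBA image attaches a 4-vector instead (the extra alpha channel).
rgba = np.zeros((h, w, 4), dtype=np.uint8)

print(img.shape, rgba.shape)  # (2, 3, 3) (2, 3, 4)
```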
Color needs to be represented by a vector of several components, as it cannot be fully described by a single value. There are many ways of representing color, all of them through several components or channels. In the case of RGB, these channels are "amounts" of red, green and blue, because according to the principle of additive color mixing, one can obtain any other color by mixing these three. Other alternatives are, for example, HSV or CIE Lab. The former uses a combination of hue, saturation and value (intensity or brightness), while the latter (roughly speaking) combines a red-green component and a blue-yellow component with a lightness value. Hope it helps!
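The RGB-vs-HSV channel split described above can be seen directly with Python's standard-library `colorsys` module (all values normalized to 0..1):

```python
import colorsys

# The same color expressed through two different sets of channels.
r, g, b = 1.0, 0.0, 0.0              # pure red in RGB channels
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)                       # hue 0.0, fully saturated, full value
```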
Color is one of the prominent low-level features of an image. A color image is a 3-D array that holds color information in each of its channels, and each channel has its own importance. For example, if an image is in RGB format, the three channels hold the R, G and B information respectively. Splitting an image into its color channels can also decrease the time complexity of algorithms, because after splitting, each color channel becomes a 2-D array, which is more efficient to process than the full 3-D array. Hope it helps!
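The channel split described above looks like this in numpy (tiny synthetic values for illustration); note that each channel keeps the image's spatial layout, so it is a 2-D gray-scale image, not a flat list:

```python
import numpy as np

# A 1 x 2 RGB image (two pixels), 8-bit per channel.
img = np.array([[[10, 20, 30], [40, 50, 60]]], dtype=np.uint8)

# Split into three single-channel 2-D arrays.
r = img[:, :, 0]
g = img[:, :, 1]
b = img[:, :, 2]

print(r.shape)  # (1, 2): each channel is itself a 2-D image
```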
If you mean reading an RGB image (e.g. in PNG format) and outputting 3 separate gray-scale images for the R, G and B channels, or just looking at the R, G or B channel of an RGB image:
There are several use cases for these. Some examples:
1) You may want to analyze the performance of an image sensor (camera) or an image compression algorithm. It is often not the same for R, G and B: Bayer-type camera sensors have a higher resolution for G than for R and B, and lossy compression algorithms often recover luminance at a higher quality than color, etc. So you need to look at the different channels separately.
2) We often compact 2, 3, 4 etc. gray-scale images into an RGB or RGBA image for convenience, even though the original images may not mean RGB information. This is often used in computer graphics, when defining a "shader" for image generation (rendering). For example, the R channel may store a surface displacement map, the G channel some blending coefficient and the B channel some precomputed 2D function not even related to color and the A channel may store a "shininess" value. In this case, looking at the first 3 channels as an RGB image is completely meaningless. It would be more confusing than looking at the channels individually.
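The channel-packing idea above can be sketched as follows; the four maps and their roles are hypothetical examples of what a shader might consume, and the data here is random filler:

```python
import numpy as np

h, w = 4, 4
displacement = np.random.rand(h, w)   # "R" channel: surface displacement map
blend        = np.random.rand(h, w)   # "G" channel: blending coefficient
lookup       = np.random.rand(h, w)   # "B" channel: precomputed 2D function
shininess    = np.random.rand(h, w)   # "A" channel: shininess value

# Pack the four unrelated gray-scale maps into one "RGBA image".
packed = np.stack([displacement, blend, lookup, shininess], axis=-1)
print(packed.shape)  # (4, 4, 4): RGBA layout, but no color meaning at all
```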
Each natural color can be approximated by a combination of the three basic colors. Each basic color of an image is stored in a separate group of data called a channel. In this way we can observe the dominant basic color in an image for specific analysis needs.
I fully agree with Simon's answer. The reason why people use three color channels is solely dependent on the number of independent color receptors our eyes have. After all, the goal of cameras and displays is to record and show something for the **human eye** to perceive.
Fun facts:
- for most people the green receptors are more efficient, such that most cameras actually have more "green" pixels to adequately record the information in the green wavelengths (just google "Bayer pattern" if you are interested in this).
- the reason why most image formats work with 3 × 8 bit = 24 bit, i.e. 2^24 ≈ 16.8 million different colors, is based on experiments showing that humans can roughly distinguish that many colors. Spending more bits on color resolution would be somewhat of a waste (note that this is only a rough guideline; an in-depth look into bit resolution would also need to take the dynamic range of the eye into account, etc.)
There are many more interesting things happening in color vision, but perhaps this gives you some idea...
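For completeness, the 24-bit arithmetic from the fun facts above is just:

```python
# 3 color channels x 8 bits each.
bits = 3 * 8
colors = 2 ** bits
print(bits, colors)  # 24 bits -> 16,777,216 distinct colors
```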
However, I think it is important to underline that color is *NOT* a physical feature of objects; it arises from the interaction of three elements:
- the surface properties of the object you are looking at
- the light source that illuminates the scene
- the observer
Varying just one of these actors changes the color perception:
For instance, in a totally dark environment no color exists; moreover, two different observers (for instance a human and a dog, but also two different humans) can perceive the color of an object differently.
It is natural to describe colors in 3 dimensions, no matter which type of color space is used. Human color vision is based on the signals and further internal manipulation of these signals coming from the three types of cone receptors in the retina. These receptors are broad band detectors with peak detection at long, mid and short wavelengths in the visible spectrum. Further stages of color perception lead to 3 distinct color channels, roughly a lightness, a red-green, and a blue-yellow signal, not unlike the Lab color space in e.g. Photoshop.
Different color spaces have different uses and characteristics, but all boil down to 3 dimensions, and they can usually be converted from one space to another without information loss.
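The "without information loss" claim can be checked with the standard-library `colorsys` module: converting RGB to HSV and back round-trips exactly, up to floating-point precision, because both spaces are 3-dimensional.

```python
import colorsys

rgb = (0.2, 0.6, 0.4)                 # an arbitrary color, channels in 0..1
hsv = colorsys.rgb_to_hsv(*rgb)       # to a different 3-D color space
back = colorsys.hsv_to_rgb(*hsv)      # and back again

print(all(abs(a - b) < 1e-9 for a, b in zip(rgb, back)))  # True
```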
Our color perception is pretty much a mapping of an infinite dimensional space (wavelength spectrum) to just 3 dimensions. This is also the reason for the phenomenon of color metamerism. Different spectra can lead to the same color. Without this metamerism none of our devices like LCD, CRT and other monitors could produce the same color although they all produce different spectral emissions. The same is valid to a certain degree for different types of illuminants, like fluorescent lamps, LEDs, lasers or incandescent bulbs. Despite their strongly varying spectra they can still produce very similar or even equal colors.
Only the interaction with the reflectance or transmission spectra of objects can cause major problems, like the well-known effect of clothes “changing colors” when wearing them under daylight vs. fluorescent “daylight” conditions. This is caused by the interaction between the spectrum of the illuminant and the reflection/transmission spectrum of the illuminated object.
I disagree somewhat with the statement that color information is the most important information for our vision. Actually the lightness/contrast information is much more important for our vision than color information. This is the reason why algorithms like JPEG or video compression work so well. The color channels can be subsampled without much loss of visual detail, as long as the detail is kept in the lightness channel of images or videos.
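The chroma-subsampling idea behind that compression point can be sketched as follows (synthetic data; this mimics the spirit of JPEG's 4:2:0 scheme, not its exact pipeline): keep the lightness channel at full resolution and store each color channel at half resolution in both directions.

```python
import numpy as np

h, w = 4, 4
y  = np.random.rand(h, w)        # lightness channel: kept at full resolution
cb = np.random.rand(h, w)        # two color-difference channels
cr = np.random.rand(h, w)

# Subsample color by averaging 2x2 blocks -> a quarter of the samples each.
cb_sub = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
cr_sub = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(y.size, cb_sub.size + cr_sub.size)  # 16 lightness samples vs 8 color samples
```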
Yes, we have cones of three types. Another approach still gives three numbers: the dominant frequency, the overall power, and the ratio of the power at the dominant frequency to the overall power. This would be a Hue-Brightness-Saturation-like system. Note that these numbers describe a light wave, not a perceived color.
I agree with S. Marsi. To determine a color we need three numbers plus observer and environment data, such as a standard and a color profile.
Identifying color enables specific applications, such as classifying objects or measuring a parameter's degree of deviation from its normal value; this is easy to do through color separation.
As several users have already mentioned, the main point is that, as long as there is no further reduction of the dimensionality of the color space, the particular form of the color space used is not so important, because conversions between three-dimensional color spaces are fully reversible. Different color spaces have different advantages/disadvantages and different uses, but all have in common that the detailed spectral content information is lost.
Reduction of the dimensions of color spaces can be used e.g. for simple color deficiency simulation models. More intricate models that still use three types of cones and involve spectral information can be used for more realistic simulations of phenomena like anomalous color perception, such as protanomaly, because they are based on the shift of the spectral sensitivity of one of the three types of cones.
Color specification based on CMYK or RGB is a "device-dependent" color space: the same values of R, G, B can give different color stimuli on different monitors or printers. There are also color spaces that do not depend on devices (monitors, printers), for example CIE XYZ, CIE Lab or CIE Luv. Furthermore, there are methods of color specification that model our perception of color, for example CIECAM (a color appearance model). These try to combine the three elements mentioned by Mr. S. Marsi, including even the level of eye adaptation.
Splitting an image into color channels divides it into 3 parts (R, G and B); furthermore, for each color component, bit-level (bit-plane) slicing can be done. This is really helpful in medical imaging research applications.
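The bit-plane slicing mentioned above can be sketched on a single 8-bit channel (tiny synthetic values): plane k holds bit k of every pixel, so one channel splits into 8 binary images, and reassembling the planes recovers the channel exactly.

```python
import numpy as np

# One 2x2 gray-scale channel, 8 bits per pixel.
channel = np.array([[200, 13], [255, 0]], dtype=np.uint8)

# Plane k = bit k of every pixel (plane 0 is the least significant bit).
planes = [(channel >> k) & 1 for k in range(8)]

# Shifting each plane back into place and summing recovers the original.
rebuilt = sum(p.astype(np.uint8) << k for k, p in enumerate(planes))
print(np.array_equal(rebuilt, channel))  # True
```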