I am a bit confused while programming and need some help. I have seen many MATLAB programs that convert YCbCr to RGB after reading the input image. I am now programming with OpenCV. Is it necessary to convert to RGB, or is the input already RGB?
The answer to your question is simple: the output of a normal electronic camera is RGB. This means you don't have to transform it to any other color representation (such as your YCbCr) if your aim is to show the image on a CRT or LCD screen via a normal image-viewer program and your expectation is to see on screen colors similar to those the scene showed when the picture was taken.
The matter becomes more complicated if you aim to increase considerably the degree of similarity between the displayed and the captured colors. The same is true if your task is to design an electronic camera with very good color rendition. For such tasks it is indispensable to know more about the physics of color. The book 'Measuring Colour' by R.W.G. Hunt is a good guide for that.
I had better warn you that the specific problems discussed in the literature cited above in James' contribution are only very loosely related to your question.
I agree, and thank @Ulrich for his very good summary of the situation concerning the output of ordinary digital cameras. Added to that, quite a number of cameras produce images in standard color spaces such as sRGB or Adobe RGB. For a good overview, see
Since cameras sense the additive primary colors of light, most cameras use RGB systems. YCbCr, by contrast, is not a set of primaries at all but a luma-chroma encoding of RGB used in video and JPEG compression; the subtractive primaries used for pigments and printers are Cyan, Magenta, and Yellow (CMY).
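To make the relationship concrete, here is a minimal sketch of the YCbCr-to-RGB conversion the MATLAB programs in the question perform, using the full-range ITU-R BT.601 coefficients (the variant used in JPEG). The function name and the use of NumPy are my own choices for illustration; OpenCV and MATLAB implement the same arithmetic internally.

```python
import numpy as np

def ycbcr_to_rgb(img):
    """Convert a full-range (JPEG-style) YCbCr uint8 image to RGB.

    Assumes `img` is an H x W x 3 array with channels ordered Y, Cb, Cr
    and uses the ITU-R BT.601 conversion coefficients.
    """
    ycbcr = img.astype(np.float64)
    y = ycbcr[..., 0]
    cb = ycbcr[..., 1] - 128.0  # chroma channels are offset by 128
    cr = ycbcr[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    rgb = np.stack([r, g, b], axis=-1)
    return np.clip(rgb, 0, 255).astype(np.uint8)

# A neutral gray (Y=128, Cb=Cr=128) maps to R=G=B=128.
gray = np.full((1, 1, 3), 128, dtype=np.uint8)
print(ycbcr_to_rgb(gray)[0, 0])  # [128 128 128]
```

Note that the chroma channels (Cb, Cr) are centered at 128, so a pixel with Cb = Cr = 128 is achromatic: the conversion then reduces to R = G = B = Y.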
As I see, you are using OpenCV. It uses BGR channel order as its default representation. If you are getting weird colors, you probably have to convert from RGB to BGR or the other way around.
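The BGR/RGB swap is just a reversal of the channel axis. A minimal NumPy sketch (the sample pixel value is made up for illustration; with OpenCV itself you would typically call `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)` instead):

```python
import numpy as np

# OpenCV's imread returns pixels in BGR order; reversing the last axis
# yields RGB order. This is equivalent to cv2.cvtColor(img, cv2.COLOR_BGR2RGB).
bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)  # pure blue in BGR order
rgb = bgr[..., ::-1]
print(rgb[0, 0])  # [0 0 255] -> the same blue, now in RGB order
```

This is why an image loaded with OpenCV and displayed by an RGB-assuming library (e.g. matplotlib) looks "weird": blues and reds appear swapped until the channels are reordered.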
Of course an electronic camera may use a color mosaic filter with yellow, cyan, and magenta filters and compute RGB signals from the 'subtractive' primary signals. This is actually a tempting idea, since you throw away a smaller amount of light this way. In the company where I worked many years ago we had an experimental camera working this way. As was to be expected, one gets somewhat noisier colors but better gray images. Although human color vision works with three spectral channels, it is advantageous for color fidelity to have color mosaic filters with more than three primary colors. I once read about a camera with two shades of green, but I don't know how widespread such cameras are.
In my own industrial work I once built a 10-channel camera that gave, for each pixel, a good approximation of the spectral light distribution there. An application we all found surprising: we captured a colorfully clothed baby doll under medical red-light illumination and, by a computed transformation, converted the resulting predominantly red image to a white-illuminant rendering. Finally we took a picture under white light and compared the two. It was very difficult to detect any differences!
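The details of that computed transformation aren't given above, but the simplest illuminant correction of this family is a von Kries-style diagonal transform: scale each channel so a reference white captured under the colored light maps back to neutral. A minimal sketch with made-up illuminant numbers (a real multi-channel correction would use a full matrix fitted to the spectral data):

```python
import numpy as np

# Von Kries-style diagonal correction: scale each channel so that a
# reference white, as captured under the red illuminant, maps to neutral.
# These illuminant values are invented purely for illustration.
illuminant = np.array([0.9, 0.3, 0.2])  # sensor response to white under red light
target = np.array([1.0, 1.0, 1.0])      # desired response under white light
gains = target / illuminant

pixel = np.array([0.45, 0.15, 0.10])    # a gray patch as seen under the red light
corrected = pixel * gains
print(corrected)  # [0.5 0.5 0.5] -> the patch is recovered as neutral gray
```

With ten spectral channels per pixel, a fitted linear transform of this kind can do much better than a per-channel gain, which is presumably why the corrected and white-light images were so hard to tell apart.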
Last year I read that some company was trying to develop another way to 'filter' the light before it hits the CCD. They were using micro-crystals to split the light by refractive index and capture each wavelength separately. This would increase the amount of light captured in the same sensor area, so the sensors could be made even smaller with better image quality.