Another "dummy" reason: color increases the complexity of the model. One may want to introduce an image processing tool using gray level images, as opposed to color, not because of the "format" of gray level images, but because the inherent complexity of gray level images is lower than that of color images. E.g., one can talk about brightness, contrast, edges, shape, contours, texture, perspective, shadows, and so on, without addressing color. After presenting a gray-level image model/method, in most cases it can be afterwards be extended to color images. That is a more natural path than introducing the complexity of dealing with color images from the begining (unless the proposed method is about color in images, of course).
Assuming that you want to know why grayscale images are used for most image processing tasks (such as segmentation), have a look at this question on Stack Overflow: http://stackoverflow.com/questions/12752168/why-we-should-use-gray-scale-for-image-processing
Maybe it is a dummy reason ;) , but when you want to publish a manuscript in a journal, most of the time you have to pay an extra fee to include color images...
Don't forget that RGB images need to be taken with a detector that supports multiple channels. Images like those taken with SEM/AFM and other non-visible-light microscopes only measure intensity, not intensity at different colors, so those devices implicitly use the grayscale (1-channel) format.
Another "dummy" reason: color increases the complexity of the model. One may want to introduce an image processing tool using gray level images, as opposed to color, not because of the "format" of gray level images, but because the inherent complexity of gray level images is lower than that of color images. E.g., one can talk about brightness, contrast, edges, shape, contours, texture, perspective, shadows, and so on, without addressing color. After presenting a gray-level image model/method, in most cases it can be afterwards be extended to color images. That is a more natural path than introducing the complexity of dealing with color images from the begining (unless the proposed method is about color in images, of course).
I think the previous comment of Javier makes more sense: grayscale images are preferred over color ones to simplify the mathematics. It is relatively easier to deal with (in terms of mathematics) a single channel (shades of black/white) than multiple color channels. For example, assume you are doing a simple denoising application on a color image. You will need to denoise every channel, which amounts to duplicating the same operation. To simplify matters, you can just convert the image to grayscale and deal with that, as in the sketch below.
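To make that point concrete, here is a minimal sketch (my own, not from the answers above) in Python with NumPy and SciPy; the random array just stands in for a noisy input image:

```python
# Denoising a color image means repeating the same filter on every channel,
# while a grayscale image needs only one pass.
import numpy as np
from scipy.ndimage import gaussian_filter

rgb = np.random.rand(256, 256, 3)   # placeholder for a noisy color image

# Color case: the same denoising step is duplicated for each of the 3 channels.
denoised_rgb = np.stack(
    [gaussian_filter(rgb[..., c], sigma=1.5) for c in range(3)], axis=-1
)

# Grayscale case: convert once (simple unweighted average here), then a single
# filtering pass suffices.
gray = rgb.mean(axis=-1)
denoised_gray = gaussian_filter(gray, sigma=1.5)
```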
Another issue is that color is, in many cases, meant for the visual appeal of human viewers, whereas image processing also covers applications where machines are the main consumer, as in computer vision. An application like object detection may need little more than the information at the edges of an image, which can be obtained from grayscale images just as well (it can also be obtained from color images, at the expense of extra complexity).
Of course, image processing tools applied to gray scale images can be generalized to colored images.
A "Gray-level"-, "grayscale"- or "grayvalue"-image is not a format. It is, as we call it, an image representation. A format would be "tiff", "jpg" or "RAW", etc.This has to be commented for clarification.
Now to your question: the answer is quite simple, although there are several reasons. Any multi-channel image (RGB, CMYK, multispectral, etc.) contains N grayscale images, in general.
Historically, digital image processing was first applied to - of course - grayscale images, because of the limited computational power available at the time, in the early '80s (I am referring here to image processing on the first PCs; of course, image processing was carried out earlier on mainframe computers at NASA, etc.).
In today's world this is of course no longer an argument. But: in industrial image processing, resource-efficient embedded systems are usually in use, and therefore many simple applications are executed on grayscale images.
As a matter of course, today we apply colour image processing where necessary!
By the way, there are MANY applications where color information will improve your ability to analyze an image. Imagine you had a fruit-bowl image and your goal was to count the number of apples, grapes and bananas in the image. In this case, color information is going to make your life much easier! While color can complicate your analysis, that is mostly true when your analysis is independent of the color information in the image; in that case, working directly with a gray image gives you an advantage. Since RGB can always be down-converted to a single channel, it is advantageous to capture color images whenever you can. In some instances, you can't get an RGB image at all (see my previous comment, which was downvoted for some reason ...)
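Here is a hypothetical sketch of the fruit-bowl idea, separating objects by hue, something that is hard to do from intensity alone. It assumes OpenCV; the file name and hue ranges are illustrative placeholders, not values from the discussion above:

```python
import cv2

bgr = cv2.imread("fruitbowl.jpg")             # hypothetical input image (BGR)
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)    # hue/saturation/value space

# Rough hue bands (OpenCV hue runs 0-179); these would need tuning on real data.
banana_mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))   # yellowish
apple_mask  = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))   # greenish (green apples)

# Count connected blobs in each mask as a crude object count.
n_bananas, _ = cv2.connectedComponents(banana_mask)
n_apples,  _ = cv2.connectedComponents(apple_mask)
print("bananas:", n_bananas - 1, "green apples:", n_apples - 1)  # minus background label
```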
When we are looking for patterns and their properties, such as form and shape, gray-level images are sufficient.
Color images are used to distinguish objects whose shape does not matter but whose appearance, such as color, does - satellite images, for example. In the latter, RGB and other channels (e.g. infrared) are put to good use by calculating further indices (NDVI, brightness index, etc.).
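As a small illustration of the index calculation mentioned above, here is a sketch (my own, with placeholder arrays) of how NDVI is computed from a red and a near-infrared band:

```python
import numpy as np

red = np.random.rand(512, 512)   # red reflectance band (placeholder)
nir = np.random.rand(512, 512)   # near-infrared band (placeholder)

# Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
ndvi = (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero
```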
Because a grayscale image is a single layer with values from 0-255, whereas an RGB image has three separate layers. That is one reason we prefer grayscale images over RGB.
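To illustrate the single-layer point, here is a minimal sketch (mine, not part of the answer above) of the common ITU-R BT.601-weighted conversion that collapses the three RGB layers into one 0-255 grayscale layer:

```python
import numpy as np

rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)   # toy RGB image
weights = np.array([0.299, 0.587, 0.114])                    # R, G, B luma weights
gray = (rgb @ weights).astype(np.uint8)                      # one layer, values 0-255
```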