I know that each pixel of an image can be fed as an input to a multi-layer feed-forward network trained with backpropagation; I would like to know what other methods could be employed.
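For reference, here is a minimal sketch of that raw-pixel baseline, assuming Python with scikit-learn; the digits dataset and MLPClassifier are illustrative choices, not something specified in the question:

```python
# Baseline described above: flatten raw pixels and train an MLP with
# backpropagation. Uses scikit-learn's bundled digits dataset so the
# example is self-contained; substitute your own images in practice.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                               # 8x8 grayscale digit images
X = digits.images.reshape(len(digits.images), -1)    # flatten each image to a 64-d vector
y = digits.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)                            # trained with backpropagation internally
print("raw-pixel accuracy:", mlp.score(X_test, y_test))
```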
It's better to extract an appropriate feature vector from each image and use those vectors to train an artificial neural network such as an MLP, then use the same network to classify or recognize the extracted features of new images. I don't suggest using raw pixel data because of its very high dimensionality, which makes it an inefficient vector for recognition.
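A rough sketch of that pipeline, assuming Python with OpenCV and scikit-learn; the grayscale-histogram feature, file paths, and labels below are placeholders for whatever features and data suit your application:

```python
# Feature-based pipeline: compute a compact feature vector per image
# (here a normalized grayscale intensity histogram via OpenCV) and
# train an MLP on those vectors instead of raw pixels.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def histogram_features(path, bins=32):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([img], [0], None, [bins], [0, 256])
    return cv2.normalize(hist, hist).flatten()   # scale so features are comparable

image_paths = ["train/img001.png", "train/img002.png"]  # hypothetical paths
labels = [0, 1]                                         # hypothetical class labels

X = np.array([histogram_features(p) for p in image_paths])
y = np.array(labels)

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
mlp.fit(X, y)

# Classify a new image using the same feature extraction step:
print(mlp.predict([histogram_features("test/img003.png")]))
```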
Try the methods in the OpenCV library available from SourceForge.
See 'Learning OpenCV', Gary Bradski & Adrian Kaehler, O'Reilly 2008. ISBN 978-0-596-51613-0
As Mehdi has said, you need to extract a feature set for best effect, but it is also possible to compute histograms of pixel or feature distributions and compare them against reference distributions. The book describes many such methods.
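A small sketch of the histogram-comparison idea, assuming Python with OpenCV; the file names are placeholders and the correlation metric is just one of the comparison methods OpenCV offers:

```python
# Compare an image's intensity histogram against a reference histogram,
# without any neural network.
import cv2

def gray_hist(path, bins=64):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([img], [0], None, [bins], [0, 256])
    return cv2.normalize(hist, hist)

reference = gray_hist("reference.png")   # hypothetical reference image
candidate = gray_hist("query.png")       # hypothetical image to test

# Correlation metric: 1.0 means identical distributions, lower means less similar.
score = cv2.compareHist(reference, candidate, cv2.HISTCMP_CORREL)
print("similarity:", score)
```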
It is advisable to extract specific characteristics of the image for training the neural network. You have to decide what you want the network to learn from the image; not all of the data will be useful. It depends on the application.
Since you want to recognize the same images you trained on, and if your set of images is finite, you can use a Hopfield network. This kind of network can store and restore patterns very well. In addition, if an image is affected by noise and you want to recognize it, you can still recover the original image exactly from the Hopfield network.
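A minimal Hopfield-network sketch of that idea in Python with NumPy; the random +/-1 patterns stand in for binarized, flattened images, which is an assumption for illustration only:

```python
# Store a few patterns with Hebbian learning, then recover a stored
# pattern from a corrupted version by iterating the update rule.
import numpy as np

def train_hopfield(patterns):
    # Hebbian rule: sum of outer products of stored patterns, zero diagonal.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    # Synchronous updates; the sign rule drives the state toward a stored pattern.
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 100))    # three stored "images" of 100 pixels
W = train_hopfield(patterns)

noisy = patterns[0].copy()
flip = rng.choice(100, size=10, replace=False)   # corrupt 10 pixels
noisy[flip] *= -1

restored = recall(W, noisy)
print("recovered original:", np.array_equal(restored, patterns[0]))
```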
I would never recommend feeding a whole image to a NN. If you do, you are feeding mere data (high-dimensional data), which inflates the network topology without improving learning. It is more meaningful to feed the NN with information extracted by applying various feature extraction techniques. The type of feature extraction depends on your problem and the nature of your images. If the images are the same, then the features should be the same, but you will have a more robust NN structure that can accommodate higher-resolution images.