I am working on an image processing project where my first task is to classify a set of images as blurred or non-blurred.

I am implementing everything in Python, and the current status is as follows.

I have generated a feature vector for each image in a set of roughly 300 images using a color histogram, and saved the feature vectors line by line in a text file.
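For context, the feature extraction looks roughly like the sketch below (the 8x8x8 bin layout, the use of OpenCV, and the file names are only illustrative, not fixed parts of my pipeline):

```python
import glob

import cv2

def color_histogram(image_path, bins=(8, 8, 8)):
    """Flattened, normalised 3D colour histogram of one image."""
    image = cv2.imread(image_path)  # loaded as BGR
    hist = cv2.calcHist([image], [0, 1, 2], None, bins,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

image_paths = glob.glob("images/*.jpg")  # hypothetical folder of ~300 images

# One feature vector per line, values separated by spaces
with open("features.txt", "w") as f:
    for path in image_paths:
        vec = color_histogram(path)
        f.write(" ".join(f"{v:.6f}" for v in vec) + "\n")
```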

This set contains two types of images.

Some images are perfectly fine (they were captured well), so they need no processing. In the others, part of the image was affected by high-intensity light during capture. These images still carry useful information, so they cannot be discarded; instead I need to remove the blurred (over-exposed) region and use the remaining part of the image.
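To make the over-exposure part concrete, the kind of masking I have in mind looks roughly like this (only a sketch; the brightness threshold of 240, the morphological clean-up, and the file name are guesses, not a tested approach):

```python
import cv2
import numpy as np

def overexposed_mask(image_bgr, thresh=240):
    """Boolean mask of pixels that look blown out by strong light."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Close small gaps so the affected area forms one solid region
    kernel = np.ones((15, 15), np.uint8)
    bright = cv2.morphologyEx(bright, cv2.MORPH_CLOSE, kernel)
    return bright > 0

image = cv2.imread("example.jpg")  # hypothetical file name
mask = overexposed_mask(image)
print(f"{mask.mean():.1%} of the pixels look over-exposed")
```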

It is not known in advance which images are blurred and which are not. I could check manually, but that is infeasible for this many images.

Could someone help me with the first step, classifying the images as blurred or non-blurred?
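To make the question concrete, this is the kind of per-image sharpness score I imagine could separate the two groups, for example the variance of the Laplacian (just a sketch of one candidate measure, not something I have validated; the cut-off of 100 is arbitrary and I do not know how to choose it without labels):

```python
import cv2

def laplacian_variance(image_path):
    """Variance of the Laplacian; low values usually indicate blur."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

score = laplacian_variance("example.jpg")              # hypothetical file name
print("blurred" if score < 100.0 else "not blurred")   # 100.0 is an arbitrary cut-off
```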

Thank you very much for your kind attention.
