There are a few issues to settle before classification. It can be driven by information from the image acquisition hardware, such as the camera, or it can work with images that are organized semantically.
I will assume your images came from a slide.
Features are obtained from the images, and segmentation can find several elements within them. Segmentation gives "building blocks" that must be classified according to some rules to make sense. White blood cells tend to be big blobs compared to red blood cells, as seen in the enclosed illustration.
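As a minimal sketch of that size-based idea (not anyone's actual pipeline), the snippet below thresholds a grayscale smear image, labels the connected blobs, and guesses "WBC" for large blobs and "RBC" for small ones. The file name "smear.png" and the area thresholds (50 and 500 pixels) are arbitrary illustrative values you would have to tune for your magnification and resolution.

```python
# Size-based blob classification sketch with scikit-image.
import numpy as np
from skimage import io, color, filters, measure

img = io.imread("smear.png")                      # hypothetical slide image
gray = color.rgb2gray(img)
mask = gray < filters.threshold_otsu(gray)        # cells darker than background
labels = measure.label(mask)                      # connected components = "building blocks"

for region in measure.regionprops(labels):
    if region.area < 50:                          # ignore tiny noise specks
        continue
    cell_type = "WBC" if region.area > 500 else "RBC"
    print(f"blob at {region.centroid}: area={region.area}, guessed {cell_type}")
```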
@Maria Aparecida De Jesus has a very good explanation in layman's terms.
I would like to highlight the importance of performing pre-processing before segmentation. Even when people use deep learning, the results can be disappointing if the slides have noise or illumination differences, or if the structures under analysis are too small. In some cases, the type of microscope can also introduce distortion.
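A minimal pre-processing sketch along those lines, assuming a grayscale slide image named "smear.png": median filtering to suppress sensor noise, a coarse background estimate subtracted to flatten uneven illumination, and CLAHE for local contrast, all before any segmentation is attempted. The sigma of 50 is only a placeholder for "much larger than a cell".

```python
# Denoise and flatten illumination before segmentation (scikit-image).
import numpy as np
from skimage import io, filters, exposure, util

gray = util.img_as_float(io.imread("smear.png", as_gray=True))

denoised = filters.median(gray)                    # suppress salt-and-pepper noise
background = filters.gaussian(denoised, sigma=50)  # coarse illumination estimate
flattened = np.clip(denoised - background + background.mean(), 0, 1)
enhanced = exposure.equalize_adapthist(flattened)  # local contrast enhancement (CLAHE)
```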
Still, I have one doubt... suppose I got a segmentation accuracy of 93%; how is this accuracy going to affect the classifier model? Is segmentation helpful before classification? Do the features extracted from the ROI differ from those extracted directly from the whole image?
If the segmentation is faulty, it will incorporate neighboring structures from some other class; these contaminate the extracted features and can lead to wrong classification decisions.
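A small, self-contained illustration of that contamination, using two synthetic "cells" (bright discs of different intensity) rather than real data: the mean intensity computed over a tight ROI describes only the cell of interest, while the same feature computed over a sloppy, over-grown ROI is dragged toward the neighboring cell and the background.

```python
# Feature contamination from a tight vs. an over-grown ROI (synthetic example).
import numpy as np
from skimage import draw, morphology

image = np.zeros((200, 200))
rr, cc = draw.disk((100, 70), 30)       # cell of interest (bright)
image[rr, cc] = 1.0
rr, cc = draw.disk((100, 140), 30)      # neighboring cell (dim)
image[rr, cc] = 0.3

tight = np.zeros_like(image, dtype=bool)
rr, cc = draw.disk((100, 70), 30)
tight[rr, cc] = True
sloppy = morphology.binary_dilation(tight, morphology.disk(45))  # spills onto the neighbor

print("tight ROI mean :", image[tight].mean())    # 1.0, describes only our cell
print("sloppy ROI mean:", image[sloppy].mean())   # pulled down by neighbor and background
```

The feature vector built from the sloppy mask no longer describes the cell you meant to classify, which is exactly how a 7% segmentation error ends up hurting the classifier downstream.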