07 October 2020

I am training a U-Net for semantic segmentation of large medical images (4096x4096 px). The two classes are heavily unbalanced: white pixels make up only about 0.1% (or less) of each image. The Dice coefficient loss does not seem to be working, since the model always predicts black pixels.
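One common first step for this degree of imbalance is to weight the positive class in a pixel-wise BCE loss. A minimal PyTorch sketch, assuming sigmoid/logit outputs from the network; the 999:1 weight is a hypothetical value derived from the ~0.1% foreground ratio, not a tuned number:

```python
import torch
import torch.nn as nn

# Weight positive (white) pixels ~999x, roughly matching a 0.1% foreground ratio.
pos_weight = torch.tensor([999.0])
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(2, 1, 64, 64)        # raw U-Net outputs (no sigmoid applied)
target = torch.zeros(2, 1, 64, 64)
target[:, :, 30:34, 30:34] = 1.0          # tiny foreground region, as in the images described
loss = criterion(logits, target)
```

With this weighting, missing a white pixel costs roughly as much as mislabeling a thousand black ones, so the "all black" prediction stops being a cheap minimum.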

  • Is there any specialized loss function for such unbalanced data? I cannot find anything that works.
  • Is the U-Net architecture suitable for this kind of segmentation task?
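One candidate for such unbalanced data is the Tversky loss, a generalization of Dice that penalizes false negatives more heavily than false positives. A minimal PyTorch sketch; the `alpha`/`beta` values are illustrative defaults, not tuned for this dataset:

```python
import torch

def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-6):
    """Tversky loss for binary segmentation.

    pred:   sigmoid probabilities, shape (N, 1, H, W)
    target: binary ground-truth mask, same shape
    alpha weights false negatives, beta weights false positives;
    alpha > beta pushes the model away from all-background predictions.
    """
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1)
    tp = (pred * target).sum(dim=1)          # true positives
    fn = ((1 - pred) * target).sum(dim=1)    # missed foreground pixels
    fp = (pred * (1 - target)).sum(dim=1)    # spurious foreground pixels
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky).mean()
```

With `alpha = beta = 0.5` this reduces to the usual soft Dice loss, so it can be swapped in for an existing Dice criterion directly.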

I have tried to train with the following setup:

Epochs: 50
Batch size: 4
Learning rate: 1e-05
Training size: 451
Validation size: 23
Checkpoints: True
Device: cuda
Images scaling: 0.25

and also with a batch size of 1 and a learning rate of 1e-4. I would appreciate some help.

Cheers
