
My question is: "Can we use pre-trained models like InceptionV3 on medical image datasets?" Healthcare data has two major characteristics that matter for deep learning: 1) the sample size is often small, and 2) the data content differs from the pre-trained model's source domain (e.g., ImageNet).

Recent research suggests that ImageNet pre-trained models may offer very limited help for some tasks. An ImageNet pre-trained model is trained on natural images. Because of the large gap between natural images and medical images (e.g., CT/MRI), we have to fine-tune our networks on the target data, which is known as transfer learning. Transfer learning may have a very limited effect when the data content switches from one domain to another; in that case it may be no better than training from scratch, since the networks learn very different high-level features in the two tasks. Similar arguments can be found in "Rethinking ImageNet Pre-training" by Kaiming He et al. Certainly, if we have enough data, training from scratch is a feasible approach, but I work in the healthcare field, where large labeled datasets are rare. Moreover, while we trust that transfer learning can shorten convergence time, what about the final performance?
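
To make the comparison concrete, here is a minimal sketch of the two options I am weighing (this assumes TensorFlow/Keras; the number of classes, input size, and dataset loading are placeholders, not part of my actual setup):

import tensorflow as tf

num_classes = 2               # placeholder, e.g., disease vs. healthy
input_shape = (299, 299, 3)   # InceptionV3's default input size

# Option 1: transfer learning - reuse ImageNet weights, replace the classifier head.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=input_shape)
base.trainable = False        # freeze the pre-trained feature extractor at first

x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
transfer_model = tf.keras.Model(base.input, outputs)
transfer_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                       loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])

# Option 2: training from scratch - same architecture, random initialization.
scratch_base = tf.keras.applications.InceptionV3(
    weights=None, include_top=False, input_shape=input_shape)
y = tf.keras.layers.GlobalAveragePooling2D()(scratch_base.output)
scratch_outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(y)
scratch_model = tf.keras.Model(scratch_base.input, scratch_outputs)
scratch_model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

# After the frozen head converges, one can unfreeze (part of) `base` and
# fine-tune it with a small learning rate.

Comparing both models on the same medical validation split would show whether the ImageNet weights actually help beyond faster convergence, which is exactly the point I am unsure about.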

I would like to ask the experts here how to address the two main challenges in the healthcare field: 1) small sample size and 2) data content that differs from the pre-trained model's domain.
