After reading quite a number of papers on (early) diagnosis prediction for Alzheimer's disease from sMRI/PET using machine learning, I notice that almost all of the works I have read share the same setting: only the baseline sMRI/PET image of each subject is used for training and testing.

What confuses me is: why not use all available images from all visits of the same person? Isn't that a waste of data, especially for deep-learning approaches, which typically benefit from more training samples?
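One practical concern with including all visits is data leakage: scans of the same subject must not end up in both the training and test sets. Below is a minimal sketch (all names and the toy data are illustrative, not from any of the cited papers) of a subject-level split using scikit-learn's `GroupShuffleSplit`, which would let every visit be used while keeping each subject's scans on one side of the split:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)

# Toy stand-in for longitudinal data: 10 subjects, 1-4 visits each.
subject_ids, scan_features, labels = [], [], []
for sid in range(10):
    for _ in range(rng.integers(1, 5)):
        subject_ids.append(sid)
        scan_features.append(rng.normal(size=8))  # fake image features
        labels.append(sid % 2)                    # fake diagnosis label

X = np.stack(scan_features)
y = np.array(labels)
groups = np.array(subject_ids)

# Split by subject, not by scan: all visits of a subject stay together.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=groups))

# No subject appears on both sides of the split.
assert set(groups[train_idx]).isdisjoint(groups[test_idx])
```

With such a split in place, the "waste of data" objection would seem to go away, which makes the baseline-only convention all the more puzzling to me.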

Examples of the papers I mean:

[1] C. Lian, M. Liu, J. Zhang, and D. Shen, “Hierarchical fully convolutional network for joint atrophy localization and Alzheimer’s disease diagnosis using structural MRI,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 4, pp. 880–893, 2020.

[2] D. Lu, K. Popuri, G. W. Ding, R. Balachandar, and M. F. Beg, “Multiscale deep neural network based analysis of FDG-PET images for the early diagnosis of Alzheimer’s disease,” Med. Image Anal., vol. 46, pp. 26–34, 2018.

[3] H.-I. Suk, S.-W. Lee, and D. Shen, “Deep ensemble learning of sparse regression models for brain disease diagnosis,” Med. Image Anal., vol. 37, pp. 101–113, 2017.
