
Dear experts in MRI processing community,

In clinical practice, many MRI/CT images are acquired with a slice thickness of around 5 mm or more, sometimes with an inter-slice gap (e.g., 1.5 mm) as well. To make routine clinical images usable for research, it would be very interesting to perform super-resolution reconstruction that exploits the high in-plane resolution of each acquisition orientation and merges the stacks into a single volume with higher through-plane resolution.

For example, if I have axial, coronal, and sagittal stacks covering the whole brain in one subject (e.g., 30 slices in each orientation), is there a good way to fuse/interpolate the three datasets intra-modality and intra-subject?

1. Conventional method: intra-modality, intra-subject linear co-registration followed by merging of the three stacks (a minimal sketch of this is shown after the list below).

2. Newer method: the deep convolutional neural network (DCNN) based marginal super-resolution (MSR) approach (Peng et al., Deep Slice Interpolation via Marginal Super-Resolution, Fusion and Refinement, arXiv, 2019).
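
For reference, this is roughly what I mean by option 1: a minimal sketch using SimpleITK, assuming three NIfTI stacks. The file names, the 1 mm target spacing, and the registration parameters are all placeholders, and the simple averaging at the end is a naive fusion rather than a recommended recipe.

```python
import SimpleITK as sitk

def register_to_reference(moving, reference):
    """Rigid (6-DOF) co-registration of one thick-slice stack to the reference stack."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    initial = sitk.CenteredTransformInitializer(
        reference, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)
    return reg.Execute(reference, moving)

# Placeholder file names for the three orthogonal acquisitions.
axial    = sitk.ReadImage("axial.nii.gz",    sitk.sitkFloat32)
coronal  = sitk.ReadImage("coronal.nii.gz",  sitk.sitkFloat32)
sagittal = sitk.ReadImage("sagittal.nii.gz", sitk.sitkFloat32)

# Define a 1 mm isotropic target grid covering the axial field of view.
iso = (1.0, 1.0, 1.0)
size = [int(round(sz * sp / ns))
        for sz, sp, ns in zip(axial.GetSize(), axial.GetSpacing(), iso)]
grid = sitk.Image(size, sitk.sitkFloat32)
grid.SetSpacing(iso)
grid.SetOrigin(axial.GetOrigin())
grid.SetDirection(axial.GetDirection())

# Register coronal/sagittal to axial, resample all three stacks onto the
# isotropic grid, and average them (naive fusion; a coverage-weighted or
# median fusion would handle field-of-view differences more gracefully).
identity = sitk.Transform(3, sitk.sitkIdentity)
resampled = [sitk.Resample(axial, grid, identity,
                           sitk.sitkBSpline, 0.0, sitk.sitkFloat32)]
for stack in (coronal, sagittal):
    tfm = register_to_reference(stack, axial)
    resampled.append(sitk.Resample(stack, grid, tfm,
                                   sitk.sitkBSpline, 0.0, sitk.sitkFloat32))

fused = (resampled[0] + resampled[1] + resampled[2]) / 3.0
sitk.WriteImage(fused, "fused_isotropic.nii.gz")
```

My question is essentially whether a more sophisticated fusion/interpolation scheme (such as the DCNN-based MSR method above) recovers meaningfully more through-plane detail than this register-and-average baseline.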

Any other recommendations on the pipeline or algorithms will be appreciated.

Thank you very much!
