I am developing a co-registration method to compare two 3D MRI exams of the same patient (before and after chemotherapy treatment, acquired with the same MRI modality). You can see the problem illustrated in the enclosed image.
I would use a random generator to produce, for example, 100 parameter sets. Each set contains rotations about all three axes and shifts along all three axes. Then I would apply a rigid (affine) transformation to your data set according to each generated parameter set and co-register the 100 newly transformed images back to the reference. If you place landmarks on both the reference image and the transformed image, you can calculate the error as the RMS of the coordinate differences between them. The result is 100 RMS values.
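As a rough Python sketch of that validation loop (the landmark coordinates, parameter ranges, and the registration step itself are placeholders you would replace with your own data and software):

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
n_trials = 100

# synthetic landmark coordinates for illustration; use your real reference landmarks here
landmarks = rng.uniform(0, 100, size=(10, 3))

rms_values = []
for _ in range(n_trials):
    # random rigid parameters: rotations about the three axes (deg) and shifts (mm); ranges are assumptions
    angles = rng.uniform(-10, 10, size=3)
    shifts = rng.uniform(-5, 5, size=3)
    R = Rotation.from_euler("xyz", angles, degrees=True).as_matrix()

    # apply the known rigid transform to the landmarks (the image would be resampled the same way)
    moved = landmarks @ R.T + shifts

    # ... resample the image with this transform, co-register it back to the reference,
    # and map the moved landmarks through the recovered registration ...
    recovered = moved  # placeholder for the landmark positions after registration

    # RMS of the coordinate differences between reference and recovered landmark positions
    rms = np.sqrt(np.mean(np.sum((recovered - landmarks) ** 2, axis=1)))
    rms_values.append(rms)

print(f"mean RMS over {n_trials} trials: {np.mean(rms_values):.2f} mm")
```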
You can also co-register after thresholding up to the tumor level, and then apply the saved transformation matrix to the full image. This can help if the software you use cannot handle the full complexity of the images.
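A minimal sketch of that idea with SimpleITK (the file names, threshold value, and saved transform file are assumptions; the registration of the thresholded images is done with whatever method you prefer):

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("pre_treatment.nii.gz", sitk.sitkFloat32)    # hypothetical file names
moving = sitk.ReadImage("post_treatment.nii.gz", sitk.sitkFloat32)

# keep only intensities at or above the tumor level (threshold value is an assumption)
tumor_level = 300
fixed_thr = sitk.BinaryThreshold(fixed, lowerThreshold=tumor_level, upperThreshold=1e6)
moving_thr = sitk.BinaryThreshold(moving, lowerThreshold=tumor_level, upperThreshold=1e6)

# ... co-register fixed_thr / moving_thr with your preferred method and save the transform ...
transform = sitk.ReadTransform("tumor_registration.tfm")  # hypothetical saved transform

# apply the saved transform to the full-resolution image
resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(resampled, "post_registered.nii.gz")
```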
Good question(s)! There are a few things to address here... First is the matter of co-registering the images. You can use manual landmarks to register images (thin plate spline, etc.), but it seems like you would have to create a large number of corresponding landmarks. The community has developed several automated registration algorithms that could be of interest to you as well if throughput is important to your research.
Since you have more localized warping than an affine model can handle, it seems that you may want to use a more powerful registration technique. I would suggest looking into ANTs for a good collection of powerful registration methods (rigid, affine, elastic, diffeomorphic, etc.) that are easy to use. If you want more control over the procedure, you could get familiar with ITK (which ANTs is based on) to make use of its extensive API. Most registration software uses algorithms from ITK and its sister package VTK for data visualization. Matlab also has some diffeomorphic registration functionality as well as the standard rigid, affine, and elastic transformation models.
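For illustration, a minimal sketch with ANTsPy (the Python wrapper around ANTs); the file names are placeholders, and "SyN" is just one of the available transform types (rigid and affine are also offered):

```python
import ants

fixed = ants.image_read("pre_treatment.nii.gz")    # hypothetical file names
moving = ants.image_read("post_treatment.nii.gz")

# "SyN" runs an affine initialization followed by a symmetric diffeomorphic stage;
# "Rigid" or "Affine" can be used instead for simpler transformation models
result = ants.registration(fixed=fixed, moving=moving, type_of_transform="SyN")

warped = result["warpedmovout"]  # moving image resampled into the fixed image space
ants.image_write(warped, "post_registered.nii.gz")
# result["fwdtransforms"] holds the transform files, which can be reused with ants.apply_transforms
```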
Next is the validation part. You can validate several things as a result of co-localization, but I'll just mention two items here.
There are lots of metrics one can use to validate registration accuracy. Mutual information (Mattes) is a good metric; it is general enough to give good results for two images of the same or differing modalities (e.g., MRI-PET, MRI-CT, etc.). Since you are using the same modality, cross-correlation is reasonable too, and there are several others. ITK gives you the option of pairing different metrics with any of its registration models.
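A small SimpleITK sketch of how the metric is swapped independently of the transform model (the optimizer settings here are arbitrary examples):

```python
import SimpleITK as sitk

reg = sitk.ImageRegistrationMethod()

# same-modality MRI: correlation is a reasonable choice ...
reg.SetMetricAsCorrelation()
# ... or switch to Mattes mutual information, which also handles differing modalities
# reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)

reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetInitialTransform(sitk.AffineTransform(3))  # any transform model can be paired with either metric
reg.SetInterpolator(sitk.sitkLinear)

# registered_transform = reg.Execute(fixed_image, moving_image) would then run the registration
```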
If you are seeking a validation method to assess the tumor size via a comparison of the pre- and post-treatment images, you should register the after image (moving) to the before image (fixed).
The analysis of treatment effectiveness can be explored in a number of ways as well. You can parcellate the brain into different ROIs (using an anatomical atlas) and apply a voxelwise approach to the segmented ROIs. You could also perform whole-brain analysis with statistical methods. Piggybacking on Denny's suggestion to threshold up to the tumor value, there are many metrics such as the Dice coefficient, Jaccard index, mean overlap coefficient, etc. that can give you a similarity result for two binary datasets; a small sketch follows below.
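A minimal NumPy sketch of the Dice and Jaccard computations, assuming you already have two binary tumor masks in the same (registered) space:

```python
import numpy as np

def overlap_metrics(a: np.ndarray, b: np.ndarray):
    """Dice coefficient and Jaccard index for two binary masks of the same shape."""
    a = a.astype(bool)
    b = b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 2.0 * intersection / (a.sum() + b.sum())
    jaccard = intersection / union
    return dice, jaccard

# example: masks obtained by thresholding the registered pre/post images at the tumor level
# dice, jaccard = overlap_metrics(pre_tumor_mask, post_tumor_mask)
```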
Thank you very much, your answer contains interesting information. To explain my research problem further: I compare two breast MRI scans of the same patient, acquired before and after chemotherapy. Let us suppose that the size of the tumor does not change; in fact, after one chemotherapy session the tumor size generally does not change. Instead, the change takes place at the intra-tumor level, i.e., the spatial relationship between tumor voxels changes (heterogeneity). This heterogeneity interests us because it may allow us to predict whether or not the tumor has responded to the chemotherapy treatment.
Back to the registration, which aligns both volumes to allow a good voxel-by-voxel comparison. This stage of processing is therefore important for the rest of the image analysis follow-up: two well-aligned 3D tumors allow a good comparison and reveal what has changed at the intra-tumor level.
I currently use an affine registration with 12 degrees of freedom (DOF): TranslationX, TranslationY, TranslationZ, RotationX, RotationY, RotationZ, ScaleX, ScaleY, ScaleZ, and SkewX, SkewY, SkewZ. Visually, and from an anatomical point of view, the co-registration looks good, but I am trying to validate it with more robust methods (such as the ones you cited) and am looking for how to use them.
Probably the most reliable and clinically most relevant way of verifying registration results is to ask experts to identify the same landmark points in both the fixed and moving image and then compute the distance between the fixed and transformed moving points.
Similarly, you can segment relevant structures in both the fixed and moving image and compare the fixed segments with the transformed moving segments. Hausdorff distance (mean, 95th percentile) is a good metric; it tells you how well the surfaces of the segments are matched. Some people use the Dice Similarity Coefficient for segment comparison, which is a very poor metric, as its value is highly dependent on the shape of the segments (but it is very easy to compute, which is probably why many people still report it in papers instead of the Hausdorff distance).
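For reference, both distances can also be computed outside Slicer with SimpleITK; in this sketch the file names are assumptions and the moving segment is assumed to have already been resampled into the fixed image space (the 95th-percentile variant is not provided directly by these filters and would need the full surface-distance distribution):

```python
import SimpleITK as sitk

fixed_seg = sitk.ReadImage("fixed_segment.nii.gz")                # hypothetical file names
moving_seg = sitk.ReadImage("moving_segment_transformed.nii.gz")  # already in the fixed space

# maximum and average surface distance (classic Hausdorff)
hd_filter = sitk.HausdorffDistanceImageFilter()
hd_filter.Execute(fixed_seg, moving_seg)
print("Hausdorff distance:", hd_filter.GetHausdorffDistance())
print("average Hausdorff distance:", hd_filter.GetAverageHausdorffDistance())

# Dice, for comparison (easy to compute, but shape-dependent as noted above)
overlap_filter = sitk.LabelOverlapMeasuresImageFilter()
overlap_filter.Execute(fixed_seg, moving_seg)
print("Dice:", overlap_filter.GetDiceCoefficient())
```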
Both point-based and segment-based evaluation can be performed in 3D Slicer (http://www.slicer.org - free, open-source medical image visualization and analysis software).
For segment-based evaluation, install the SlicerRT extension and use the Segment Comparison module (it can compute both Hausdorff and Dice metrics).
For point-based evaluation, you create two markup fiducial lists, apply the computed transform to the moving list, harden the transform, and compute the distances in Python, or save the transformed point positions to a file and compute the distances in Excel, R, etc.
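The distance computation itself is a few lines of NumPy; in this sketch the two files are hypothetical exports of the corresponding point coordinates (one x, y, z row per landmark) from the fixed list and the hardened moving list:

```python
import numpy as np

# corresponding landmark coordinates, one row per landmark (x, y, z)
fixed_points = np.loadtxt("fixed_points.txt")    # hypothetical exported coordinates
moving_points = np.loadtxt("moving_points.txt")

distances = np.linalg.norm(fixed_points - moving_points, axis=1)  # per-landmark error in mm
print(f"mean error: {distances.mean():.2f} mm")
print(f"max error:  {distances.max():.2f} mm")
print(f"RMS error:  {np.sqrt(np.mean(distances ** 2)):.2f} mm")
```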
If you have any questions, post them to http://discourse.slicer.org; you typically get expert help within a few hours.
Regarding the reliability of registration methods, keep in mind that it was brain MRI that drove most of the development of registration methods in ITK, ANTs, and similar toolkits. In general, breast registration is a much more difficult problem because of the lack of an anatomically defined reference. I don't think you would get to the point of voxel-by-voxel comparison for breast MRI with current techniques. I would instead try to design features for each tumor, such as GLCM texture in the original image space, and then compare those features instead of directly comparing voxels.
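As a small illustration of the GLCM idea with scikit-image (which computes GLCMs on 2D images, so a 3D tumor would be handled slice by slice or with a dedicated radiomics tool); the gray-level count and the chosen properties are just examples:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # named greycomatrix/greycoprops in older scikit-image

def glcm_features(tumor_slice: np.ndarray, levels: int = 32):
    """GLCM texture features for one 2D tumor slice (intensities already rescaled to 0..levels-1)."""
    glcm = graycomatrix(tumor_slice.astype(np.uint8),
                        distances=[1],
                        angles=[0, np.pi / 2],
                        levels=levels,
                        symmetric=True,
                        normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# compare the pre- and post-treatment feature vectors instead of the raw voxels, e.g.:
# pre_features = glcm_features(pre_tumor_slice); post_features = glcm_features(post_tumor_slice)
```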