When applying Random Forest classifiers for 3D image segmentation, what is the best practice for dealing with the large size of the images in the training and test steps? Is it common practice to resize (downsample) the images?
One common practice is to represent those images, or ROIs within them, as vectors of statistical features. I did that with RF and SVM. If you still end up with a large feature matrix, you can resort to dimensionality reduction techniques such as PCA.
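To illustrate the idea, here is a minimal sketch with scikit-learn. The feature set (simple intensity statistics per ROI) and the synthetic data are my own assumptions for demonstration, not the answerer's actual pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

def roi_features(roi):
    """Summarise a 3D ROI (NumPy array) with simple intensity statistics.
    These six features are just an illustrative choice."""
    return np.array([roi.mean(), roi.std(), roi.min(), roi.max(),
                     np.percentile(roi, 25), np.percentile(roi, 75)])

# Synthetic data: 100 small 3D ROIs whose mean intensity depends on the label
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=100)
rois = [rng.normal(loc=y, size=(16, 16, 16)) for y in labels]

# Each ROI becomes a short feature vector instead of 16*16*16 = 4096 voxels
X = np.stack([roi_features(r) for r in rois])

# PCA shrinks the feature matrix further before the Random Forest
model = make_pipeline(PCA(n_components=3),
                      RandomForestClassifier(n_estimators=50, random_state=0))
model.fit(X, labels)
```

The point is that the classifier never sees raw voxels, so memory stays bounded regardless of the original image resolution.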
Hope that has answered your question to some extent!
Well, I would say that my answer above pertains mainly to 2D images, which can be seen as projections of your 3D model. I have worked with 3D reconstruction, but no machine learning was involved.
If you are having memory issues, then try downsampling your images as you suggested, although I would strongly recommend reconstructing and analysing only the ROI (eliminate all voxels that are of no interest to your study, i.e. clean your projection images). This is a generic answer, as I don't know exactly what your project is about, but I hope it helps or at least points you in a useful direction.
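As a quick sketch of the two memory-saving steps mentioned above (crop to the ROI first, then downsample), using SciPy; the volume and the ROI bounds are hypothetical placeholders:

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical volume; in practice this is your reconstructed 3D image
volume = np.random.default_rng(0).random((128, 128, 128))

# 1) Crop to the ROI first, discarding voxels of no interest to the study
roi = volume[32:96, 32:96, 32:96]

# 2) Then downsample by a factor of 2 per axis (order=1 = trilinear interpolation)
small = zoom(roi, 0.5, order=1)
print(small.shape)  # (32, 32, 32): 64x fewer voxels than the full volume
```

Cropping before interpolating keeps the downsampling cost proportional to the ROI, not the whole volume.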