Hello everyone,

I have two problems:

The first problem:

I want to segment the color image of an object using the depth image. The problem is that the two images were acquired by two different cameras on the same sensor, i.e. from two different viewpoints, so when I try to use the depth image for segmentation the two images are not aligned.
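
To make the problem concrete, here is a minimal sketch of what I understand depth-to-color registration to require, assuming the intrinsics of both cameras and the rigid transform between them are known from calibration. All names (K_depth, K_color, R, t) are placeholders for illustration, not a specific sensor API:

```python
# Sketch: map every depth pixel into the color image, assuming known calibration.
import numpy as np

def register_depth_to_color(depth, K_depth, K_color, R, t):
    """Return, for each depth pixel, its (u, v) position in the color image.

    depth   : (H, W) array of depth values in metres (0 = invalid)
    K_depth : (3, 3) depth camera intrinsic matrix
    K_color : (3, 3) color camera intrinsic matrix
    R, t    : rotation (3, 3) and translation (3,) from depth to color frame
    """
    H, W = depth.shape

    # Pixel grid of the depth image, as homogeneous coordinates (3, N)
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.reshape(-1)
    pix = np.stack([u.reshape(-1), v.reshape(-1), np.ones_like(z)])

    # Back-project depth pixels to 3D points in the depth camera frame
    rays = np.linalg.inv(K_depth) @ pix
    pts_depth = rays * z

    # Move the points into the color camera frame
    pts_color = R @ pts_depth + t.reshape(3, 1)

    # Project into the color image plane (pixels with z == 0 are invalid)
    proj = K_color @ pts_color
    uc = proj[0] / proj[2]
    vc = proj[1] / proj[2]
    return uc.reshape(H, W), vc.reshape(H, W)
```

With these (uc, vc) maps, every valid depth pixel can be paired with the color it projects onto, which is the alignment I am missing.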

The second problem:

For some sensors (e.g. the Kinect v2), the acquired color and depth images have different resolutions, so I end up with two images of different sizes, which makes segmentation impossible as-is. Simply resizing one of the images does not seem like the right solution, since one image carries much more information than the other (a sketch of the alternative I have in mind follows below).
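
My understanding is that the same registration could also handle the resolution mismatch: instead of resizing either image, each depth pixel is projected into the high-resolution color image and the color is sampled there, giving a color image aligned to the depth resolution. A minimal sketch, reusing the (uc, vc) maps from the hypothetical function above (nearest-neighbour lookup only, to keep it short):

```python
# Sketch: sample the high-resolution color image at the projected depth-pixel
# positions, producing a color image registered to the depth resolution.
import numpy as np

def sample_color_at_depth(color, uc, vc):
    """color: (Hc, Wc, 3) image; uc, vc: maps from register_depth_to_color()."""
    Hc, Wc = color.shape[:2]
    ui = np.clip(np.round(uc).astype(int), 0, Wc - 1)
    vi = np.clip(np.round(vc).astype(int), 0, Hc - 1)
    return color[vi, ui]   # (Hd, Wd, 3) color aligned to the depth image
```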

I would be grateful if anyone has an idea about these two issues. Thank you very much for your help.
