The point cloud I generate from (x,y,z) world coordinates, computed from the Kinect v1 RGB and depth video frames, looks like a flat 2D image rather than a proper 3D model; the depth does not appear to be filled in correctly. This may partly be because the Kinect v1 encodes depth samples that are too far or too close as 2047 and 0 respectively. I currently treat both values as zero depth and still include those pixels when generating (x,y,z), which affects the point cloud. Can anyone provide insight into why the depth is not being filled in, and how volume blending can be done to obtain a proper 3D model? Any advice, pointers, or links would be highly appreciated. Thanks
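
For reference, here is a minimal sketch of the back-projection I am describing, assuming approximate Kinect v1 intrinsics and the commonly cited Nicolas Burrus raw-to-meters formula (your device's calibration will differ slightly). Note that it drops invalid samples (raw 0 or 2047) entirely instead of keeping them with z = 0, since zero-depth points all collapse toward the camera plane:

```python
import numpy as np

# Approximate Kinect v1 depth-camera intrinsics (Burrus calibration values;
# these are assumptions, not values measured on my device).
FX, FY = 594.21, 591.04
CX, CY = 339.5, 242.7

def raw_to_meters(raw):
    """Convert 11-bit Kinect v1 raw depth to meters (Burrus approximation).

    Raw values of 0 and 2047 mark invalid samples (too close / too far)
    and are returned as NaN, along with any non-physical results.
    """
    with np.errstate(divide="ignore"):
        depth = 1.0 / (raw * -0.0030711016 + 3.3309495161)
    bad = (raw == 0) | (raw == 2047) | ~np.isfinite(depth) | (depth <= 0)
    depth[bad] = np.nan
    return depth

def depth_to_points(raw_frame):
    """Back-project a 480x640 raw depth frame to an (N, 3) point cloud."""
    h, w = raw_frame.shape
    z = raw_to_meters(raw_frame.astype(np.float64))
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * z / FX   # pinhole camera model
    y = (v - CY) * z / FY
    pts = np.dstack((x, y, z)).reshape(-1, 3)
    return pts[~np.isnan(pts[:, 2])]   # keep only valid samples
```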
