I recently came across an example in the OpenCV documentation: https://docs.opencv.org/master/dd/d53/tutorial_py_depthmap.html

where stereo vision is performed using two cameras. From the captures of both cameras, a disparity map (which encodes depth) can be computed, but the result is quite noisy. I am interested in whether the disparity map could be improved by installing additional cameras. How can this be done using only conventional methods (epipolar geometry), without any deep learning techniques? Can someone point me to relevant literature, source code, etc.?
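For reference, this is roughly the two-camera pipeline from the linked tutorial that produces the noisy map (a minimal sketch; the image file names are placeholders for an already rectified stereo pair):

```python
import cv2
import numpy as np

# Load a rectified stereo pair as grayscale images
# (placeholder file names, as in the tutorial's Tsukuba example).
imgL = cv2.imread('tsukuba_l.png', cv2.IMREAD_GRAYSCALE)
imgR = cv2.imread('tsukuba_r.png', cv2.IMREAD_GRAYSCALE)

# Simple block matching, essentially what the tutorial uses.
stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgL, imgR)

# Scale the raw disparities to 0-255 for visualization.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite('disparity.png', vis.astype(np.uint8))
```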
