21 April 2014

I have been studying LF imaging for a while now, and frankly it seems to me that any imaging setup is a light field imaging setup. Even a single camera is a light field sensor: a sparse one, yes, but still an LF sensor. One can always "fly around" a static scene with a single camera and build a model. How should one define the point at which light field imaging becomes different from multiple-view imaging (or rendering)? Is it the need for calibration (i.e., sensor density)? Or is there any real difference at all? Are we just talking about the same thing under different labels (which is a bit of a pity for some fresh machine vision students)?
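To make the "sparse vs. dense sampling" framing concrete, here is a minimal NumPy sketch of the standard two-plane light field parameterization L(u, v, s, t), where (u, v) indexes the camera/aperture position and (s, t) indexes the image pixel. The array sizes and index values are hypothetical placeholders, not from any particular device:

```python
import numpy as np

# Hypothetical dense two-plane light field L[u, v, s, t]:
# (u, v) = camera/aperture position, (s, t) = pixel coordinate.
U, V, S, T = 8, 8, 64, 64
L = np.random.rand(U, V, S, T)  # placeholder radiance samples

# A single conventional camera is one (u, v) sample of this 4D function:
single_view = L[3, 4]            # shape (64, 64): an ordinary 2D image

# "Flying around" a static scene sweeps out more (u, v) samples over time,
# which is exactly what a multi-view rig captures:
sparse_views = L[::4, ::4]       # shape (2, 2, 64, 64): a sparse camera array

# A plenoptic camera instead captures the full (u, v) grid in one exposure:
dense_field = L                  # shape (8, 8, 64, 64)
```

Under this view, the difference seems to be only where the setup sits on the sampling spectrum of the same 4D function, which is precisely what prompts my question.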
