What uncertainties would one expect if the same sensor model, say a camera, were mounted on different but similar platforms? An example would be the ACC camera mounted on a number of identical vehicles. I would guess there would be small position errors/uncertainties relative to each platform's reference point or CG, but I imagine these would be small and could perhaps be calibrated out, or removed through effective data fusion techniques, yes/no?
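To put numbers on the "small but not 0" mounting errors, here's a rough back-of-the-envelope sketch (Python; the 5 mm and 0.2 deg sigmas are just assumptions I made up, not from any real spec) of how a per-vehicle mounting offset and tilt would map into error on a target 30 m ahead. Any fixed bias could presumably be calibrated out; the vehicle-to-vehicle scatter is what would remain as uncertainty:

```python
# Rough sketch (made-up numbers): how a small camera-mounting error on each
# vehicle turns into error in the vehicle-frame position of a target.
import numpy as np

rng = np.random.default_rng(0)

# Nominal camera extrinsics w.r.t. the vehicle reference point (CG):
# 2 m forward, 1.2 m up, looking straight ahead. (Assumed values.)
t_nominal = np.array([2.0, 0.0, 1.2])

sigma_t = 0.005              # 5 mm mounting-position scatter per axis (assumed)
sigma_rot = np.deg2rad(0.2)  # 0.2 deg mounting-angle scatter per axis (assumed)

def small_rotation(angles):
    """First-order rotation matrix for small roll/pitch/yaw angles."""
    rx, ry, rz = angles
    return np.array([[1, -rz, ry],
                     [rz, 1, -rx],
                     [-ry, rx, 1]])

# A target 30 m ahead of the camera, expressed in the (true) camera frame.
p_cam = np.array([30.0, 0.0, 0.0])

errors = []
for _ in range(1000):  # 1000 "vehicles" with slightly different mounts
    dt = rng.normal(0.0, sigma_t, 3)
    dR = small_rotation(rng.normal(0.0, sigma_rot, 3))
    # Where the vehicle thinks the target is (using nominal extrinsics)
    # vs. where it actually is (using the true, perturbed extrinsics).
    p_true = dR @ p_cam + (t_nominal + dt)
    p_assumed = p_cam + t_nominal
    errors.append(p_true - p_assumed)

errors = np.asarray(errors)
print("mean error  [m]:", errors.mean(axis=0))  # bias -> calibratable
print("std of error[m]:", errors.std(axis=0))   # scatter -> stays as uncertainty
```

With those assumed numbers, the angular part dominates at range (0.2 deg at 30 m is roughly 0.1 m laterally), which is why I'd guess mounting angle tolerance matters more than a few millimetres of position error.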
Added: An example to hopefully clarify what I'm asking. Say there's a caravan of passenger cars (all the same make) and each of them has the same set of cameras installed. They all go down the road following the same path, one after the other like a train. What would be the uncertainties across the data collected from those cameras, due to the fact that they are mounted on different vehicles? I'm thinking there might be some temporal ones, some from slight differences in their mounting locations (very small, I'd guess, but not 0), and some from the fact that the cars would not follow the exact same path. Am I missing any? A sketch of my intuition for the temporal and path parts follows below.
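Just to check my own intuition on those two sources, here's another rough sketch (again Python, and again all the sigmas are numbers I made up for illustration) where each car in the caravan has a small camera timestamp error, a small lateral offset from the lead car's path, and a small heading difference, and I look at the spread in where a fixed roadside landmark lands in each car's own frame:

```python
# Made-up sketch for the caravan example: per-car timing offsets and small path
# deviations, and what they do to where a fixed roadside landmark shows up in
# each car's own frame. All sigmas are assumptions.
import numpy as np

rng = np.random.default_rng(1)

speed = 25.0                     # m/s, ~90 km/h (assumed)
sigma_clock = 0.010              # 10 ms camera timestamp error per car (assumed)
sigma_lateral = 0.10             # 10 cm lateral deviation from the lead car's path (assumed)
sigma_heading = np.deg2rad(0.3)  # small heading difference while tracking the path (assumed)

# Landmark 50 m ahead and 3 m to the side, in a road-aligned frame.
landmark = np.array([50.0, 3.0])

offsets = []
for _ in range(20):  # 20 cars in the caravan
    dt = rng.normal(0.0, sigma_clock)      # clock error -> along-track position error
    dy = rng.normal(0.0, sigma_lateral)    # not exactly on the same path
    dpsi = rng.normal(0.0, sigma_heading)  # small heading difference
    # Car's actual pose relative to the reference path at the nominal timestamp.
    car_pos = np.array([speed * dt, dy])
    c, s = np.cos(dpsi), np.sin(dpsi)
    R = np.array([[c, s], [-s, c]])        # road frame -> car frame
    offsets.append(R @ (landmark - car_pos))

offsets = np.asarray(offsets)
print("spread of landmark position across cars [m]:", offsets.std(axis=0))
```

The clock error mostly shows up along-track (it scales with speed, so 10 ms at 25 m/s is about 0.25 m), while the heading differences scale with range to the landmark, which seems consistent with what I was expecting.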
Thanks...