Hello all,

I have a question regarding the implementation presented in paper titled "Real-Time Metric State Estimation for Modular Vision-Inertial Systems" by Stephan Weiss.

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5979982

At epoch 1, the monocular visual odometry starts at the origin, as does the IMU (both frames are aligned with the world coordinate system).

At epoch 2, the visual odometry reports motion along the Z-axis (in the world frame), while the IMU reports motion along the X-axis (in the world frame), and the camera position is expressed in the world frame.

The update and correction step uses the visual measurement (which moves along the Z-axis). For the fusion to succeed, if we transform the visual odometry into the IMU frame, the two trajectories should point in the same direction.

In that case, would our measurement vector z_p also change, or do we keep using the same formula as in the paper?
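To make my question concrete, here is a minimal numerical sketch of how I understand it. This is not the paper's exact notation: I am assuming a position measurement of the form z_p = λ · C(q_vw) · (p_w_i + C(q_wi) · p_i_c), where the rotation between the world and vision frames (R_vw) and the visual scale λ are part of the filter state. The specific angle, the zero camera-IMU offset, and the variable names below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rot_y(theta):
    """Rotation matrix about the y-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,   0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s,  0.0, c]])

# Illustrative assumption: the vision frame is rotated 90 degrees about y
# relative to the world frame, so world-frame x maps onto vision-frame z.
R_vw = rot_y(-np.pi / 2)   # world -> vision rotation, part of the state
lam = 1.0                  # visual scale factor, also part of the state

p_w_i = np.array([1.0, 0.0, 0.0])  # IMU position in world (moves along x)
R_wi = np.eye(3)                   # IMU attitude (identity for simplicity)
p_i_c = np.zeros(3)                # camera-IMU offset (zero here)

# The measurement equation itself is unchanged; the frame misalignment is
# absorbed by R_vw, so z_p comes out along the vision z-axis, matching what
# the camera reports.
z_p = lam * R_vw @ (p_w_i + R_wi @ p_i_c)
print(z_p)
```

If this reading is right, the formula for z_p stays the same and only the estimated misalignment rotation accounts for the different motion directions, which is exactly what I would like to confirm.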

Thank you very much for your suggestions and answers in advance.

  
