Windowed Optimization for Stereo Visual Odometry Fusion
Sensor fusion; stereo visual odometry; graph optimization
Accurate motion estimation is an essential task in the navigation of autonomous mobile agents (aerial or ground) and robots. In this thesis, we propose to reduce the error of stereo visual odometry over 6-degrees-of-freedom poses through a graph optimization-based visual odometry fusion approach that exploits the redundancy of information captured from the environment. Our approach uses two sets of stereo images from a public dataset, captured with a moving platform mounted on top of a Pioneer 3AT robot, to compute independent stereo odometry estimates with the LIBVISO algorithm and then fuse them. Our results are compared against two well-known SLAM frameworks, ORB-SLAM2 and UCOSLAM, and against the input stereo odometry. The relative pose error of the fused poses decreases by up to 94\% with respect to the stereo odometry error and by up to 91\% compared with the UCOSLAM results. Our implementations are open source and use public libraries.