TY - GEN
T1 - Inertial-Kinect Fusion for Robot Navigation based on the Extended Kalman Filter
AU - Sang, Xiaoyue
AU - Yuan, Zhaohui
AU - Yu, Xiaojun
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/10/13
Y1 - 2021/10/13
N2 - A robot needs to know its pose in order to maintain stability and follow a planned walking path. Current solutions either rely on visual data, which is easily affected by the environment and lighting conditions, or integrate kinematics with inertial measurement unit (IMU) data, which drifts as errors accumulate. To address the localization defects and stability problems of the vision sensor, this paper combines a vision sensor and an IMU to achieve high-precision pose estimation at low cost and designs a combined positioning algorithm based on the extended Kalman filter (EKF). Specifically, the number of correctly matched feature points and the depth error are proposed as the judgment conditions of the combination strategy; the IMU data are used to construct the process model, the pose estimates from the vision sensor are fused as measurements, and the vision sensor is selectively corrected. A robot positioning experiment carried out in an indoor laboratory scene shows that the proposed algorithm effectively suppresses the positioning instability of the vision sensor and improves the accuracy of the pose estimation.
AB - A robot needs to know its pose in order to maintain stability and follow a planned walking path. Current solutions either rely on visual data, which is easily affected by the environment and lighting conditions, or integrate kinematics with inertial measurement unit (IMU) data, which drifts as errors accumulate. To address the localization defects and stability problems of the vision sensor, this paper combines a vision sensor and an IMU to achieve high-precision pose estimation at low cost and designs a combined positioning algorithm based on the extended Kalman filter (EKF). Specifically, the number of correctly matched feature points and the depth error are proposed as the judgment conditions of the combination strategy; the IMU data are used to construct the process model, the pose estimates from the vision sensor are fused as measurements, and the vision sensor is selectively corrected. A robot positioning experiment carried out in an indoor laboratory scene shows that the proposed algorithm effectively suppresses the positioning instability of the vision sensor and improves the accuracy of the pose estimation.
KW - combined positioning
KW - Extended Kalman filter
KW - IMU
KW - robot vision
UR - http://www.scopus.com/inward/record.url?scp=85119499018&partnerID=8YFLogxK
U2 - 10.1109/IECON48115.2021.9589073
DO - 10.1109/IECON48115.2021.9589073
M3 - Conference contribution
AN - SCOPUS:85119499018
T3 - IECON Proceedings (Industrial Electronics Conference)
BT - IECON 2021 - 47th Annual Conference of the IEEE Industrial Electronics Society
PB - IEEE Computer Society
T2 - 47th Annual Conference of the IEEE Industrial Electronics Society, IECON 2021
Y2 - 13 October 2021 through 16 October 2021
ER -