The goal of this project is to build a small, affordable, easy-to-set-up VIO system that can be carried by a small drone. A powerful CPU or GPU is usually required for VIO because of the image-processing and feature-tracking workload. In this project I offload the computer-vision work to the OAK-D and let the Raspberry Pi focus on pose estimation. It is inspired by Sara Lucia Contreras Ojeda’s thesis “Robot pose calculation based on Visual Odometry using Optical flow and Depth map” and the SLAM-with-OAK material on the Luxonis website.
Hardware:
Luxonis OAK-D (the OAK-D Lite is lighter and cheaper and therefore probably more suitable, but I only have a second-hand OAK-D)
Raspberry Pi 4
Software
ArduCopter 4.4.4 is used here
Slightly modified VINS-Fusion
A small program processing the feature-tracking results from the OAK-D
Another small program sending the VIO data to ArduPilot
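The last item, the bridge program, can be sketched with pymavlink. This is a minimal sketch under my own assumptions (serial device path, update source), not the author's actual code; note that VINS-Fusion reports pose in an ENU-style world frame while ArduPilot's VISION_POSITION_ESTIMATE expects NED, so a frame conversion is needed (attitude needs an equivalent conversion, omitted here for brevity):

```python
import time

def enu_to_ned(x_e, y_e, z_e):
    """Convert an ENU position to the NED frame ArduPilot expects:
    north = ENU y, east = ENU x, down = -ENU z."""
    return y_e, x_e, -z_e

def send_vision_pose(master, x_enu, y_enu, z_enu, roll, pitch, yaw):
    """Send one VISION_POSITION_ESTIMATE (position in metres, angles in radians)."""
    x, y, z = enu_to_ned(x_enu, y_enu, z_enu)
    master.mav.vision_position_estimate_send(
        int(time.time() * 1e6),  # timestamp in microseconds
        x, y, z, roll, pitch, yaw)

if __name__ == "__main__":
    from pymavlink import mavutil
    # Hypothetical link matching SERIAL1 on the flight controller (1.5 Mbaud).
    master = mavutil.mavlink_connection("/dev/serial0", baud=1500000)
    master.wait_heartbeat()
    # In the real program these values come from the VINS-Fusion odometry output.
    send_vision_pose(master, 1.0, 2.0, -0.5, 0.0, 0.0, 0.0)
```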
ArduPilot setup
SERIAL1_PROTOCOL 2
SERIAL1_BAUD 1500
EK3_SRC1_POSXY 6
EK3_SRC1_VELXY 6
EK3_SRC1_VELZ 0 (can be set to 6 once test results are stable enough)
EK3_SRC1_POSZ 1
EK3_SRC1_YAW 6
VISO_TYPE 1
VISO_POS_M_NSE 0.5 (can be lowered once test results are good enough)
VISO_VEL_M_NSE 0.5
VISO_YAW_M_NSE 0.3
VISO_DELAY_MS 60
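The parameter list above can also be applied over MAVLink instead of by hand. A hedged sketch using pymavlink (values copied from the list above; the connection string is an assumption):

```python
# VIO-related parameters from the setup list, as name -> value.
VIO_PARAMS = {
    "SERIAL1_PROTOCOL": 2,    # MAVLink2
    "SERIAL1_BAUD": 1500,     # 1.5 Mbaud
    "EK3_SRC1_POSXY": 6,      # 6 = ExternalNav
    "EK3_SRC1_VELXY": 6,
    "EK3_SRC1_VELZ": 0,
    "EK3_SRC1_POSZ": 1,       # 1 = Baro
    "EK3_SRC1_YAW": 6,
    "VISO_TYPE": 1,           # MAVLink vision messages
    "VISO_POS_M_NSE": 0.5,
    "VISO_VEL_M_NSE": 0.5,
    "VISO_YAW_M_NSE": 0.3,
    "VISO_DELAY_MS": 60,
}

def as_param_list(params):
    """Flatten the dict into sorted (name, float_value) pairs for PARAM_SET."""
    return [(name, float(value)) for name, value in sorted(params.items())]

if __name__ == "__main__":
    from pymavlink import mavutil
    master = mavutil.mavlink_connection("/dev/serial0", baud=115200)  # assumption
    master.wait_heartbeat()
    for name, value in as_param_list(VIO_PARAMS):
        master.mav.param_set_send(
            master.target_system, master.target_component,
            name.encode(), value,
            mavutil.mavlink.MAV_PARAM_TYPE_REAL32)
```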
Thanks again for this, it’s very interesting. I think maybe we should consider putting this on the wiki as an alternative/replacement for the T265 support we have, now that the T265 is no longer available.
One of the key issues of the T265 (and also the ModalAI VOXL) is they can behave very badly when they start losing their position estimate. Any opinions on how this system works when things go wrong?
I guess this uses the distances from the OAK-D camera, meaning that it only works well if there are objects it can track within about 15 m or so?
BTW, I have been thinking about how to create a VIO system that works at high altitudes for cases where the GPS is lost. I have been thinking of using a downward facing camera gimbal (the Xacti in particular) and then run an optical flow algorithm (running on an RPI4/5) on the video. The idea of using segmentation or feature tracking to improve the optical flow had also crossed my mind. Any advice is greatly appreciated.
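One cheap way to run the downward-camera optical flow idea on an RPi is whole-frame phase correlation: at high altitude the ground is nearly planar, so a single global translation per frame pair is often enough. A minimal numpy sketch of the idea (my own illustration, not code from this project; integer-pixel shifts only):

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the integer-pixel translation from prev to curr
    via phase correlation (normalized FFT cross-power spectrum)."""
    f_prev = np.fft.fft2(prev)
    f_curr = np.fft.fft2(curr)
    cross = np.conj(f_prev) * f_curr
    cross /= np.abs(cross) + 1e-12      # keep phase only
    corr = np.fft.ifft2(cross).real     # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame into negative values.
    h, w = prev.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dx, dy
```

Scaled by altitude (from the rangefinder or baro) and the camera intrinsics, the pixel shift becomes a ground-velocity estimate; the gimbal keeping the camera nadir-pointed removes most of the rotation-induced flow.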
I think maybe we should consider putting this on the wiki as an alternative/replacement for the T265 support we have, now that the T265 is no longer available.
Thank you, I will try to improve this post.
One of the key issues of the T265 (and also the ModalAI VOXL) is they can behave very badly when they start losing their position estimate. Any opinions on how this system works when things go wrong?
I think it is because the T265 keeps sending poses even after it has lost visual feature tracking. I am trying to modify the VINS-Fusion source code to avoid this problem.
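The gating idea can be sketched independently of VINS-Fusion internals: stop publishing the pose when tracking degrades, so EKF3 times out the external-nav source instead of fusing a diverging estimate. The thresholds and inputs below are my own assumptions for illustration, not actual VINS-Fusion fields:

```python
def should_publish(num_tracked_features, position_covariance,
                   min_features=15, max_cov=1.0):
    """Return True only while the estimate still looks trustworthy.

    num_tracked_features: features the front end is currently tracking
    position_covariance:  a scalar summary of position uncertainty (m^2)
    """
    if num_tracked_features < min_features:
        return False  # tracking effectively lost
    if position_covariance > max_cov:
        return False  # estimate drifting
    return True
```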
I guess this uses the distances from the OAK-D camera, meaning that it only works well if there are objects it can track within about 15 m or so?
Yes.
BTW, I have been thinking about how to create a VIO system that works at high altitudes
Great job with the pose estimation; I’ve been following your previous work with VINS-Fusion and ArduCopter. Did you manage to somehow attenuate the episodes of the pose estimation losing track with changes in the source code?
I’m having trouble with occasional inconsistencies when using a monocular camera (20 Hz) and RAW_IMU from APM at 100 Hz. Perhaps the time synchronization or the low IMU frequency is affecting the system.
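On the 20 Hz camera / 100 Hz IMU mismatch, one common step is to interpolate the IMU stream onto each image timestamp before feeding the estimator. A hedged numpy sketch (timestamps in seconds; a real setup must also handle the clock offset between the autopilot and the companion computer):

```python
import numpy as np

def imu_at_image_times(imu_t, imu_vals, image_t):
    """Linearly interpolate each IMU channel to the image timestamps.

    imu_t:    (N,) IMU timestamps, seconds, monotonically increasing
    imu_vals: (N, C) IMU samples (e.g. gyro xyz, accel xyz)
    image_t:  (M,) image timestamps within [imu_t[0], imu_t[-1]]
    Returns a (M, C) array of interpolated IMU samples.
    """
    imu_vals = np.asarray(imu_vals, dtype=float)
    return np.column_stack([
        np.interp(image_t, imu_t, imu_vals[:, c])
        for c in range(imu_vals.shape[1])
    ])
```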
Hello, I want to build a drone that can fly both indoors and outdoors without GPS. Two questions: 1) I want to use VINS but I can’t get anywhere because I don’t understand what to do. 2) Can I do this without optical flow?