I am currently building an autonomous drone. I have ArduCopter running on a Pixhawk, with a Raspberry Pi 4 Model B as a companion computer.
I have written a basic visual odometry (VO) algorithm using sparse optical flow that runs on the Raspberry Pi. For every frame it returns a rotation matrix and a unit translation vector. The problem is that I have no scale information (not even relative scale): each frame gives only the direction of movement, with no indication of magnitude.
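To show concretely what I mean by the missing scale, here is a minimal sketch (pure Python, hypothetical numbers, not my actual VO code): chaining the per-frame unit directions with *any* assumed scale factor produces an equally consistent trajectory, so the magnitude cannot be recovered from the camera alone.

```python
import math

# Per-frame VO outputs, simplified to 2-D: a yaw increment (radians) and a
# unit translation direction in the body frame (hypothetical values).
frames = [
    (0.00, (1.0, 0.0)),   # straight ahead
    (0.10, (1.0, 0.0)),   # slight turn, still ahead
    (0.10, (0.0, 1.0)),   # sideways step
]

def integrate(frames, scale):
    """Chain the per-frame motions, multiplying each unit translation
    by an assumed metric scale. Returns the final (x, y) position."""
    x = y = yaw = 0.0
    for dyaw, (tx, ty) in frames:
        yaw += dyaw
        # rotate the body-frame direction into the world frame
        x += scale * (tx * math.cos(yaw) - ty * math.sin(yaw))
        y += scale * (tx * math.sin(yaw) + ty * math.cos(yaw))
    return x, y

# The two trajectories differ only by a global scale factor: the
# per-frame directions cannot distinguish 10 cm steps from 1 m steps.
p_small = integrate(frames, scale=0.1)
p_large = integrate(frames, scale=1.0)
print(p_small, p_large)
```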
What is the best way to supply this information to ArduPilot so that GUIDED mode can be enabled? There are several vision-based MAVLink messages, such as:
VISION_POSITION_ESTIMATE
GLOBAL_VISION_POSITION_ESTIMATE
ODOMETRY
OPTICAL_FLOW
VISION_POSITION_DELTA
The documentation on all of these tends to be sparse, which is quite daunting for a beginner like myself.
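From what I can tell, most of these messages share a similar payload: a timestamp in microseconds, a position in meters in a NED-style frame, and an attitude as roll/pitch/yaw in radians (ODOMETRY uses a quaternion). So even before the scale question, I would need to convert my per-frame rotation matrix into Euler angles. A sketch of that conversion (my own helper, nothing ArduPilot-specific; assumes the ZYX convention and that pitch is away from the ±90° singularity):

```python
import math

def rotation_to_euler(R):
    """Convert a 3x3 rotation matrix (list of rows) to ZYX Euler angles
    (roll, pitch, yaw) in radians, the form the vision messages carry.
    Assumes pitch is away from the +/-90 degree singularity."""
    pitch = -math.asin(R[2][0])
    roll = math.atan2(R[2][1], R[2][2])
    yaw = math.atan2(R[1][0], R[0][0])
    return roll, pitch, yaw

# Sanity check with a pure 90-degree yaw rotation (hypothetical input).
R_yaw90 = [
    [0.0, -1.0, 0.0],
    [1.0,  0.0, 0.0],
    [0.0,  0.0, 1.0],
]
roll, pitch, yaw = rotation_to_euler(R_yaw90)
print(roll, pitch, yaw)  # roughly (0, 0, pi/2)
```

I would then populate the message fields (e.g. via pymavlink) with these angles plus the position, but the position is exactly where my missing scale bites.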
I was thinking of using the IMU data to scale the translation.
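Roughly, the idea would be to double-integrate the gravity-compensated accelerometer samples between two camera frames and use the magnitude of the resulting displacement as the scale for the unit translation. A sketch of what I mean (`scale_from_imu` is a hypothetical helper of my own; it assumes the samples are already rotated into a common frame and bias-corrected, and I know drift would make this usable only over short intervals):

```python
import math

def scale_from_imu(accels, dt, v0=(0.0, 0.0, 0.0)):
    """Very rough scale estimate: forward-Euler double integration of
    gravity-compensated accelerometer samples (m/s^2) between two camera
    frames, returning the magnitude of the displacement in meters."""
    v = list(v0)
    d = [0.0, 0.0, 0.0]
    for a in accels:
        for i in range(3):
            v[i] += a[i] * dt   # integrate acceleration -> velocity
            d[i] += v[i] * dt   # integrate velocity -> displacement
    return math.sqrt(sum(x * x for x in d))

# Hypothetical 0.1 s inter-frame gap sampled at 100 Hz with a constant
# 1 m/s^2 forward acceleration, starting from rest.
s = scale_from_imu([(1.0, 0.0, 0.0)] * 10, dt=0.01)
print(s)  # 0.0055 m with forward-Euler; the exact 0.5*a*t^2 is 0.005 m
```

The resulting `s` would multiply the unit translation before it goes into the position fields of whichever vision message I end up using.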
Is there a better method available?
Is it possible to supply only the (unit) translation and rotation per frame?
So, just to clarify: if I were to do the VO myself, would I need to supply absolute scale (in meters) to ArduPilot using my IMU? Is it not possible to send relative measurements (in arbitrary units) to ArduPilot and have it use those?