GPS Deprived Environments: What's the best method for visual odometry?

I am currently building an autonomous drone. I have ArduCopter running on a Pixhawk with a Raspberry Pi 4B companion computer.

I have created a basic visual odometry algorithm using sparse optical flow that runs on the Raspberry Pi. It returns a rotation matrix and a unit translation vector for every frame. The issue with this is that I have no scale (not even relative): each frame gives only the direction of movement, with no indication of magnitude.
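For reference, here is a minimal sketch of the kind of frame-to-frame pipeline I mean (this assumes OpenCV and a calibrated pinhole camera; the intrinsics and variable names are just illustrative, not my actual code):

```python
import cv2
import numpy as np

# Assumed camera intrinsics (placeholder values for illustration).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def frame_to_frame_motion(prev_gray, curr_gray):
    # Detect sparse features in the previous frame and track them into the current one.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good_old = p0[status.flatten() == 1]
    good_new = p1[status.flatten() == 1]

    # Essential matrix + pose recovery: R is a full rotation, but t comes back
    # as a unit direction -- monocular VO cannot observe metric scale.
    E, mask = cv2.findEssentialMat(good_old, good_new, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good_old, good_new, K, mask=mask)
    return R, t   # ||t|| == 1, so the magnitude of the motion is unknown
```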

What is the best way to supply this information to ArduPilot so that GUIDED mode can be enabled? There are many vision-based MAVLink messages, such as:

  • VISION_POSITION_ESTIMATE
  • GLOBAL_VISION_POSITION_ESTIMATE
  • ODOMETRY
  • OPTICAL_FLOW
  • VISION_POSITION_DELTA

The documentation on these tends to be sparse, which is daunting for a beginner like myself.

I was thinking of using the IMU data to scale the translation (see the rough sketch after these questions).
Is there a better method available?
Is it possible to supply only the (unit) translation and rotation per frame?
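To make my idea concrete, this is roughly what I had in mind; it is only a sketch and assumes gravity-compensated accelerations already expressed in the same frame as the VO translation, which I know is a big assumption, and raw double integration will drift quickly:

```python
import numpy as np

def imu_displacement(accels, dts, v0):
    """Double-integrate acceleration samples taken between two camera frames.

    accels: list of 3-vectors in m/s^2 with gravity removed,
    dts: sample periods in seconds, v0: velocity at the previous frame (m/s).
    Returns (displacement_vector, velocity_at_end).
    """
    v = np.asarray(v0, dtype=float)
    disp = np.zeros(3)
    for a, dt in zip(accels, dts):
        disp += v * dt + 0.5 * np.asarray(a, dtype=float) * dt * dt
        v = v + np.asarray(a, dtype=float) * dt
    return disp, v

def scaled_translation(unit_t, accels, dts, v0):
    # Use the norm of the IMU displacement as the missing scale factor
    # for the unit translation returned by the VO step.
    disp, v_end = imu_displacement(accels, dts, v0)
    scale = np.linalg.norm(disp)          # metres moved between the two frames
    return scale * np.asarray(unit_t).flatten(), v_end
```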

You need to find that scale factor, period. We do that in our closed-source code, so yes, it is possible.

Take a look at this:
Experiment with Visual Odometry - ROVIO

You may try to implement monocular VIO on an RPi 4 and hopefully get 10 fps, which is roughly the minimum rate for acceptable flight control.

Yeah, don’t try to reinvent the wheel; all of this has been said and done. 🙂 IMU integration is covered here: https://arxiv.org/pdf/1708.03852.pdf

So just to clarify: if I were to do the VO myself, I would need to supply absolute scale (in metres) to ArduPilot using my IMU? It’s not possible to send relative values (arbitrary units) to ArduPilot and have it use those?

Hello,

ArduPilot is… an autopilot, not a state estimator, so you need to provide a correctly formed VISION_POSITION_ESTIMATE message to the system.
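Something along these lines, as a hedged sketch using pymavlink; the serial port, baud rate, and frame conventions are assumptions, and you still need to configure ArduPilot's EKF to accept external navigation per the official non-GPS navigation docs:

```python
import time
from pymavlink import mavutil

# Assumed serial link from the Raspberry Pi to the flight controller.
master = mavutil.mavlink_connection('/dev/ttyAMA0', baud=921600)
master.wait_heartbeat()

def send_vision_pose(x, y, z, roll, pitch, yaw):
    # Positions in metres, angles in radians, in ArduPilot's local frame convention.
    usec = int(time.time() * 1e6)
    master.mav.vision_position_estimate_send(usec, x, y, z, roll, pitch, yaw)

# Example: stream the scaled VO pose at a steady rate (e.g. 15-30 Hz).
send_vision_pose(1.2, 0.4, -0.8, 0.0, 0.0, 0.1)
```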
