Multiple tracking camera support

Good afternoon everyone,

I have been working on something recently related to the T265 cameras. It has caught my attention that when the features are very far away, the position estimate can be very inaccurate. However, we have found that adding multiple cameras may help a lot: when a camera facing backwards sees no features, a camera facing forward may. Checking the source code, it is clear that only one instance of visual odometry is supported, since AP_VisualOdom is coded as a singleton. Also, not long ago, it is my understanding that it was not possible to connect more than one T265 camera to a companion computer, because apparently this was not supported by the RealSense library.

How crazy does it sound to try to integrate more than one camera via affinity in EKF3? I gather this would be the right place, considering that something similar is already done for position with multiple GPS units.

In addition, if we wanted to work on this, what would you recommend for the MAVLink messages? Currently, everything is transmitted using VISION_POSITION_ESTIMATE and VISION_SPEED_ESTIMATE, neither of which has a field for identifying which camera is sending the information. It seems the easiest way would be to use two separate serial ports and define two separate ROS namespaces on the companion computer to keep the two cameras separate.

I would appreciate your comments. So far, 4.1.0-beta5 has worked very well over the last few weeks. Thank you for making this great software possible!


Hi @sbaccam,

Adding support for multiple T265s sounds like a good idea, so I've added it to the T265 to-do list so that we at least track it.

Re how to tell the two cameras apart if their data arrives on the same serial port: I think it might be best to add a MAVLink extension field called "instance" (or similar) to the VISION_POSITION_ESTIMATE and VISION_SPEED_ESTIMATE messages. Another way to do this would be to send the messages using different component ids (all MAVLink messages carry a system id and a component id), but I think the extension sounds better.
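To make the two options concrete, here is a tiny Python sketch (illustrative only: the `instance` field is the proposed extension and does not exist yet, the message class is a stand-in rather than real pymavlink code, and the "consecutive component ids" convention is just an assumption for the example) of how a receiver could route each message to a camera slot:

```python
# Illustrative sketch, not ArduPilot or pymavlink code.
from dataclasses import dataclass

@dataclass
class VisionPositionEstimate:
    usec: int
    x: float
    y: float
    z: float
    sysid: int = 1
    compid: int = 197   # example source component id
    instance: int = 0   # proposed MAVLink extension field, defaults to 0

def camera_index(msg: VisionPositionEstimate, use_extension: bool) -> int:
    """Route an incoming message to a camera slot."""
    if use_extension:
        # Option 1: explicit "instance" extension field.
        return msg.instance
    # Option 2: derive the slot from consecutive component ids
    # (assumed convention: first camera uses 197, second 198, ...).
    return msg.compid - 197

front = VisionPositionEstimate(0, 1.0, 0.0, -1.5, compid=197, instance=0)
back = VisionPositionEstimate(0, -1.0, 0.0, -1.5, compid=198, instance=1)
assert camera_index(front, use_extension=True) == 0
assert camera_index(back, use_extension=False) == 1
```

A nice property of the extension-field approach is that older senders simply leave the field at 0, so a single-camera setup keeps working unchanged.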

We would then need to extend AP_VisualOdom to support multiple instances so that the orientation and position (on the frame) of each camera can be specified.
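Roughly what that frontend/backend split could look like, sketched in Python for brevity (ArduPilot is C++, and all class and method names here are hypothetical, not the real AP_VisualOdom API):

```python
# Illustrative sketch only: a visual-odometry frontend holding per-camera
# state instead of being a singleton, so each camera's mounting position
# and yaw can be configured separately.
import math
from dataclasses import dataclass

@dataclass
class VisualOdomBackend:
    pos_offset: tuple       # camera position on the frame, metres (x, y, z)
    yaw_offset_deg: float   # camera yaw relative to the vehicle nose

    def correct(self, x: float, y: float) -> tuple:
        """Rotate a camera-frame XY position into the vehicle frame and
        shift it by the camera's lever arm."""
        yaw = math.radians(self.yaw_offset_deg)
        vx = x * math.cos(yaw) - y * math.sin(yaw)
        vy = x * math.sin(yaw) + y * math.cos(yaw)
        return (vx - self.pos_offset[0], vy - self.pos_offset[1])

class VisualOdomFrontend:
    MAX_INSTANCES = 2

    def __init__(self):
        self.backends = {}

    def add_camera(self, instance, pos_offset, yaw_offset_deg):
        if instance >= self.MAX_INSTANCES:
            raise ValueError("too many cameras")
        self.backends[instance] = VisualOdomBackend(pos_offset, yaw_offset_deg)

    def handle_estimate(self, instance, x, y):
        return self.backends[instance].correct(x, y)

odom = VisualOdomFrontend()
odom.add_camera(0, (0.25, 0.0, 0.0), 0.0)     # forward-facing camera
odom.add_camera(1, (-0.25, 0.0, 0.0), 180.0)  # backward-facing camera
```

The per-instance parameters mirror how the existing single-camera VISO_POS_X/Y/Z and orientation settings work, just repeated once per camera.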

The final question is how to integrate the data from the two cameras into the EKF. This could be done in a number of ways:

  1. use “affinity” so each EKF core uses a different camera, and leave it up to the AHRS/EKF core-switching logic to decide which of the two estimates is better
  2. blend the positions and velocities
  3. switch between the cameras using some kind of metric, which could be based on the innovations or on the tracking accuracy reported by the camera (which can’t really be trusted, actually)

For GPS we support all 3 of these methods…
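For illustration, here is a toy Python sketch of those three options using scalar positions (the weights, thresholds, and innovation numbers are made up, not real tuning values):

```python
# Illustrative sketch of the three integration options, not EKF code.

def affinity(core_index: int, estimates: list) -> float:
    """Option 1: each EKF core is pinned to its own camera; the core
    switching logic then picks the healthier core."""
    return estimates[core_index % len(estimates)]

def blend(estimates: list, weights: list) -> float:
    """Option 2: weighted blend of both cameras' estimates."""
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total

def switch(estimates: list, innovations: list) -> float:
    """Option 3: pick the camera with the smallest innovation."""
    best = min(range(len(estimates)), key=lambda i: innovations[i])
    return estimates[best]

cams = [10.0, 11.0]            # position estimate from each camera
assert affinity(0, cams) == 10.0
assert blend(cams, [1.0, 1.0]) == 10.5
assert switch(cams, [0.2, 0.05]) == 11.0
```

Blending gives the smoothest output but can average in a bad estimate, while affinity and switching both fail over cleanly, which is presumably why all three remain options for GPS.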