Regarding the position control: Intel T265 drifting in loiter mode

Good morning everyone,

Today we tried, for the first time, to fly with the Intel RealSense T265 as the yaw source and position source for the first lane of the EKF3. We started the flight in altitude hold mode and then switched to loiter mode. We noticed the vehicle started to drift in XY, so we switched back to altitude hold mode to avoid a crash. We repeated the procedure several times during the same flight, with the same result. From the logs it looks like a control issue, since PX does not track TPX, nor does PY track TPY, as can be seen below:

Figure 1, Target position in X and actual position in X

Figure 2, Target position in Y and actual position in Y

The velocity control does not look great either as shown below:

Figure 3, Target velocity in X and actual velocity in X

Figure 4, Target velocity in Y and actual velocity in Y

Please find the log below:

Before trying to re-tune the controller, which I would like to avoid since these gains work decently with the GPS and we want to try the lane switching, I have some questions:

  1. Even if we enable EK3_SRC1_VELXY = 6 (ExternalNav) and EK3_SRC1_VELZ = 6 (ExternalNav) as suggested in the documentation, it seems to have no effect, since the ROS node sends the VISION_POSITION_ESTIMATE message, which does not contain velocities. Does the vision_to_mavros node include a parameter or a different launch file that can send the ODOMETRY message instead, since that message does contain velocities?
  2. I noticed in the logs that the control gains of the Position North and Position East loops change a lot during loiter mode operation, as shown in Figure 5. What are the reasons for these changes? As @amilcarlucas mentioned, Dmod is one reason, but this time it is zero for both control loops, and @rmackay9 mentioned battery voltage compensation, hover throttle learning, motor thrust scaling and altitude compensation. However, I wonder whether these can have such an aggressive impact on outer control loops like the position XY ones.

Figure 5, Proportional gains for position North and position East control

By the way, the yaw correction from the camera works perfectly fine, even when it is windy and the vehicle tries to yaw because of that.


Thanks for giving 4.1 and the T265 integration a try.

I think the issue is much more likely to be an estimation issue rather than a control problem, and the most common cause is the camera itself, affected by:

  • Vibration. The camera is very sensitive to vibration, so ideally it should be vibration isolated.
  • Low light, or not enough features within 15m.

I suspect you already know about these weaknesses but just in case.

I think it should be possible to confirm it is an estimation issue by comparing the vehicle’s position on the GCS map vs where you visually see it actually is. I think in this log the EKF origin wasn’t set, so the vehicle may not appear on the GCS map. If you’re using Mission Planner you can set the EKF origin by right-mouse-button clicking on the Data screen’s map and selecting “Set Home Here >> Set EKF Origin Here”.

I guess you’re also picking up the vehicle once and putting it back down again before flight? The T265’s tracking confidence will become “High” if it’s picked up once before flight.

Hope this helps.


Thank you @rmackay9,

Today we tried moving the drone with the camera around without flying, after setting the home and the origin using the Python script of the vision_to_mavros package. It seems, as you mentioned, that there are not enough features when the camera faces certain parts of the testing environment. To corroborate this, we also tried position hold mode: facing certain areas it held position well, while in others it did not.

I am curious about something. Since we use the ROS node of @LuckyBird, we do not get the VISION_POSITION_DELTA message, which we confirmed in the MAVLink inspector of QGC and in the source code of vision_to_mavros. This message looks pretty important since it contains the camera’s confidence parameter, which can be used to detect bad position estimation. I gather this is very important because, even when the camera is not providing a good position, the position north and position east innovations in the Kalman filter do not look bad at all, so they never score high enough to mark the lane as unhealthy:

I am also thinking about the future scenario where we want to switch automatically away from an unhealthy VIO source, similar to your example (ahrs-source.lua). It seems you use the velocity innovations to determine whether or not to switch to VIO navigation. Currently we receive neither the velocities (VISION_POSITION_ESTIMATE does not have them) nor the camera’s confidence. Thus, I guess we need to find a way to get either velocity corrections (so we have innovations) or the camera’s confidence in order to detect a bad position estimate from the camera. Is that correct?
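For what it’s worth, the switching logic in scripts like ahrs-source.lua can be sketched in a few lines. This is a hypothetical transcription in Python, not AP’s actual code: the threshold and vote counts are illustrative placeholders, and the real script reads the variances from the EKF rather than taking them as arguments.

```python
VOTE_THRESHOLD = 0.3   # example velocity-variance threshold (hypothetical)
VOTE_COUNT_MAX = 20    # consecutive good samples needed before switching

def choose_source(extnav_vel_variance, gps_vel_variance, votes=0):
    """Return (source, votes): prefer 'EXTNAV' only while its velocity
    innovations stay consistently below the threshold and below GPS."""
    if extnav_vel_variance < VOTE_THRESHOLD and extnav_vel_variance < gps_vel_variance:
        votes = min(votes + 1, VOTE_COUNT_MAX)   # vote for external nav
    else:
        votes = max(votes - 1, 0)                # vote against it
    source = "EXTNAV" if votes >= VOTE_COUNT_MAX else "GPS"
    return source, votes
```

The vote counter is what makes this usable in practice: a single noisy variance sample neither switches to nor away from VIO; only a sustained run of good (or bad) innovations does.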


Sounds like good progress!

The vision-position-delta message should not be sent if vision-position-estimate and/or vision-speed-estimate are being sent. A number of people have sent both and it doesn’t cause any catastrophic problems (e.g. no crashes so far) but sending both is not how it was intended to be used.

We don’t need the vision-position-delta message’s “confidence” field because the vision-position-estimate message has an “extension” (i.e. extra fields) for passing down a covariance matrix and a reset counter, and AP uses these. To be more specific, from the covariance matrix it consumes the 0th, 6th and 11th elements as the expected position error and the 15th, 18th and 20th elements as the expected angle error (see code here). For the reset_counter field, any change is detected and causes the EKF to reset (e.g. jump) to the new position.

The vision-speed-estimate messages also have the same extensions and AP consumes the reset_counter but not the covariance matrix. I think we could pretty easily add support for consuming the covariance matrix but I don’t think it’s necessary because AP has parameters to allow you to set the expected noise (aka expected error) called VISO_POS_M_NSE, VISO_VEL_M_NSE and VISO_YAW_M_NSE.
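As a rough sketch of what setting those parameters might look like in a parameter file, with values that are purely illustrative placeholders, not tuning recommendations:

```
VISO_POS_M_NSE  0.3   # expected position error, metres (example value)
VISO_VEL_M_NSE  0.3   # expected velocity error, m/s (example value)
VISO_YAW_M_NSE  0.2   # expected yaw error, radians (example value)
```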

To answer your question more directly, it definitely helps if you can get the velocities flowing into AP as well using the vision-speed-estimate message. This is quite important both for overall performance but also for, as you say, automatically switching between sources using the velocity variances.
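If the ROS node only has pose samples available, one hypothetical way to produce something to send in vision-speed-estimate is to finite-difference consecutive positions. This is a sketch under that assumption; a real node should prefer the camera’s own velocity output if it exposes one, since differencing amplifies position noise.

```python
def body_velocity(prev_pos, prev_t, pos, t):
    """Finite-difference velocity (m/s) from two (x, y, z) position
    samples taken at times prev_t and t (seconds)."""
    dt = t - prev_t
    if dt <= 0:
        return (0.0, 0.0, 0.0)  # ignore duplicate or out-of-order samples
    return tuple((p - q) / dt for p, q in zip(pos, prev_pos))

# With pymavlink, the result could then be forwarded as VISION_SPEED_ESTIMATE:
#   master.mav.vision_speed_estimate_send(usec, vx, vy, vz)
```

In practice a low-pass filter on the differenced velocity would likely be needed before sending it, so the EKF innovations reflect real motion rather than camera jitter.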

By the way, the T265’s confidence reporting is not very useful. It is slightly useful before takeoff because it allows you to see if the camera has completed its initial scaling (try picking the vehicle up and then putting it down again and you’ll see its confidence go to “high”), but in flight it does not revert to a lower confidence until seconds after things have gone bad. This is why the velocity values are so important… with velocity innovations we can detect failures better.