Tuning Loiter on Copter using an external position estimate (i.e. non-GPS)

This is a new topic to capture @Subodh_Mishra’s question about tuning Loiter when using external position estimates. The original question is below:

@rmackay9, I am using external nav data for indoor navigation, and at the moment I just want to hover at one point using Loiter mode. The plots below show Roll and Pitch respectively, and as you can see the desired Roll and Pitch are oscillatory even though I am not touching the sticks at all! Can you please tell me how this can be fixed, or just point me to the correct params to tune? There is an overwhelming number of tuning knobs, and I feel I am stuck in a quagmire when I start playing with them.
[Plot: desired vs actual Roll]

[Plot: desired vs actual Pitch]

@Subodh_Mishra, can you provide the original .bin log?

The vehicle doesn’t oscillate in AltHold mode of course?

I suspect the issue is a lag in the position estimate data, which leads to an oscillating position estimate and, in turn, an oscillating position controller (i.e. Loiter). I thought that I had seen a PR from @chobitsfan to add a parameter to allow the user to specify the delay but I can’t find it anywhere. Another (better) solution is to align the clocks between the sending system and AP. @peterbarker and @fnoop have done some work on this… we should probably finish this off.
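For anyone curious what clock alignment looks like in practice, here is a minimal sketch (not the PR's implementation) of estimating the offset between a companion computer's clock and the autopilot's clock using MAVLink TIMESYNC round trips via pymavlink. The connection string is a placeholder, and the half-round-trip assumption is the usual naive one:

```python
# Sketch: estimate companion <-> autopilot clock offset with MAVLink TIMESYNC
# round trips. Endpoint is a placeholder; ArduPilot replies to a TIMESYNC
# request (tc1 == 0) with its own timestamp in tc1 and our ts1 echoed back.
import time
from pymavlink import mavutil

master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')  # assumed endpoint
master.wait_heartbeat()

offsets_ns = []
for _ in range(10):
    t_send_ns = int(time.time() * 1e9)
    master.mav.timesync_send(0, t_send_ns)   # tc1=0 marks this as a request
    msg = master.recv_match(type='TIMESYNC', blocking=True, timeout=1)
    t_recv_ns = int(time.time() * 1e9)
    if msg is None or msg.ts1 != t_send_ns:
        continue  # lost reply, or a reply to someone else's request
    rtt_ns = t_recv_ns - t_send_ns
    # assume the one-way trip is half the round trip
    offsets_ns.append(msg.tc1 - (t_send_ns + rtt_ns // 2))
    time.sleep(0.1)

if offsets_ns:
    print('estimated clock offset: %.1f ms'
          % (sum(offsets_ns) / len(offsets_ns) / 1e6))
```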


Hey @rmackay9, thanks for your response.
No, there are no oscillations in AltHold, but of course the quad drifts away in AltHold. In Loiter it oscillates but stays within certain limits, and I don’t have to use the sticks to steer it back into the field of view of the ArUco markers pasted on the ground.

The .bin file for the above plots can be found here. I would be glad if you could tell me how to align the clocks between ROS (the sending system) and AP.

@chobitsfan created a PR which adds support for specifying the lag. How do you feel about compiling this version, putting it on your vehicle, and setting the new parameter to perhaps 100 ms?


Okay, let me try this out. I will let you know how it goes. Thanks again!

@rmackay9, I am able to compile the code after manually adding the changes in the PR linked above. Is there any way to know the actual delay?

@Subodh_Mishra
Remember the “glass to glass” test: showing the video capture screen of your camera looking at a millisecond timer on the PC screen?

Okay, I think I understand what you are saying. So that would give me the delay between the image transmission timestamp and the image reception timestamp, right?

Yep, I think I showed a picture of it a while ago on Gitter.

Yes I do remember. I may have to scroll up. It was about 3 months ago.

To figure out the lag, it might be useful to point the drone north, move it back and forth, and then look at the dataflash logs and compare the curve from the accelerometers to the external position estimate (the VISP message). The VISP positions should appear to lag the accelerometers (the sign could also be reversed), so you measure the time difference between the peaks of each graph.
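One way to make that peak comparison less subjective is cross-correlation. Below is a hedged numpy sketch of the idea; the synthetic signals stand in for the accelerometer and VISP position columns you would export from the log (resampled onto a common time grid), and an 80 ms lag is injected so the method has something to find:

```python
# Sketch: estimate the vision lag by cross-correlating IMU acceleration with
# the twice-differentiated external position estimate. Synthetic data stands
# in for the columns exported from the dataflash log.
import numpy as np

dt = 0.01                                  # assumed common sample period [s]
t = np.arange(0, 20, dt)
true_lag_s = 0.08                          # pretend the vision data lags 80 ms

w = 2 * np.pi * 0.5                        # 0.5 Hz back-and-forth motion
acc = np.sin(w * t)                        # IMU acceleration, north axis
vis_pos = -np.sin(w * (t - true_lag_s)) / w**2   # delayed position estimate
vis_acc = np.gradient(np.gradient(vis_pos, dt), dt)

# remove means so the correlation is dominated by the motion, not offsets
a = acc - acc.mean()
v = vis_acc - vis_acc.mean()

corr = np.correlate(v, a, mode='full')
# abs() tolerates a possible sign flip between the two sources; positive
# lag_samples means the vision trace is delayed relative to the IMU
# (sign conventions are easy to flip, so sanity-check against the plots)
lag_samples = np.argmax(np.abs(corr)) - (len(a) - 1)
print('estimated lag: %.0f ms' % (lag_samples * dt * 1e3))
```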


You are right @rmackay9, with this method you capture the complete control loop, not just the video transmission.

Is it essential to do this while pointing north? Because I have set the COMPASS_USE parameter to 0.

Subodh,

The direction doesn’t really matter; it will just be easier to compare the accels with the position movements if some of the axes are lined up.


I tested @chobitsfan’s PR (external navigation data lag) on my system:

  • a little quad (130 mm) with a RevoMini FC running ArduCopter 3.7-dev
  • on the quad, a Raspberry Pi Zero with a Raspberry Pi Camera streaming video to a desktop PC
  • on the desktop PC, ROS Kinetic with mavros and aruco_mapping
  • the video is streamed with GStreamer; on the PC, the gscam node receives the stream and publishes the camera/image_raw and camera/camera_info topics
  • on the PC, aruco_mapping (slightly modified by me) subscribes to the above topics and publishes a camera pose message to the mavros/vision_pose/pose topic

I send a SET_GPS_GLOBAL_ORIGIN and a SET_HOME_POSITION message (similar to this) before sending vision position estimates.
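For reference, a minimal pymavlink sketch of sending those two messages (the endpoint and coordinates are placeholders, and mavros offers equivalent ways to do this):

```python
# Sketch: tell the EKF where the global origin and home are before streaming
# vision position estimates. Coordinates and endpoint are placeholders.
from pymavlink import mavutil

master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')  # assumed link
master.wait_heartbeat()

lat, lon, alt = int(45.0 * 1e7), int(9.0 * 1e7), 0  # degE7, degE7, mm
master.mav.set_gps_global_origin_send(master.target_system, lat, lon, alt)
master.mav.set_home_position_send(
    master.target_system,
    lat, lon, alt,          # latitude/longitude [degE7], altitude [mm]
    0, 0, 0,                # local x, y, z position [m]
    [1, 0, 0, 0],           # attitude quaternion (w, x, y, z): identity
    0, 0, 0)                # approach vector x, y, z
```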

Some relevant parameters on my quad:
AHRS_EKF_TYPE 2
EK2_ENABLE 1
EK3_ENABLE 0
EK2_GPS_TYPE 3
EK2_POSNE_M_NSE 0.1
EK2_EXTNAV_DELAY 80
GPS_TYPE 0
COMPASS_USE 0
VISO_TYPE 0

I can say that the PR has improved the stability of indoor Loiter, although I got a bigger improvement from setting EK2_POSNE_M_NSE to 0.1 from its default value of 1.0.
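For what it's worth, this parameter set can also be pushed over MAVLink rather than through a GCS; a hedged pymavlink sketch (the endpoint is a placeholder, and EK2_EXTNAV_DELAY only exists on a build that includes the PR):

```python
# Sketch: push the parameter set above over MAVLink instead of a GCS.
from pymavlink import mavutil

params = {
    'AHRS_EKF_TYPE': 2, 'EK2_ENABLE': 1, 'EK3_ENABLE': 0,
    'EK2_GPS_TYPE': 3, 'EK2_POSNE_M_NSE': 0.1, 'EK2_EXTNAV_DELAY': 80,
    'GPS_TYPE': 0, 'COMPASS_USE': 0, 'VISO_TYPE': 0,
}

master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')  # assumed endpoint
master.wait_heartbeat()
for name, value in params.items():
    master.param_set_send(name, float(value))
    # the autopilot echoes the new value back in a PARAM_VALUE message
    ack = master.recv_match(type='PARAM_VALUE', blocking=True, timeout=2)
    print(name, 'set' if ack is not None else 'no ack')
```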

If I can suggest an improvement for the external position estimate system, it would be to implement a ring buffer (instead of a fixed lag), so that the pose estimate whose timestamp is closest to the IMU measurement timestamp is the one fed to the EKF; a toy sketch of the idea follows below. Something like this was proposed by @fnoop in his PR for PrecLand (AC_PrecLand: Add backend timestamp and match sensor frame to inertial frame by fnoop · Pull Request #9020 · ArduPilot/ardupilot · GitHub), and something similar was also proposed by @peterbarker in a closed PR (GCS_MAVlink: vision-position-estimate support ignores message timestamp by rmackay9 · Pull Request #8062 · ArduPilot/ardupilot · GitHub).
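A toy Python sketch of the ring-buffer idea (class and method names are invented for illustration, not taken from any of the PRs): buffer incoming pose estimates with their capture timestamps, and for each fusion step pick the buffered pose whose timestamp is nearest the IMU sample time.

```python
# Toy sketch of the ring-buffer idea: keep recent (timestamp, pose) pairs and
# fuse the pose whose capture time is closest to the IMU sample time.
from collections import deque

class PoseRingBuffer:
    def __init__(self, maxlen=50):
        self._buf = deque(maxlen=maxlen)  # old entries drop off automatically

    def push(self, t_capture, pose):
        self._buf.append((t_capture, pose))

    def closest(self, t_imu, max_dt=0.2):
        """Return the pose nearest t_imu, or None if the best match is
        further away than max_dt seconds (e.g. the vision feed stalled)."""
        if not self._buf:
            return None
        t, pose = min(self._buf, key=lambda e: abs(e[0] - t_imu))
        return pose if abs(t - t_imu) <= max_dt else None

# usage: push() on every vision message, closest() when the EKF fuses
buf = PoseRingBuffer()
buf.push(0.00, 'pose@0ms')
buf.push(0.10, 'pose@100ms')
print(buf.closest(0.08))   # -> 'pose@100ms'
```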

This should be especially useful for systems like mine that use WiFi (variable latency) as the link between the sensor (the camera on the drone) and the PC that does the computation for position estimation. Using the correct timestamp (the time at which the image was captured), it should be possible to fuse the pose estimate with the IMU data measured at the corresponding time.

Another useful improvement would be the ability to set the camera offset, both position and orientation, as proposed by @rmackay9 in this issue (Copter/Rover: vision-position support should be moved to AP_VisualOdom · Issue #8061 · ArduPilot/ardupilot · GitHub).
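The math for that offset correction is a fixed rigid transform: if T_bc is the camera's pose in the body frame, then given the camera pose in the world T_wc, the body pose is T_wb = T_wc · T_bc⁻¹. A small numpy sketch (the mounting offset and rotation are invented example values):

```python
# Sketch: correct a vision pose for the camera's mounting offset. T_bc is the
# fixed camera-in-body transform; given the camera pose in the world (T_wc),
# the body pose is T_wb = T_wc @ inv(T_bc). Offset values are made up.
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# example mounting: camera 10 cm forward and 5 cm below the body origin,
# rotated so it looks downward (an assumed, illustrative rotation)
R_bc = np.array([[0., 0., 1.],
                 [1., 0., 0.],
                 [0., 1., 0.]])
T_bc = make_T(R_bc, np.array([0.10, 0.0, 0.05]))

def body_pose_from_camera_pose(T_wc):
    return T_wc @ np.linalg.inv(T_bc)

# example: camera reports identity orientation at 1 m altitude (NED z = -1)
T_wc = make_T(np.eye(3), np.array([0.0, 0.0, -1.0]))
print(body_pose_from_camera_pose(T_wc))
```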


Thank you very much for the detailed testing.


I moved from aruco_mapping to aruco_gridboard for the pose estimation in my system. Detecting an ArUco board gives a better pose estimate than a single ArUco marker.

In the following plot, x, y and z for an autonomous flight in Guided mode with 4 setpoints in a square with 0.5 m sides:

In the following plot, x, y and z for an autonomous flight in Circle mode:

I think there is much to improve, but I have made some progress :slight_smile:


The videos corresponding to the plots posted above:


Nicely done on the testing and feedback! Yes, I agree that instead of a fixed lag we should use the timesync messages to align the clocks and then use the real time when feeding into the EKF. I’ll talk with @peterbarker to see whether he could revisit that PR. Might need some help from Tridge too.


@rmackay9, I find it really difficult to measure the difference between the acceleration and position peaks to find the delay, because the data coming from the ArUco detection is noisy. Besides this, I tried @ppoirier’s method of taking screenshots of a screen displaying an on-screen timer alongside the SkyViper camera’s video feed of that same timer. The delay is not constant: across the 10 screenshots I took it varied from 27 ms to 35 ms, with an average of about 34 ms. If I am correct, this would be the estimation delay; do you know what the control delay could be? As this doesn’t depend on the vision data, I presume it would be common to most platforms. Forgive me if I am wrong. I am looking forward to your insights.