This is a new topic to capture @Subodh_Mishra’s question about tuning Loiter when using external position estimates. The original question is below:
@rmackay9, I am using external nav data for indoor navigation and at the moment I just want to hover at one point using Loiter mode. The plots shown below are for Roll and Pitch respectively, and as you can see the desired Roll and Pitch are oscillatory even though I am not touching the sticks at all! Can you please tell me how this can be fixed, or just point me to the correct params to tune? There is an overwhelming number of tuning knobs and I feel stuck in a quagmire when I start playing with them.
Subodh_Mishra, can you provide the original .bin log?
The vehicle doesn’t oscillate in AltHold mode of course?
I suspect the issue is that there is a lag in the position estimate data, which leads to an oscillation in the position estimate and then in the position controller (i.e. Loiter). I thought I had seen a PR from @chobitsfan to add a parameter that lets the user specify the delay, but I can’t find it anywhere. Another (better) solution is to align the clocks between the sending system and AP. @peterbarker and @fnoop have done some work on this… we should probably finish it off.
Hey @rmackay9, thanks for your response.
No, there are no oscillations in AltHold, but of course the quad drifts away in AltHold. In Loiter it oscillates but stays within certain limits, and I don’t have to use the sticks to steer it back over the aruco markers pasted on the ground.
The BIN file for the above plots can be found here. I would be glad if you can tell me how I can align the clocks between ROS (the sending system) and AP.
Okay, I think I understand what you are saying. So that would give me the delay between the image transmission timestamp and the image reception timestamp, right?
To figure out the lag it might be useful to point the drone north, move it back and forth, and then look at the dataflash logs and compare the curve from the accelerometers to the external position estimate (the VISP message). The VISP positions should appear to lag the accelerometers (the sign could also be reversed)… so you measure the time difference between the peaks of each graph.
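Instead of eyeballing the peaks, the same idea can be automated: shift one log trace against the other and keep the shift that lines them up best. Here is a rough pure-Python sketch (not ArduPilot code; the function name and the synthetic signals are mine). In practice you would feed in the accelerometer trace and a comparable trace derived from the VISP positions, resampled to a common rate.

```python
import math

def estimate_lag(ref, delayed, dt, max_shift):
    """Estimate how far (in seconds) `delayed` lags `ref`, given two
    equally-sampled signals, by maximising their cross-correlation."""
    best_shift, best_score = 0, float("-inf")
    for shift in range(0, max_shift + 1):
        n = len(ref) - shift
        # mean product of the overlapping samples at this shift
        score = sum(ref[i] * delayed[i + shift] for i in range(n)) / n
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift * dt

# Synthetic check: a 1 Hz sine sampled at 100 Hz, delayed by 80 ms.
dt = 0.01
ref = [math.sin(2 * math.pi * i * dt) for i in range(500)]
delayed = [math.sin(2 * math.pi * (i * dt - 0.08)) for i in range(500)]
print(estimate_lag(ref, delayed, dt, max_shift=50))  # ~0.08
```

Because the aruco data is noisy, correlating over the whole back-and-forth motion like this tends to be more robust than comparing a single pair of peaks.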
I tested @chobitsfan’s PR (external navigation data lag) on my system:
a little quad (130 mm) with a RevoMini FC running ArduCopter 3.7-dev
on the quad there is a Raspberry Pi Zero with a Raspberry Pi camera streaming video to a desktop PC
on the desktop PC there is ROS Kinetic with mavros and aruco_mapping
the video is streamed with gstreamer; on the PC the gscam node receives this stream and publishes the camera/image_raw and camera/camera_info topics
on the PC, aruco_mapping (slightly modified by me) subscribes to the above topics and publishes a camera_pose message to the mavros/vision_pose/pose topic.
I send SET_GPS_GLOBAL_ORIGIN and SET_HOME_POSITION messages (similar to this) before sending vision position estimates.
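For anyone reproducing this, the part that trips people up is the unit encoding: MAVLink’s SET_GPS_GLOBAL_ORIGIN carries latitude/longitude as int32 in degrees * 1e7 (degE7) and altitude as int32 millimetres. A minimal sketch of just that conversion (the coordinates are arbitrary example values, and the dict only mirrors the message fields; actually sending it would go through pymavlink or mavros):

```python
def gps_global_origin(lat_deg, lon_deg, alt_m, target_system=1):
    """Build the field values SET_GPS_GLOBAL_ORIGIN expects."""
    return {
        "target_system": target_system,
        "latitude": int(round(lat_deg * 1e7)),   # degE7
        "longitude": int(round(lon_deg * 1e7)),  # degE7
        "altitude": int(round(alt_m * 1000)),    # millimetres
    }

msg = gps_global_origin(47.3977419, 8.5455938, 488.0)
print(msg["latitude"], msg["longitude"], msg["altitude"])
```

If the origin is sent with plain floating-point degrees instead of degE7 integers, the EKF origin ends up essentially at (0, 0) and the vision estimates are silently rejected.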
some relevant parameters on my quad:
AHRS_EKF_TYPE 2
EK2_ENABLE 1
EK3_ENABLE 0
EK2_GPS_TYPE 3
EK2_POSNE_M_NSE 0.1
EK2_EXTNAV_DELAY 80
GPS_TYPE 0
COMPASS_USE 0
VISO_TYPE 0
I can say that the PR has improved the stability of indoor loiter, though I saw a bigger improvement from setting EK2_POSNE_M_NSE to 0.1 from the default value of 1.0.
This should be especially useful for systems like mine that use WiFi (variable latency) as the link between the sensor (the camera on the drone) and the PC that does the position-estimation computation. Using the correct timestamp (the time at which the image was captured), it should be possible to fuse the pose estimate with the IMU data measured at the corresponding time.
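The clock-alignment idea @rmackay9 mentioned boils down to an NTP-style exchange (this is the principle behind the MAVLink TIMESYNC handshake; the sketch below is hand-rolled, not mavros code). The companion records its send time t1, the autopilot stamps the request with its own clock on arrival (t2) and on reply (t3), and the companion records the reply’s arrival time t4. Assuming a roughly symmetric link, the autopilot clock leads the companion clock by:

```python
def clock_offset(t1, t2, t3, t4):
    """NTP-style offset estimate: t1/t4 on the companion clock,
    t2/t3 on the autopilot clock, all in the same unit.
    Positive result: the autopilot clock is ahead."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

# Example: autopilot clock 500 units ahead, one-way latency 10 units.
t1 = 1000
t2 = t1 + 10 + 500   # request arrives, stamped with autopilot clock
t3 = t2              # immediate reply
t4 = t1 + 20         # reply arrives, stamped with companion clock
print(clock_offset(t1, t2, t3, t4))  # 500.0
```

With the offset known, the companion can translate the image-capture timestamp into the autopilot’s timebase, so the variable WiFi latency drops out instead of being baked into a fixed delay parameter.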
I moved from aruco_mapping to aruco_gridboard for pose estimation in my system. Detecting an ArUco board gives a better pose estimate than a single ArUco marker.
The following plot shows x, y and z for an autonomous flight in Guided mode with 4 setpoints in a square with 0.5 m sides:
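For reference, the trajectory above is just four corners visited in order. An illustrative sketch (not my actual script; a real ROS node would publish these as PoseStamped messages on mavros/setpoint_position/local, and the frame/units here are assumed local x/y in metres):

```python
SIDE = 0.5  # square side length in metres

def square_setpoints(side):
    """Four corner setpoints of a square, visited counter-clockwise."""
    return [(0.0, 0.0), (side, 0.0), (side, side), (0.0, side)]

for x, y in square_setpoints(SIDE):
    print(f"goto x={x:.2f} y={y:.2f}")
```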
Nicely done on the testing and feedback! Yes, I agree that instead of a fixed lag we should use the timesync messages to align the clocks and then use the real capture time when feeding into the EKF. I’ll talk with @peterbarker to see whether he could revisit that PR. We might need some help from Tridge too.
@rmackay9, I find it really difficult to measure the difference between the acceleration and position peaks to find the delay, because the data coming from the aruco detection is noisy. Besides this, I tried @ppoirier’s method of taking a screenshot of a screen displaying an on-screen timer alongside the SkyViper camera’s video feed of that same timer. The delay is not constant: across the 10 screenshots I took it varied from 27 ms to 35 ms, averaging about 34 ms. If I am correct this would be the estimation delay; do you know what the control delay could be? As that doesn’t depend on the vision data, I presume it would be common to most platforms. Forgive me if I am wrong. I am looking forward to your insights.