AC_PrecLand improvements and updates

Hi, this is just a thread to update and discuss improvements to AC_PrecLand.

I’ve opened two PRs to address ‘Psycho Wasp Syndrome’, which is what happens when PrecLand is attempted with most visual sensors other than IRLock, which the initial implementation was seemingly tuned for. The problem is essentially that by the time the camera image is taken, processed by the OBC (OnBoard Computer, i.e. Companion Computer), sent to the autopilot, and received and processed there, the delay is much longer than the expected (hardcoded) 20ms, sometimes 10 times that or more on a slow computer. In this case the attitude correction may be slightly or wildly inaccurate, leading to anything from slow correction to wild and dangerous movements.

Initially this was addressed by a single PR. Unfortunately my innate talent for destroying anything with GitHub struck, the PR was borked, and given the lack of activity I gave up on it. I recently returned to it and split the fix into two separate PRs for clarity and ease of review.

This first PR adds a new parameter: PLND_BUFFER. It converts the previous static AP_Buffer, whose size is fixed at compile time, to a RingBuffer that is dynamically sized (in this case from the PLND_BUFFER parameter). This allows the currently hardcoded 20ms sensor latency (tuned for IRLock) to be changed to, say, 50ms for a fast computer/vision-based sensor, or 200ms for a slower one.
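To make the idea concrete, here is a minimal Python sketch (not the actual C++ AC_PrecLand code; all names are illustrative) of a latency-sized ring buffer: attitude samples are pushed as they arrive, and when a sensor frame comes in, the sample closest to the configured latency ago is used.

```python
from collections import deque

class AttitudeBuffer:
    """Toy ring buffer of (timestamp_ms, attitude) samples.

    Mimics the idea behind PLND_BUFFER: the buffer length is chosen
    from the expected sensor latency instead of being fixed at
    compile time. Hypothetical names, not the AC_PrecLand API.
    """

    def __init__(self, latency_ms, sample_period_ms=10):
        # One slot per sample period across the configured latency,
        # plus one so the oldest sample still covers the full delay.
        size = latency_ms // sample_period_ms + 1
        self.samples = deque(maxlen=size)
        self.latency_ms = latency_ms

    def push(self, t_ms, attitude):
        self.samples.append((t_ms, attitude))

    def get_delayed(self, now_ms):
        """Return the stored sample closest to (now - latency), or None."""
        if not self.samples:
            return None
        target = now_ms - self.latency_ms
        return min(self.samples, key=lambda s: abs(s[0] - target))

buf = AttitudeBuffer(latency_ms=50)
for t in range(0, 101, 10):
    buf.push(t, f"att@{t}")
print(buf.get_delayed(100))  # sample from ~50ms ago
```

With a larger PLND_BUFFER value the same structure simply holds more history, which is why the buffer must be sized at runtime from the parameter rather than at compile time.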

Update 23/09/2018 - Merged!

The first PR still expects a fixed latency: you can change PLND_BUFFER to, say, 100ms, but the latency is then expected to always be 100ms. On slower or more general-purpose sensor devices, such as an OBC that is also performing other tasks, the latency can and often will vary widely, e.g. from 30ms to 150ms. So a second PR allows the sensor to send a timestamp along with the positional data, and then tries to match this timestamp against the inertial data, in order to use the attitude at the time the positional data was captured (or better, the time the image was captured, before processing). In conjunction with TIMESYNC messages, this can cater for both sensor-processing and link latency and jitter.
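The matching step described above can be sketched as a nearest-timestamp lookup in the inertial history, after correcting the sensor timestamp by a TIMESYNC-style clock offset. This is a simplified illustration with hypothetical names, not the AC_PrecLand implementation:

```python
import bisect

def match_inertial_sample(inertial, sensor_ts_ms, clock_offset_ms=0):
    """Find the buffered inertial sample nearest the sensor capture time.

    `inertial` is a list of (timestamp_ms, sample) tuples sorted by time.
    `clock_offset_ms` is the companion-vs-autopilot clock offset, as
    could be estimated from TIMESYNC message exchanges.
    """
    target = sensor_ts_ms + clock_offset_ms
    times = [t for t, _ in inertial]
    i = bisect.bisect_left(times, target)
    # Compare the neighbours either side of the insertion point.
    candidates = [c for c in (i - 1, i) if 0 <= c < len(inertial)]
    best = min(candidates, key=lambda c: abs(times[c] - target))
    return inertial[best]

inertial = [(0, "a"), (30, "b"), (60, "c"), (90, "d")]
print(match_inertial_sample(inertial, 55))   # nearest sample to t=55
```

Because the sensor reports when the image was actually captured, the lookup lands on the right historical attitude even when processing time jitters between frames, which a fixed PLND_BUFFER delay cannot do.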

An additional PR has been raised to output status from PrecLand. It reuses the MAVLink LANDING_TARGET message to report useful information from the AC_PrecLand state. It is not complete yet, because ArduPilot does not yet support all fields of the message and some information is not yet available in the class data. This PR will be updated, or subsequent PRs raised, to correct and improve it.

Another PR has been raised to improve the logging from PrecLand. This is very useful for debugging.

Update 8/8/18: Merged!

This PR adds support for the LANDING_TARGET.type field. This field represents the target type, defined as an enum in LANDING_TARGET_TYPE (0=Light Beacon, 1=Radio Beacon, 2=Fiducial Marker, 3=Generic Vision Target). It doesn’t do anything with this data yet, but it will be useful in the future (e.g. combined with LANDING_TARGET.target_num).

A PR has been raised to support the LANDING_TARGET.target_num field. This will be very useful for debugging (to determine which target is detected/active) and for more advanced features like selective landing based on target ID, or marker maps.
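As a sketch of what “selective landing” could look like on the companion-computer side (a hypothetical helper, not part of AC_PrecLand or these PRs), the detector output could be filtered by target_num before anything is forwarded to the autopilot:

```python
def select_target(detections, wanted_num):
    """Pick the detection whose target_num matches, if any.

    `detections` is a list of dicts with a 'target_num' key plus
    position data, as a companion computer might assemble from its
    vision pipeline before sending LANDING_TARGET messages.
    """
    for d in detections:
        if d["target_num"] == wanted_num:
            return d
    return None

detections = [
    {"target_num": 0, "x": 1.0, "y": 0.2},
    {"target_num": 2, "x": -0.5, "y": 0.1},
]
print(select_target(detections, 2))  # only land on marker ID 2
```

The same idea extends to marker maps: each known target_num gets an associated offset from the true landing point, and the selected detection is corrected by its offset.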

Both the Mavlink and logging PRs will be built on, but they need the other PRs merged first.

More updates are in progress and will be added here, for now this is a placeholder.


@fnoop Does this mean your visual landing is now safe® to use? Or does it need the rest of the PRs to be merged to be more functional? Is the IR-Lock approach still the safest/only stable option for PrecLand and PrecLoiter?

No, unfortunately not; only a portion of these PRs have been implemented so far. However, the faster your onboard computer, the safer it is. I originally used an Intel Joule for the dev and testing, which was able to process at 50Hz+ with minimal latency, and the results were very good and reliable. The higher the latency/jitter of the visual processing, the less reliable (and more dangerous) the results.
I’ve been so busy that I haven’t had time to work on this, but I’m finishing a big project next week, so I hope then to turn my attention to the original ‘ultimate’ goal of this project, which was to get vision landing working with a cheap Raspberry Pi. And I’ll make (another) push to try to get these PRs implemented then.

Gotcha. This is consistent with another thread where the bottom line was the processing frequency.
Having it working on a casual RPi3 sure would be nice, I may look at the Edison Joule.

Alas, the Joule is long since dead; however, these days you can get even better power from the likes of the Up boards or something similar.

ah – oh well, thanks for saving me the online search trouble lol

@fnoop would a Nvidia Jetson Nano or Odroid do the trick ?

Also – how sensitive to lighting conditions do you think the system is? Obviously it must depend a lot on the camera (recommendations?), but I’m guessing night time is probably a big no-no.