Feasibility of Image Processing for Landing

I’m building a UAV that needs to be able to land autonomously. I want to have a sensor on the bottom to help with landing. I’m trying to decide whether to use a LiDAR or a downward-facing camera.

With the camera, could I do onboard image processing with OpenCV on the ArduPilot flight controller to guide the landing? Also, in general, what would be the merits of using a LiDAR for landing vs a camera?

It can be feasible with a companion computer; see the ArduPilot wiki page on companion computers. Another option would be the Dronecode API…
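
As a rough, untested sketch of that companion-computer route: OpenCV finds a marker in the downward camera image and pymavlink streams LANDING_TARGET messages to the autopilot (the input ArduPilot's copter precision-landing feature consumes; fixed-wing landing logic differs). The serial port, focal lengths, and the bright-blob stand-in for a real marker are all placeholder assumptions:

```python
import time, math
import cv2
from pymavlink import mavutil

FX = FY = 600.0            # focal lengths in pixels (placeholder; from calibration)
PORT = '/dev/ttyAMA0'      # placeholder telemetry link to the flight controller

master = mavutil.mavlink_connection(PORT, baud=921600)
master.wait_heartbeat()

cap = cv2.VideoCapture(0)  # downward-facing camera
while True:
    ok, frame = cap.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Placeholder detector: the brightest blob stands in for a real
    # marker; an ArUco/AprilTag fiducial would be far more robust.
    _, mask = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        continue
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m['m00'] == 0:
        continue
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
    h, w = gray.shape
    # Angular offset of the marker from the camera's optical axis.
    angle_x = math.atan2(cx - w / 2, FX)
    angle_y = math.atan2(cy - h / 2, FY)
    master.mav.landing_target_send(
        int(time.time() * 1e6),               # time_usec
        0,                                    # target_num
        mavutil.mavlink.MAV_FRAME_BODY_NED,   # frame; the angles are what matter
        angle_x, angle_y,
        0.0,                                  # distance (0 = unknown)
        0.0, 0.0)                             # size_x, size_y (unused here)
```

In practice you would want a proper fiducial marker and a real camera calibration rather than a brightness threshold, but the structure (detect, convert to angles, send LANDING_TARGET) stays the same.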

Lidar is more accurate but more expensive and adds some weight. The best integration I’ve seen is the eBee Classic: they used an optical mouse sensor behind a small magnifying glass. The eBee Classic’s landings are the most accurate I’ve seen, with reverse thrust to soften the impact; even with lidar you will get overshoot… In the next generation (I think it was not the same engineers) they switched to lidar, with a ton of unsuccessful landings and crashes, including total losses.

For a plane landing you need a height-above-ground estimate; a position or horizontal velocity estimate won’t really help with anything.

The HAG estimate can be improved by using a downward-facing rangefinder, i.e. a lidar or radar.
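
A toy sketch of why that helps (the names and thresholds are illustrative assumptions, not ArduPilot’s actual EKF logic): while the rangefinder sees the ground, use it directly; otherwise fall back to the baro-derived HAG, which can drift by metres:

```python
RF_MAX_RANGE = 10.0   # metres, from the sensor spec (placeholder)

def height_above_ground(baro_alt, terrain_offset, rf_dist):
    """Blend a barometric HAG estimate with a rangefinder reading when valid."""
    baro_hag = baro_alt - terrain_offset
    if rf_dist is not None and 0.2 < rf_dist < RF_MAX_RANGE:
        return rf_dist          # direct measurement of the ground: use it
    return baro_hag             # out of range: fall back to drifting baro
```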

In my experience, for planes landing at ~10-12 m/s with a descent rate of ~1.5 m/s, you will need a rangefinder with a bit more than 10 m of range.
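
A quick back-of-envelope with those numbers:

```python
import math

# Figures from the post above: ~11 m/s ground speed, ~1.5 m/s descent,
# rangefinder first seeing the ground at 10 m.
v_ground, v_descent, rf_range = 11.0, 1.5, 10.0

glide = math.degrees(math.atan2(v_descent, v_ground))  # approach angle, ~7.8 deg
t_left = rf_range / v_descent                          # ~6.7 s to touchdown
track = v_ground * t_left                              # ~73 m of ground covered
print(f"{glide:.1f} deg slope, {t_left:.1f} s / {track:.0f} m from first return")
```

By the same arithmetic, a 6-7 m sensor gives only about 4 s of warning before touchdown.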

The LD2450, which has a range of ~6-7 m, in our testing typically activated around the point where our plane starts to flare.
