Inspired by the work of @anbello, @ppoirier, and @LuckyBird. GoodRobots created an excellent project for vision-based precision landing that uses multiple fiducial markers. However, when landing in a limited space (for example, on a charging station or on a vehicle), its marker arrangement puts the landing spot off-center: the drone will land on the smallest marker.
I have a different approach. By using the pose estimation of each marker together with the LANDING_TARGET MAVLink 2.0 extension fields, we can place multiple markers of different sizes while still letting the drone land at the center of the landing pad. In this approach, no individual marker is treated as the landing target; instead, each marker provides an offset to the landing target.
For example, if the drone sees marker id 1, then the landing target is 0.2 m below marker id 1. Multiple small markers can be placed along the edge of the landing pad, so the drone does not lose track of the target during the final approach stage.
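A minimal sketch of the offset idea, assuming a per-marker offset table (the marker ids, offsets, and function names here are illustrative, not from the actual project). Given a marker pose (`rvec`/`tvec`, in the convention returned by OpenCV pose estimation), the pad center is the marker's offset rotated into the camera frame and added to the marker's translation:

```python
import numpy as np

# Hypothetical per-marker offsets (metres, in each marker's own frame):
# marker id -> vector from marker centre to the centre of the landing pad.
MARKER_OFFSETS = {
    1: np.array([0.0, 0.2, 0.0]),   # pad centre is 0.2 m along marker 1's y axis
    2: np.array([-0.3, 0.0, 0.0]),  # an edge marker, offset toward the centre
}

def rodrigues(rvec):
    """Rotation matrix from an axis-angle vector (same convention as cv2.Rodrigues)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def landing_target_in_camera_frame(marker_id, rvec, tvec):
    """Marker pose (rvec/tvec from pose estimation) -> pad centre in the camera frame."""
    R = rodrigues(np.asarray(rvec, dtype=float))
    return R @ MARKER_OFFSETS[marker_id] + np.asarray(tvec, dtype=float)

# Marker 1 seen 1.5 m straight ahead with no rotation:
# the target is simply 0.2 m along the marker's y axis.
print(landing_target_in_camera_frame(1, [0, 0, 0], [0, 0, 1.5]))  # -> [0.  0.2 1.5]
```

Whichever marker is detected, the resulting vector points at the same physical spot, which is what lets all markers stay "active" at once. The vector can then be converted to the frame expected by the LANDING_TARGET message and sent to the autopilot.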
I came to the same conclusion about the need for an offset for each marker relative to the landing position, which also adds the ability to handle multiple “active” markers instead of only one at a time.
I’m also thinking of adding extrapolation and a PID controller.
I just posted some issues in the goodrobots vision-landing issue tracker, and I’m looking for a vision landing open-source project to contribute to.
I checked out your branch, but I didn’t see the computer vision code for tracking the markers. Is it in another repo?
Is there any documentation for setting up the environment?
And here is the final result of merging @fnoop’s (GoodRobots) Vision Landing with @chobitsfan’s alternative AprilTag approach, plus other improvements: