RTL dead reckoning using camera

Hello,

I have the following use case:
I have a plane flying, let's say, 10+ km from the home point. It loses both the GPS signal and the RC link (due to hardware failure or interference). I would like the plane to return to the Home position as accurately as possible and land on its own.

I did some investigation around the ArduPilot wiki and code base, and from what I understood the following would happen: the plane will enter the long failsafe action (RTL) and try to return home using the EKF to estimate position without GPS, i.e. dead reckoning (please correct me if I'm wrong). However, due to the accumulation of position errors, accurate navigation back to the Home point is not really possible this way.
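For reference, these are the Plane failsafe parameters I assume are involved (names and values as I read them from the wiki, so please correct me if this is not how it actually behaves without GPS):

```
FS_LONG_ACTN    1   # long failsafe action: 1 = ReturnToLaunch
FS_LONG_TIMEOUT 5   # seconds the failsafe condition must persist before the long failsafe triggers
```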

So my proposal is the following:
Let's say we have an onboard camera looking straight down that takes a picture every ~50 m of the flight path. We can create a background thread that uses the ORB feature detection algorithm from the OpenCV library to build a feature/descriptor record from every picture taken and associate it with the current plane position. As a result we would have a database of image descriptors, each captured from a known position. In case of GPS failure we can search this database with the current camera picture and match it against places the plane has already been, effectively using the stored pictures as breadcrumbs that lead the plane home. By finding enough matching descriptors we can compute the homography between the two images and use that offset, together with the position associated with the stored picture, as the current plane position (see the sketch below).
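To make the matching step concrete, here is a rough Python/OpenCV sketch of what I have in mind. The onboard version would probably be C++, and the helper names, database layout, and the pixel-to-metres conversion are all my own assumptions, not existing ArduPilot code:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# "Breadcrumb" database: one record per picture taken every ~50 m of flight.
breadcrumbs = []  # each entry: {"kp": keypoints, "des": descriptors, "pos": (lat, lon, alt)}

def record_breadcrumb(image_gray, position):
    """Store ORB features of a downward-looking frame together with the
    position the autopilot reported when the frame was captured."""
    kp, des = orb.detectAndCompute(image_gray, None)
    if des is not None:
        breadcrumbs.append({"kp": kp, "des": des, "pos": position})

def locate_against_breadcrumbs(image_gray, min_matches=20):
    """After GPS loss: match the current frame against stored frames and
    estimate the offset to the best-matching breadcrumb via homography."""
    kp_cur, des_cur = orb.detectAndCompute(image_gray, None)
    if des_cur is None:
        return None
    best = None
    for crumb in breadcrumbs:
        matches = matcher.match(des_cur, crumb["des"])
        if len(matches) < min_matches:
            continue
        src = np.float32([kp_cur[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([crumb["kp"][m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            continue
        inliers = int(mask.sum())
        if best is None or inliers > best[0]:
            best = (inliers, crumb, H)
    if best is None:
        return None
    inliers, crumb, H = best
    # The translation part of H gives the pixel offset between the two frames;
    # converting it to metres needs altitude and camera intrinsics, which I
    # have left out here.
    return crumb["pos"], H
```

The RANSAC step should help reject bad matches, and the breadcrumb with the most inliers would be taken as the reference position.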

I hope that such a system could noticeably improve dead reckoning navigation accuracy.

What do you think about this idea? Is it doable and worth implementing?

a.k.a. SLAM. I suggest you ping the folks in the VisionProjects Gitter channel.

Thanks for pointing me in the right direction :slight_smile: