So I think in the next week or so you’ll be able to submit proposals through the Google GSoC website (I’m slightly unsure of the details of how that’s done).
In that proposal you should give some ideas of how it would work and how you see it being integrated with ArduPilot.
For me (I’d probably be the mentor on this), some key points that would make it a good proposal:
- use readily available and free open source tools
- the feature should be basic but useful. For example, the two most obvious applications of machine vision/learning are driving along a road, or recognising a safe landing spot. Maybe there are others too though.
- the feature should be written and documented in such a way that others can build upon it
- include a diagram if possible
If you’re not too familiar with ArduPilot, here’s our “companion computer” wiki pages. A companion computer is an external computer that doesn’t run the main flight code but instead talks to the flight controller using a serial protocol called MAVLink. So the expectation is that the vision stuff would run on a TX1/TX2 (or some other companion computer) and then it would send commands or data to the flight controller. If it was sending commands it might use the SET_POSITION_TARGET_GLOBAL_INT message to tell the Rover or Copter where to go.
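As a rough sketch of what that looks like on the companion computer side: SET_POSITION_TARGET_GLOBAL_INT takes lat/lon as scaled integers (degrees × 1e7), altitude in metres, and a type_mask bitmask saying which fields to ignore. The bitmask constants below follow the MAVLink POSITION_TARGET_TYPEMASK enum as I understand it — worth double-checking against the current MAVLink spec before relying on them:

```python
# Building the fields for a SET_POSITION_TARGET_GLOBAL_INT message.
# Bit values are from the MAVLink POSITION_TARGET_TYPEMASK enum
# (a set bit means "ignore this field").
IGNORE_VX, IGNORE_VY, IGNORE_VZ = 0x0008, 0x0010, 0x0020
IGNORE_AX, IGNORE_AY, IGNORE_AZ = 0x0040, 0x0080, 0x0100
IGNORE_YAW, IGNORE_YAW_RATE = 0x0400, 0x0800

# "Position only": tell the autopilot to ignore velocity, acceleration
# and yaw targets, and just fly to the given lat/lon/alt.
POSITION_ONLY_MASK = (IGNORE_VX | IGNORE_VY | IGNORE_VZ |
                      IGNORE_AX | IGNORE_AY | IGNORE_AZ |
                      IGNORE_YAW | IGNORE_YAW_RATE)

def position_target_fields(lat_deg, lon_deg, alt_m):
    """Return (lat_int, lon_int, alt, type_mask) in the units the
    message expects: lat/lon in degrees * 1e7, altitude in metres."""
    return round(lat_deg * 1e7), round(lon_deg * 1e7), alt_m, POSITION_ONLY_MASK

lat_int, lon_int, alt, mask = position_target_fields(-35.363262, 149.165237, 20.0)
print(lat_int, lon_int, alt, mask)  # -353632620 1491652370 20.0 3576
```

With pymavlink these values would then go into something like master.mav.set_position_target_global_int_send(...) over the serial link — check the exact call signature against the pymavlink docs, the above is just the field preparation.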
Another option is we could send data to ArduPilot. So for example we could create a new MAVLink message called “SAFE_LANDING_POSITION” so that the companion computer can tell the copter where it thinks it is safe to land, and then the main ArduPilot code could record that in a database or something.
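For illustration, new MAVLink messages are defined in a dialect XML file, so a hypothetical SAFE_LANDING_POSITION might look something like this — the message id, name and fields here are all made up by way of example; the real definition would need to be agreed with the MAVLink/ArduPilot devs:

```xml
<message id="9000" name="SAFE_LANDING_POSITION">
  <description>Hypothetical: a landing spot the companion computer judges safe.</description>
  <field type="uint64_t" name="time_usec" units="us">Timestamp</field>
  <field type="int32_t" name="lat" units="degE7">Latitude of the safe spot</field>
  <field type="int32_t" name="lon" units="degE7">Longitude of the safe spot</field>
  <field type="int32_t" name="alt" units="mm">Altitude (AMSL)</field>
  <field type="uint8_t" name="confidence" units="%">How confident the vision system is</field>
</message>
```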
I’m actually not a machine vision/machine learning expert so if you’ve got better ideas for an interesting application we are totally open to that. Remember we support lots of different vehicle types: Rover, Boats, Copter, Planes, Antenna trackers… etc