GSOC 2019: Cheap, Plug and Play Visual Follow-Me Solution

Hello. I plan on submitting a proposal for GSOC 2019 but would like to have some input regarding the idea I am going to propose.

The idea is heavily based on the ideas list: use low-cost camera modules such as the JeVois or OpenMV to follow specific objects onboard. The suggested AprilTags would be enough for a proof of concept, but more sophisticated detection and tracking should be possible if the hardware and its software allow. Since this kind of hardware provides duplex serial communication, the modules can be programmed to send MAVLink velocity commands and be connected directly to the flight controller, which is what makes the solution plug-and-play as long as the messages comply with ArduPilot.

Several things that I think need more usability input are as follows:

  1. Duplex serial communication means these camera modules could retrieve several new parameters from ArduPilot (preferably at startup, so it would not interfere with the vision processing), which is the main contribution to the codebase. Features that I have thought of so far to be covered by these parameters (a rough sketch of the startup exchange is shown after this list):
  • Range limitation to determine the ideal following distance. This is naively solvable by providing a range of the object's scale relative to the camera frame. Another option would be to provide the actual ideal distance in meters together with the camera type; assuming the camera specifications are open, we could derive the distance from the object's scale.
  • Following behaviour: for now, either aggressively chasing the ideal distance or merely keeping the vehicle inside the following distance range.
  • Selection behaviour when several objects of interest (tags, for example) show up at startup: for now it could be random, or the farthest or closest object within the selected distance range.
  • Other behaviour that addresses follow-related issues.
  2. Startup and termination of this follow-me feature is so far intended to be handled in GUIDED mode: simply switch to GUIDED to run the feature, and switch away to stop it. Would this be enough, or are there use cases that require the feature to be slightly more automatic? I have thought that such automatic use cases (e.g. an automatic mode switch if nothing is detected for some time) should be implemented as another flight mode, but I wonder whether that is a necessity for this idea.

  3. Supplying the camera firmware from this project. Since these cameras have no internal WiFi, and their internal SD card, if any, is dedicated to holding the firmware itself, I think it should not be part of APSync. Would supplying the source code, its documentation and guidelines be sufficient?
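
As a rough illustration of the startup parameter exchange in point 1, here is a minimal sketch using pymavlink on a desktop/companion for readability; the camera firmware would hand-pack the same PARAM_REQUEST_READ / PARAM_VALUE messages over its UART. The parameter names (FOLL_DIST_IDEAL, FOLL_BEHAVE), the serial port and the pinhole helper are all placeholders of mine, not existing ArduPilot parameters:

```python
# Sketch only: the startup parameter exchange proposed in point 1, shown with
# pymavlink for clarity. FOLL_DIST_IDEAL / FOLL_BEHAVE are hypothetical names
# for the new parameters, and /dev/ttyACM0 is just an example port.
from pymavlink import mavutil

master = mavutil.mavlink_connection('/dev/ttyACM0', baud=115200)
master.wait_heartbeat()  # learn target_system / target_component

def fetch_param(name, timeout=3):
    """Request one parameter by name and wait for the PARAM_VALUE reply."""
    master.mav.param_request_read_send(master.target_system,
                                       master.target_component,
                                       name.encode('ascii'), -1)
    msg = master.recv_match(type='PARAM_VALUE', blocking=True, timeout=timeout)
    return msg.param_value if msg is not None else None

ideal_dist_m = fetch_param('FOLL_DIST_IDEAL')  # hypothetical: ideal follow distance [m]
behaviour    = fetch_param('FOLL_BEHAVE')      # hypothetical: 0 = keep in range, 1 = chase

# Pinhole-camera conversion from the ideal distance to an expected tag width in
# pixels, so the tracking loop can work purely on apparent scale (first bullet).
def expected_tag_width_px(tag_width_m, focal_length_px, distance_m):
    return focal_length_px * tag_width_m / distance_m
```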

Any input, especially would-be use cases expected from this idea, is most welcome. Thank you for reading!


This is an interesting field to explore, as there is still a lot of work to do to make these devices easy to integrate with ArduPilot.

You can take a look at my experiments:
Red Balloon detection and tracking using OpenMV (QuadCopter Object Tracking on a budget)

Precision Landing using the JeVois and ArUco codes, as demonstrated here: https://www.youtube.com/watch?v=9PHOhTJ3eSI&t=1s
The RPi Zero Python script is here: https://github.com/patrickpoirier51/JeVois--Python-Tracking

Getting these scripts running directly on the OpenMV and JeVois, and hence not having to use a companion computer, would be a great start. In a second phase, you could certainly implement more advanced detection based on the simple neural networks that are now available on both platforms, to control a basic visual follow-me solution.

The OpenMV Cam H7 supports loadable neural networks and is capable of running simple CNNs like LeNet-6 for number detection. Neural network support is provided by ARM's new CMSIS-NN library.

Jevois TensorFlow


Sorry for the late reply, and thanks for all the heads-up, @ppoirier. Definitely a ton of resources already.

My first step toward this would be setting up the environment so that the JeVois and OpenMV can directly send MAVLink messages.

On the OpenMV, this could be done by simply importing Python libraries directly in the IDE (supported since late 2018) and writing the generated commands via its UART API.
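
As a rough, untested sketch of what that could look like on the OpenMV side, the snippet below hand-packs a MAVLink v1 frame and writes it out over UART 3 (the port and baud rate, the sysid/compid choices and the HEARTBEAT field values are assumptions of mine; CRC_EXTRA values would need to be checked against the MAVLink common.xml definitions). Velocity or landing-target messages would follow the same framing with their own payload layout and CRC_EXTRA byte:

```python
# Minimal MAVLink v1 framing over the OpenMV UART. A HEARTBEAT is used here only
# because its payload is trivial; the follow-me velocity commands would reuse
# send_mavlink_v1() with their own payload struct and CRC_EXTRA.
import struct
from pyb import UART

uart = UART(3, 115200)  # assumed wiring: UART 3 to the flight controller

def crc16_x25(data, crc=0xFFFF):
    # MAVLink checksum (X.25 / CRC-16-MCRF4XX)
    for b in data:
        tmp = b ^ (crc & 0xFF)
        tmp = (tmp ^ (tmp << 4)) & 0xFF
        crc = ((crc >> 8) ^ (tmp << 8) ^ (tmp << 3) ^ (tmp >> 4)) & 0xFFFF
    return crc

seq = 0

def send_mavlink_v1(msg_id, payload, crc_extra, sysid=1, compid=191):
    """Wrap a payload in a MAVLink v1 frame and push it out the UART."""
    global seq
    header = struct.pack('<BBBBB', len(payload), seq & 0xFF, sysid, compid, msg_id)
    crc = crc16_x25(header + payload + bytes([crc_extra]))
    uart.write(b'\xfe' + header + payload + struct.pack('<H', crc))
    seq += 1

# HEARTBEAT (msg id 0, CRC_EXTRA 50): custom_mode, type, autopilot, base_mode,
# system_status, mavlink_version -- the field values below are placeholders.
heartbeat = struct.pack('<IBBBBB', 0, 18, 8, 0, 4, 3)
send_mavlink_v1(0, heartbeat, 50)
```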

The JeVois environment is a bit trickier, since the usual workflow is to use the predefined serial message format defined here. Sending direct MAVLink commands would require pulling in the C MAVLink libraries and writing additional modules with functions specifically for sending command messages using the serial APIs from here.

I'll be reading up more on setting up custom TensorFlow networks on the JeVois and on how CMSIS-NN works.

I remember having chatted on the JeVois forum about a MAVLink implementation. It seems that jevois.org is down at the moment, so I cannot reference the discussion, but I think this is the code:

As for the OpenMV, part of the MAVLink stack has already been implemented.

You can check some active work here from @DomenicoPatella

Oh well, I should have searched more thoroughly before writing things up. Thank you for the notice.

With MAVLink integration already proven on the OpenMV, I might have to shift the focus to cleaning it up by importing the libraries properly (it is currently hardcoded), and implementing NNs should be moved up in the schedule.

A quick skim shows that alsaibie's implementation took a different approach from mine, in that he doesn't touch the core libraries; I will certainly read more about that. Have there been any problems so far?

There is a discussion about the reliability of the OpenMV MAVLink implementation here

If you are planning to build a NN for the tracking part, you should consider options like the Jetson Nano, as it only costs $99 and has the power for real-time object detection.

The link you added talks about the BlueRobotics ping sensor …

Yes, the new Nvidia Nano seems promising (mine should arrive next week), but small embedded vision systems like the OpenMV and JeVois would make the perfect GSOC project, as they are affordable and easy to implement.

I personally think that GSOC should focus on realistic projects that can be completed and documented in the time allowed. I have seen too many projects not get completed because they were "too much" for so little time and resources; that is frustrating both for the student and for the community, as it does not add to the ArduPilot ecosystem.

Thanks for the heads-up about the link… I edited it.

Well, after a couple of months with the codebase and the community, I totally agree with you. The amount of work I have seen done on GSoC projects in past years was amazing, but unfortunately only a few were merged, and I think the reason for that is the amount of complexity surrounding the codebase. Understanding what you are doing and why has become quite a tough challenge for simple students like me ^ ^


Thanks for the notice, @MohamedAliRashad. I did a quick skim, and it seems the current problem with the AP_OpticalFlow PR and the example script in OpenMV is about data handling rather than detection speed. The example script only supplies flow messages when the confidence is above a certain threshold, while ArduPilot expects a constant message stream for its EKF. That said, sending the messages unfiltered results in arbitrary spikes. I haven't read up on how optical flow is handled by other hardware such as the PX4Flow, though. Since I would rather take the GUIDED mode approach, I don't see any problems yet, as it doesn't directly affect the EKF.
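
If that data-handling issue does need addressing, one option would be to stream OPTICAL_FLOW at a fixed rate and report quality = 0 when the confidence is low, instead of dropping frames entirely. This is a sketch only, shown with pymavlink for brevity rather than the hand-packed messages used in the camera script; the port and confidence threshold are assumptions of mine:

```python
# Sketch: send OPTICAL_FLOW every frame and let the 'quality' field carry the
# confidence, so the consumer always sees a continuous stream.
import time
from pymavlink import mavutil

master = mavutil.mavlink_connection('/dev/ttyACM0', baud=115200)  # example port
master.wait_heartbeat()

CONF_THRESHOLD = 0.5  # assumed cut-off below which the sample is flagged unusable

def report_flow(flow_x_dpix, flow_y_dpix, confidence):
    quality = int(confidence * 255) if confidence >= CONF_THRESHOLD else 0
    master.mav.optical_flow_send(
        int(time.time() * 1e6),   # time_usec
        0,                        # sensor_id
        int(flow_x_dpix),         # flow_x
        int(flow_y_dpix),         # flow_y
        0.0, 0.0,                 # flow_comp_m_x / flow_comp_m_y (not computed here)
        quality,                  # 0 marks the sample as unusable
        0.0)                      # ground_distance unknown
```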

As @ppoirier said, the Nvidia Jetson Nano, while it could be considered low cost, would add more complications to the work than the end result justifies, I'm afraid. I did some work with their bigger, older SBCs, and even setting things up without APSync was a pain, so I can't be exactly confident about the Jetson Nano, it being recently released hardware…