Please consider reading Part 1 of this series before this blog post for better context.
In this blog, I will introduce my repository, which contains a collection of scripts that integrate a depth camera + AI mounted on a Copter without a companion computer (all in under 100 grams)! To my knowledge, this is currently only possible with a special camera like the OAK-D-IOT.
The link to the repository: GitHub - rishabsingh3003/ardupilot_depthai_scripts: Contains examples for DepthAI integration with ArduPilot
Please note that this is an ongoing effort: the scripts are not yet finalised in terms of either safety or documentation (consider them a beta version). An advisory for anyone who wants to test them on an actual vehicle: please keep a safe distance while trying this out. Also, all of these examples require Copter 4.1 or later and will not work otherwise. I would appreciate any feedback!
All the examples that I discuss here DO NOT require any companion computer and work by connecting the camera directly to the flight controller. More precisely, it is the serial port of the ESP32 that is wired to the flight controller, and the two communicate over MAVLink.
I will discuss three very interesting applications that the sensor can be used in. These scripts can act as a good starting point for many more applications you might want to build!
I trained a custom MobileNetV2 model to detect this target:
You can theoretically train any number of such targets. In my experiments, I found that this relatively small marker could be detected at 10+ meters. The range can be improved significantly further by either increasing the size of the marker or training the model a little better. Future firmware for the OAK-Ds will probably also be able to detect ArUco markers (or similar markers), which could additionally be used for more reliability. The setup and instructions on how to use this are given here:
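To give an idea of what happens between detection and the flight controller: ArduPilot's precision-landing support expects angular offsets of the target from the camera's centre axis (as carried by the MAVLink LANDING_TARGET message). Here is a minimal sketch of that conversion; the function name and the field-of-view values are my own illustrative assumptions, not the repository's API, so substitute your camera's calibrated FOV.

```python
import math

# Assumed fields of view for illustration only; use your camera's
# calibrated values in practice.
HFOV_DEG = 69.0   # horizontal field of view (assumption)
VFOV_DEG = 55.0   # vertical field of view (assumption)

def bbox_to_landing_angles(cx, cy, hfov_deg=HFOV_DEG, vfov_deg=VFOV_DEG):
    """Convert a detection's bounding-box centre (normalised 0..1 image
    coordinates) into (angle_x, angle_y) in radians from the camera axis,
    the form the LANDING_TARGET message expects."""
    angle_x = (cx - 0.5) * math.radians(hfov_deg)
    angle_y = (cy - 0.5) * math.radians(vfov_deg)
    return angle_x, angle_y

# A target detected dead-centre yields zero offset in both axes.
print(bbox_to_landing_angles(0.5, 0.5))  # (0.0, 0.0)
```

A detection left of centre gives a negative angle_x, right of centre a positive one, which is how the autopilot knows which way to nudge the copter during descent.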
This is the same work that I did for the RealSense cameras a few months back, except that it no longer needs a (relatively) heavy companion computer: the OAK-D does all the work! This example does not use any of the AI powers of the camera; it simply divides the stereo depth view into a 3x3 grid. The closest distance in each grid cell is sent to ArduPilot using the MAVLink message OBSTACLE_DISTANCE_3D. The example script can be found here:
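A minimal sketch of the grid step, assuming a depth frame in millimetres: split the frame into a 3x3 grid and keep the nearest valid reading per cell. In the actual script each minimum would then be packed into an OBSTACLE_DISTANCE_3D message; the function name and validity threshold below are illustrative assumptions, not the repository's code.

```python
import numpy as np

def grid_min_depths(depth_mm, rows=3, cols=3, min_valid=200):
    """Return a rows x cols array of the nearest valid depth (mm) per cell.

    Readings below `min_valid` are treated as stereo noise and ignored;
    a cell with no valid pixels reports infinity.
    """
    h, w = depth_mm.shape
    out = np.full((rows, cols), np.inf)
    for r in range(rows):
        for c in range(cols):
            cell = depth_mm[r * h // rows:(r + 1) * h // rows,
                            c * w // cols:(c + 1) * w // cols]
            valid = cell[cell >= min_valid]
            if valid.size:
                out[r, c] = valid.min()
    return out

# Synthetic 6x6 frame: background 5 m away, one obstacle ~0.8 m ahead
# occupying exactly the centre cell.
frame = np.full((6, 6), 5000)
frame[2:4, 2:4] = 800
print(grid_min_depths(frame))  # centre cell reads 800, all others 5000
```

Each of the nine minima maps naturally to one obstacle report, which is why the 3x3 grid is a convenient resolution for ArduPilot's object-avoidance database.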
A similar demo to what this script does (except the human tracking bit, which I will add in a later iteration):
This is a slightly dangerous example that I initially thought of not sharing (a wrong detection will send your copter flying towards a random location!), but I will share it here because it is an excellent demonstration of what this camera is capable of. There are plenty of pre-trained human-detection models out there, and any of them can be used to follow a human around with an ArduPilot-enabled vehicle. Please note that I have not added yaw control yet, so you must manage yaw manually.
Example script (and instructions) can be found here:
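The core follow logic can be sketched as a simple proportional controller: use the person's stereo depth to hold a standoff distance, and their horizontal position in the frame to drift sideways and recentre them. The gains, standoff distance, and function name below are illustrative assumptions; in the real setup the resulting velocities would be sent to ArduPilot in Guided mode (e.g. via SET_POSITION_TARGET_LOCAL_NED), not returned to a caller.

```python
def follow_velocity(cx, depth_m, standoff_m=3.0,
                    k_forward=0.5, k_lateral=1.0, v_max=1.0):
    """Return (vx, vy) body-frame velocity demands in m/s.

    cx: horizontal centre of the detection, normalised 0..1.
    depth_m: stereo distance to the person in metres.
    Gains and the 3 m standoff are assumptions for illustration.
    """
    clamp = lambda v: max(-v_max, min(v_max, v))
    vx = clamp(k_forward * (depth_m - standoff_m))  # hold standoff distance
    vy = clamp(k_lateral * (cx - 0.5))              # drift to recentre target
    return vx, vy

# Person centred in frame but 5 m away: advance at the capped 1 m/s.
print(follow_velocity(0.5, 5.0))  # (1.0, 0.0)
```

Clamping the output is the one safety measure worth keeping even in a toy version: a single bad detection then costs you at most one second of capped-speed drift rather than a dash towards a random location.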
I will be making a tutorial video and improving the scripts in the future. I am working on making the integration easier, so that setup hopefully takes less than 5 minutes. I also hope to add more examples (both for the OAK-D and the OAK-D-IOT). I would love it if someone from the community contributed to this repository as well!
Feel free to contact me if you want to discuss ideas or report bugs. Thanks for reading my blog!