Thanks Andrea. Another thing I'd like to really narrow down in one video is how to capture the video, pipe it to both the onboard OpenCV and the streamer, and then also stream the modified frames from OpenCV. I think that would be a nice step one.
Would it be possible to run ArUco board detection on the RPi (or a different small companion computer) onboard the quad?
That is an interesting idea, but it is a little different from what I have done in my system. If I understood correctly, you would like to capture, process, and stream the video directly with OpenCV, without using ROS, and I have no experience with that.
This could be done with ROS: usb_cam or gscam to capture, cv_bridge to convert ROS images into OpenCV images and vice versa, and gscam to stream using GStreamer pipelines.
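As a sketch, a minimal gscam launch file for the capture side might look like this (the pipeline string, device, and resolution are illustrative placeholders, not the author's configuration):

```xml
<launch>
  <!-- gscam node: feeds a GStreamer capture pipeline into a ROS image topic -->
  <node pkg="gscam" type="gscam" name="gscam" output="screen">
    <!-- Illustrative pipeline: adjust device, resolution, and framerate -->
    <param name="gscam_config"
           value="v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! videoconvert"/>
  </node>
</launch>
```

On the processing side, cv_bridge's `imgmsg_to_cv2` and `cv2_to_imgmsg` calls handle the conversion between ROS image messages and OpenCV images.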
Not on the Raspberry Pi 0, the one I use; maybe on a Raspberry Pi 3, but I have not tested it.
In my system I have tried to do everything with minimal and cheap hardware, so I use my desktop PC for the heavier processing. This also has the advantage of simplifying the testing of other visual pose estimation packages: working on a PC is much easier and faster than working on an embedded board.
Is the RPi 0 mounted on the quadcopter?
And if it is mounted on it, would it be possible to use it fully as the receiving end, with OpenCV installed on it, so it can send the commands wirelessly back to the quad?