Video feed from a UAV using a Jetson Nano

I am currently working on a military perimeter-surveillance UAV, and I am not sure how to transmit video from the camera on the UAV to the GCS, but not directly into QGC. We are using computational offloading, i.e. we are not going to process all the data on the UAV: part of the processing happens on the Jetson and the rest on the computer where the video feed will be streamed along with all the other data. So I need to transmit the video from the camera on the UAV to the ML model first, and then send the processed video to QGC. If you have any idea how to do this, please help.

I would start by doing an RTSP stream and a GStreamer pipeline… Unfortunately, I can’t find a good tutorial online… maybe this one?
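As a rough starting point (not from the thread), reading an RTSP stream in Python/OpenCV could look something like the sketch below; the URL is a placeholder and would need to be replaced with your camera's actual RTSP address.

```python
# Minimal sketch: read frames from an RTSP stream with OpenCV.
# The URL below is a placeholder assumption, not a confirmed camera address.
import cv2

RTSP_URL = "rtsp://192.168.144.25:8554/main.264"  # placeholder address

cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
    raise RuntimeError("Could not open RTSP stream")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Hand the frame to your detection / processing code here
    cv2.imshow("UAV feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```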

There is a lot of information missing about your setup:

  • What type of camera are you using?
  • How do you capture and process images for your neural network? Is it Python-based OpenCV?

Generally it is good practice to source from GStreamer and tee off to both the onboard processing and a GCS-compatible stream.

Some good examples here:
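For illustration, here is a rough sketch of that "source once, tee off" pattern using GStreamer's Python bindings (PyGObject). All element names are standard GStreamer 1.x plugins; the camera device, GCS IP address, and UDP port are placeholder assumptions.

```python
# Sketch of a GStreamer tee pipeline: one branch feeds an appsink for onboard
# processing, the other encodes H.264 and streams RTP/UDP to the ground station.
# Device path, host IP and port are placeholders, not values from this thread.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! tee name=t "
    # Branch 1: raw BGR frames into an appsink for onboard processing (e.g. OpenCV)
    "t. ! queue ! videoconvert ! video/x-raw,format=BGR ! appsink name=cv_sink "
    # Branch 2: H.264-encode and send RTP over UDP to the ground station
    "t. ! queue ! videoconvert ! x264enc tune=zerolatency bitrate=2000 ! "
    "rtph264pay ! udpsink host=192.168.1.10 port=5600"
)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()  # keep the pipeline running until interrupted
```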

So basically it is a military surveillance UAV and it has two cameras: an Intel RealSense depth camera for obstacle avoidance and a SIYI A8 Mini for the video feed that is to be transmitted, which is the video used for intruder detection and surveillance. Obstacle avoidance will be done in real time on the onboard computer (Jetson Nano), but the intruder-detection part is to be computationally offloaded to the PC that is basically the GCS. We want to first transmit the video to some DroneKit thing or something (I don't know what I should do about this) to process it for intruder detection, and then send the processed video with the intruder data to QGC or Mission Planner as a live video feed, and the commands depending on the intruder type back to the UAV.

Now the questions are: how do I transmit and receive the video feed (should I use a WiFi transmitter/receiver, or can a radio link be used if I want a range of about 1-1.5 km)? And secondly, how do I send the video feed first to the ML model and then to QGC or Mission Planner? I hope it makes sense!

As for obstacle avoidance, you can refer to the blogs and experiments from @rishabsingh3003.
I think your configuration could feed 2 different streams for complementary purposes:
A) RealSense to Nano for avoidance == no transmission requirement

B) Camera capture over RF to the base station for AI detection == you could take the HD video and connect it directly to an FPV transmitter like the HereLink

This configuration is actually shown in the product picture.
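On the GCS side, the "process first, then forward to QGC" step could look roughly like the sketch below. It assumes the UAV stream arrives as RTP/H.264 on UDP port 5600, that OpenCV on the PC was built with GStreamer support, and that the resolution, frame rate, ports and the run_detector() hook are placeholders to be adapted; none of these details come from the thread.

```python
# Sketch: receive the UAV stream, run the detection model on each frame,
# then forward the annotated video to QGroundControl as a separate UDP stream.
# Requires OpenCV built with GStreamer support; all values are placeholders.
import cv2

RECV_PIPE = (
    "udpsrc port=5600 caps=application/x-rtp,media=video,encoding-name=H264,payload=96 ! "
    "rtph264depay ! avdec_h264 ! videoconvert ! video/x-raw,format=BGR ! appsink"
)
SEND_PIPE = (
    "appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=2000 ! "
    "rtph264pay ! udpsink host=127.0.0.1 port=5601"  # QGC listening on UDP 5601
)

cap = cv2.VideoCapture(RECV_PIPE, cv2.CAP_GSTREAMER)
# Frame size and FPS must match the incoming stream (placeholder values here)
out = cv2.VideoWriter(SEND_PIPE, cv2.CAP_GSTREAMER, 0, 30.0, (1280, 720))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # run_detector() is a stand-in for whatever intruder-detection model you use:
    # frame = run_detector(frame)
    out.write(frame)

cap.release()
out.release()
```

QGC can then be pointed at UDP port 5601 as its video source instead of the raw camera stream.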

Were you able to solve this problem? Because I plan on using a SIYI A8 and a Jetson Nano for object detection in my upcoming UAV project.