Setup for Basic Onboard Vision + Person Detection During Preset Mission

Hi everyone,

I’m looking to start experimenting with simple onboard vision tasks — specifically, detecting a person during a predefined autonomous mission, and sending a warning and GPS coordinates back to the ground control station (QGC or similar).

My minimum use case is:

  • Fly an autonomous mission using Pixhawk
  • Detect a person using an onboard model (e.g., YOLO)
  • Send back GPS + message to GCS (via MAVLink or other suitable method)
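The "detect → grab GPS → alert GCS" step above can be sketched in Python on the companion computer. This is a minimal sketch under assumptions: pymavlink for telemetry, a detector such as ultralytics YOLO (not shown), and helper names that are mine, not an existing API. STATUSTEXT is the simplest way to surface a text warning in QGC.

```python
# Sketch of the alert path only, assuming a companion computer running
# pymavlink. The detector (e.g. YOLO) and camera capture are out of
# scope here; the helper names below are illustrative, not a library API.

def gps_from_global_position_int(lat_e7: int, lon_e7: int) -> tuple[float, float]:
    """GLOBAL_POSITION_INT reports lat/lon as degrees * 1e7."""
    return lat_e7 / 1e7, lon_e7 / 1e7

def person_alert_text(lat: float, lon: float) -> str:
    """STATUSTEXT payloads are capped at 50 characters, so keep it short."""
    return f"PERSON {lat:.5f} {lon:.5f}"[:50]

if __name__ == "__main__":
    # Hypothetical wiring (requires a live MAVLink endpoint, not run here):
    # from pymavlink import mavutil
    # master = mavutil.mavlink_connection("udp:127.0.0.1:14550")
    # master.wait_heartbeat()
    # ...when the detector flags a person...
    # msg = master.recv_match(type="GLOBAL_POSITION_INT", blocking=True)
    # lat, lon = gps_from_global_position_int(msg.lat, msg.lon)
    # master.mav.statustext_send(
    #     mavutil.mavlink.MAV_SEVERITY_WARNING,
    #     person_alert_text(lat, lon).encode())
    pass
```

The GCS will display the STATUSTEXT as a warning, and you can read the coordinates straight out of the message text; a fancier setup could send a dedicated message or a mission item instead.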

Eventually, I’d like to expand this to include live video streaming, but for now, I’m suffering a bit from option paralysis. Between companion computers, smart radios, MAVLink routing, digital video systems (like Herelink, SIYI, etc.), and various levels of onboard vs. ground processing, I’m unsure where to start.

My questions:

  • What’s a minimal yet upgradeable setup you’d recommend to start testing this?
  • Should I begin with a Raspberry Pi + USB camera, or invest early in something like a SIYI MK15 or Herelink system?
  • Is it practical to do vision processing onboard with a Pi, or should I offload video and do detection on the ground at this stage?

Any guidance or setups that have worked well for you would be super appreciated.

Thanks!

Emanuel

If you ever want to scale, edge (onboard) computing is the way to go.

People have been doing live computer vision on RPis for a long time.
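One practical detail if you run detection live on a Pi: a person shows up in many consecutive frames, so you'll want to rate-limit alerts or you'll flood the telemetry link. A minimal sketch (the class name is mine, not from any library):

```python
import time

class AlertThrottle:
    """Suppress repeat alerts within a cooldown window so consecutive
    detection frames don't each trigger a message to the GCS."""

    def __init__(self, cooldown_s: float = 10.0, clock=time.monotonic):
        self.cooldown_s = cooldown_s
        self.clock = clock          # injectable for testing
        self._last_sent = None

    def should_send(self) -> bool:
        """Return True (and start the cooldown) only if enough time has
        passed since the last alert."""
        now = self.clock()
        if self._last_sent is None or now - self._last_sent >= self.cooldown_s:
            self._last_sent = now
            return True
        return False
```

In the detection loop you'd call `should_send()` once per frame where a person is detected and only transmit when it returns True.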

IMHO Herelink has rather poor ergonomics, and its documentation on advanced network configuration is lacking.

If you don’t have experience with multirotors and ArduPilot, I would recommend starting with something smaller to learn the basics.
