[EDIT] I was not aware, when I wrote the first post, that the port of ArduPilot to ChibiOS and HAL_ChibiOS had already been completed last year, so most of the initial discussion is now irrelevant. Go directly to post #10 for a revised proposal which takes into account the fact that ArduPilot is already based on a portable OS with hardware abstraction.
[EDIT bis] I don’t think this thread really belongs in the “Rants&Raves” forum, but there is no “Architecture” or “Long-Term Evolution” forum. If a moderator thinks this thread belongs elsewhere, feel free to move it; I only used “Rants&Raves” because I didn’t want to pollute a forum dedicated to other topics.
Hi,
It’s neither a rant nor a rave, but I didn’t want to pollute the topic-specific forums with this subject:
ArduPilot has, so far, been a single-MPU architecture, requiring a relatively high-end MPU to run the fast control loops (the ones that keep the drone stable) while simultaneously handling the slow loops (remote-control input, OSD, route planning, etc.). This approach makes ArduPilot hardware more expensive than necessary given its limitations (no computer vision, no AI flight-control optimization, nothing else you could do on a Linux/FreeRTOS platform).
I was planning to develop an F3/Orange Pi Zero flight controller (at the same price as a Pix4 or lower) with a LoRa transceiver, to solve the receiver/VTX interference problem, provide variable video bitrate based on RSSI, and allow more capable low-rate control loops in the future: follow-me mode, subject-centered pan, OpenCV-based optical-flow control for the gimbal/drone, etc. I know it’s feasible with a Pix4 plus a something-Pi, but that costs twice as much as necessary and requires extensive modification of the ArduPilot code to delegate all the slow control loops to a separate computer.
Of course, I could develop it all by myself; it would only take about 3 years to become proficient enough in C/C++ (I’m more of a 21st-century-language guy: Python, Go, Rust), plus an additional 2-3 years to learn the ArduPilot architecture and port/debug it on an MPU/Pi platform. Or I can present a proposal, and people already proficient in C/C++ who know the ArduPilot architecture could do it in months rather than years (yes, years, because I also have a job and other hobbies).
So, here’s my proposal for the next-generation ArduPilot hardware architecture:
- The cheapest hardware that can stabilize the drone in a fast (100-500 Hz) control loop: accelerometer/gyro only, with attitude feedback to a gimbal controller; something a cheap 6DoF F3 controller can do easily;
- A Pi Zero, or anything that runs Linux/FreeRTOS, for the slow (5-10 Hz) control loops: altitude stabilization, GPS, optical-flow control, IP transceiver, route planning, GCS link, follow-me or any other computer-vision mode, AI-based auto-tune, etc.
The main advantages would be the functionality/cost ratio (a basic 6DoF F3 controller plus an Orange Pi Zero costs less than a Pixhawk), easier development (mixing fast 100-500 Hz control loops with computation-heavy 5-10 Hz control loops is much easier on two MPUs/CPUs linked over SPI than on a single one), and much more capability (a 5 Hz slow loop allows IP transceivers like LoRa to carry control and video on the same frequency, basic OpenCV/AI image recognition or optical-flow control, etc.).
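To make the SPI link a bit more concrete, here is a minimal sketch of what the exchange between the two boards could look like. This is my own illustration for discussion, not existing ArduPilot code; every name, field, and rate in it is an assumption.

```cpp
// Hypothetical sketch, not ArduPilot code: fixed-size frames the fast
// stabilization MCU and the companion board could exchange over SPI.
#include <cstdint>
#include <cstdio>
#include <cstring>

// Fast MCU -> companion board, sent at the fast loop rate (e.g. 500 Hz):
// the current attitude estimate, which also feeds the gimbal controller.
struct AttitudeFrame {
    uint8_t  magic;          // start-of-frame marker, e.g. 0xA5
    uint32_t timestamp_us;   // fast-MCU clock, for staleness checks
    float    roll, pitch, yaw;  // attitude estimate, radians
    uint8_t  crc;            // checksum over all preceding bytes
} __attribute__((packed));

// Companion board -> fast MCU, sent at the slow loop rate (e.g. 5-10 Hz):
// setpoints computed by navigation / vision / planning.
struct SetpointFrame {
    uint8_t  magic;
    float    roll_sp, pitch_sp;  // attitude setpoints, radians
    float    yaw_rate_sp;        // rad/s
    float    throttle;           // 0.0 .. 1.0
    uint8_t  crc;
} __attribute__((packed));

// Toy 8-bit XOR checksum; a real link would want CRC-8/16 plus sequence
// numbers so the fast loop can detect dropped or stale setpoints.
static uint8_t checksum(const void* p, size_t n) {
    const uint8_t* b = static_cast<const uint8_t*>(p);
    uint8_t c = 0;
    while (n--) c ^= *b++;
    return c;
}

template <typename Frame>
void seal(Frame& f) {             // fill magic + crc before transmitting
    f.magic = 0xA5;
    f.crc = checksum(&f, sizeof(f) - 1);
}

template <typename Frame>
bool valid(const Frame& f) {      // verify a received frame
    return f.magic == 0xA5 && f.crc == checksum(&f, sizeof(f) - 1);
}

int main() {
    // Round-trip demo on the host; on real hardware these bytes would go
    // through the SPI driver instead of memcpy.
    SetpointFrame tx{};
    tx.roll_sp = 0.05f; tx.pitch_sp = -0.02f;
    tx.yaw_rate_sp = 0.0f; tx.throttle = 0.45f;
    seal(tx);

    uint8_t wire[sizeof(SetpointFrame)];
    std::memcpy(wire, &tx, sizeof(tx));   // "transmit"

    SetpointFrame rx{};
    std::memcpy(&rx, wire, sizeof(rx));   // "receive"
    std::printf("setpoint frame %s, throttle=%.2f\n",
                valid(rx) ? "OK" : "corrupt", rx.throttle);
    return 0;
}
```

The point of a design like this is that the fast MCU only ever needs the latest valid setpoint, so a dropped frame just means flying on the previous one for a few milliseconds; the two loop rates stay fully decoupled.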
So, in a nutshell, using a cheap MPU for stabilization and a cheap Linux/FreeRTOS board for the slow control loops would open ArduPilot to a vast array of new capabilities: professional video, high-speed/high-resolution stereoscopic obstacle avoidance, easy IP-based long-range setups without RX/VTX interference, etc., all for the same hardware cost as a current Pix4.
I can contribute to the architecture if necessary, but first I think we should discuss the advantages and drawbacks of a two-platform architecture versus a single-platform one for the future of ArduPilot.
P.S.: By the way, that’s how we, and most mammals, evolved. We have specialized brain regions for different tasks, each optimized for its own job, and they communicate (at a very low bit rate) to coordinate. The same approach is starting to be applied to robots too.