I wanted to share a fast autotuning method for quadcopters that I developed and tested, which you can see here (https://www.youtube.com/watch?v=RSe35TjjBPI). In that video I purposely started with a badly tuned “noisy” controller. I then inject a +/-20 sinusoidal excitation at 2 Hz on top of my roll command (something like roll_sp = roll_cmd + 20*sin(pi*time)), which allows me to move around freely (roll_cmd) whilst imposing a “frequency” on the quadcopter that lets me “learn” the quadcopter dynamics. You can see that after 6-7 of learning I get quite a good improvement, and in general performance good enough to perform an automated (perhaps not the best) roll loop which I programmed; I feel quite confident with this result.
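A minimal sketch of that excitation injection, assuming a standard 2 Hz sine (the function and constant names are just illustrative, and the sin(2*pi*f*t) form is my assumption; the expression above is written as sin(pi*time)):

```python
import math

EXC_AMPLITUDE = 20.0   # excitation amplitude added to the roll setpoint
EXC_FREQ_HZ = 2.0      # excitation frequency in Hz

def roll_setpoint(roll_cmd: float, t: float) -> float:
    """Pilot roll command plus a sinusoidal excitation used for identification."""
    return roll_cmd + EXC_AMPLITUDE * math.sin(2.0 * math.pi * EXC_FREQ_HZ * t)
```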
The method is based on “learning/adaptation” methods, where a model is fit with input/output data and then a controller is designed for that model. In this case, I used Recursive Least Squares (RLS) to fit a first-order-plus-integrator differential equation. I then use optimal control theory to design a control law for that “learnt” model, which is sometimes also called “adaptation”. In simple terms, the controller can adapt to the actual dynamics of your vehicle. For example, if you added some mass to the vehicle, or something else made its dynamic response change, the control strategy will adapt to your new dynamics and automatically adjust its gains.
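For those curious about the identification step, here is a generic textbook RLS sketch (not the code from the paper), assuming a discrete-time model p[k+1] = a*p[k] + b*u[k] from the virtual moment u to the roll rate p; the attitude is then just the integral of the rate, which gives the first-order-plus-integrator structure:

```python
import numpy as np

class RLSFirstOrder:
    """Recursive least squares for p[k+1] = a*p[k] + b*u[k]."""

    def __init__(self, forgetting: float = 0.995):
        self.theta = np.zeros(2)        # [a, b] parameter estimates
        self.P = np.eye(2) * 1000.0     # large initial covariance = low certainty
        self.lam = forgetting           # forgetting factor, allows slow drift

    def update(self, p_next: float, p: float, u: float) -> np.ndarray:
        phi = np.array([p, u])                                 # regressor
        err = p_next - phi @ self.theta                        # prediction error
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)     # RLS gain
        self.theta = self.theta + k * err                      # parameter update
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta
```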
I attached a picture showing some of the performance measures and flight data. The first graph is just the input/output data from the video, from the rolling virtual moment L (some people call this “mixing”) to the roll rate. The second graph shows how the parameters of the differential equation evolved as the system moved; you can see they converged rather quickly, and I also show the moment when I updated the gains.
This methodology was presented at the UK Automatic Control Council (UKACC) International Control Conference 2018 under the name “Laguerre-based Adaptive MPC for Attitude Stabilization of Quadcopter”.
Hi Oscar. Thanks for posting this. It is certainly a very impressive result.
Your demonstration shows tuning in one axis, presumably with the remaining axes (pitch and yaw) already in tune. Will this method cope with multiple axes being ‘out of tune’?
For example, if pitch were out of tune as well, how much would that affect the outcome of the learning phase?
Thank you for your interest. Yes, the pitch and yaw axes were previously tuned in that video using the same method.
I have tested multiple axes “excited separately” and the method works. What that means is that, as long as your “main” movement is done in a single direction (avoiding, for example, pitching and rolling simultaneously), it will be OK. Theoretically speaking at least, the model that I learn is quite isolated: I focus entirely on the lowest-level “moment-to-rate” dynamics, so the model is valid as long as the cross-coupled nonlinear terms are close to zero. See (http://eprints.whiterose.ac.uk/133221/1/LaguerreMPC.pdf).
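To make the cross-coupling point concrete, the standard rigid-body roll equation (textbook dynamics, not something specific to the paper) is I_x * dp/dt = L + (I_y - I_z) * q * r. When the pitch and yaw rates q and r are small, the gyroscopic term (I_y - I_z)*q*r is negligible and the roll axis reduces to an essentially decoupled moment-to-rate relation, which is what the learnt model represents.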
In my tests it gave me good performance when applying this to each axis sequentially, starting from all of them out of tune, but as you say, it might improve further if you come back to tune them again after a quick initial tuning. So you would do something like “tune roll->pitch->yaw->roll->pitch->yaw…”, etc.
The main advantage that I see in this method is that it gives you freedom to do it whilst flying around as you please, without having to set the copter to altitude hold or something like that. You can activate the learning whilst doing acrobatics and all kinds of maneuvers, and the method will keep learning from it, as long as you are “persistently exciting” it.
Let me see if I can get another video with all axes (pitch/roll/yaw) out of tune.
Yes, it has some protection features. One of them is that you need to be moving, which you can detect simply by analyzing the history of absolute or RMS values of both your rates and your inputs. But the most important part is to make sure not to “update” your control laws unless you have some “certainty” in your parameters, which you can only achieve by persistently exciting the system. You also need to make sure the parameters are inside some bounds, which you can define according to, for example, physical characteristics like the range of plausible inertias, or, more experimentally, according to the range of gains that you want to consider for your control laws.
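As a rough illustration only (the function name and thresholds below are hypothetical placeholders, not my actual implementation), the gating logic amounts to something like:

```python
import numpy as np

# Only push new gains to the controller when:
# (1) the vehicle has actually been moving (enough excitation in rates/inputs),
# (2) the RLS covariance indicates some certainty in the estimates, and
# (3) the estimated parameters fall inside plausible bounds.
# All thresholds are illustrative placeholders.
RATE_RMS_MIN = 0.2      # minimum RMS roll rate over the recent window
INPUT_RMS_MIN = 0.05    # minimum RMS virtual-moment command
COV_TRACE_MAX = 1.0     # "certainty" proxy: covariance trace must be small
PARAM_BOUNDS = {"a": (-1.0, 1.0), "b": (0.01, 10.0)}   # plausible parameter ranges

def safe_to_update(rate_hist, input_hist, P, theta) -> bool:
    moving = (np.sqrt(np.mean(np.square(rate_hist))) > RATE_RMS_MIN and
              np.sqrt(np.mean(np.square(input_hist))) > INPUT_RMS_MIN)
    certain = np.trace(P) < COV_TRACE_MAX
    a, b = theta
    bounded = (PARAM_BOUNDS["a"][0] <= a <= PARAM_BOUNDS["a"][1] and
               PARAM_BOUNDS["b"][0] <= b <= PARAM_BOUNDS["b"][1])
    return moving and certain and bounded
```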