Teach and Learn mode: a new approach for ArduPilot

I think a new mode would be useful in which the autopilot would:

  • detect the radio inputs from the user
  • save them
  • replay the same inputs later within a Mission

The reason is that planes and copters can achieve complicated tasks when flown by a human pilot that are practically impossible to execute with the autopilot without very advanced programming skills.

Let's consider an Immelmann turn.

An average human pilot could do it easily, but it is quite complicated with ArduPlane.

Would it be useful?
I think so, very much, since it could solve complicated problems for people who have no programming skills.
Think about the problem of bird attacks during a Mission: at present there are no escape strategies to disorient the bird, and writing code that would fit all planes, from small to large, would be complicated and ultimately a waste of time, while with a "teach and learn" mode it could be done easily.

I think this new approach could be a big step for ArduPilot, allowing user customization of tasks.

What you are suggesting would be very cool, but may be harder than you anticipate. I’d be glad to elaborate if you’d like.

On a related note, this idea was the subject of a very famous machine learning paper about learning helicopter maneuvers. Here’s a very cool overview video: https://www.youtube.com/watch?v=M-QUkgk3HyE
And the link: http://www.andrewng.org/portfolio/apprenticeship-learning-for-helicopter-control/

I think the best analogue to this is the current implementation of the "SmartRTL" function, which records "points", except in this case you'd want it to record inputs or movements. One issue I think you'll run into, which is also discussed elsewhere for SmartRTL, is how much space you have for those actions to be saved. Each point takes up space, so the number of saved points is limited (recorded at 3 Hz, if the distance from the last point is at least 10 meters), and it does "pruning" to clean up points that are too close together.
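To make the space trade-off concrete, here is a minimal sketch of a bounded point buffer with distance-based skipping. All names are hypothetical, and the drop-oldest policy is simpler than SmartRTL's actual pairwise pruning:

```python
import math

class PathRecorder:
    """Records 3D points, skipping any point closer than min_spacing_m
    to the last saved one, and dropping the oldest point when the
    buffer is full (a simplified stand-in for SmartRTL-style pruning)."""

    def __init__(self, max_points=300, min_spacing_m=10.0):
        self.max_points = max_points
        self.min_spacing_m = min_spacing_m
        self.points = []  # list of (x, y, z) in metres

    def record(self, x, y, z):
        if self.points:
            if math.dist((x, y, z), self.points[-1]) < self.min_spacing_m:
                return False  # too close to the last saved point: skip
        if len(self.points) >= self.max_points:
            self.points.pop(0)  # buffer full: drop the oldest point
        self.points.append((x, y, z))
        return True
```

The key point is that both knobs (buffer size and minimum spacing) directly cap memory use, which is exactly why SmartRTL can get away with a small, fixed point budget.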

Your actions would be saved in real time (300 Hz? for the Pixhawk, right?), which might be the level of accuracy you're looking for when trying to "record" an Immelmann turn. That's a lot of data points! And you're not saving static lat/long/alt values; you're saving accelerations and velocities (or RCIN/RCOUT values, which would NOT be reliable), and I assume those are double-precision floats. That's a lot of data!
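A quick back-of-envelope estimate shows the scale. The numbers below are illustrative assumptions, not exact ArduPilot figures (actual loop rates, field counts, and float widths vary by board and firmware):

```python
# Rough storage estimate for recording flight state at loop rate.
# All constants are assumed, illustrative values.
LOOP_HZ = 400          # main-loop rate on many Pixhawk-class boards
FIELDS = 6             # e.g. 3 accelerations + 3 velocities
BYTES_PER_FIELD = 8    # double-precision float
DURATION_S = 30        # length of one recorded manoeuvre

bytes_needed = LOOP_HZ * FIELDS * BYTES_PER_FIELD * DURATION_S
print(f"{bytes_needed} bytes (~{bytes_needed / 1024:.1f} KiB) "
      f"for a {DURATION_S} s manoeuvre")
```

Even a 30-second manoeuvre at those rates lands in the hundreds of kilobytes, which is a lot next to SmartRTL's small fixed point budget.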

With enough space and processing power anything is possible, of course! If RCIN/RCOUT can be made reliable enough, then perhaps scripting on the transmitter?

Nice idea; a 'learning' SmartShot could possibly be used to implement this.


A friend of mine told me that some old Futaba radios had a function to record stick inputs and then replay them.

A possible solution could be an external microcontroller on which previously recorded PWM outputs are stored, and which sends a PPM signal on a radio switch to stand in for the pilot's inputs to the autopilot.
It could easily be triggered by a Mission Planner output command, and it would have the advantage of being an external solution that could quickly verify whether the concept works.
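The external-micro idea boils down to a sample-and-replay loop. This is a hedged sketch of that logic (function names and the 50 Hz sample rate, roughly one sample per standard ~20 ms RC frame, are assumptions; on a real micro the reads and writes would be hardware PWM/PPM routines):

```python
import time

def record_pwm(read_channel_us, duration_s=5.0, sample_hz=50):
    """Sample the pilot's PWM input (pulse width in microseconds) at a
    fixed rate. read_channel_us stands for whatever function reads the
    RC input on the microcontroller (hypothetical)."""
    samples = []
    period = 1.0 / sample_hz
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        samples.append(read_channel_us())
        time.sleep(period)
    return samples

def replay_pwm(samples, write_channel_us, sample_hz=50):
    """Feed the stored samples back out at the same rate, so the
    autopilot sees them as live stick inputs."""
    period = 1.0 / sample_hz
    for pwm in samples:
        write_channel_us(pwm)
        time.sleep(period)
```

Because record and replay run at the same fixed rate, the timing of the original manoeuvre is preserved, which is the whole point of the trick.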

If it worked for Futaba years ago, allowing 3D figures with a plane, I think the RCIN data alone should be enough.

Given same weather conditions, battery state, and motor efficiency, I would agree.

I will ask a friend if he can help me set up the external micro.

I believe that the native Lua scripting feature that ArduPilot has in development might be handy for achieving this kind of output.

That's kinda what I was thinking too. Lua is powerful and can even provide event-driven scripting, versus just record and playback.

An extremely useful function in an automated test tool I used to use for avionics testing was that I could execute a manual test while recording, and it would spit out a Lua script at the end. The script couldn't be run directly, but it saved hand-jamming in a bunch of synchronous steps.

What I like about this idea is that you could exchange "manoeuvres" with someone else who needs to learn, and they could watch what the right moves are supposed to be.