Hi, I was wondering if you have any plans to build on the work done in 2019 on ArduPilot & AirSim integration for the 2020 GSoC edition.
It would be really nice to come up with a framework for implementing reinforcement learning algorithms for the control of autonomous drones using OpenAI Gym. I would love to work on this, possibly with inputs from photorealistic images or lidars in different scenarios.
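To make the idea of a Gym-based framework concrete, here is a minimal sketch of what the environment interface could look like. This is only an illustration: the class name, the hover task, and the toy point-mass dynamics are my own placeholders — a real implementation would subclass `gym.Env` and drive AirSim/ArduPilot (e.g. over MAVLink) instead of the toy model used here.

```python
# Sketch of a Gym-style environment for drone control (hypothetical).
# A real version would subclass gym.Env and step the AirSim/ArduPilot
# simulation; here a 1-D point mass stands in for the vehicle dynamics.
import random


class ToyDroneHoverEnv:
    """Hypothetical hover task: keep a 1-D point mass near 10 m altitude."""

    DT = 0.05          # simulation step [s]
    TARGET_ALT = 10.0  # hover setpoint [m]

    def reset(self):
        self.alt = random.uniform(0.0, 5.0)  # initial altitude [m]
        self.vel = 0.0                       # vertical velocity [m/s]
        return self._obs()

    def step(self, thrust):
        # thrust command in [-1, 1] maps to net vertical accel [m/s^2]
        accel = 5.0 * max(-1.0, min(1.0, thrust))
        self.vel += accel * self.DT
        self.alt += self.vel * self.DT
        err = abs(self.alt - self.TARGET_ALT)
        reward = -err          # dense reward: negative distance to setpoint
        done = err > 20.0      # terminate if the vehicle flies away
        return self._obs(), reward, done, {}

    def _obs(self):
        # observation: (altitude error, vertical velocity)
        return (self.alt - self.TARGET_ALT, self.vel)
```

The point is that once observations, actions, and rewards are defined this way, any off-the-shelf RL library that speaks the Gym API can train against the simulator unchanged.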
I have in mind something similar to the open-source CARLA simulator, but for autonomous drones running the ArduPilot firmware.
I know that transferring policies to real drones would be impossible in the short term, but it could be a really interesting new development line to look into for the future: with domain randomization and other techniques, some policy transfer can take place even when training happens entirely in simulation. To be very concrete, the first steps could be just to set up a convincing framework and provide a working example in a predefined environment (a city landscape?).
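As a sketch of what domain randomization could mean here: each training episode would sample a new set of physical parameters, so the policy never overfits to one exact simulated airframe. The parameter names and ranges below are purely illustrative assumptions; real values would come from measurements of the target vehicle plus a sensible spread around them.

```python
# Sketch of per-episode domain randomization (hypothetical parameters).
# The environment would be rebuilt with these values at every reset().
import random


def randomized_params(rng=random):
    """Sample a plausible set of vehicle parameters for one episode."""
    return {
        "mass_kg": rng.uniform(0.9, 1.5),          # ~ +/-25% around nominal
        "drag_coeff": rng.uniform(0.05, 0.20),     # aerodynamic drag
        "motor_lag_s": rng.uniform(0.02, 0.08),    # actuator time constant
        "sensor_noise_std": rng.uniform(0.0, 0.05) # added to observations
    }
```

A policy that stays stable across the whole sampled family has a better chance of also being stable on the one real airframe, which sits somewhere inside those ranges.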
I would also like some thoughts on system identification capabilities for ArduPilot-based systems. The closest thing I have seen is the autotune procedure, but it would be nice to build on it to derive constraints on the dynamic capabilities of a drone when following reference signals (position, velocity, acceleration). I know that a better state estimation system is required, but it could work in conjunction with visual-inertial odometry, AprilTag-based localization, or even a cheap multi-camera system (a low-cost equivalent of a motion capture setup). This would be the starting point for implementing a higher-level controller (e.g. MPC) that could unlock a whole new level of performance for UAVs.
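To illustrate the kind of system identification I mean, here is a minimal sketch that fits a first-order model to a velocity step response and recovers its time constant, which directly bounds how fast the drone can track a velocity reference. The function and the log-linear fitting approach are my own assumptions for illustration; real flight logs would of course need filtering, outlier rejection, and probably a richer model.

```python
# Sketch: estimate the time constant tau of a first-order step response
#   v(t) = v_ss * (1 - exp(-t / tau))
# from logged (time, velocity) samples, via log-linear least squares.
import math


def fit_first_order(ts, vs, v_ss):
    """Return the estimated time constant tau [s] of a first-order model."""
    xs, ys = [], []
    for t, v in zip(ts, vs):
        r = 1.0 - v / v_ss        # normalized residual, decays as exp(-t/tau)
        if r > 1e-6:              # discard saturated samples
            xs.append(t)
            ys.append(math.log(r))  # log residual is linear in t: -t / tau
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -1.0 / slope           # slope of the fit equals -1 / tau
```

An identified tau like this, together with thrust limits, already gives concrete constraints on achievable position/velocity/acceleration references, which is exactly what an MPC layer would need as its model.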
Please let me know your comments on this. I hope I was clear enough, and if these ideas are not crazy I would be happy to elaborate on why I think they would be important and what could be done.