Google Summer of Code 2025 Proposal

Project Title: AI Chat WebTool for use with MP and/or QGC
Organization: ArduPilot
Applicant: Ervisa Koca
GitHub: ErvisaS (Ervisa Sulmina)
Email: erwisasulmina@gmail.com
Time Zone: GMT+3 (Istanbul, Türkiye)


About Me

Hello! I’m Ervisa Koca, a computer engineer and software developer based in Istanbul. I’ve worked on a range of AI and machine learning projects—from CNNs for traffic sign classification to building fuzzy logic systems for evaluating water quality. I’ve also explored natural language processing with LSTM models to convert plain English into SQL queries.

Currently, I work as a freelance developer focused on Microsoft Dynamics 365 and automation tools using Power Platform. I’ve also interned at Uyumsoft and Project Flux, gaining hands-on experience with Python, C#, SQL, and backend systems. I enjoy building systems that make complex technology feel simple, and I see that same spirit in this project.


Why GSoC and ArduPilot?

I’ve been following GSoC for years and always admired how it connects open-source communities with young developers. Now that I’ve graduated, I want to use my time to contribute to a real-world project that pushes the boundaries of what’s possible with automation and AI. GSoC is the perfect space to grow, be mentored, and give back.

ArduPilot stood out to me because it blends robotics, aviation, and open-source culture. What makes it even more exciting is how open and beginner-friendly it is while still being deeply technical. This project—building an AI assistant to simplify drone operations—is a perfect example of using modern tools to make advanced tech more accessible. That’s exactly the kind of work I want to be doing.


About the Project

This project aims to build a web-based AI chatbot that allows pilots to control their ArduPilot drone using natural language—either spoken or written. The goal is to translate user commands like “take off and fly 10 meters north” into MAVLink commands that get sent through tools like Mission Planner or QGroundControl.

The assistant should be able to respond to basic questions, arm the vehicle, change flight modes, trigger a landing, and carry out short-distance flight maneuvers. This makes it easier for both new users and experienced pilots to operate their drones more intuitively. It also opens doors for voice-driven or hands-free flight control in special cases.
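To give a concrete picture of the translation involved, a command like “take off and fly 10 meters north” could first be parsed into a structured intent before any MAVLink traffic is generated. The structure below is only a draft schema of my own for this proposal, not an existing ArduPilot or MAVLink format:

```python
# Hypothetical intent produced by the language model for
# "take off and fly 10 meters north"; field names are my own draft,
# not an existing ArduPilot or MAVLink schema.
intent = {
    "action": "takeoff_and_move",
    "takeoff_altitude_m": 5,       # default altitude if the user gives none
    "offset_north_m": 10,          # positive = north of current position
    "offset_east_m": 0,
    "requires_confirmation": True, # ask the pilot before sending anything
}
```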


My Approach

First, I will study how the existing MAVProxy AI chat module works and how commands are sent over MAVLink. I’ll also explore how current WebTools in the ArduPilot ecosystem are structured and connected to SITL simulations.

Then, I’ll begin building a simple web interface for the chatbot using HTML, JavaScript, and potentially a light React framework. This interface will connect to an AI service like OpenAI’s GPT or Google Gemini, which will process the user’s input and return a structured intent.
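To make the intent step concrete, here is a minimal sketch of how the backend could ask an LLM for a structured intent. It assumes the OpenAI Python SDK and an API key in the environment; the model name, system prompt, and the extract_intent helper are placeholders of mine, and a Gemini client could be swapped in the same way:

```python
# Sketch of the intent-extraction step, assuming the OpenAI Python SDK
# (openai >= 1.0) and OPENAI_API_KEY set in the environment; the model
# name and prompt are placeholders I would refine with my mentor.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You translate a drone pilot's request into a JSON intent with the "
    "fields 'action' and 'parameters'. Reply with JSON only."
)

def extract_intent(user_text: str) -> str:
    """Send the pilot's text to the model and return the raw JSON intent."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content
```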

From there, I will map the intent to one or more MAVLink commands—such as arming the drone, switching modes, or adjusting altitude—and send those through the appropriate channel to the vehicle, either simulated in SITL or later in real hardware.
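As a rough illustration of that mapping step, the sketch below uses pymavlink against a local SITL instance. The intent fields, the execute_intent name, and the UDP endpoint are assumptions of mine for this proposal, not existing project code:

```python
# Minimal sketch of mapping a parsed intent onto MAVLink with pymavlink,
# pointed at a local SITL instance; intent fields and the UDP port are
# assumptions for illustration only.
from pymavlink import mavutil

def execute_intent(intent: dict) -> None:
    master = mavutil.mavlink_connection("udpin:127.0.0.1:14550")
    master.wait_heartbeat()  # wait until the (simulated) vehicle is talking to us

    if intent["action"] == "arm":
        master.arducopter_arm()
    elif intent["action"] == "set_mode":
        master.set_mode(intent["mode"])  # e.g. "GUIDED", "LAND"
    elif intent["action"] == "takeoff":
        master.set_mode("GUIDED")
        master.arducopter_arm()
        master.mav.command_long_send(
            master.target_system, master.target_component,
            mavutil.mavlink.MAV_CMD_NAV_TAKEOFF,
            0,                              # confirmation
            0, 0, 0, 0, 0, 0,               # params 1-6 unused here
            intent.get("altitude_m", 5),    # param 7 = takeoff altitude (m)
        )
```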

All of this will be tested thoroughly in the SITL environment. If time permits, I’d like to explore adding voice input using browser APIs to make the assistant even more interactive.


Technologies I Plan to Use

For the frontend, I’ll use JavaScript, Bootstrap, and optionally React to create a clean and responsive UI. The backend will be built in either Flask (Python) or Node.js to serve API calls and manage interaction with the AI API. For communication with the drone, I’ll use MAVLink libraries and WebSocket or serial interfaces, depending on the connection method. For testing, ArduPilot’s SITL will be the core simulation tool. I’ve already started exploring it and feel confident in building against it.
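If I go with Flask for the backend, the glue between the chat UI, the AI service, and the MAVLink layer could look roughly like the sketch below. The /chat route and the assistant module holding the extract_intent and execute_intent helpers sketched earlier are hypothetical names of mine, not existing ArduPilot code:

```python
# Rough sketch of the backend glue, assuming Flask; "assistant" is a
# hypothetical module collecting the extract_intent / execute_intent
# sketches shown earlier in this proposal.
import json
from flask import Flask, jsonify, request

from assistant import execute_intent, extract_intent  # hypothetical module

app = Flask(__name__)

@app.route("/chat", methods=["POST"])
def chat():
    user_text = request.get_json()["message"]
    intent = json.loads(extract_intent(user_text))  # LLM call
    execute_intent(intent)                          # MAVLink call (SITL first)
    return jsonify({"status": "ok", "intent": intent})

if __name__ == "__main__":
    app.run(port=5000, debug=True)
```

A WebSocket channel could later replace the plain HTTP route so the UI can stream vehicle status back to the pilot, but the overall division of responsibilities would stay the same.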


Planned Timeline

Before Coding Begins (Community Bonding period), I will set up the SITL environment, explore how MAVLink commands are issued from web tools, and finalize the architecture for the assistant. I’ll use this time to discuss details with my mentor and ensure the design is realistic and modular.

In the first coding phase (June 17 – July 15), I’ll focus on building the chatbot interface and integrating it with OpenAI or Gemini. I’ll also create the intent-to-MAVLink mapping logic, starting with a few core actions like takeoff, landing, and switching modes.

In the second phase (July 16 – August 15), I’ll expand the assistant’s vocabulary and test it against various edge cases using SITL. I’ll also begin adding basic voice support and providing feedback messages to users based on the drone’s state.

In the final phase (August 16 – September 16), I’ll polish the UI, write comprehensive documentation, fix any remaining bugs, and prepare for final evaluation. I will also submit a usage demo video and help onboard future contributors if they want to improve the tool further.


After GSoC

I would love to continue working with ArduPilot beyond GSoC. This AI assistant can be expanded to support mission planning, sensor configuration, and education scenarios. I’d also be happy to contribute to related projects like integrating the chatbot across other ArduPilot web tools or simplifying the UX of setup/configuration tools.


Availability

I am fully available to work 35–40 hours per week during the GSoC timeline. Since I’ve already graduated, I don’t have any conflicting academic obligations. I also work as a freelancer, so I have full control over my schedule and can prioritize this project throughout the summer.
