Introduction
Hello, I am Aditya Omar from Kanpur, India. I am delighted to share that I have been selected as a contributor in GSoC 2024 with ArduPilot, with @rmackay9 and @MichelleRos as my mentors. I am working on the MAVProxy AI Chat Enhancements project this year. In this blog I will be discussing my project in detail.
The MAVProxy chat module enables users to control their ArduPilot vehicle with natural-language prompts. At the back end it currently uses OpenAI's Assistants API; the longer-term goal is to shift to local LLMs, so that the vehicle can be controlled by small models running on a GCS (Ground Control Station) or companion computer.
Randy has written a detailed blog post on the chat module's architecture and how it works.
You can also refer to this video for a detailed demonstration of the chat module.
The project is divided into multiple sub-tasks:
1. Adding a cancel feature for prompts/runs
This feature allows users to cancel a prompt (run) that has already been sent to OpenAI's server.
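As a rough sketch (not the module's exact code), cancelling an in-flight run with the OpenAI Python SDK looks something like the following; `thread_id` and `run_id` are assumed to have been saved when the run was started:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def cancel_active_run(thread_id, run_id):
    """Ask OpenAI to cancel a run that is still in progress (illustrative sketch)."""
    run = client.beta.threads.runs.cancel(thread_id=thread_id, run_id=run_id)
    # the run moves to "cancelling" and eventually "cancelled"
    return run.status
```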
2. Chat Streaming Support
This was the most awaited feature: it consumes the server-sent events from OpenAI and streams chunks of text to the chat window as they arrive.
This reduces the perceived latency and response wait time from OpenAI's server.
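For illustration, the OpenAI Python SDK provides a streaming helper for Assistant runs. A minimal sketch, assuming `thread_id` and `assistant_id` are already set up and `append_to_chat_window` is a hypothetical callback that writes text into the chat window, might look like:

```python
from openai import OpenAI

client = OpenAI()

def stream_reply(thread_id, assistant_id, append_to_chat_window):
    """Stream the assistant's reply chunk by chunk instead of waiting for the full text."""
    with client.beta.threads.runs.stream(
        thread_id=thread_id,
        assistant_id=assistant_id,
    ) as stream:
        for delta in stream.text_deltas:
            append_to_chat_window(delta)  # show each text chunk as soon as it arrives
```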
3. Audio recording enhancements
Users can also send prompts with voice commands, but the recording is currently hard-coded to only 3 seconds. This enhancement will make voice commands work more smoothly and flexibly, as sketched below.
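As a purely hypothetical sketch of what a smoother recorder could look like (this is not the module's current implementation), the recording could stop after a short stretch of silence instead of a fixed 3 seconds. The threshold and timings below are made-up values:

```python
import pyaudio
import numpy as np

RATE = 16000                               # sample rate assumed by the speech-to-text backend
CHUNK = 1024                               # frames read per iteration
SILENCE_THRESHOLD = 500                    # hypothetical RMS level treated as silence
SILENCE_CHUNKS = int(1.0 * RATE / CHUNK)   # ~1 second of silence ends the recording
MAX_CHUNKS = int(30 * RATE / CHUNK)        # hard safety limit of 30 seconds

def record_until_silence():
    """Record from the default microphone until ~1s of silence (illustrative sketch)."""
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                     input=True, frames_per_buffer=CHUNK)
    frames, silent = [], 0
    for _ in range(MAX_CHUNKS):
        data = stream.read(CHUNK)
        frames.append(data)
        samples = np.frombuffer(data, dtype=np.int16).astype(np.float64)
        rms = np.sqrt(np.mean(samples ** 2))
        silent = silent + 1 if rms < SILENCE_THRESHOLD else 0
        if silent >= SILENCE_CHUNKS:
            break
    stream.stop_stream()
    stream.close()
    pa.terminate()
    return b"".join(frames)
```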
4. Enhancements of prompts (experimental)
At times the AI assistant fails to understand a prompt correctly and behaves in unexpected ways.
This sub-task covers improving the prompts and vector data (instructions) through prompt engineering. It mainly involves experimenting with instructions and prompts to get the desired behaviour from the vehicle.
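As a purely hypothetical illustration of the kind of tuning involved (not the actual instruction set), an ambiguous instruction can be tightened so the assistant asks rather than guesses:

```text
Before: "Take off when the user asks."
After:  "If the user asks to take off without giving an altitude,
         ask for the altitude in meters instead of assuming a default."
```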
5. Enabling support for local LLMs
Presently we are using OpenAI's Assistants API, but the long-term goal is to shift towards locally running LLMs. At this point this involves several hurdles, such as:
→ Heavy hardware requirements for experiments and training (if needed)
→ The lack of a single, reliable open-source architecture to build on
The goal for this sub-task is to try and experiment with Ollama.
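As a first experiment, a prompt can be sent to a locally running Ollama server over its REST API. This sketch assumes Ollama is serving on its default port and that a model such as `llama3` has already been pulled with `ollama pull llama3` (the model name is just an example):

```python
import requests

def ask_local_llm(prompt, model="llama3"):
    """Send one prompt to a local Ollama server and return the reply text (illustrative sketch)."""
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

print(ask_local_llm("takeoff to 10 meters"))
```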
Thanks for reading the blog so far. I would appreciate your feedback and suggestions for the module.