Hey everyone! For this year's Synopsys 2025 project, my team and I are creating an ArduPilot-based UAV swarm for disaster relief and survivor location! All progress will be posted here as well as on our campaign page, which will be linked shortly.
Background:
Disasters, whether natural or man-made, can have devastating impacts on communities, resulting in loss of life, injuries, and widespread damage. In the aftermath of such events, rapid response and effective search and rescue operations are crucial to saving lives and providing necessary relief to affected individuals. Traditional methods of locating survivors often involve extensive ground searches, which can be time-consuming and inefficient, particularly in large or difficult-to-access areas.
In recent years, advancements in unmanned aerial vehicle (UAV) technology have opened up new possibilities for enhancing disaster response efforts. UAVs equipped with advanced cameras, sensors, machine learning systems, real-time communication capabilities, and autonomous navigation systems can significantly improve the speed and accuracy of locating individuals in distress. By leveraging aerial views, these UAVs can cover vast areas quickly, identifying survivors, assessing damage, and guiding emergency response teams to critical locations.
However, the effective deployment of UAVs in disaster scenarios requires careful consideration of various factors, including environmental challenges, regulatory compliance, and integration with existing emergency response frameworks. As such, there is a pressing need for a dedicated project focused on creating a UAV specifically designed to address these challenges.
Purpose:
This project is designed to streamline the process of finding individuals in a disaster-stricken area. This will allow responders to administer care faster, and the drone itself will be able to deliver a small amount of aid. Our project will incorporate machine learning in the form of audio detection and infrared object detection. Incorporating more advanced forms of technology will help save more lives and will raise awareness of how technology can be used to save lives.
Our drone uses an infrared camera to detect humans at night, allowing detection even in pitch darkness. On top of this, our project innovates by running machine learning on the camera feed, specifically object detection. The feed is sent back to the ground station computer running the model, allowing for fast and accurate detection. Our project also applies machine learning to audio: the audio feed is transmitted over radio to the computer, where a second model running in tandem detects human voices. This allows an even tighter sweep for signs of humans. The system uses GPS to accurately pinpoint the location of any humans found. Our project will also incorporate a bracelet that broadcasts its GPS position to help the drones locate survivors.
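As a rough illustration (not our final pipeline), a ground-station detection loop could look something like the sketch below. This assumes the video feed is exposed as an OpenCV-readable stream and uses an off-the-shelf detector via the Ultralytics YOLO package; the stream URL and weights file are placeholders, and a model fine-tuned on infrared imagery would replace the default weights.

```python
# Minimal ground-station sketch: run object detection on the incoming feed.
# Stream URL and model weights are placeholders.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # placeholder; an IR-tuned model would go here
cap = cv2.VideoCapture("udp://127.0.0.1:5600")   # placeholder stream endpoint

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)                       # run detection on the frame
    for box in results[0].boxes:                 # iterate detected bounding boxes
        if model.names[int(box.cls)] == "person":
            print("Possible survivor at pixel box:", box.xyxy.tolist())
cap.release()
```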
Linked below is the final research plan. Please like this topic if you would like to track our progress along this fun journey! Final of SY25-Science Fair.pdf (590.1 KB)
This is just a possible diagram of connections; only the ELRS module will be used.
The Remote ID module is a separate unit with its own hardware.
We will use Cosmostreamer, and we need a DJI air unit since the RunCam Night Eagle HD can only connect to that.
The digital camera is connected to the VTX.
The audio module is built into the analog VTX.
Yes, you’re probably right.
We'll fly over, and the camera might be gimbaled.
We may place multiple signal extenders around the area.
Yes, a Python script running the ML models will have control over the swarm. We will test using one proof-of-concept drone and simulating the rest with DroneKit SITL.
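Something like this minimal sketch (assuming the dronekit-sitl package and its default simulated copter) is what we have in mind for the simulated side:

```python
# Spin up a default DroneKit-SITL copter and connect the control script to it.
import dronekit_sitl
from dronekit import connect, VehicleMode

sitl = dronekit_sitl.start_default()             # launch a simulated copter
vehicle = connect(sitl.connection_string(), wait_ready=True)

print("Firmware:", vehicle.version)
vehicle.mode = VehicleMode("GUIDED")             # the ML script would command it from here

vehicle.close()
sitl.stop()
```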
Well, I believe the PDF is the very first draft of the idea. That's good; it lets us discuss the topic based on a detailed text description.
[OK] ELRS is good for RC control. It also supports MAVLink, so it's simple to switch.
[OK] Remote ID is connected to APM through UART or CAN bus.
[TBD] The Cosmostreamer SDK? I don't know much about it. Anyway, if we are going to use it, then we need to clarify the hardware/software deployment and check whether any integration is needed for extended functions, such as RTP streaming to the ML pipeline for further analysis.
[TBD] I'm SURE it's NOT possible for your project; the built-in microphone on the VTX picks up nothing but prop noise. You can check my video.
[TBD] It's quite a big project to integrate, requiring a lot of resources and tests. I think a full system diagram would make it easier for the team to cooperate.
[OK] Yes, that's it. You will need a gimbal; choose the hardware and connect it to the APM. When a survivor is found, we have to mark a POI, fly over the survivor, and let him/her know that we have found him/her. Then take a closer look to examine the environment and the detailed routes of the area, and send this short video back to the rescue team (see the sketch after this list). If we have the map, we might also suggest a possible rescue route for the rescue team to consider.
[TBD] I think #5 should be considered first.
[OK]
[TBD] I think this can be done by the backend/servers, not the FC, and probably not the companion computer (if you have multiple rally points then it might be possible). That depends on the system diagram of the deployment.
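For point 5, a rough DroneKit sketch of the mark-POI-and-fly-over step might look like this; the connection string and coordinates are placeholders, and it assumes a copter that is already airborne:

```python
# Switch to GUIDED and fly toward a marked point of interest (POI).
from dronekit import connect, VehicleMode, LocationGlobalRelative

vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)  # placeholder endpoint

def fly_over_poi(lat, lon, alt_m=15):
    """Head toward a detected survivor location at a low recon altitude."""
    vehicle.mode = VehicleMode("GUIDED")
    vehicle.simple_goto(LocationGlobalRelative(lat, lon, alt_m))

fly_over_poi(37.4275, -122.1697)  # placeholder survivor coordinates
```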
So I think it might be better to divide the sweeping or recon mission into several parts:
Sweep the disaster area and pick up survivor signals: thermal signatures, flashlights, SOS signs, etc.
Recon the POI area at low altitude, picking up detailed info and verifying the objective through sensor data and algorithms.
Auto-plan the rescue mission and generate an executive checklist for it.
Alert the admin to dispatch tasks to different rescue personnel and provide a live feed of the objective and surrounding area.
Furthermore:
Good hits and missed POIs should be further analyzed and fed back into the AI algorithm to update the layer parameters or the model itself.
Battery estimation and charging routes should be considered.
Essential/basic rescue kits could be delivered by drone, such as a bottle of water, a medical care kit, etc.
Well, I have NOT taken a lot of time to think this through clearly. But it's a really good project if you have the funds to do it.
Yeah, for number 5, machine learning models have ways to segment out this drone noise, and if it's too much we can consider using different types of propellers. Right now we're thinking 2-bladed props, as those will produce less noise and rushing air than 3-bladed props.
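As a rough example of what we mean by segmenting out drone noise, a spectral noise-reduction pass could look like the sketch below. This assumes the noisereduce package, a mono recording, and a bench recording of prop-only noise as the profile; the filenames are placeholders.

```python
# Suppress prop noise in the audio feed before running voice detection.
from scipy.io import wavfile
import noisereduce as nr

rate, audio = wavfile.read("drone_feed.wav")         # placeholder in-flight recording
_, prop_noise = wavfile.read("prop_noise_only.wav")  # noise profile captured on the bench

cleaned = nr.reduce_noise(y=audio, sr=rate, y_noise=prop_noise)
wavfile.write("cleaned_feed.wav", rate, cleaned.astype(audio.dtype))  # feed to the voice model
```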
For number 4, Cosmostreamer is firmware that runs on a Raspberry Pi. The Pi connects to the DJI goggles, and from there video can be wirelessly transmitted to a machine.
For 10, the Python script is standalone, running with the DroneKit SDK and the machine learning models. It should be able to augment the route while connected to multiple drones through MAVProxy. Augmentation wouldn't be very resource-intensive, as it's just a matter of coordinate math.
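As a sketch of that idea (assuming MAVProxy forwards each drone's MAVLink stream on its own UDP port; the ports and offsets below are placeholders):

```python
# Connect to multiple drones and "augment" a route with flat-earth coordinate math.
import math
from dronekit import connect, LocationGlobalRelative

endpoints = ["udp:127.0.0.1:14550", "udp:127.0.0.1:14560"]  # placeholder MAVProxy outputs
vehicles = [connect(ep, wait_ready=True) for ep in endpoints]

def offset(base, east_m, north_m):
    """Shift a waypoint by metres using a flat-earth approximation."""
    dlat = north_m / 111320.0
    dlon = east_m / (111320.0 * math.cos(math.radians(base.lat)))
    return LocationGlobalRelative(base.lat + dlat, base.lon + dlon, base.alt)

# e.g. stagger the swarm 30 m apart along an east-west sweep line
targets = [offset(vehicles[0].location.global_relative_frame, 30 * i, 0)
           for i in range(len(vehicles))]
```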
Yes, the essential kit is an idea we have if we have enough time to complete it. Since the project has a hard deadline of March 14, we might not get to it.
I'm NOT sure that, when flying high above the ground and quickly sweeping the area, an SOS voice would be loud enough to pick up even after filtering. Well, it depends on the test data.
Yes, I saw this in a YouTube video; latency is around a few hundred milliseconds. I think that's OK for this project.
Sorry for the trouble. If we use wfb-ng, we can set up multiple Wi-Fi adapters (antennas), and then the video just streams to the PC (much more powerful). Haha, I like it.
Yeah, it needs quite a lot of resources. But did you think about dividing the project into 2 or 3 phases to complete the different functions?
such as: (well, it’s just a … you know)
Phase A: System bring-up; sweep the area and pick up survivor signals.
Phase B: bla…bla…
Hmm, yeah. Right now it's just a tentative research plan, so it will be split up later on. For the audio, I see what you mean by that; we can probably repurpose it IF it doesn't work, so that the audio is used to communicate with people in the disaster area (how many, any injuries) while the drone isn't armed.
I think there are quite a few ready-to-go systems already.
And the only difference for this project is the algorithm for object detection and data analysis. So you don't have to bother with other kinds of stuff: tuning a copter, buying a gimbaled camera, debugging the data link channel (open source takes quite a lot of time when it's stuck), etc.
I'm doing something similar with a fixed wing, and maybe you can get some inspiration. I'm building a Mini Talon that carries 2 cameras: 1 FPV camera and a gimbaled one with zoom. Everything is made to be as cheap as possible while maintaining a level of performance. Here is a quick summary of the interesting parts:
Gimbal: Storm32, self-designed, 3D printed, retractable, able to do continuous 360º rotation.
Gimbal Camera: Cheap Chinese 30x zoom IP camera taken from a security camera.
FPV Camera: Arducam 64MP Hawkeye (will probably change it for another one).
Video and telemetry transmission: OpenHD.
RC: ELRS
Battery: DIY Li-ion 12000mAh
Ground base with directional antenna in a tracker.
Still in the building process, but I have tested the whole electrical system, the gimbal, and the video/telemetry transmission.