Is there a long-range stereo camera that can be used at altitudes up to 200 meters?

Hello everyone,
If I want to do autonomous flights at varying altitudes, say from 2 meters up to 200 meters, what stereo camera/sensor should I be using?
Is this even possible, or do people use other means at such altitudes? If so, what do they use that works for both low and high altitudes alike, other than GPS of course?
What happens if I, for example, select a ZED 2 stereo camera that is rated only up to 20 meters and use it at higher altitudes? What’s the issue that prevents people from doing that?

I’m a complete newbie in all of this and would greatly appreciate your kind help.
Thanks a lot in advance

The ZED 2 cam uses computer vision with neural networks to detect objects and estimate their distance from the camera and respective trajectories. If you are high up in the air at 200m, and with that large FOV lens, the cam won’t see anything other than just small dots… so I suspect that’s the reason it’s rated for altitudes up to 20m, not because of some inherent hardware limitation related to altitude.
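To put rough numbers on that, here is a quick sketch of how stereo disparity collapses with range. The focal length and baseline below are illustrative assumptions, not actual ZED 2 specifications:

```python
# Sketch: why a short-baseline stereo camera loses depth information at
# altitude. Depth from disparity: Z = f * B / d (f: focal length in pixels,
# B: baseline in meters, d: disparity in pixels). Illustrative assumptions,
# not actual ZED 2 specifications.

def disparity_at(f_px, baseline_m, z_m):
    """Disparity (px) the ground produces when viewed from altitude z_m."""
    return f_px * baseline_m / z_m

f_px = 700.0   # assumed focal length in pixels
B = 0.12       # assumed ~12 cm baseline

for z in (2.0, 20.0, 200.0):
    print(f"altitude {z:6.1f} m -> disparity {disparity_at(f_px, B, z):5.2f} px")

# At 200 m the disparity is well under one pixel, so the ground becomes
# indistinguishable from "infinitely far away" and depth estimation collapses.
```

With these assumed numbers the ground at 2 m produces ~42 px of disparity, but at 200 m only ~0.4 px, which is below what a stereo matcher can reliably resolve.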

To answer your original question – it depends on what you want to accomplish. What do you want the camera to do / help with?

Does it have to be stereo?

@seattleiteFPV, @Mustafa_Gokce
I want to do autonomous navigation. My initial thought was to use VIO with a monocular camera, but after some searching it turns out VIO, especially with monocular cameras, is not robust and is usually used in confined spaces like indoors. Outdoors, and especially when the drone must fly at varying altitudes (depending on how high the obstacles are or what the mission is), it’s not a good idea.
That’s why I looked at commercial drones and noticed they all use stereo cameras. So I thought, if they use stereo cameras and fly at around 20 meters, maybe if I find a camera that works at higher altitudes my problem goes away!
Then I dug deeper, and it seems that if I have access to depth information I can also use that for autonomous navigation. To get depth I need a stereo camera, right? That’s why.
That’s pretty much the gist of it.

That’s an interesting problem, and traditionally it has been implemented using terrain-matching techniques like TERCOM, used in cruise missiles.

I did some lab experiments matching a printed Google Maps mosaic against a larger map preloaded on the companion computer, using OpenCV scripting with a monocular camera. I had some success, but I have not pushed for live tests.
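For anyone curious, the core of that map-matching idea can be sketched as a brute-force template search. A real pipeline would use OpenCV’s `cv2.matchTemplate` or feature matching; this toy pure-Python version (with a synthetic random "map" standing in for real imagery) just shows the principle:

```python
# Toy terrain-matching sketch: locate a small downward camera view inside a
# larger preloaded map by sliding-window sum-of-squared-differences (SSD).
# The random "map" is a stand-in for real imagery; assumptions for illustration.
import random

def locate(view, big_map):
    """Return (row, col) where `view` best matches inside `big_map` (SSD)."""
    vh, vw = len(view), len(view[0])
    mh, mw = len(big_map), len(big_map[0])
    best, best_rc = None, None
    for r in range(mh - vh + 1):
        for c in range(mw - vw + 1):
            ssd = sum((big_map[r + i][c + j] - view[i][j]) ** 2
                      for i in range(vh) for j in range(vw))
            if best is None or ssd < best:
                best, best_rc = ssd, (r, c)
    return best_rc

random.seed(0)
big_map = [[random.random() for _ in range(40)] for _ in range(40)]
view = [row[20:28] for row in big_map[10:18]]  # pretend camera view of the map
print(locate(view, big_map))  # -> (10, 20)
```

In practice you would also need to handle scale (altitude), rotation, and lighting differences, which is where OpenCV’s feature-based matchers come in.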

Here is an interesting paper on new developments from Caltech


Wow! Thanks a lot, I’ll have a look at it. I had never seen or heard of TERCOM anywhere in my searches before.
During my searches, when I was reading about why we can’t use IMUs alone for navigation, I stumbled upon INS (inertial navigation systems) and noticed that nearly all of today’s aircraft, both military and civilian, plus submarines and much else besides, have an INS, and as the name suggests they do use IMUs and the like.
So what’s the difference? It turns out their IMUs are extremely accurate and extremely expensive, like $100K expensive (I could finally understand why they cost so much!). I believe missiles and spacecraft also use those. This is all news to me and I’m really thankful to you for sharing it.

On a side note, I just want to share what I have found: the ZED 2 is (obviously) an RGB stereo camera, and for it to work properly it requires a feature-rich environment!
Shortly after that I found out that there are also IR stereo cameras, which do not need feature-rich environments and work almost everywhere. That was interesting. Now I guess I should also be searching for long-range IR stereo cameras and not just RGB stereo cameras!
I’d appreciate it if anyone knows anything in that regard as well. (I also welcome any other solutions that you deem suitable for my case.)

Thank you all a gazillion times. I really appreciate your kind help.

I appreciate helping those who are eager to discover :wink:

Stereoscopic vision might work if we provide a wide baseline for the cameras; there is a nice project here:

This option could be tested, as the product is available here:

Ohh… btw that could qualify for a 2022 Google Summer of Code project :wink:

Spot on suggestions above. @Rika , I suspect you will find that at 200m above ground a stereo camera won’t help deduce depth info very well, if at all. You either need to be at closer range to your subjects, or you need a camera with a very wide intra-axial distance with the inherent ability to exaggerate dimensionality; imagine for example a drone with a stereo camera that has its two lenses several meters apart, instead of the standard 6cm.

If depth perception at higher altitudes is what you are after, LIDARs might be better suited for the task, though there is a limit on altitude for LIDARs as well due to increased point sparseness as the distance to subjects increases. There’s also a paper out there from Cornell that proposes using 3 cameras with narrow FOV (telephoto lenses) to “zoom in” on the terrain from multiple angles, effectively emulating telephoto stereoscopic vision: “Depth Sensing Beyond LiDAR Range”.
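As a rough illustration of why range matters so much: stereo depth error grows with the square of distance (dZ ≈ Z² · Δd / (f · B)), so the baseline B needed for a given accuracy grows quickly with altitude. The focal length and accuracy targets below are assumptions for illustration:

```python
# Back-of-envelope: baseline needed for usable stereo depth at altitude.
# Depth error grows quadratically with range: dZ ≈ Z**2 * dd / (f * B),
# so the required baseline is B = Z**2 * dd / (f * dZ).
# Illustrative numbers, not any particular camera's specs.

def baseline_for_accuracy(z_m, dz_m, f_px, dd_px=1.0):
    """Baseline (m) so a disparity error of dd_px yields depth error dz_m at range z_m."""
    return z_m ** 2 * dd_px / (f_px * dz_m)

f_px = 700.0  # assumed focal length in pixels
for z, dz in ((20.0, 1.0), (200.0, 5.0)):
    b = baseline_for_accuracy(z, dz, f_px)
    print(f"Z = {z:5.0f} m, want +/-{dz} m depth accuracy -> baseline ~ {b:.2f} m")
```

With these assumptions, ±1 m accuracy at 20 m needs roughly half a meter of baseline, while ±5 m at 200 m needs a baseline on the order of 10 m, which matches the earlier point about lenses several meters apart.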

Good luck.

Thanks a lot, really appreciate it. I’ll look into it :slight_smile:

Thanks a lot, I really appreciate your thorough explanation.
I thought that since we can get metric measurements from a stereo camera, it shouldn’t be that difficult to predict the height and, by extension, come up with a depth map. I mean, I use a better deep model (like what Stereolabs did for their ZED 2 stereo cameras) and I get myself a depth map. So it seems it’s not going to work that way, right?

Thanks for the link as well. I had a quick look at it; it seems two cameras are installed at the front of a car and one behind it. I don’t think that combination is possible on a drone (two cameras facing down and one facing up?!), since the front and the back of the drone don’t seem to see lots of meaningful pictures, unless I install them so that the two cameras face like this

instead of:


because in a car the back camera sees something very different from the front camera (although I have no idea at this moment whether all of this makes sense; I should probably give it a test and see if it truly works, fingers crossed).

I just watched that, and after @seattleiteFPV 's notes concerning wide intra-axial distance, I guess I have an idea :slight_smile: It was really cool.
I also noticed this link. I guess we can do something about the height: use a variable axial distance (different per altitude, larger for higher altitudes) and then a good algorithm for depth estimation.

@ppoirier @seattleiteFPV
Here’s one question though: how much does depth accuracy affect our navigation? I mean, for navigating from point A to point B, what happens if we get some pixels wrong?
Would that mess up the whole navigation, like confusing the quad-rotor about its current location? (To avoid bumping into obstacles we can use a laser rangefinder for the height, so that’s covered, but what about the actual navigation? How much leeway can I expect?)
Thanks a lot in advance

Here’s one question though: how much does depth accuracy affect our navigation? I mean, for navigating from point A to point B, what happens if we get some pixels wrong?

@Rika your question suggests that you might be imagining an application where you have absolutely no GPS. Is that the case?

A simple craft with a GPS + compass + barometer + IMU is more than capable of extensive autonomous navigation. I just finished building one a couple of months ago that is fully scriptable in Python, so I can do arbitrarily crazy autonomous stuff. The drone “knows where it is” as a result of ArduPilot’s EKF, which fuses all of that sensor data, including GPS (for geolocation) and baro (for altitude). Strictly speaking, you don’t need a camera to navigate autonomously from point A to point B.
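To illustrate the fusion idea (this is not ArduPilot’s actual EKF, which estimates a much larger state vector): a toy 1-D Kalman filter blending an IMU-style motion prediction with noisy GPS-style position fixes, using made-up noise parameters:

```python
# Toy 1-D Kalman filter: the sensor-fusion principle behind an EKF.
# Dead reckoning (motion prediction) drifts, GPS fixes are noisy; the filter
# weights them by their uncertainties. All noise values are hypothetical.
import random

def kf_step(x, p, u, z, q=0.1, r=4.0):
    """One predict/update cycle. x: position estimate, p: its variance,
    u: predicted motion this step, z: position measurement."""
    # Predict from the motion model; uncertainty grows by process noise q.
    x, p = x + u, p + q
    # Update with the measurement, weighted by the Kalman gain.
    k = p / (p + r)
    return x + k * (z - x), (1 - k) * p

random.seed(1)
x, p = 0.0, 1.0
true_pos = 0.0
for _ in range(50):
    true_pos += 1.0                          # craft moves 1 m per step
    gps = true_pos + random.gauss(0.0, 2.0)  # noisy GPS fix (2 m std-dev)
    x, p = kf_step(x, p, 1.0, gps)
print(f"true: {true_pos:.1f} m, estimated: {x:.1f} m")
```

Even with 2 m of GPS noise per fix, the fused estimate tracks the true position far more tightly than the raw fixes, which is essentially why a GPS + IMU + baro craft can navigate well without a camera.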

That said, for things like collision detection or path planning in an environment with obstacles, you will need additional sensors, and stereo cams might fit the bill. Are you envisioning a companion computer flying onboard the drone and doing the computational heavy-lifting involved in recognizing objects and proposing possible flight paths? Or is that something that you want the ground control station to do? Both of these routes are feasible, and they will result in very different copter designs.

Thanks a lot. Yes, where GPS is available and stable it will be used, but the problem is that GPS is not always available, or even when it is, not stable enough. So I’m looking for something that can be used regardless of whether GPS is available.
Yes, I’m planning on using a companion computer to do the processing required, to keep things simple and not drown myself in an ocean of technicality (although doing the computation on a ground station and then sending the data to the quad-copter would be a neat idea as well; I don’t know anything about wireless connectivity or how secure the transmission is. I’m planning to use ROS for this, by the way).

Something just struck me concerning the intra-axial distance: would using a higher-resolution camera compensate for a shorter intra-axial distance? I mean, if we use a much higher-resolution sensor, would that not relax the intra-axial distance constraint?
This hit me when I noticed that different stereo cameras seem to have the same intra-axial distance, yet some work at much higher altitudes than others.

That is a good question. I suspect intuitively (but I don’t know for sure) that higher resolution would allow for better or finer-grained distance measurement at higher altitudes or from farther away. There’s a cost factor here as well, since high-resolution sensors are definitely more expensive than lower-resolution ones. :slight_smile:
At the end of the day, whatever solution you employ will be a function F(feasibility, budget, time).
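A quick sketch of the intuition behind that: disparity scales linearly with focal length in pixels, so doubling sensor resolution at the same field of view roughly doubles disparity. It helps, but only linearly, whereas at 200 m you start from well under a pixel. Illustrative numbers again, not real camera specs:

```python
# Sketch: resolution vs. baseline trade-off. Disparity d = f * B / Z, and
# f (in pixels) scales with sensor resolution at a fixed field of view,
# so higher resolution raises disparity linearly. Illustrative numbers only.

def disparity(f_px, baseline_m, z_m):
    return f_px * baseline_m / z_m

B, Z = 0.12, 200.0  # assumed ~12 cm baseline, 200 m altitude
for f_px in (700.0, 1400.0, 2800.0):
    print(f"f = {f_px:6.0f} px -> disparity at {Z:.0f} m: {disparity(f_px, B, Z):.2f} px")

# Even a 4x resolution increase only moves the disparity from ~0.4 px to
# ~1.7 px here, so resolution helps but cannot fully replace baseline.
```

This is consistent with the suspicion above: higher resolution does buy finer-grained depth at range, just not enough on its own to make a 12 cm baseline useful at 200 m.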