RealSense D435i pose

Hello all! I’m working on building an autonomous drone with a SpeedyBee v4 stack running the following:

https://firmware.ardupilot.org/Copter/stable-4.4.4/speedybeef4v3/

I have a Pi Zero 2 connected over UART (Serial 2), configured for MAVLink commands.

I plan to use an Intel RealSense D435i and an RPLidar, but no GPS (since it’s intended to fly only indoors).

I know there are a bunch of documents outlining how to get pose from a T265. However, that’s cost-prohibitive for me and my college design team. Any chance there’s still a way to use this sensor set effectively, either through VISION_SPEED_ESTIMATE or OPTICAL_FLOW MAVLink messages?
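For reference, here’s roughly how I imagined feeding VISION_SPEED_ESTIMATE from the Pi over the UART link with pymavlink. This is an untested sketch: the serial port, baud rate, update rate and the velocity source are placeholders, and the EKF/vision parameters would still need to be set up on the ArduPilot side.

```python
# Rough sketch (not tested on this setup): sending VISION_SPEED_ESTIMATE to the
# flight controller with pymavlink over the Pi's UART. Port, baud rate, and the
# source of vx/vy/vz are placeholders to replace with the real pipeline.
import time
from pymavlink import mavutil

# Assumed serial port/baud for the Pi Zero 2 <-> flight controller UART link
master = mavutil.mavlink_connection('/dev/serial0', baud=921600)
master.wait_heartbeat()

def send_velocity(vx, vy, vz):
    """Velocity estimate in m/s (check ArduPilot's visual-odometry docs for the expected frame)."""
    usec = int(time.time() * 1e6)  # timestamp in microseconds
    master.mav.vision_speed_estimate_send(usec, vx, vy, vz)

# Example: report 0.2 m/s forward motion at ~30 Hz
while True:
    send_velocity(0.2, 0.0, 0.0)
    time.sleep(1 / 30)
```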

@Leonardthall directed me toward @rishabsingh3003 . If you have ideas I’m open to them.


Something I should also note is that our D435i can rotate on the same axis as the drone’s pitch, to look forward or down. Our main use case is to keep it facing forward. We also have a great optical flow demo, which can use the depth sensor to get a sense of depth change as well as 2D optical flow. I don’t know if we could use OpenCV to solve this as a PnP problem, but I figured I’d drop it here for context.
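Roughly what I mean by solving it as PnP, as an untested sketch: track features between two RGB frames, back-project the first frame’s features into 3D with the D435i depth, then run solvePnPRansac against their positions in the second frame to get the frame-to-frame camera motion. The intrinsics below are placeholders; the real ones come from the RealSense SDK.

```python
import cv2
import numpy as np

fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0          # assumed intrinsics
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)

def frame_to_frame_motion(gray0, gray1, depth0_m):
    # Detect corners in the first frame and track them into the second
    p0 = cv2.goodFeaturesToTrack(gray0, maxCorners=300, qualityLevel=0.01, minDistance=8)
    if p0 is None:
        return None
    p1, status, _ = cv2.calcOpticalFlowPyrLK(gray0, gray1, p0, None)
    good = status.ravel() == 1
    p0, p1 = p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)

    # Back-project tracked points from frame 0 into 3D using the depth image (metres)
    pts3d, pts2d = [], []
    for (u0, v0), (u1, v1) in zip(p0, p1):
        z = depth0_m[int(v0), int(u0)]
        if z <= 0.1 or z > 8.0:           # discard invalid or far depth
            continue
        pts3d.append([(u0 - cx) * z / fx, (v0 - cy) * z / fy, z])
        pts2d.append([u1, v1])
    if len(pts3d) < 6:
        return None

    # PnP: pose of the frame-1 camera relative to the frame-0 points
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.array(pts3d, dtype=np.float64), np.array(pts2d, dtype=np.float64), K, None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec      # integrating these per frame gives a (drifting) odometry
```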

Off the top of my head (it’s not really my domain, but I’ve worked with the creator of RTAB-Map on various projects), the D435i is only a depth camera and won’t publish pose messages. Your onboard computer (RPi) will have to perform VO (visual odometry), VIO (visual-inertial odometry) or SLAM, and can then relay the pose information to ArduPilot.

We used to work within a ROS framework; using the MAVROS package, you can get MAVLink communication between ArduPilot and ROS.
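A rough sketch of that route, assuming ROS 1 with rospy, a working MAVROS connection to the flight controller, and a VIO/SLAM node producing poses (the source topic here is just a placeholder, and frame conversions would need handling upstream):

```python
# Relay an external VIO/SLAM pose to MAVROS, which forwards it to ArduPilot
# as a vision position estimate via the /mavros/vision_pose/pose topic.
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node('vision_pose_relay')
pub = rospy.Publisher('/mavros/vision_pose/pose', PoseStamped, queue_size=10)

def vio_pose_callback(msg):
    # Re-stamp and forward the pose; ENU/NED and camera/body frame conventions
    # still need to be handled before this point.
    msg.header.stamp = rospy.Time.now()
    pub.publish(msg)

# '/vio/pose' is a placeholder for whatever topic your VIO/SLAM node publishes
rospy.Subscriber('/vio/pose', PoseStamped, vio_pose_callback)
rospy.spin()
```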

You’ll have to choose a VO, VIO or SLAM algorithm that is light enough to run on a Raspberry Pi. You can find articles comparing the accuracy and latency of different algorithms on different hardware (RPi included). You’ll also have to account for the fact that the camera moves relative to your flight controller, by applying a transform (translation + rotation). You can either measure the servo angle with something like a potentiometer, or take the difference between ArduPilot’s estimated attitude and the camera’s estimated attitude, since the D435i has an integrated IMU (rough sketch below).

Sidenote: I’m not sure VO and VIO are the right terms when the algorithm is aided by depth; there might be another name for depth odometry.
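To make the camera-to-body transform concrete, here is a rough sketch using the servo-angle approach. The lever arm, the axis convention and the angle sign are assumptions to verify on the actual airframe, and the camera frame is treated as x-forward like the body frame for simplicity:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

CAM_OFFSET_BODY = np.array([0.08, 0.0, 0.02])   # camera position in body frame, metres (assumed)

def point_cam_to_body(p_cam, servo_pitch_deg):
    """Transform a point measured in the camera frame into the body frame,
    given how far the camera is pitched down from forward-facing."""
    rot = R.from_euler('y', servo_pitch_deg, degrees=True)   # pitch about body Y (assumed sign)
    return rot.apply(p_cam) + CAM_OFFSET_BODY

# Example: camera pitched 45 deg down, a point 1 m straight ahead of the lens
print(point_cam_to_body(np.array([1.0, 0.0, 0.0]), 45.0))
```

For velocities you would only apply the rotation, not the lever-arm offset.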

What if I shifted my implementation to use a downward-facing optical flow sensor with a rangefinder? I’ve seen some inexpensive ones online. Could I avoid computing pose with SLAM and use that instead to have the drone hold position?

With an optical flow + rangefinder setup, you can definitely do indoor loiter/position hold, all within ArduPilot, without requiring external computation. It will fly very well manually and hold its position decently. Issues arise when doing automated flights. The optical flow + rangefinder setup doesn’t provide ArduPilot with a position, only a velocity; ArduPilot integrates this velocity to obtain a position, so it is likely to drift over time. For example, when telling your drone to move forward 5 meters, the drone might move 4.8 meters, and this drift will accumulate through the flight.
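As a toy illustration of that drift (the numbers are made up): if the flow over-reads velocity by a few percent, the EKF thinks it has covered 5 m before the drone actually has.

```python
dt = 0.02                    # 50 Hz flow/EKF updates
true_speed = 1.0             # m/s actual forward speed
flow_scale = 1.04            # flow over-reads velocity by ~4% (assumed)

est_position = 0.0
true_position = 0.0
while est_position < 5.0:                 # stop when the estimate says 5 m
    est_position += true_speed * flow_scale * dt
    true_position += true_speed * dt

print(f"estimated: {est_position:.2f} m, actually travelled: {true_position:.2f} m")
# prints roughly: estimated: 5.00 m, actually travelled: 4.81 m
```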

If any dev can correct me on the above statement, please do.

The ARK Flow is probably the best optical flow setup for RC, and here is an example of its performance: https://www.youtube.com/watch?v=SAbRe1fi7bU. So I might be wrong when saying it accumulates drift, because it is holding its position pretty well.

VO and VIO also have a drift issue, though newer algorithms help reduce it. The only way to perform true position flight indoors (without external equipment like Vicon or UWB) is SLAM: the algorithm maps out the environment and then localizes the drone within that map, in real time. But SLAM is more CPU-intensive.