It does look promising, and I hope we'll soon be able to borrow some great ideas from @Voidminded's work for the hardware. I think he is exploring some of the onboard computational resources for this project too.
The video does look great. I think in that case they are using a 2D occupancy map, and they might have projected the depth image to laser scan data, which is considerably less expensive to fuse into the occupancy map. I also suspect the LSD SLAM demo might be running on the ground station; still, if we manage to do it onboard like that, ours would look quite awesome too.
I ran some tests on the Odroid today and octomap cloud insertion took ~0.4s, so I guess we can run it at 2Hz for now. I'll also look into how to parallelize point cloud insertion in octomap, which should improve the performance further.
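For reference, here is a minimal sketch of the kind of insertion step I'm timing (the 5 cm resolution and the synthetic cloud are placeholders, not the actual project code); the `lazy_eval`/`discretize` flags on `insertPointCloud` are cheap knobs that may be worth trying before any real parallelization:

```cpp
// Hedged sketch: timing a single octomap cloud insertion (placeholder data).
#include <chrono>
#include <iostream>
#include <octomap/octomap.h>

int main() {
  octomap::OcTree tree(0.05);   // 5 cm resolution (placeholder value)
  octomap::Pointcloud cloud;
  // Fake cloud standing in for the points produced by the stereo pipeline.
  for (int i = 0; i < 100000; ++i)
    cloud.push_back(0.01f * (i % 100), 0.01f * ((i / 100) % 100), 1.0f);

  octomap::point3d sensor_origin(0.0f, 0.0f, 0.0f);

  auto t0 = std::chrono::steady_clock::now();
  // lazy_eval=true defers inner-node updates; discretize=true collapses rays
  // ending in the same cell -- both can shave time on a single core.
  tree.insertPointCloud(cloud, sensor_origin, -1.0,
                        /*lazy_eval=*/true, /*discretize=*/true);
  tree.updateInnerOccupancy();  // required once after lazy insertions
  auto t1 = std::chrono::steady_clock::now();

  std::cout << "insertion took "
            << std::chrono::duration<double>(t1 - t0).count() << " s\n";
  return 0;
}
```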
@ppoirier I think I made a blunder. While benchmarking, I wasn't paying attention to the time taken for disparity computation, since on the desktop octomap was the bottleneck. When I moved to the Odroid I ignored the lag in the disparity computation, assuming it was caused by X forwarding, but today, benchmarking all the functions after adding compiler optimizations to my code, I noticed that disparity computation alone takes ~2.5s on the Odroid, while octomap generation now takes only ~0.2s.
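For anyone wanting to reproduce the numbers, this is roughly how I would isolate the disparity stage on its own (synthetic images and block-matching parameters are placeholders; the real node feeds rectified camera frames):

```cpp
// Hedged sketch: timing only the block-matching stage, assuming OpenCV 3.x.
#include <chrono>
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

int main() {
  // Stand-in image pair at a typical stereo resolution (random data).
  cv::Mat left(480, 640, CV_8UC1), right(480, 640, CV_8UC1);
  cv::randu(left, 0, 255);
  cv::randu(right, 0, 255);

  // Placeholder parameters: 64 disparities, 21x21 block.
  cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
  cv::Mat disparity;

  auto t0 = std::chrono::steady_clock::now();
  bm->compute(left, right, disparity);   // fixed-point disparity (CV_16S)
  auto t1 = std::chrono::steady_clock::now();

  std::cout << "disparity took "
            << std::chrono::duration<double>(t1 - t0).count() << " s\n";
  return 0;
}
```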
If someone has a clue about improving disparity computation on ARM-based hardware, please let me know. I am thinking there might be some way for OpenCV to utilize the onboard Mali GPU or something similar. Are there any other SDKs for doing the same on comparable mobile hardware?
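For what it's worth, OpenCV's transparent OpenCL path (the T-API) would be the first thing I'd check: if the board's Mali OpenCL driver is visible to OpenCV, passing cv::UMat instead of cv::Mat lets functions that have OpenCL kernels run on the GPU. A hedged sketch (not tested on Mali, and whether StereoBM actually dispatches to OpenCL depends on the build):

```cpp
// Hedged sketch: checking for OpenCL and running block matching on UMats,
// so OpenCV's T-API can offload to the GPU where a kernel exists.
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/calib3d.hpp>

int main() {
  if (!cv::ocl::haveOpenCL())
    std::cout << "No OpenCL runtime visible to OpenCV\n";
  cv::ocl::setUseOpenCL(true);
  std::cout << "T-API using OpenCL: " << cv::ocl::useOpenCL() << "\n";

  // Placeholder image pair; real code would upload rectified camera frames.
  cv::UMat left(480, 640, CV_8UC1), right(480, 640, CV_8UC1), disparity;
  cv::randu(left, 0, 255);
  cv::randu(right, 0, 255);

  cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
  bm->compute(left, right, disparity);   // runs via OpenCL if a kernel exists

  std::cout << "disparity size: " << disparity.size() << "\n";
  return 0;
}
```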
I think there are not many options for the ODROID other than processing the disparity/point cloud on a dedicated ASIC like in the Intel RealSense. Take a look at Hao-Chih Lin's videos, like this one