I would go with the Auvidea J121 board. It's their latest carrier board for the TX2 and works out of the box with the latest Jetpack releases. You can use the J120 as well, but you'll need to apply some kernel patches.
You will spend many times more time and money getting other boards to work with the TX2. Trust me, I went through all of this over the past months.
Btw, if you are interested in getting the ZED for computer vision and obstacle avoidance along with Arducopter, you may want to take a look at my current project, which is based on the Nvidia Redtail project.
Edit: you need a USB 3 port for the camera, UART to connect to the Pixhawk, plus a USB 2 port for other peripherals like a mouse, joystick, or keyboard.
HDMI to connect a monitor for development and testing. A LAN (Ethernet) connection is also useful during development.
Thank you @mtbsteve. I must say I'm a little surprised by the "quality" of those carriers. To me, they're not really sophisticated (unlike the NVIDIA chip itself). And yet they carry those stickers with NVIDIA's blessing.
Is it only a problem of keeping up with the latest Jetpack release, or are they buggy in general?
As for your project, is there a topic already started on the forum?
I can’t comment on the quality of the different carrier board alternatives.
The main issue is keeping up with the evolution of the Jetpack releases. For example, plenty of device tree changes were introduced between Jetpack 4.2 and 4.2.2. It took me a couple of days of kernel tweaking, supported by the Auvidea staff, to enable all interfaces on the J120 with Jetpack 4.2.2. I would assume that with a cheap China clone, things will be even more complicated.
Of course you may decide to stick with older releases such as Jetpack 3.2/3.3, which are the basis for most of the 3rd-party gear around (like the DJI link you posted above), but then you miss out on all the nice new stuff Nvidia is constantly adding for machine learning, AI, computer vision and so on.
Ok, @mtbsteve, now I fully understand your point. I'll probably wait a little with the purchase. Maybe I'll hunt for something used and cheaper, and if not, I'll buy the new Auvidea version you recommended.
I'm not an expert with Jetpack (I just recently bought my TX2), but I don't see much benefit from the stuff they are including. I mean, if I can install camera drivers, ROS, OpenCV and LibTorch (the PyTorch C++ API), that's enough from my point of view [edit: the new Jetpack is based on Ubuntu 18.04, and this is (or will be in the future) a benefit]. In fact, I feel like NVIDIA is promoting their TensorRT library by claiming it speeds up networks. That may be true for the sample networks they prepared, but if you want to use current state-of-the-art PyTorch or TensorFlow implementations, you will waste a lot of time trying to adapt your network to (partially) use TensorRT, and in the end there may be no benefit at all.
By the way, I'm a mathematician, more from the ML area than robotics, but I feel like a non-NN "traditional" approach to obstacle avoidance like https://github.com/PX4/avoidance is the right choice. I mean, for pure avoidance it doesn't matter what the object boundaries (segmentation) or category are; it's enough that "something" is detected. The path planning itself is the more important part. What is your opinion?
Yes. Mine is based on the original Nvidia Redtail project and incorporates the Arducopter-specific changes from the GSoC 2018 project, but uses the ZED along with the stereoDNN networks instead of the monocular camera used in GSoC.
I don't see a contradiction with the PX4 avoidance project. They came up with a nice way to do path planning based on the information stored in the 3D point cloud, which could be added once someone spends the effort to migrate it from PX4 to Ardupilot.
Yes, for the ZED camera, use the camera nodes exposed by the zed-ros-wrapper: https://github.com/stereolabs/zed-ros-wrapper
There you get the rectified and calibrated camera views, ready for the caffe-ros TrailNet network (monocular only) or the StereoDNN network.
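To illustrate, here is a minimal ROS launch-file sketch of how one might bring up the wrapper and feed its rectified left image to a downstream network node. The topic and remap names are assumptions based on typical zed-ros-wrapper defaults (`/zed/zed_node/left/image_rect_color`) and may differ in your wrapper version; check the repo's README for the exact names.

```xml
<!-- Sketch only: start the zed-ros-wrapper and expose its rectified
     left image under a generic topic name. Topic paths here are
     assumptions; verify against your zed-ros-wrapper version. -->
<launch>
  <!-- Start the ZED node via the launch file shipped with zed_wrapper -->
  <include file="$(find zed_wrapper)/launch/zed.launch" />

  <!-- Relay the rectified, calibrated left image to a topic that a
       monocular network node (e.g. TrailNet) could subscribe to -->
  <node name="left_relay" pkg="topic_tools" type="relay"
        args="/zed/zed_node/left/image_rect_color /camera/image_rect" />
</launch>
```

A stereo network would instead subscribe to both the left and right rectified topics directly, so no relay would be needed.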
@mtbsteve, I've rewritten TrailNet in TensorFlow 2 based on ResNet-50 and I'm planning to test its performance on the TX2. Unfortunately, the second part of the dataset, for lateral shift detection, is not available from the authors, so it's not possible to replicate the complete solution.
But in the meantime I wondered whether it would be even more practical to fly along a road. There are many datasets available from a car's point of view. What do you think?
BTW, I bought a used Orbitty carrier.
Hi @mtbsteve, I completed my software development on the Orbitty carrier board, but I've decided to go for the Auvidea J120. My question is: is that feasible, or is it a hassle to switch?
It really sounds dumb to start with one carrier board and then decide to switch to another, but that's what happened. Before I started with the Orbitty, I asked a friend, and he said it would be easy to switch to the Auvidea later, but now I'm hearing it's not an easy task!
Looking forward to your advice.