Servers by jDrones

Jetson TX2 carrier

Hi, is there any cheaper alternative to the Auvidea J120? In their shop it currently costs 284.41 EUR. That is comparable to the cost of the TX2 DevKit itself (which seems crazy, because the DevKit includes its own carrier).
There’s a list of carriers at https://elinux.org/Jetson_TX2#Carriers and, for example, I’ve read about the ConnectTech Orbitty on the DJI site: https://developer.dji.com/onboard-sdk/documentation/sample-doc/advanced-sensing-object-detection.html. It costs 174 USD, which is significantly less, but I still feel this kind of carrier (maybe with fewer peripherals) should cost no more than 100 EUR.
Maybe there are some unofficial Chinese “replacements”?

I need it basically for object detection tasks (TX2 + RealSense D435 or ZED Stereo + MAVROS).

From what I see in the docs (http://ardupilot.org/dev/docs/companion-computer-nvidia-tx2.html) only a 12 V power supply and a UART are really needed (plus USB for the camera), am I right?

Regards,

Edit: Didn’t notice there’s a Hardware -> Companion Computers category. If somebody has admin rights, please move it there :slight_smile:

I would go with the Auvidea J121 board. It is their latest TX2 design and works out of the box with the latest Jetpack releases. You can use the J120 as well, but you will need to apply some kernel patches.

You will spend many times more time and money getting other boards working with the TX2. Trust me, I went through all of this over the past months :slight_smile:

Btw, if you are interested in using the ZED for computer vision and obstacle avoidance with ArduCopter, you may want to take a look at my current project, which is based on the Nvidia Redtail project.

Edit: you need a USB 3 port for the camera, a UART to connect to the Pixhawk, plus a USB 2 port for other peripherals like a mouse, joystick or keyboard.
HDMI to connect a monitor for development and testing. A wired LAN connection is also useful during development.
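On the UART side, the ArduPilot companion-computer guide linked above pairs the TX2 with the flight controller via MAVROS. A minimal sketch; the device path (the TX2 J17 header usually shows up as /dev/ttyTHS2) and the baud rate are assumptions that depend on your wiring and SERIAL parameters:

```shell
# Assumed wiring: Pixhawk TELEM2 <-> TX2 J17 UART (/dev/ttyTHS2) at 921600 baud
FCU_URL="/dev/ttyTHS2:921600"

# apm.launch ships with the mavros package and targets ArduPilot firmware
roslaunch mavros apm.launch fcu_url:="${FCU_URL}"
```

If the link is up, `rostopic echo /mavros/state` should report `connected: True`.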


Thank you @mtbsteve. I must say I’m a little surprised by the “quality” of those carriers. To me, a carrier is not something really sophisticated (unlike the NVIDIA chip itself), and yet they carry those stickers with NVIDIA’s blessing :slight_smile:
Is it only a problem of keeping up with the latest Jetpack release, or are they buggy in general?

As for your project, is there a topic already started on the forum?

Regards,

I can’t comment on the quality of the different carrier board alternatives.
The main issue is keeping up with the evolution of the Jetpack releases. For example, plenty of device tree changes were introduced just recently between Jetpack 4.2 and 4.2.2. It took me a couple of days of kernel tweaking, supported by the Auvidea staff, to enable all interfaces on the J120 with Jetpack 4.2.2. I would assume that with a cheap Chinese clone, things will be even more complicated.

Of course you may decide to stick with old releases such as Jetpack 3.2/3.3, which is the basis for most of the 3rd-party gear around (like the DJI link you posted above), but then you miss all the nice new stuff Nvidia is constantly adding for machine learning, AI, computer vision and so on.

Ok, @mtbsteve, now I fully understand your point. I’ll probably wait a little before purchasing. Maybe I will hunt for something used and cheaper, and if not, I’ll buy the new Auvidea version you recommended.
I’m not an expert on Jetpack (I just recently bought my TX2), but I don’t see any benefit from the stuff they are including. I mean, if I can install camera drivers, ROS, OpenCV and LibTorch (PyTorch C++), that’s enough from my point of view [edit: the new Jetpack is based on Ubuntu 18.04, and that is (or will be in the future) a benefit]. In fact, I feel like NVIDIA is promoting their TensorRT library by claiming it speeds up networks. That may be true for the sample networks they prepared, but if you want to use current state-of-the-art PyTorch or TensorFlow implementations, you will waste a lot of time trying to change your network to (partially) use TensorRT, and in the end there may be no benefit.

Is your Redtail project somehow related to, or a continuation of, the GSoC 2018 project “Complex Autonomous Tasks Onboard a UAV using a Monocular Camera”?

By the way, I’m a mathematician, more from the ML area than robotics, but I feel like a non-NN “traditional” approach to obstacle avoidance like https://github.com/PX4/avoidance is the right choice. I mean, for pure avoidance, it doesn’t matter what the object boundaries (segmentation) or category are; it’s enough that “something” is detected. And the new path planning is the more important part. What is your opinion?

Yes. Mine is based on the original Nvidia Redtail project and incorporates the ArduCopter-specific changes from the GSoC 2018 project, but uses the ZED along with the StereoDNN networks instead of the monocular camera used in GSoC.

I don’t see a contradiction with the PX4 avoidance project. They came up with a nice way to do path planning based on the information stored in the 3D point cloud, which could be added once someone spends the effort to migrate it from PX4 to ArduPilot.
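To make the point-cloud part concrete: the planner consumes 3D points back-projected from the depth image the stereo camera produces. A minimal sketch of that back-projection with a pinhole camera model; the function name and the toy intrinsics are mine, not values from either project:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy 2x2 depth image: every pixel 2 m away, unit focal length
cloud = depth_to_pointcloud(np.full((2, 2), 2.0), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

The resulting N x 3 array is essentially what a sensor_msgs/PointCloud2 message carries, which is the input representation such a planner works on.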

@mtbsteve, as you are going to use the ZED, which has its own depth algorithm in CUDA:

  1. Do you know what is behind the ZED depth algorithm? In particular, is it NN-based?
  2. Are you aware of any benchmarks (e.g. on KITTI) on the TX2: ZED vs. StereoDNN vs. OpenCV (also CUDA)?
  3. Are you going to do any inference besides depth? I mean TrailNet and object detection, as in Redtail.

Regards,

BTW, I think PX4 avoidance would benefit from using CUDA (something to consider when someone ports it).

I don’t know.

See the publications from Alexey/Nvidia.

Yes I intend to do so.

Btw, PRs are always welcome :slight_smile:

It seems that in the case of the ZED, Alexey recommends using the ZED depth, see https://github.com/NVIDIA-AI-IOT/redtail/issues/76#issuecomment-408553759

Yes, for the ZED camera use the camera nodes exposed by the zed-ros-wrapper: https://github.com/stereolabs/zed-ros-wrapper
There you get the rectified and calibrated camera views, ready for the caffe-ros TrailNet (monocular only) network or the StereoDNN network.
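A minimal sketch of bringing those nodes up. The launch-file name is the one the wrapper’s README documents; the exact topic name has changed between wrapper releases, so treat it as an assumption and check with `rostopic list`:

```shell
# Start the ZED wrapper node (publishes rectified/calibrated image topics)
roslaunch zed_wrapper zed.launch

# In another shell, peek at the rectified left view the networks consume
rostopic echo /zed/zed_node/left/image_rect_color --noarr -n 1
```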

Ok, you are right. That remark was just about mono.

BTW, I’m going to try to rewrite TrailNet in TensorFlow 2 (Caffe is dead).

@mtbsteve, I’ve rewritten TrailNet in TensorFlow 2 based on ResNet50 and I’m planning to test its performance on the TX2. Unfortunately, the second part of the dataset, for lateral shift detection, is not available from the authors, so it’s not possible to replicate the complete solution.
But in the meantime I wondered whether it would be even more practical to fly along a road. There are many datasets available from a car’s viewpoint. What do you think?
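For context on what the missing head would contribute: the original TrailNet ends in two 3-way softmaxes (view orientation and lateral offset) that are mixed into a steering command; with only the orientation data available, the mixing reduces to one head. A NumPy sketch of that orientation-only mixing; the gain, sign convention and function names are my assumptions, not values from the paper:

```python
import numpy as np

def softmax(logits):
    """Standard softmax over a 1-D logit vector."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def yaw_command(orientation_logits, gain=1.0):
    """Mix the 3-way view-orientation head (left, straight, right)
    into a scalar yaw command proportional to the probability-mass
    imbalance between the two turning classes."""
    p_left, _p_straight, p_right = softmax(orientation_logits)
    return gain * (p_right - p_left)

# Left-leaning logits give a negative command; balanced logits give ~0
cmd = yaw_command(np.array([2.0, 0.5, 0.1]))
```

In the full controller a second term of the same shape, driven by the lateral-offset head, would be added, which is exactly the part the unavailable dataset trains.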
BTW, I bought a used Orbitty carrier :slight_smile:

Cool :sunglasses: please share your performance results.

For 180 EUR there is also the J90 (https://auvidea.eu/product/70760/), or for 100 EUR a low-profile version (https://auvidea.eu/product/70761/) if you don’t need an IMU. It does carry a health warning about the Jetpack version, though.
