Connecting Mission Planner to SITL in VirtualBox

I’ve set up a virtual Ubuntu 16.04 in VirtualBox on my Windows 10 host. In the virtual machine SITL is running fine with the ArduCopter source from git.
Now I want to connect Mission Planner 1.3.62 to SITL via UDP, but so far that doesn’t work. When I set Mission Planner to UDP on port 14550, it either keeps trying to connect forever without success, or it shows “Connect Failed” with the following exception:

SITL itself seems to work fine; at least inside the VM I can receive the UDP packets:
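For reference, a quick way to check this inside the VM (a sketch; any packet sniffer works equally well — these commands are my suggestion, not the check from the original post):

```shell
# Watch for MAVLink traffic on UDP 14550 inside the VM:
sudo tcpdump -i any -n udp port 14550

# Or dump the raw datagrams with netcat:
nc -ul 14550 | xxd | head
```

If packets show up here but not on the Windows side, the problem is between the VM and the host.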

Therefore I assume the problem is with the networking between Windows and VirtualBox rather than with SITL itself. In VirtualBox I have set up NAT with port forwarding for port 14550, and I have disabled all firewalls in Windows, but with no success yet.
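For context, a NAT port-forwarding rule like the one described can be created from the Windows host with VBoxManage (a sketch; “Ubuntu” is assumed to be the VM’s name in the VirtualBox Manager):

```shell
# Forward UDP 14550 on the Windows host to UDP 14550 in the guest.
# Rule format: <name>,<protocol>,<host ip>,<host port>,<guest ip>,<guest port>
# (empty IPs mean "any"). "Ubuntu" is the assumed VM name.
VBoxManage modifyvm "Ubuntu" --natpf1 "sitl,udp,,14550,,14550"
```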

Any help would be highly appreciated!

Is it an option for you to bridge the virtual machine to the host’s subnet rather than using NAT? I find that “just works” rather than having to fiddle with routing through a NAT.
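If you want to try that, the adapter can be switched to bridged mode either in the VM’s network settings dialog or from the command line (a sketch; “Ubuntu” is the assumed VM name, and the bridge adapter must be the name of your Windows network interface):

```shell
# Switch the VM's first network adapter to bridged mode.
# The VM must be powered off first; replace the adapter name with
# the one "VBoxManage list bridgedifs" reports on your host.
VBoxManage modifyvm "Ubuntu" --nic1 bridged --bridgeadapter1 "Intel(R) Ethernet Connection"
```

The VM then gets its own address on the host’s subnet, so Mission Planner can reach it directly without any port forwarding.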

Can you ping the host’s IP from the virtual machine?

You could post the output of “ipconfig /all” on Windows and “ifconfig” in the virtual machine, along with the command used to configure SITL’s output (either on the command line or with the “output add” command). Maybe something will be apparent from that.

Thanks a lot for your reply! Eventually I got it working. The solution was to append
--out=udp:XXX.X.X.X:14550 when starting SITL, with XXX.X.X.X being the host’s IP address. Now it works with NAT, even without port forwarding configured.
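For anyone finding this later, the full invocation would look roughly like this (a sketch assuming the standard Tools/autotest/sim_vehicle.py launcher; keep XXX.X.X.X as a stand-in for your host’s address):

```shell
# Inside the VM, from the ArduCopter source directory.
# XXX.X.X.X is the Windows host's IP as seen from the guest;
# under VirtualBox NAT the host is normally reachable at 10.0.2.2.
sim_vehicle.py -v ArduCopter --out=udp:XXX.X.X.X:14550
```

This makes SITL (via MAVProxy) actively send MAVLink to the host, which is why it works through NAT without any inbound port forwarding: the connection is outbound from the guest.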

Should it normally run without that parameter, or is this the usual way it works? If it’s the latter, we might want to add it to the documentation. On there is a parameter --viewerip=XXX.X.X.X described, but this wasn’t accepted by

Viewerip is a parameter to the autotest script. I think the wiki is out of date on this; you want to specify the GCS IP address either with the --out parameter or in MAVProxy with the “output add” command. I’ve never had much luck with autoconnect, so I always specify the address.
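The MAVProxy route looks like this (a sketch; these are typed at the running MAVProxy console inside the VM, not at a shell prompt, and XXX.X.X.X again stands in for the GCS address):

```shell
# At the MAVProxy console, add the GCS as an extra output destination:
output add XXX.X.X.X:14550

# "output list" shows the currently configured destinations:
output list
```

This has the same effect as --out, but can be done after SITL is already running.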

We explicitly check to see if we’re running under our Vagrant virtual
machines here:

I guess the point is that we don’t ordinarily want to spew this traffic
all over the place - it can interfere with work being done on real
vehicles at the same time, for example.