Updating Gazebo model's position from SITL

A bit more investigation:

There is also a service available in Ignition to set the pose. It can be called from the command line with:

ign service -s /world/move_model/set_pose --reqtype ignition.msgs.Pose  --reptype ignition.msgs.Boolean  --timeout 300 --req 'name: "box", position: {x: 0, y: 0, z: 10}'

where move_model is the name of the world and box is the name of the model to move. It turns out that you do need to have the physics system enabled for this to work, but you can mark the model as static so it won’t fall under gravity between updates.

If I add the Python bindings for services to the python-ignition project, then this can be run from a script. We could use pymavlink to get the pose information from ArduPilot, and I think that's everything we'd need.
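
As a rough sketch of the pymavlink side (the connection string below is an assumption and will depend on your SITL setup; SITL typically streams LOCAL_POSITION_NED and ATTITUDE over MAVLink):

from pymavlink import mavutil

# connect to SITL (adjust the connection string to your setup)
conn = mavutil.mavlink_connection("udp:127.0.0.1:14550")
conn.wait_heartbeat()

# read the vehicle's local position and attitude
position = conn.recv_match(type="LOCAL_POSITION_NED", blocking=True)
attitude = conn.recv_match(type="ATTITUDE", blocking=True)

print(position.x, position.y, position.z)
print(attitude.roll, attitude.pitch, attitude.yaw)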

A follow camera would complete the ask. That is already available in Ignition, and the camera can be repositioned using another service if different viewpoints are required.
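
For example, something along these lines selects the follow target and sets the camera offset (the /gui/follow and /gui/follow/offset service names depend on the Ignition version and the GUI plugins loaded, so treat them and the offset values as assumptions):

ign service -s /gui/follow --reqtype ignition.msgs.StringMsg --reptype ignition.msgs.Boolean --timeout 300 --req 'data: "box"'

ign service -s /gui/follow/offset --reqtype ignition.msgs.Vector3d --reptype ignition.msgs.Boolean --timeout 300 --req 'x: -3, y: 0, z: 2'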

Update

It was straightforward to add the Python bindings for the service interface. A script corresponding to the command line above is:

from ignition.msgs.header_pb2 import Header
from ignition.msgs.pose_pb2 import Pose
from ignition.msgs.vector3d_pb2 import Vector3d

from ignition.transport import Node

# create a transport node
node = Node()

# create a pose
vector3d_msg = Vector3d()
vector3d_msg.x = 0.0
vector3d_msg.y = 0.0
vector3d_msg.z = 10.0

pose_msg = Pose()
pose_msg.name = "box"
pose_msg.position.CopyFrom(vector3d_msg)

# submit a request to a service (blocking)
result = node.request(
  "/world/move_model/set_pose",
  pose_msg,
  timeout=300,
  reptype="ignition.msgs.Boolean")

This works; however, there seems to be an issue when the update rate is increased. If the pose is updated faster than about 5 Hz, the GUI locks up. It's not yet clear what is going on in Ignition to cause this, possibly a race condition or something similar. It does not appear to be the service request itself, as that commits the request to a command buffer and returns correctly (i.e. the fact that the service request blocks in the Python script does not seem to be the bottleneck).
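
For context, the kind of update loop involved looks roughly like this. It is only a sketch: it assumes the pymavlink connection shown earlier, an ENU world frame with its origin at the vehicle's EKF origin, and a rate throttled to stay at or below the point where the GUI locks.

import time

from ignition.msgs.pose_pb2 import Pose
from ignition.transport import Node
from pymavlink import mavutil

node = Node()

conn = mavutil.mavlink_connection("udp:127.0.0.1:14550")
conn.wait_heartbeat()

UPDATE_PERIOD = 0.2  # 5 Hz - higher rates currently lock the GUI

while True:
    # latest local position from ArduPilot (NED frame)
    msg = conn.recv_match(type="LOCAL_POSITION_NED", blocking=True)

    # repack into an Ignition pose message, converting NED to ENU
    # (assumes the Gazebo world frame is ENU at the EKF origin)
    pose_msg = Pose()
    pose_msg.name = "box"
    pose_msg.position.x = msg.y   # east
    pose_msg.position.y = msg.x   # north
    pose_msg.position.z = -msg.z  # up

    node.request(
        "/world/move_model/set_pose",
        pose_msg,
        timeout=300,
        reptype="ignition.msgs.Boolean")

    time.sleep(UPDATE_PERIOD)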