UAVCAN Gimbal xyz

Olli, I wasn’t aware of this post until now because I’m not subscribed to the forum. Since this is a UAVCAN-related matter, it would perhaps have received more attention (at least from me) if you had posted it to the UAVCAN mailing list instead. :wink:

Of the approaches you helpfully proposed, only the first one is viable. I have attempted to explain why vendor-specific data types will never work well in this post:
UAVCAN GPS get_lag().

I suggest we stick with quaternions rather than Euler angles: they are more versatile, which is why they are used almost everywhere in UAVCAN, and keeping them here preserves consistency.
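For anyone who prefers to think in Euler angles, the conversion is trivial on the sender’s side. A minimal Python sketch (my illustration, not part of the proposal), assuming the usual aerospace ZYX (yaw-pitch-roll) convention and a (w, x, y, z) component order:

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert roll/pitch/yaw in radians to a (w, x, y, z) unit quaternion (ZYX convention)."""
    cr, sr = math.cos(roll / 2),  math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2),   math.sin(yaw / 2)
    return (cr * cp * cy + sr * sp * sy,
            sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy)
```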

You seem to be right in choosing float32 over float16: my quick analysis here shows that a float16-based quaternion offers a rather coarse angular resolution of about 0.3 degrees.
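For reference, here is one way to sanity-check such a figure empirically (a quick sketch of mine, not the original analysis; the exact number depends on how rounding and re-normalization are modeled, and it requires numpy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample random unit quaternions, quantize them to float16, and measure the
# angle of the residual rotation between the exact and the quantized values.
n = 100_000
q = rng.standard_normal((n, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)

q16 = q.astype(np.float16).astype(np.float64)
q16 /= np.linalg.norm(q16, axis=1, keepdims=True)   # consumers re-normalize anyway

dot = np.clip(np.abs(np.sum(q * q16, axis=1)), 0.0, 1.0)
error_deg = np.degrees(2.0 * np.arccos(dot))
print('worst-case angular error, deg:', error_deg.max())
```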

Besides direct orientation commands, the standardized gimbal control interface should support a Geo POI mode, where the gimbal is supplied with the geographic coordinates of a point of interest and keeps the camera pointed at it. To support this feature, the gimbal needs to know its own geographic position and orientation; thanks to UAVCAN, both can be easily obtained from the bus.
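To illustrate the kind of computation the gimbal would have to perform in this mode, here is a rough Python sketch using a flat-earth approximation over short ranges; the function name and the local NED frame choice are my own illustration, not part of any proposed definition:

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius; adequate for short ranges

def geo_poi_pointing_vector(gimbal_lat_deg, gimbal_lon_deg, gimbal_alt_m,
                            poi_lat_deg, poi_lon_deg, poi_alt_m):
    """Return the unit pointing vector from the gimbal to the POI in the local NED frame."""
    north = math.radians(poi_lat_deg - gimbal_lat_deg) * EARTH_RADIUS_M
    east = math.radians(poi_lon_deg - gimbal_lon_deg) * EARTH_RADIUS_M \
        * math.cos(math.radians(gimbal_lat_deg))
    down = gimbal_alt_m - poi_alt_m          # NED: positive down
    norm = math.sqrt(north ** 2 + east ** 2 + down ** 2)
    return north / norm, east / norm, down / norm

# Example: vehicle at 100 m AGL looking at a ground point roughly 500 m to the north.
print(geo_poi_pointing_vector(55.0000, 37.0, 100.0, 55.0045, 37.0, 0.0))
```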

Having two separate commands, AngularCommand and GEOPOICommand, was a questionable idea (also, GEO is supposed to be spelled Geo; another mistake). They should be merged into a single command message with a union inside.
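Purely as an illustration of the merged layout (plain Python rather than DSDL, and the field names are placeholders of mine), the idea is that exactly one of the two payloads is present in any given command:

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass
class AngularCommand:
    quaternion_wxyz: Tuple[float, float, float, float]  # orientation setpoint, float32 on the wire

@dataclass
class GeoPOICommand:
    latitude_deg: float
    longitude_deg: float
    altitude_m: float

@dataclass
class GimbalCommand:
    gimbal_id: int
    # Union semantics: exactly one of the two alternatives is transferred.
    command: Union[AngularCommand, GeoPOICommand]
```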

What is your opinion on the gimbal status message? The current definition reports the gimbal orientation in the body frame, but I see some value in using the world frame instead. The advantage of the world frame is that it would be consistent with the AHRS solution message; the advantage of the body frame is that it would work well with gimbals that deduce their orientation from their encoders rather than from an independent IMU.
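For completeness: if the body frame were chosen, a consumer could still recover the world-frame orientation by composing the report with the vehicle attitude from the AHRS solution. A rough sketch, assuming (w, x, y, z) quaternions and the conventional “vehicle attitude first, then gimbal-relative rotation” composition order:

```python
def quat_multiply(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def gimbal_world_orientation(vehicle_attitude_wxyz, gimbal_body_orientation_wxyz):
    """World-frame gimbal orientation from a body-frame report and the vehicle attitude."""
    return quat_multiply(vehicle_attitude_wxyz, gimbal_body_orientation_wxyz)
```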

We should probably add a set of camera command messages as well, which would go in a separate namespace, since I consider the gimbal and the camera to be distinct and independent entities. Which commands would be necessary there? The basics, such as the following, come to mind (a rough sketch follows the list):

  • Take a photo.
  • Record video (note that we need to keep the protocol stateless, hence “record video” as opposed to “begin recording”/“stop recording”).
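Here is a rough Python model of what such a stateless command could look like (illustrative only, not a DSDL proposal; the field names are placeholders of mine):

```python
from dataclasses import dataclass

@dataclass
class CameraCommand:
    camera_id: int
    shoot_photo: bool    # request a single still capture
    record_video: bool   # desired recording state, republished periodically
                         # instead of edge-triggered begin/stop events
```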

General configuration options probably would not require a separate interface, because the standard configuration management API should suit the task splendidly. Perhaps we should define a set of standardized names for camera-related configuration parameters (exposure settings, compression parameters, et cetera), although that can, and should, be postponed for now.

Pavel.