How strict is the camera about the SEQ field in a packet? Do the sequence numbers have to increase monotonically? How does the camera react if a sequence number repeats? All example packets in the manual show SEQ==0.
SDK Protocol Format (3.3.1 in the manual) notes that 16-bit fields in the envelope are “Low byte in the front,” i.e., little-endian. The document makes no mention of endianness for payload data, and none of the example packets contain multi-byte payloads. Are payload fields also little-endian, e.g., 0x05.zoom_multiple?
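For what it's worth, here is a minimal sketch of how we're packing 16-bit fields right now, assuming the payload follows the same "low byte in the front" convention as the envelope (which is exactly the open question):

```python
import struct

# Pack/unpack a 16-bit field little-endian ("low byte in the front"),
# the convention section 3.3.1 states for the envelope's 16-bit fields.
# Whether payload fields like zoom_multiple share it is an assumption here.
def pack_u16_le(value: int) -> bytes:
    return struct.pack("<H", value)

def unpack_u16_le(raw: bytes) -> int:
    return struct.unpack("<H", raw)[0]
```

So `pack_u16_le(0x1234)` emits `b"\x34\x12"`, low byte first; if payloads turn out to be big-endian we'd flip to `">H"` for those fields only.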
CMD_ID:0x05------Manual Zoom and Auto Focus: I understand that this command zooms in or out. Does it also command a new autofocus? Is there a way to control zoom without changing focus?
CMD_ID:0x07------Gimbal Rotation controls yaw and pitch. I understand that the gimbal also has a roll motor. Is there a way to control roll?
Previously, I asked about the option to control absolute zoom. I thought there was also a command for absolute gimbal positioning. Now that we’re implementing the protocol, I see only a relative rotation command (0x07). Is there a way to command the gimbal to specific yaw/pitch angles, e.g., 15° right / 30° up? If not, would you consider adding that?
I can find out the current yaw, pitch, and roll positions through CMD_ID:0x0D------Acquire Gimbal Attitude, as well as the zoom level (0x05). Is there a way to find the current focus point?
Thank you for the fast responses. Do you have a rough idea of a timeline for the new SDK? If it’s a long time, we can implement a control loop like @rmackay9 implemented inside ArduPilot, where he reads the gimbal attitude and issues Gimbal Rotation commands until the gimbal points where he needs it. If it’s a short time, we’ll wait for official support.
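If it helps anyone else, the control loop we'd write is roughly the sketch below. The `get_attitude` and `send_rotation` callables are hypothetical stand-ins for the driver's wrappers around CMD 0x0D (Acquire Gimbal Attitude) and CMD 0x07 (Gimbal Rotation); the tolerance, gain, and timing are placeholders, not values from the SDK:

```python
import time

def point_gimbal(target_yaw, target_pitch, get_attitude, send_rotation,
                 tolerance_deg=1.0, timeout_s=5.0):
    """Closed-loop pointing sketch: repeatedly read the attitude (CMD 0x0D)
    and issue relative rotation commands (CMD 0x07) until the gimbal is
    within tolerance of the target, or the timeout expires.

    get_attitude() -> (yaw_deg, pitch_deg) and send_rotation(yaw_err, pitch_err)
    are hypothetical callables supplied by the SDK driver."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        yaw, pitch = get_attitude()
        yaw_err = target_yaw - yaw
        pitch_err = target_pitch - pitch
        if abs(yaw_err) <= tolerance_deg and abs(pitch_err) <= tolerance_deg:
            return True
        send_rotation(yaw_err, pitch_err)
        time.sleep(0.05)  # let the gimbal move before re-reading attitude
    return False
```

An absolute-angle command in the SDK would make all of this unnecessary, which is why I'd rather wait if the timeline is short.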
Thank you Frank. Since you’re updating the SDK manual anyway, I’d like to make a suggestion. I love that you include example packets in HEX (pages 27-28 in the V1.0 manual). They make it super-easy for us to write unit tests and gain confidence that our code generates the exact bytes the camera expects, including endianness. I’d also recommend adding example HEX for packets that the camera sends back, like gimbal attitude. That would let users test their parsing code before they have access to physical hardware and speed up development.
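To show what I mean, here is the tiny helper our unit tests use. The hex string in the docstring is a placeholder, not an example from the manual; the idea is to paste the manual's hex listings in verbatim and compare byte-for-byte:

```python
def bytes_from_hex(listing: str) -> bytes:
    """Convert a manual-style hex listing (e.g. "55 66 01 ...") into bytes
    so a generated packet can be compared byte-for-byte in a unit test."""
    return bytes.fromhex(listing.replace(" ", "").replace("\n", ""))

def assert_matches_example(packet: bytes, listing: str) -> None:
    """Fail with a readable message when a built packet deviates from the
    manual's example; mismatches usually point at endianness or CRC bugs."""
    expected = bytes_from_hex(listing)
    assert packet == expected, (
        f"got {packet.hex(' ')}, want {expected.hex(' ')}"
    )
```

Response-packet examples in the manual would let us run the same comparison in the other direction, against our parser's input.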
I agree with @iter that it would be nice to have a command to control the absolute zoom. It’s not urgent though because I’m still working on other camera and gimbal enhancements (like this PR to add support for two cameras).
I think the latest SDK does have an absolute angle control (for pitch and yaw), but I haven’t updated the AP driver yet to use it, so maybe I’m wrong.
Randy, have you seen an advance copy of the new SDK documentation? Is it available and I missed it?
I’m selfishly implementing my own driver in Python for a script that runs on a companion computer, so my timeline and yours may be different.
@SIYI : if that part of the SDK is still in development, may I recommend using physical units like degrees and degrees/second instead of arbitrary ranges like [-100…100] or [0…300]?
I think Frank showed me a screenshot of the upcoming angle interface. I’m not sure whether I’ve seen the entire new SDK doc yet.
Re implementing a driver in Python: why not just use the available AP MAVLink interface? If there are controls that are missing, we could try to add them. We don’t have a page that describes how to control the camera, though I have been planning to add one.
The advantage of using mavlink and talking to the flight controller instead of the gimbal directly is that it will work with all the gimbals AP supports. It also helps ArduPilot to fill out the interfaces that external developers need.
It’s a difficult decision. It comes down to two and a half issues: context size, features and Ethernet access.
Ethernet is the easier one to explain: I want to record to the companion computer’s drive, where I can keep track of timestamps and file naming conventions better than most cameras’ SD cards. I can live without Ethernet, but once I have an RTSP connection to a camera (Siyi, VISCA, etc.), it’s a small step to use the same connection for gimbal control.
Features: I can never predict which features the larger AP community finds useful. Zoom is important in my application, as well as focus–I hate videos where all the interesting parts are unusable because autofocus is hunting for faces in the nothingness of a flat blue sky. Seeing as I know the exact distance from camera to target, both are easy to set if the camera supports it. I’m hearing from you here that zoom is easy for you to add to AP. This can change my calculus about writing my own driver.
Size of context: Siyi SDK documentation is 8 pages including sample packets in hex. It’s a self-contained system with clear boundaries. VISCA documentation from Sony is the same way. With AP–when it comes to more out-of-the-way features–just finding out if a feature exists can be a project. My questions about SYSID_TARGET in the other thread are an example. This is not a criticism of AP. AP is an amazing piece of software. But for less popular features, the only way to understand how they work is to read the code, and it’s never just one or two source files. My main tool for understanding AP features is grep(1). I understand and accept that this is how a project evolves when it is an amalgamation of many developers over many years scratching different itches. I’d rather have it like that than any other option. But when the choice is between writing 300 lines of Python or spending weeks chasing down features in an evolving source tree–it’s a difficult choice.
All this said, in this particular case, I’m only 300 lines of Python in. If my itch is something the larger AP community finds useful, I’m happy to work with you instead of by myself. We should probably take this part of the conversation to the SYSID_TARGET thread.
All three boards come with the gimbal, as you can see in the unboxing video. If you are going to use the third board, it should be mounted inside your frame; the included screws are for that.
@SIYI
Video was recorded on the companion board (not the SD card), which is connected directly to the ZR10 through a LAN cable. The companion board captured the RTSP stream from the camera and put it into an MP4 container.
Also, there were a lot of warnings:
- The exact way you record and save video (is the recording speed sufficient?)
- The way the companion computer processes the data (or doesn’t)
It does not look like a SIYI issue so far. But I am happy to help further on it. Just note that we don’t have exactly the same setup here to reproduce the issue.
I downloaded the PC Assistant. I’m trying to change the camera’s IP address. I enter the new address (192.168.1.25) and the program tells me that I need to reboot the camera. When I reboot, the program again reports the old address (192.168.144.25).