Automatic charging

I’m building a rover for lawn use on a 24 V electric wheelchair chassis. I have RTK GPS, a companion computer (a Raspberry Pi), and a Pixhawk 2.4.8. I am planning to build an auto-charging dock and want to know if there is anything out there to help accelerate this process before I reinvent the wheel.

If my RTK GPS is not good enough, I was going to use the Raspberry Pi to read square fiducial markers, as discussed here: https://docs.opencv.org/3.1.0/d5/dae/tutorial_aruco_detection.html
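For reference, a bare-bones version of that ArUco detection loop might look like this with opencv-contrib-python (the camera index and the 4x4 dictionary are assumptions; full pose estimation would also need camera calibration):

```python
# Minimal ArUco detection sketch using the pre-4.7 OpenCV API that matches
# the tutorial linked above. Camera index 0 is an assumption.
import cv2
import cv2.aruco as aruco

cap = cv2.VideoCapture(0)
dictionary = aruco.Dictionary_get(aruco.DICT_4X4_50)
parameters = aruco.DetectorParameters_create()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = aruco.detectMarkers(gray, dictionary, parameters=parameters)
    if ids is not None:
        print("Detected marker IDs:", ids.flatten())

cap.release()
```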

I work daily with swarm robotics platforms, and you will almost certainly have to use fiducials. This isn’t because RTK doesn’t have the precision, but because RTK is sensitive to loss of lock in cluttered environments. Unless your charging station is well away from any building (with some giant extension cord), you will most certainly lose lock, or precision, or both. Auto-alignment is also hard without a fiducial aid.

The RTK will make your life a lot easier, though, as getting close enough to see the fiducial in the first place can be a challenge.

I will say you can look into ROS (Robot Operating System) as a way to do less work getting into OpenCV, and it’s a good environment for your companion computer. I found MAVROS communication to be supremely easy, and then I just used ROS to do all the transforms between robot position and marker position. The only thing I needed to add was how to move relative to the position data and feed that back into the Pixhawk; a rough sketch of that last step follows.
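Something like this, assuming ROS 1 with rospy and a stock MAVROS install (the frame and the 1 m forward offset are placeholders you would derive from your marker transform):

```python
# Sketch: stream a position setpoint to the Pixhawk through MAVROS.
# Mode switching (e.g. to GUIDED) and the marker transform are omitted.
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node('dock_approach')
pub = rospy.Publisher('/mavros/setpoint_position/local', PoseStamped, queue_size=10)
rate = rospy.Rate(10)  # setpoints should stream at several Hz

target = PoseStamped()
target.header.frame_id = 'map'
target.pose.position.x = 1.0   # e.g. 1 m ahead, taken from the marker transform
target.pose.orientation.w = 1.0

while not rospy.is_shutdown():
    target.header.stamp = rospy.Time.now()
    pub.publish(target)
    rate.sleep()
```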

Thanks a lot for that response. I am going to put the charger just inside the garage and use a wireless relay to open the door. Inside the garage I may very well lose the RTK fix and drop into “float” mode, and in that case I will need the fiducial aid(s).

I will look into ROS as you suggest.

I am using the Pi as the companion computer and have it hooked to my Pixhawk over USB. I was wondering how to make that work, rather than the more common serial MAVLink hookup.
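In case it helps: over USB the Pixhawk just enumerates as another serial port, so pymavlink can talk to it directly. A minimal sketch, assuming the usual Linux device name (check dmesg on the Pi for yours):

```python
# Hedged sketch: MAVLink over the Pixhawk's USB port with pymavlink.
# /dev/ttyACM0 is the typical enumeration on Linux, but it is an assumption.
from pymavlink import mavutil

master = mavutil.mavlink_connection('/dev/ttyACM0', baud=115200)
master.wait_heartbeat()  # blocks until the Pixhawk announces itself
print("Heartbeat from system %u component %u" %
      (master.target_system, master.target_component))
```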

Also, I would love to hear more details about your work, because I was also toying with the idea of using a swarm to have multiple mowers work at once, using fiducial aids to get away with cheaper GPSs in the follower vehicles, which would follow several feet behind the leader, offset a foot or two. I am using a ZED-F9P in the lead vehicle, which gets fixes very fast but is somewhat expensive.

I am very interested in this too. I eventually want a machine that can position itself precisely back at a “home base.” I am using a C94-M8P base and rover right now, but I have bought two F9P eval boards as well. The fiducials sound like the way to go for the precise movement at the base.

FYI, someone is already doing this:

http://www.xaxxon.com/oculusprime/shop_accessories

Xaxxon sells a docking station; I think it is ROS-compatible, and it has some kind of very basic marker on it.

It looks like a good design for a small indoor robot. I may buy one and see if it can dock with my wheelchair rover by changing the input transformer to 24 V.

This would be a great feature to add to AP. I guess the onboard-vs-offboard decision is an important one. I would actually prefer an onboard solution, although AP can certainly support both.

If done onboard, I would recommend trying to re-use parts of Copter’s precision landing feature. It really is very similar, except that the target is ahead of the vehicle instead of below it. We already support a number of sensors, including IRLock and MAVLink-enabled cameras.
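For anyone wiring this up from a companion computer, the precision landing feature consumes MAVLink LANDING_TARGET messages, so the sending side could be sketched like this with pymavlink (the port, angles, and distance are placeholders you would compute from the fiducial detection):

```python
# Sketch: send one LANDING_TARGET message from a companion computer.
# angle_x/angle_y are the target's angular offsets from the camera centre
# in radians; all values here are illustrative placeholders.
import time
from pymavlink import mavutil

master = mavutil.mavlink_connection('/dev/ttyACM0', baud=115200)  # assumed port
master.wait_heartbeat()

angle_x, angle_y = 0.02, -0.01   # from the marker detector (rad)
distance = 1.5                   # estimated distance to the target (m)

master.mav.landing_target_send(
    int(time.time() * 1e6),              # time_usec
    0,                                   # target_num
    mavutil.mavlink.MAV_FRAME_BODY_NED,  # frame
    angle_x, angle_y, distance,
    0.0, 0.0)                            # size_x, size_y (unused here)
```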

By the way, a few weeks ago I added this project to our Google Summer of Code projects list, so it’s possible a student will join us this summer to implement it. There’s no guarantee, though, because the students decide which project they want to work on, and the number of project ideas greatly outnumbers the number of students we will take on.

Incorporating this into AP would be a great value-add. Being new to ArduRover, I was unaware that this was possible. Now that I have read up on the Copter precision landing methods, I see the light. The IRLock hardware would do the job too, but it would require a beacon to be built into the docking station.

I will say that even the best fiducials out there take a good amount of processing power without dedicated hardware, and you might find a Pi to be too slow for your needs. I would expect between 0.1 and 0.5 Hz for your position-estimate update rate, assuming the software is compatible with a Pi in the first place.

As others have suggested, using something like IRLock might be more suitable and would let you get away with a cheaper companion computer such as the Pi.

As for your other question: coordination between swarms of robots is a difficult task that, with current technology, is mostly solved via expensive hardware rather than clever software. If I were you, I would suggest using some form of active sensing to determine the position of the leader relative to the followers, or using a cheaper RTK in relative mode on all the followers.

You could also consider things like an RPLidar on all of the units, as the A1 and A2 models are relatively cheap.

I work on swarm robotics for resource search and retrieval. The lab focuses on algorithms designed to solve the problems related to congestion and efficient coordination without a centralized control system.

@Jeffrey_Berezin,

Yes, IRLock should work. For a more reliable solution, it might be good to try an OpenMV camera, which can recognise an AprilTag (at 10 Hz or more).
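For reference, a minimal OpenMV script for this might look like the following, based on the standard OpenMV MicroPython API (QQVGA is used because the AprilTag detector is memory-hungry at higher resolutions):

```python
# OpenMV MicroPython sketch: detect AprilTags and print id, centre, and fps.
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)   # 160x120 keeps the detector in RAM
sensor.skip_frames(time=2000)
clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()
    for tag in img.find_apriltags():  # defaults to the TAG36H11 family
        print("id %d, cx %d, cy %d, %.1f fps"
              % (tag.id(), tag.cx(), tag.cy(), clock.fps()))
```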

That’s pretty good. I’ve never heard of it, but recognizing AprilTags at 10 Hz would give phenomenal performance. For 65 bucks, that’s not bad.

That would also then allow you to use AprilTags to recognize your leader’s position and maintain a relative offset easily enough.
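As a toy illustration of that idea, a simple proportional controller on the tag-relative pose could hold the offset; the gains, offsets, and the send_velocity helper below are all hypothetical and would need tuning on a real vehicle:

```python
# Toy proportional controller for holding a fixed offset behind a leader's tag.
# tag_x/tag_y: leader tag position in the follower's frame (metres, from the
# detector). send_velocity is a hypothetical helper that commands the rover.
DESIRED_BEHIND = 1.5    # stay ~5 ft behind the leader
DESIRED_LATERAL = 0.5   # offset ~1.5 ft to the side
KP = 0.8                # proportional gain (tune on the vehicle)

def follow_step(tag_x, tag_y, send_velocity):
    """One control step: drive the offset error toward zero."""
    err_forward = tag_x - DESIRED_BEHIND
    err_lateral = tag_y - DESIRED_LATERAL
    send_velocity(KP * err_forward, KP * err_lateral)
```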

I did some reading trying to decide between AprilTags and ArUco markers.

I like that AprilTags has a C-language implementation with no dependencies.

But then I read about ChromaTags, which can be much faster to detect than the others but has a lot of OpenCV dependencies (not a show-stopper).

Then I read this paper, and it sounded great, with very low decode times, but then I saw they were testing with a 3.5 GHz Intel i7 Ivy Bridge processor, which is more power than I want to put into my mower! Still, the paper is very interesting, so I may give it a shot on the Pi and see how long it takes to get a pose: https://arxiv.org/pdf/1708.02982.pdf
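If you do try timing it on the Pi, a rough harness might look like this, assuming the apriltag Python bindings (pip install apriltag) and a grayscale test image; both are assumptions on my part:

```python
# Rough timing harness for AprilTag detection on the Pi.
# 'test.png' is a placeholder image containing one or more tags.
import time
import cv2
import apriltag

gray = cv2.imread('test.png', cv2.IMREAD_GRAYSCALE)
detector = apriltag.Detector()  # default options, tag36h11 family

runs = 20
start = time.time()
for _ in range(runs):
    detections = detector.detect(gray)
per_detect = (time.time() - start) / runs
print("%.3f s per detect, %d tag(s) found" % (per_detect, len(detections)))
```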

If that doesn’t work, here is a new board that is compatible with the Pi camera and has more horsepower: https://developer.nvidia.com/embedded/buy/jetson-nano-devkit

Thanks for continuing to blaze the trail for a slower guy like me! Need…more…time…

Kenny, happy to help. I’ve been watching your videos and you are way ahead of me. You have something that actually cuts grass!

We use AprilTags in our lab, but there is a lot I don’t like about them. They can be hard to detect at large angles, especially under variable lighting. In fact, tags half in and half out of shadow tend to go in and out of detection. Also, the range at which they can be detected is very dependent on resolution and angle.

A lot of these problems exist for the other tags too, but I can’t speak to them directly as I haven’t used them. If it were up to me, I would run a powerful desktop and get a baseline comparison of all the tags, then assume linear scaling for the robot system, since a lot of the solutions are tag-specific.

That OpenMV camera can probably handle most any tag that is OpenCV-oriented. Some AprilTag pipelines use OpenCV, FYI.

Thanks for your input. It is nice to hear some actual real-world experience with the AprilTags, rather than just what I read. I may try a desktop approach if nothing is working within a day or two.

I like the idea. Once battery performance improves a little, it will be viable for me, with a 6’ mower, to move away from diesel. @ktrussell you’d be in a similar position. IRLock with an ArduRover precision-landing-type arrangement sounds good.
My mower largely depends on Cartographer SLAM and an RPLidar at the moment. I’m just starting to try a ZED camera as an alternative to the lidar. I’m also experimenting with a Jetson Nano to see if it can cope; if not, a TX2 should manage.
I’d still be interested in working towards nice parking in the shed at the correct orientation. Cheers :+1:

I am going to keep trying with tags for a while. Setting up ROS and OpenCV on Raspbian is taking longer than I anticipated, so I am now trying Ubuntu MATE for the Pi. If that doesn’t work, I will try a Windows 10 NUC machine I have, which is not what I want long-term in the rover.

We use Ubuntu on Intel NUCs, from Celeron up through i7, in our ground robots with good results. We have used the ODROID XU4, BeagleBone Blue, and a Pi 3 as companion computers for our air vehicles.

All my experience is Ubuntu-based for this particular use case.

I was able to successfully read AprilTags with Ubuntu MATE 18.04 on the Pi yesterday.

I am now working on improving performance, which is about one read per second on the Pi 3B with a demo program that pops up graphics windows and other things that won’t be needed when running in the robot.

I know that’s an old thread, but: I am running opencv-python on my Raspberry Pi 3B at 30 fps, i.e. without any graphic output. Even with a cv2.imshow (640x480) over a forwarded X11 connection, it’s more than 10 fps.
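For anyone wanting to reproduce that comparison, a quick headless FPS check might look like this (the camera index and resolution are assumptions):

```python
# Measure raw capture rate with no display; uncommenting the imshow line
# shows how much the GUI path costs.
import time
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

frames, start = 0, time.time()
while frames < 100:
    ok, frame = cap.read()
    if not ok:
        break
    # cv2.imshow('frame', frame); cv2.waitKey(1)  # the expensive part
    frames += 1
print("%.1f fps" % (frames / (time.time() - start)))
cap.release()
```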