Urban Autonomous Driving

It's currently too dark to keep my solar rover alive, so it's in storage until the spring, and I have started building a small rover for driving around town on the pavement. The more I think about it, the more complicated it's getting. Compared to driving a rover around a beach or a park, where you have wide open spaces and all you have to worry about is the occasional person or dog, driving along a pavement next to a road has a lot more variables: GPS can be intermittent next to buildings, and there are kerbs to fall off, walls to hit, cars, pedestrians, mobility scooters, bikes, etc. I think at the very minimum I'm going to need a 360° lidar and some kind of machine-learning camera to keep it in lane and identify objects in front of it.

I'm going to be running this at glacial speeds for safety; I will hard-code the motor controllers to just a few RPM.

Has anyone tried to use ArduRover for anything like this? I know there are delivery robots out there doing something similar.

You must use RTK, but testing at the same place I noted a big difference between points shadowed by trees and points in the open (so it would be worse with buildings). There was also a difference between test days (see http://gnssplanning.com; try for HDOP < 0.4).

Shadowing can be seen in Mission Planner by plotting the number of satellites from the logs.
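A quick way to apply that idea offline: once you have exported (HDOP, satellite count) pairs from the log, flag the fixes that look shadowed. The thresholds here (HDOP 0.4 from the post above, 12 satellites) are assumptions you would tune for your receiver:

```python
# Sketch: flag GPS samples that look shadowed by trees/buildings,
# assuming you have exported (hdop, num_sats) pairs from the log.
# The min_sats=12 threshold is an assumption, not from the post.

def is_shadowed(hdop: float, num_sats: int,
                max_hdop: float = 0.4, min_sats: int = 12) -> bool:
    """Return True if fix quality suggests the sky view is obstructed."""
    return hdop > max_hdop or num_sats < min_sats

samples = [(0.3, 18), (0.9, 7), (0.35, 14), (1.4, 5)]
print([is_shadowed(h, n) for h, n in samples])  # [False, True, False, True]
```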


That had occurred to me; I was going to set up the wheel encoders to take over navigation in GPS-denied environments.
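For reference, taking over from GPS with wheel encoders is standard differential-drive dead reckoning: integrate the left/right distance increments into a pose. A minimal sketch (the 0.4 m track width is a made-up example value):

```python
import math

def update_pose(x, y, theta, d_left, d_right, track_width):
    """One dead-reckoning step for a differential-drive rover.
    d_left/d_right are wheel distance increments in metres (from
    encoder ticks * distance-per-tick); theta is heading in radians."""
    d = (d_left + d_right) / 2.0               # centre distance travelled
    dtheta = (d_right - d_left) / track_width  # heading change
    # Integrate at the midpoint heading for better accuracy
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Drive 1 m straight ahead in ten 10 cm steps
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = update_pose(*pose, 0.1, 0.1, track_width=0.4)
print(pose)  # roughly (1.0, 0.0, 0.0)
```

As the next reply notes, every step adds a little encoder and track-width error, so the pose drifts; it holds up for short GPS outages, not whole laps.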


Yes, that also works, but errors accumulate, and it was very difficult to do more than one lap (~127 m).

Yes, object avoidance is very hard. I think the only way it's going to work is some kind of AI camera like the OAK-D or similar. A 360° lidar on its own will have too many false positives because it can't tell the difference between a ramp and a wall.

I was thinking that it would be best to start with relatively simple tasks like driving along a sidewalk where the edges can probably be determined by something like OpenCV.

That's basically my plan; I am looking at some kind of lane-keeping system with object detection. I have ordered one of these cameras to get started with AI: it's cheap, supports Arduino, has lots of examples, and looks like it could be fast enough for basic tasks.

My first step is going to be something very simple, like changing the waypoint speed based on whether anything is detected by the object detection, so if it sees people or pets it will slow down.
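The decision part of that step is tiny; here is a sketch under assumed names (the class labels, cruise/crawl speeds, and `SLOW_CLASSES` set are all illustrative, not from any particular detector):

```python
# Sketch: pick a target speed from the camera's detections.
# Class names and speed values are assumptions for illustration.

SLOW_CLASSES = {"person", "dog", "cat"}

def target_speed(detections, cruise=0.7, crawl=0.2):
    """Return a crawl speed (m/s) if any detected class warrants
    slowing down, otherwise the normal cruise speed."""
    if any(d in SLOW_CLASSES for d in detections):
        return crawl
    return cruise

print(target_speed(["car", "person"]))  # 0.2
print(target_speed(["car"]))            # 0.7
```

The chosen speed could then be pushed to the autopilot, e.g. with a MAVLink `MAV_CMD_DO_CHANGE_SPEED` command from a companion board.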

The next step will be to get one of the fast lane-detection programs running so it can stay on the pavement.

Avoidance using the camera would be nice to have, especially on a boat, where other systems are unreliable.


I have been driving my rover around town using Andruav and a sim-racing steering wheel in Acro mode to get some experience driving in an urban environment.

The rover is based on four hoverboard motors and has two phones running Andruav, a Wi-Fi camera that can pan and tilt, and a 360° lidar for collision avoidance. The hoverboard controllers are hard-coded to a maximum speed of 100 RPM, which gives it a top speed of 1.5 mph (2.41 km/h). That way, if it goes out of control, it's not going to be moving fast enough to hurt anyone.

So far I have done around 10 miles, and I have narrowed it down to a few scenarios that cover most of the driving. Almost all of it could be done with just an RTK fix if you drive the path first to get accurate waypoints; it could go around the park OK, but it will need something to nudge it around obstructions. I don't have the coding ability or the onboard computing power to do SLAM, so I have been looking at other options using lidar and microcontrollers.

BendyRuler will get it around most obstructions, but there will be a lot of edge cases that I want to make some custom modes for.

Centre track - if it detects objects on either side, I want it to stay in the centre of them, for example travelling along a path with a wall on one side and a car on the other, or crossing a very narrow footbridge.
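Centre track reduces to a simple P-controller on the left/right range difference from the lidar. A sketch, with assumed sign convention (positive output = steer right) and a made-up gain:

```python
def centre_track_steer(left_dist, right_dist, gain=0.5, limit=1.0):
    """P-controller steering in [-limit, +limit] (positive = steer
    right). If there is more room on the left, steer left, pushing
    the rover toward the midpoint of the gap."""
    error = left_dist - right_dist   # metres off-centre
    cmd = -gain * error              # more room on the left -> steer left
    return max(-limit, min(limit, cmd))

print(centre_track_steer(2.0, 1.0))  # -0.5  (steer left, away from the car)
print(centre_track_steer(1.5, 1.5))  # 0.0   (already centred)
```

In practice `left_dist`/`right_dist` would be the minimum ranges in the lidar's left and right sectors rather than single beams.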

Wall follow - if there is a wall on one side, stay a set distance from it. Which side to follow could be worked out from how consistent it is, or by object recognition to see which side has the cars. This should really help in areas with poor GPS, like when it has to go under a bridge or trees.
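Wall follow is the same shape of controller, just holding a distance setpoint to one side instead of centring. Another sketch with assumed gain and setpoint values:

```python
def wall_follow_steer(wall_dist, setpoint=0.8, side="left",
                      gain=0.6, limit=1.0):
    """Hold a fixed distance from a wall on one side.
    Positive output = steer right. Following a left wall: if we are
    closer than the setpoint, steer right, away from the wall."""
    error = setpoint - wall_dist
    cmd = gain * error if side == "left" else -gain * error
    return max(-limit, min(limit, cmd))

# Too close (0.5 m) to a left wall -> steer right (positive)
print(round(wall_follow_steer(0.5), 3))   # 0.18
# Too far (1.2 m) from a right wall -> steer right (positive)
print(round(wall_follow_steer(1.2, side="right"), 3))  # 0.24
```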

Kerb follow - I am looking at detecting vertical deviations from the normal terrain height and marking them as obstructions, using a multizone lidar array like the VL53L5CX. The sensor puts out an 8x8 array of ranges up to 3.5 m, so I was thinking of having one on each side of the vehicle, measuring the terrain height around it, and marking anything more than a few cm above or below the average value as an obstruction.
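The kerb idea can be sketched as a per-zone deviation test on the 8x8 grid: estimate the typical ground range (median is more robust than the mean when a kerb occupies part of the frame), then flag zones that deviate by more than a threshold. The 30 mm threshold is an assumption standing in for "a few cm":

```python
from statistics import median

def kerb_mask(ranges_mm, threshold_mm=30):
    """ranges_mm: flat list of 64 zone ranges from a downward-angled
    8x8 ToF sensor. Returns a list of bools, True where the zone
    deviates from the median ground range (kerb up, or drop-off)."""
    ground = median(ranges_mm)
    return [abs(r - ground) > threshold_mm for r in ranges_mm]

flat = [400] * 64      # flat pavement: all zones ~400 mm away
flat[7] = 250          # something sticking up (shorter range)
flat[56] = 600         # drop-off past a kerb edge (longer range)
mask = kerb_mask(flat)
print(mask[7], mask[56], mask[0])  # True True False
```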

Path following is the hardest part; this is when there is very little vertical difference on either side of the path, so no walls or kerbs, just something like grass on either side. Some kind of vision system is the only way I can think of doing it, but that gets expensive and complicated.
I really like how this works:

When you think about it, it's basically just looking for colours, so I went looking for something that could track colours and found this. It uses an ESP32 to select an RGB value and a range, then masks it and tracks the largest group of that colour. So by keeping a running average of the colour at the centre of the path, I think I can turn it into a path follower.
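The running-average-plus-mask idea, stripped of any camera library, is just two small functions. This is a pure-Python sketch; a real version would do the masking step on whole frames (e.g. with `cv2.inRange`), and the `alpha` and `tol` values here are assumptions:

```python
# Sketch of the colour-follow idea: keep a running average of the RGB
# value sampled just ahead of the rover, then accept pixels within
# +/- tol of that average. alpha=0.1 and tol=20 are assumed values.

def update_reference(ref, sample, alpha=0.1):
    """Exponential running average of the path colour, per channel."""
    return tuple(ref[i] + alpha * (sample[i] - ref[i]) for i in range(3))

def in_range(pixel, ref, tol=20):
    """True if pixel is within +/- tol of the reference on every channel."""
    return all(abs(pixel[i] - ref[i]) <= tol for i in range(3))

ref = (120.0, 110.0, 100.0)            # initial path-colour guess
ref = update_reference(ref, (130, 115, 95))
print(in_range((125, 112, 98), ref))   # True: close to the path colour
print(in_range((30, 200, 40), ref))    # False: grass-coloured pixel
```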

I am currently upgrading my rover to the LD06 lidar; I have a mavlink adapter for it that I'm adding some extra navigation functions to, like wall follow and centre track.

This was hilarious


Looks like you’re creating a bit of a fuss in town. Well done! :rofl:


Some basic testing of colour tracking shows that it might be possible to use it for path following. Once the RGB values are dialled in it can follow the path; the hard part is going to be figuring out a way for it to learn what colour range to use. I will also need to see if I can get it running on something smaller than a computer, as I just looked at the source and it's doing the processing in the browser.

Good news, it runs on a phone!


I have added a basic auto filter: it tracks the RGB value of the probe and applies the filters as that value ±20, so it tracks whatever the dot is on. The new 120 camera with the large lens makes a huge difference.


First test with a YouTube video. I'm holding the camera and manually moving it to follow the blue dot; it looks promising, but it still needs some work.

At the moment I'm just sampling a pixel in front of the rover every frame; I'm going to change that to a rolling average so it's less jittery.
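A fixed-window rolling average is one easy way to do that smoothing; a sketch with an assumed window size of a handful of frames:

```python
from collections import deque

class RollingColour:
    """Rolling average of the last n sampled probe pixels, to smooth
    the per-frame colour before applying the +/-20 filter."""
    def __init__(self, n=15):
        self.window = deque(maxlen=n)  # old samples drop off automatically

    def add(self, rgb):
        """Add one (r, g, b) sample and return the current average."""
        self.window.append(rgb)
        k = len(self.window)
        return tuple(sum(p[i] for p in self.window) / k for i in range(3))

probe = RollingColour(n=3)
probe.add((100, 100, 100))
probe.add((110, 100, 90))
avg = probe.add((120, 100, 80))
print(avg)  # (110.0, 100.0, 90.0)
```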

I need to control the range of the RGB filters. I'm thinking that if the box gets too big, or the tracking box is no longer at the bottom of the screen, it will tighten up the filters to reduce error.