
Autonomous Mars Rover

Udacity Project #1

Aug 2018

The goal of the project was to develop the perception, decision, and control algorithms for an autonomous rover to navigate and map an uncharted canyon. The project was written in Python, using OpenCV for image processing.

I used color thresholding to classify terrain as navigable, obstacle, or rock sample, and a perspective transform to project the camera view into a top-down map of the terrain. On top of that, I implemented a wall-following behavior: the rover tries to stay about 1.5 m from the right wall, stops and turns in place counterclockwise when obstacles block its path, and, if it detects that it is stuck, alternates turning in place and driving forward until it frees itself. When it spots a rock sample nearby, it drives over and picks it up. Sketches of the perception and decision steps follow below.
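As an illustration of the perception step, here is a minimal sketch of the color thresholding and perspective transform using OpenCV and NumPy. The threshold values and calibration points are placeholders, not the tuned values used in the project.

import cv2
import numpy as np

def color_thresh(img, rgb_thresh=(160, 160, 160)):
    """Classify pixels brighter than rgb_thresh as navigable terrain.
    The threshold is illustrative, not the project's tuned value."""
    navigable = ((img[:, :, 0] > rgb_thresh[0]) &
                 (img[:, :, 1] > rgb_thresh[1]) &
                 (img[:, :, 2] > rgb_thresh[2]))
    return navigable.astype(np.uint8)

def rock_thresh(img, low=(110, 110, 5), high=(255, 255, 90)):
    """Rock samples are yellowish: high red/green, low blue (illustrative bounds)."""
    rock = ((img[:, :, 0] > low[0]) & (img[:, :, 0] < high[0]) &
            (img[:, :, 1] > low[1]) & (img[:, :, 1] < high[1]) &
            (img[:, :, 2] > low[2]) & (img[:, :, 2] < high[2]))
    return rock.astype(np.uint8)

def perspect_transform(img, src, dst):
    """Warp the camera view to a top-down view for mapping."""
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))

# Example usage with placeholder calibration points (the real ones come
# from a calibration image of a grid laid out on the ground):
# src = np.float32([[14, 140], [301, 140], [200, 96], [118, 96]])
# dst = np.float32([[155, 154], [165, 154], [165, 144], [155, 144]])
# warped = perspect_transform(camera_img, src, dst)
# navigable_map = color_thresh(warped)
# obstacle_map = 1 - navigable_map   # crude obstacle estimate: everything not navigable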
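And a rough sketch of the wall-following decision step, assuming the perception step has already produced the polar coordinates of navigable pixels in the rover's frame (positive angles to the left). The function name, return format, and constants are illustrative, not the project's actual code.

import numpy as np

def wall_follow_step(nav_angles, nav_dists, wall_offset=1.5, gain=10.0,
                     stop_thresh=50):
    """One decision step for right-wall following.

    nav_angles (radians) and nav_dists (meters) are assumed to be the polar
    coordinates of navigable-terrain pixels in the rover's frame. All names
    and constants here are placeholders, not tuned project values.
    """
    if len(nav_angles) < stop_thresh:
        # Not enough open terrain ahead: stop and spin counterclockwise
        # (positive steer) until the view opens up again.
        return {'throttle': 0.0, 'brake': 10.0, 'steer': 15.0}

    # Estimate distance to the right wall from pixels on the rover's right.
    right = nav_dists[nav_angles < 0]
    wall_dist = np.mean(right) if len(right) else wall_offset

    # Steer toward the mean navigable direction, biased clockwise (negative)
    # when farther than wall_offset from the right wall and counterclockwise
    # when closer. gain is in degrees per meter.
    steer = np.degrees(np.mean(nav_angles)) - gain * (wall_dist - wall_offset)
    return {'throttle': 0.2, 'brake': 0.0,
            'steer': float(np.clip(steer, -15.0, 15.0))}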

Below is a video of the rover autonomously navigating the canyon (sped up 8x). After 12 minutes, it has mapped 81% of the environment with 92% fidelity, located all the rock samples, and collected five of them.

Autonomous Mars Rover: Project

One drawback of the current setup is that it is memoryless: the rover makes decisions based only on the current camera view (aside from the stuck check). For example, if it is following the right wall, spots a rock sample on the opposite wall, drives over to collect it, and then continues along that new wall, it never explores the rest of the canyon until it loops all the way back around. An improvement would be to use the developing map to avoid areas already visited and steer toward new ones, for instance by trying to 'close off' navigable regions with obstacles, or by aiming for map cells that have no terrain classification yet (sketched below). There is also room for improvement in tuning parameters such as the maximum velocity and the proportional gain of the P-controlled steering.
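As a rough sketch of how that map-based improvement might look, the hypothetical helper below scans the accumulated worldmap for nearby cells with no terrain classification and returns a heading toward the closest one. The array layout, names, and radius are assumptions, not part of the project.

import numpy as np

def unexplored_heading(worldmap, x, y, radius=20):
    """Return a world-frame heading (degrees) toward the nearest nearby
    map cell that has no terrain classification yet, or None if everything
    within `radius` cells has been seen. worldmap is assumed to be an
    HxWx3 array of obstacle / rock / navigable counts, indexed [y, x]."""
    unexplored = np.argwhere(worldmap.sum(axis=2) == 0)   # (row, col) pairs
    if len(unexplored) == 0:
        return None
    dy = unexplored[:, 0] - y
    dx = unexplored[:, 1] - x
    dists = np.hypot(dx, dy)
    if not (dists < radius).any():
        return None
    i = np.argmin(np.where(dists < radius, dists, np.inf))
    return float(np.degrees(np.arctan2(dy[i], dx[i])))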
