
Vision for navigation of RoboCup-robot


For the annual robot contest at DTU (RoboCup) in 2004, a mobile robot with an embedded stereo vision system for navigation was built. More information about the RoboCup contest can be found here.

The software handling the vision-based navigation is written in GNU C++ with help from the OpenCV library and runs on an embedded PC under Linux. The software is organised as a number of layers stacked on top of each other. The lowest layer grabs the raw images from the two webcams (stereo). From the top layer, navigation data such as the direction and speed needed to follow a wall or find a gate can be read.
The layers are arranged so that the higher a layer sits, the more intelligent the information it provides. The layer structure is illustrated in the figure to the right.
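As a rough idea of what the lowest layer does, a minimal sketch of grabbing a pair of frames from the two webcams is shown below. It uses the modern OpenCV C++ capture API with illustrative device indices; the original project used the OpenCV library as it existed in 2004, so this is not the project's own code.

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Open the two webcams (device indices 0 and 1 are illustrative).
        cv::VideoCapture leftCam(0), rightCam(1);
        if (!leftCam.isOpened() || !rightCam.isOpened())
            return 1;

        cv::Mat leftFrame, rightFrame;
        // Grab both frames as close together in time as possible,
        // then retrieve them and hand them to the image filter layer.
        while (leftCam.grab() && rightCam.grab())
        {
            leftCam.retrieve(leftFrame);
            rightCam.retrieve(rightFrame);
            // ... pass leftFrame / rightFrame up through the layers ...
        }
        return 0;
    }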

In general, the model consists of a CNavigator object containing two CEye objects, each representing one eye for the photogrammetry measurements. The raw picture from the webcam is first sent through a specially designed image filter, where it is transformed into a simpler picture without unnecessary details. In other words, the picture is converted into an array of hue values (CCartoonImage).
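A small sketch of this object model is given below. Only the class names CNavigator, CEye and CCartoonImage come from the description above; all members, methods and the use of OpenCV's HSV conversion are assumptions made for illustration.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Simplified image: one hue value per pixel instead of full color data.
    struct CCartoonImage
    {
        int width  = 0;
        int height = 0;
        std::vector<unsigned char> hue;   // width * height hue values
    };

    // One "eye": a webcam plus the filter pipeline working on its image.
    class CEye
    {
    public:
        CCartoonImage filterImage(const cv::Mat& raw)
        {
            cv::Mat hsv;
            cv::cvtColor(raw, hsv, cv::COLOR_BGR2HSV);

            CCartoonImage out;
            out.width  = hsv.cols;
            out.height = hsv.rows;
            out.hue.resize(static_cast<size_t>(hsv.cols) * hsv.rows);
            for (int y = 0; y < hsv.rows; ++y)
                for (int x = 0; x < hsv.cols; ++x)
                    out.hue[static_cast<size_t>(y) * hsv.cols + x] =
                        hsv.at<cv::Vec3b>(y, x)[0];   // keep only the hue channel
            return out;
        }
    };

    // The navigator owns the two eyes and combines their results.
    class CNavigator
    {
    public:
        CEye leftEye, rightEye;
    };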

With the picture in this format, it is much easier to extract edges and objects from the image.
The edges are found in the Edgedetector layer and saved as a binary image. From this binary image the edges can be simplified even further by describing them as lines instead of individual dots, so that a white line in the image is represented by a number of parallel edge-lines. This operation takes place in the Edgeline layer.
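The report's own edge detector and edge-line algorithm are not reproduced here, but the sketch below shows one common way to obtain a binary edge image and fit line segments to it, using OpenCV's Canny edge detector and probabilistic Hough transform. The thresholds are placeholders and would need tuning on the actual course.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // One possible realisation of the two layers: Canny gives the binary
    // edge image, HoughLinesP turns the edge pixels into line segments.
    std::vector<cv::Vec4i> findEdgeLines(const cv::Mat& hueImage)
    {
        cv::Mat edges;
        cv::Canny(hueImage, edges, 50, 150);          // binary edge image

        std::vector<cv::Vec4i> lines;                 // (x1, y1, x2, y2) per segment
        cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 40, 30, 5);
        return lines;
    }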

The simplification of the image is improved one final time by assembling the edge-lines into 2D polygons. Each polygon consists of a number of edge-lines encircling the same object in the picture. In this way, a polygon is described by a set of edge-lines and a common color, namely the most frequent color of the object. The layer taking care of this is called the Polygon layer. It also contains a filter that removes all polygons whose colors are of no interest.
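The sketch below illustrates the idea of the Polygon layer using standard OpenCV contour functions rather than the project's own edge-line assembly; the CPolygon structure, the mean-hue shortcut and the fact that the color filter is left out are all assumptions made for illustration.

    #include <opencv2/opencv.hpp>
    #include <vector>

    struct CPolygon
    {
        std::vector<cv::Point> corners;   // corner list of the encircling edge-lines
        unsigned char hue = 0;            // dominant hue of the enclosed object
    };

    // Illustrative stand-in for the Polygon-layer: closed contours in the
    // binary edge image become polygons with a representative hue.
    std::vector<CPolygon> buildPolygons(const cv::Mat& edges, const cv::Mat& hueImage)
    {
        cv::Mat edgesCopy = edges.clone();   // findContours may modify its input
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(edgesCopy, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

        std::vector<CPolygon> polygons;
        for (const auto& c : contours)
        {
            CPolygon p;
            cv::approxPolyDP(c, p.corners, 3.0, true);   // simplify to a few corners

            // Representative hue inside the polygon (mean used as a cheap
            // proxy for "most frequent").
            cv::Mat mask = cv::Mat::zeros(hueImage.size(), CV_8UC1);
            std::vector<std::vector<cv::Point>> one{p.corners};
            cv::fillPoly(mask, one, cv::Scalar(255));
            p.hue = static_cast<unsigned char>(cv::mean(hueImage, mask)[0]);

            polygons.push_back(p);
        }
        return polygons;
    }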

When both "eyes" (CEye-objects) have found a set of polygons for their image, the stereoscopic process can begin. This process is also called photogrametry and is executed in the Photogrametry-layer. Normally automated photogrametry system are very are hard to implement succesfully, since the computer has to pair two pixels in two images. Because of the extreme simplicication of the image information in this system, the pairing process is much easier. The system only has to pair polygon corners, not single pixels. The polygon paring is done by matching position, color, number of edges and nabour polygons. After that 3D coordinates are assigned to each corner of the polygons using basic photogrametry rules.

By calculating the difference between two consecutive 3D polygon sets, data about the robot's movement, such as speed and direction, can be extracted. These calculations take place in the "Movement Detection" layer. The output from this layer can be used for calculating odometry / absolute position. In theory, the speed data could also be used as feedback for the motor controllers.
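The principle can be sketched as follows: for a static scene, matched 3D corners appear to move opposite to the robot, so their mean displacement between two consecutive frames estimates the robot's translation over that interval. The averaging below only illustrates the idea; it is not the project's actual computation.

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>

    // Estimate the robot's translation between two frames from the 3D
    // corners matched between them (index i in both vectors is assumed
    // to be the same physical corner).
    cv::Point3f estimateTranslation(const std::vector<cv::Point3f>& previous,
                                    const std::vector<cv::Point3f>& current)
    {
        size_t n = std::min(previous.size(), current.size());
        if (n == 0)
            return cv::Point3f(0.0f, 0.0f, 0.0f);

        cv::Point3f sum(0.0f, 0.0f, 0.0f);
        for (size_t i = 0; i < n; ++i)
            sum += previous[i] - current[i];   // scene moves opposite to the robot

        float inv = 1.0f / static_cast<float>(n);
        return cv::Point3f(sum.x * inv, sum.y * inv, sum.z * inv);
    }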

Initially the plan was to have the robot generate a complete 3D model of the course and use it for navigation, but that turned out to be too large a task for this project. The top layer is therefore limited to handling specific objects found on the RoboCup course, such as white lines, gates and walls.

The report for the project can be found here (in Danish):
Vision til navigering af RoboCup-robot.pdf

For more information about this project, or a quote for a similar project, please feel free to contact Allan Krogh Jensen.

System overview
Shows all the layers of the system.

The robot
The system mounted on a standard SMR robot from DTU.


Embedded PC

The hardware consists of this embedded PC with a 433 MHz VIA processor and two webcams.