Vision system: Stereo camera.
Event representation:
- 1D - data representing the orientation or movement direction of the robot.
- 2D - a log-polar grid centered on the robot, where each bin stores a distance and an angle. Bins close to the robot cover small cells (about 5 cm), giving a precise measurement of nearby space, while bins in the periphery cover much larger cells (up to about 5 m).
- 3D - a combination of the 2D and 1D representations, i.e. 2D position plus 1D orientation.
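The 2D log-polar binning could be sketched as follows. This is a minimal illustration, not the paper's implementation: the bin counts and the geometric spacing rule are assumptions; only the 5 cm near-field and 5 m far-field resolutions come from the text.

```python
import math

# Assumed grid dimensions (not specified in the source).
N_RADIAL = 10    # number of distance rings
N_ANGULAR = 36   # 10-degree angular sectors
R_MIN, R_MAX = 0.05, 5.0  # 5 cm near the robot, 5 m at the periphery

def radial_bin(distance):
    """Map a distance in metres to a ring index; nearer rings are finer
    because the edges are spaced logarithmically."""
    if distance <= R_MIN:
        return 0
    if distance >= R_MAX:
        return N_RADIAL - 1
    frac = math.log(distance / R_MIN) / math.log(R_MAX / R_MIN)
    return min(int(frac * N_RADIAL), N_RADIAL - 1)

def angular_bin(angle_rad):
    """Map an angle in radians to a sector index."""
    sector = int((angle_rad % (2 * math.pi)) / (2 * math.pi) * N_ANGULAR)
    return min(sector, N_ANGULAR - 1)

def log_polar_bin(distance, angle_rad):
    """Full (ring, sector) address of an event in the 2D representation."""
    return radial_bin(distance), angular_bin(angle_rad)
```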
Obstruction sensors = Laser-Range-Finder, Infra-Red-Transceivers
Object-Detection- and Object-Classification Sensors = Vision
Heading-Direction Sensors = Compass, Gyroscope
Ego Motion = Wheel Integration, Optic Flow
Abstract events = Neighbors, Explored Space
For each sensor a different local grid is used. These grids are stored in containers O, which can be addressed all at once or individually.
Incoming events are separated, and multiple slices of each representation are kept, indexed by time, distance traveled, and angle. When a comparison with an already visited place is required, all these slices are merged into a place signature that fully describes the environment.
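One way the slice bookkeeping could look is sketched below. All names, the slice fields, and the per-cell maximum merge rule are assumptions for illustration; the source only states that slices are kept per time, distance, and angle and merged into a place signature.

```python
from dataclasses import dataclass

@dataclass
class Slice:
    """One snapshot of a representation, tagged with when/where it was taken."""
    timestamp: float
    distance_traveled: float
    heading: float
    cells: dict  # (ring, sector) -> occupancy value

def merge_slices(slices):
    """Merge all slices into a single place signature by taking, per cell,
    the maximum occupancy seen across slices (one plausible merge rule)."""
    signature = {}
    for s in slices:
        for cell, value in s.cells.items():
            signature[cell] = max(signature.get(cell, 0.0), value)
    return signature
```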
The comparison between two containers O1 and O2 is performed by a convolution-like search: O2 is rotated and translated and compared against O1 until the offset between the two containers is minimal. This comparison is performed for all three types of event representation.
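The rotational part of that search can be sketched as below, assuming grids are stored as sparse `(ring, sector) -> value` dicts. The translation search and the actual mismatch metric are omitted/assumed; only the rotate-and-compare-until-minimal idea comes from the text.

```python
def rotate(cells, sector_offset, n_sectors):
    """Rotate a log-polar grid by shifting its angular sectors."""
    return {(r, (s + sector_offset) % n_sectors): v
            for (r, s), v in cells.items()}

def grid_distance(a, b):
    """Sum of absolute per-cell differences between two sparse grids
    (an assumed mismatch measure)."""
    keys = set(a) | set(b)
    return sum(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys)

def best_alignment(o1, o2, n_sectors):
    """Try every angular offset of o2 against o1 and return the offset
    with the smallest mismatch (translation search omitted for brevity)."""
    return min(range(n_sectors),
               key=lambda off: grid_distance(o1, rotate(o2, off, n_sectors)))
```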
Functionality of this approach (a few examples):
- add(single slice)
- add(other O)
- enterNewSensorData(id,data)
- getPlaceSignature()
- setDecay(delta_t,delta_d)
- distance(other O)
- findFreeDrivingAngle()
- findBestPlace()
- getNeighborList()
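A minimal sketch of what the container interface above might look like in code. The method names follow the list; the storage and merge logic inside the bodies are illustrative assumptions, not the authors' implementation.

```python
class Container:
    """Sketch of a container O: holds per-sensor local grid slices and
    a list of neighboring places (abstract events)."""

    def __init__(self):
        self.slices = []     # per-sensor local grid slices
        self.neighbors = []  # neighboring place agents

    def add(self, item):
        """add(single slice) or add(other O): merge another container in."""
        if isinstance(item, Container):
            self.slices.extend(item.slices)
        else:
            self.slices.append(item)

    def enterNewSensorData(self, sensor_id, data):
        """Route raw sensor data into this container as a new slice."""
        self.slices.append((sensor_id, data))

    def getPlaceSignature(self):
        """Merge all stored slices into one comparable snapshot."""
        return tuple(sorted(map(repr, self.slices)))

    def getNeighborList(self):
        return list(self.neighbors)
```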
Local Obstacle Avoidance. The robot permanently monitors its internal representation of the current local environment and, if possible, computes a detour trajectory. Candidate trajectories are found by a breadth-first search over a grid of hypothetical free positions, expanding in increasing steps from the robot's current position.
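The breadth-first detour search can be sketched on a 4-connected grid (the connectivity and grid encoding are assumptions; the source only specifies BFS over hypothetical free positions in increasing steps from the current position):

```python
from collections import deque

def find_detour(start, goal, free):
    """BFS over a set of free grid cells, expanding ring by ring from the
    robot's current cell; returns the first path found or None."""
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == goal:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free and nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # no detour trajectory exists
```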
Path Integration. The robot continually updates a 3-dimensional path-integration vector representing distance, direction and orientation traveled. This vector can be modified by the Place Agents (PA). The robot provides this information to the currently active PA, which might use it to estimate distances to its neighbors.
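The path-integration update is essentially dead reckoning over the 3D state (distance/direction as x, y plus orientation). A minimal sketch, assuming odometry arrives as per-step length and turn increments (the state layout is an assumption):

```python
import math

def integrate_step(state, step_length, turn):
    """Dead-reckoning update of the path-integration vector:
    state = (x, y, heading); apply one odometry step of the given
    length after turning by `turn` radians."""
    x, y, heading = state
    heading = (heading + turn) % (2 * math.pi)
    return (x + step_length * math.cos(heading),
            y + step_length * math.sin(heading),
            heading)
```

The currently active Place Agent could read this state to estimate distances to its neighbors, and write it back to correct accumulated drift.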
Note: It uses artificial visual landmarks.