Abstract
In this paper, we present a new bimodal attention system for robotic applications that processes data from different sensor modalities simultaneously. Considering several sensor modalities is a natural way to capture a variety of object properties; nevertheless, conventional attention systems are restricted to processing camera images. In contrast to these systems, the input data to our system is provided by a bimodal 3D laser scanner mounted on top of an autonomous mobile robot. In a single 3D scan pass, the scanner yields both range and reflectance data. Both data modes are illumination independent, yielding a robust approach that enables all-day operation. Data from both laser modes are fed into our attention system, which builds on the principles of the standard model of visual attention by Koch & Ullman. The system computes conspicuities of both modes in parallel and fuses them into a single saliency map. The focus of attention is directed sequentially to the most salient points in this map. We present results on recorded scans of indoor and outdoor scenes, showing the respective advantages of the sensor modalities and enabling the mode-specific detection of different object properties. Furthermore, as an application of the attention system, we show the recognition of objects for building semantic 3D maps of the robot's environment.

Key words: visual attention, saliency detection, bimodal sensor fusion, 3D laser scanner
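The pipeline described in the abstract (parallel conspicuity maps fused into one saliency map, followed by sequential selection of the focus of attention) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes simple normalized-mean fusion and a greedy winner-take-all selection with a square inhibition-of-return window; all function names and parameters are illustrative.

```python
import numpy as np

def fuse_saliency(conspicuity_range, conspicuity_refl):
    """Fuse two conspicuity maps (e.g. range and reflectance) into one
    saliency map via per-map normalization and averaging."""
    def normalize(m):
        m = m.astype(float)
        span = m.max() - m.min()
        return (m - m.min()) / span if span > 0 else np.zeros_like(m)
    return 0.5 * (normalize(conspicuity_range) + normalize(conspicuity_refl))

def foci_of_attention(saliency, n_foci=3, inhibit_radius=2):
    """Sequentially pick the n most salient points, suppressing a square
    neighborhood around each pick (inhibition of return)."""
    s = saliency.copy()
    foci = []
    for _ in range(n_foci):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        foci.append((y, x))
        # Inhibit the selected region so attention moves on.
        s[max(0, y - inhibit_radius):y + inhibit_radius + 1,
          max(0, x - inhibit_radius):x + inhibit_radius + 1] = -np.inf
    return foci

# Toy example: one peak per modality; the fused map contains both,
# and attention visits them in order of saliency.
c_range = np.zeros((10, 10)); c_range[2, 2] = 5.0
c_refl = np.zeros((10, 10)); c_refl[7, 7] = 5.0
saliency = fuse_saliency(c_range, c_refl)
print(foci_of_attention(saliency, n_foci=2))
```

The mean fusion treats both modalities as equally reliable; a weighted sum would let one mode dominate where it is known to be more discriminative.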