Journal ArticleDOI

A Color Vision-Based Lane Tracking System for Autonomous Driving on Unmarked Roads

01 Jan 2004-Autonomous Robots (Kluwer Academic Publishers)-Vol. 16, Iss: 1, pp 95-116
TL;DR: The complete system was tested on the BABIECA prototype vehicle, which was autonomously driven for hundreds of kilometers accomplishing different navigation missions on a private circuit that emulates an urban quarter.
Abstract: This work describes a color Vision-based System intended to perform stable autonomous driving on unmarked roads. Accordingly, this implies the development of an accurate road surface detection system that ensures vehicle stability. Although this topic has already been documented in the technical literature by different research groups, the vast majority of the already existing Intelligent Transportation Systems are devoted to assisted driving of vehicles on marked extra urban roads and highways. The complete system was tested on the BABIECA prototype vehicle, which was autonomously driven for hundreds of kilometers accomplishing different navigation missions on a private circuit that emulates an urban quarter. During the tests, the navigation system demonstrated its robustness with regard to shadows, road texture, and weather and changing illumination conditions.

Summary (4 min read)

1.1. Motivation for Autonomous Driving Systems

  • The deployment of Autonomous Driving Systems is a challenging topic that has attracted the interest of research institutions all across the world since the mid-eighties.
  • Apart from the obvious safety advantages, such as accident rate reduction and the saving of human lives, there are other benefits that could clearly derive from automatic driving.
  • Thus, on one hand, vehicles keeping a short but reliable safety distance by automatic means make it possible to increase the capacity of roads and highways.
  • Likewise, automatic cooperative driving of vehicle fleets involved in the transportation of heavy loads can lead to notable industrial cost reductions.

1.2. Autonomous Driving on Highways and Extraurban Roads

  • The techniques deployed for lane tracking in this kind of scenario are similar to those developed for road tracking on highways and structured roads, since they face common problems.
  • The group at the Universitat der Bundeswehr, Munich, headed by E. Dickmanns has also developed a remarkable number of works on this topic since the early 80’s.
  • Likewise, another similar system can be found in Lutzeler and Dickmanns (2000) and Gregor et al. (2002), where a real autonomous system for Intelligent Navigation in a network of unmarked roads and intersections is designed and implemented using edge detectors for lane tracking.
  • The complete navigation system was implemented on BABIECA, an electric Citroen Berlingo commercial prototype, as depicted in Fig. 1.
  • Additionally, a live demonstration exhibiting the system capabilities on autonomous driving was also carried out during the IEEE Conference on Intelligent Vehicles 2002, on a private circuit located at Satory (Versailles), France.

2.1. Region of Interest

  • Nonetheless, the use of temporal filtering techniques (as described in the following sections) allows finer-resolution estimates to be obtained.
  • The probability of finding the most relevant road features is assured to be high by making use of a priori knowledge on the road shape, according to the parabolic road model proposed.
  • Thus, in most cases the region of interest is reduced to some portion of image surrounding the road edges estimated in the previous iteration of the algorithm.
  • This is a valid assumption for road tracking applications heavily relying on the detection of lane markers that represent the road edges.
  • This restriction makes it possible to remove nonrelevant elements from the image, such as the sky, trees, buildings, etc.

2.2. Road Features

  • The combined use of color and shape restrictions provides the essential information required to drive on non structured roads.
  • This makes it highly advisable to use a color space in which a clear separation between intensity and color information can be established.
  • Hue represents the impression related to the predominant wavelength in the perceived color stimulus.
  • This could save some computing time by avoiding going through the trigonometry.
  • Although the RGB color space has been successfully used in previous works dealing with road segmentation (Thorpe, 1990; Rodriguez et al., 1998), the HSI color space has exhibited superior performance in image segmentation problems, as demonstrated in Ikonomakis et al. (2000).

2.3. Road Model

  • The use of a road model eases the reconstruction of the road geometry and makes it possible to filter the data computed during the feature-searching process.
  • More concretely, the use of parabolic functions to model the projection of the road edges onto the image plane has been proposed and successfully tested in previous works (Schneiderman and Nashman, 1994).
  • Simplicity: a second-order polynomial model has only three adjustable coefficients.
  • Discontinuities in the road model are only encountered in road intersections and, particularly, in crossroads.
  • The adjustable parameters of the several parabolic functions are continuously updated at each iteration of the algorithm using a well known least squares estimator, as will be described later.

2.4. Road Segmentation

  • Image segmentation must be carried out by exploiting the cylindrical distribution of color features in the HSI color space, bearing in mind that the separation between road and non-road color characteristics is nonlinear.
  • According to this, pixels are divided into chromatic and achromatic, as proposed in Ikonomakis et al. (2000).
  • This turns the segmentation stage into a position-dependent process.
  • Thus, for pixels clearly located outside the road trajectory, the chromatic and luminance distances to the road pattern color features must be very small for those pixels to be segmented as part of the road.
  • Seven a priori road models are utilized for this purpose, as depicted in Fig.
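A minimal sketch of this position-dependent segmentation step in Python. The achromatic thresholds (intensity below 10 or above 90, or saturation below 10, on a 0–100 scale) are those later quoted for this system; the chromatic/luminance distance thresholds and the function names are illustrative assumptions, not the paper's values:

```python
import math

def is_achromatic(s, i):
    """A pixel is treated as achromatic when its intensity or
    saturation carries no reliable hue information (0-100 scale):
    intensity below 10 or above 90, or saturation below 10."""
    return i < 10 or i > 90 or s < 10

def segment_pixel(h, s, i, road, on_trajectory,
                  loose=(20.0, 25.0), tight=(8.0, 10.0)):
    """Classify one pixel as road/non-road against the road color
    pattern `road` = (h_road, s_road, i_road). Pixels outside the
    predicted trajectory must pass a tighter test; the numeric
    thresholds here are illustrative, not the paper's."""
    h_r, s_r, i_r = road
    chroma_th, lum_th = loose if on_trajectory else tight
    if is_achromatic(s, i) or is_achromatic(s_r, i_r):
        return abs(i - i_r) <= lum_th      # compare luminance only
    # chromatic distance in the hue-saturation plane (law of cosines)
    d_chroma = math.sqrt(s**2 + s_r**2 - 2 * s * s_r * math.cos(h - h_r))
    return d_chroma <= chroma_th and abs(i - i_r) <= lum_th
```

The law-of-cosines term is simply the Euclidean distance between the two color vectors projected onto the chromatic (hue-saturation) plane of the HSI cylinder, which respects its cylindrical geometry.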

2.5. Handling Shadows and Brightness

  • Shadows and brightness on the road are admittedly the greatest difficulty in vision based systems operating in outdoor environments (Bertozzi and Broggi, 1998).
  • b ≥ 1/3 and I ≤ Iroad,avg − 2·σroad (15), where b stands for the normalized blue component, Iroad,avg represents the average intensity value of all road pixels, and σroad is the standard deviation of the intensity distribution of road pixels.
  • This technique enhances the road segmentation in the presence of shadows, and contributes remarkably to improving the robustness of the color adaptation process, particularly in stretches of road largely covered by shadows.
  • Analytically, the condition is formulated in Eq. (16).
  • The improvement achieved by attenuating both brightness and shadows, as described, permits real images of complex situations to be handled with extraordinarily high performance, which is an outstanding point of this work.
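Reading the shadow condition of Eq. (15) as "the normalized blue component dominates (b ≥ 1/3) and the intensity lies at least two standard deviations below the average road intensity", a hedged sketch is:

```python
def is_shadowed_road(r, g, b, intensity, i_road_avg, sigma_road):
    """Shadow condition (Eq. (15)): normalized blue component
    b_norm >= 1/3 and intensity at least two standard deviations
    below the average road intensity. The exact form is my
    reconstruction of the condition quoted in the text."""
    total = r + g + b
    if total == 0:
        return False               # degenerate black pixel
    b_norm = b / total
    return b_norm >= 1.0 / 3.0 and intensity <= i_road_avg - 2.0 * sigma_road
```

Shadowed asphalt is illuminated mostly by skylight, which is why its normalized blue component tends to dominate while its intensity drops well below the road statistics.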

2.6. Estimation of Road Edges and Width

  • The estimation of the skeleton line of the road and its edges is carried out based on parabolic functions, as previously described.
  • These polynomial functions are the basis for obtaining the lateral and orientation errors of the vehicle with respect to the center of the lane.
  • On the other hand, ŷl(0), ŷc(0), ŷr(0) are the initial estimations for the left edge, skeleton line, and right edge of the road, respectively, while yli, yri, yci stand for the left edge, right edge, and skeleton line of basic pattern i.

2.6.2. Estimation of the Skeleton Lines of the Road.

  • The skeleton line of the road at the current time instant, ŷc(t), is estimated based on the segmented low-resolution image and the previously estimated road trajectory, ŷc(t − 1).
  • Thus, the estimation is realized in three steps, as described below.
  • The estimation of road edges is realized using the same filtering technique described in the previous section.
  • For each line in the area of interest, the closest measurements are selected around the middle of the left edge validation area, defined by ŷc(t) − Ŵ(t − 1)/2, and of the right edge validation area, defined by ŷc(t) + Ŵ(t − 1)/2.
  • An individual road width measure wi is obtained for each line in the region of interest by computing the difference between the left and right edges (ŷl(t)|x=xi and ŷr(t)|x=xi, respectively), as expressed in Eq. (20).
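The per-line width measurement of Eq. (20) then reduces to evaluating the two fitted edge parabolas at each image row; a small sketch (the coefficient layout (a, b, c) for y = a·x² + b·x + c is an assumption):

```python
def road_width_per_line(left_coeffs, right_coeffs, rows):
    """Eq. (20): for each image row x_i, the width measurement is
    w_i = y_r(x_i) - y_l(x_i), the horizontal distance between the
    fitted right and left edge parabolas y = a*x^2 + b*x + c."""
    def poly(coeffs, x):
        a, b, c = coeffs
        return a * x * x + b * x + c
    return [poly(right_coeffs, x) - poly(left_coeffs, x) for x in rows]
```

The individual measurements wi can then be filtered over rows (and over time, under the slowly varying width assumption) to yield the road width estimate Ŵ(t).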

2.7. Road Color Features Update

  • After completing the road edges and width estimation process, the HSI color features of the road pattern are consequently updated so as to account for changes in road appearance and illumination.
  • Intuitively, pixels close to the skeleton lines of the road present color features that highly represent the road color pattern.
  • Obviously, the selected pixels are only validated if they have been segmented as road pixels at the current iteration.
  • The adaptation process described in this section proves crucial in practice for keeping the performance of the segmentation algorithm stable under changing illumination conditions and varying asphalt color.
  • The complete road tracking scheme is graphically summarised in the flow diagram depicted in Fig. 24.
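The summary does not spell out the exact update law, so as an illustrative assumption the adaptation could be realized as a slow running average of the HSI features of validated pixels near the skeleton line:

```python
def update_road_pattern(pattern, samples, alpha=0.9):
    """Blend the current road color pattern (H, S, I) with the mean
    features of pixels near the skeleton line that were segmented as
    road in the current iteration. `alpha` (an assumed value) weights
    the old pattern so the adaptation stays slow and stable."""
    if not samples:
        return pattern             # nothing validated this frame
    n = len(samples)
    means = tuple(sum(s[k] for s in samples) / n for k in range(3))
    return tuple(alpha * p + (1 - alpha) * m for p, m in zip(pattern, means))
```

Only pixels that were actually segmented as road contribute, which is what prevents shadows or off-road regions from dragging the pattern away from the true asphalt color.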

2.8. Discussion of the Method

  • The global objective of this section is to put the road tracking algorithm under test in varied real circumstances.
  • As appreciated from observation of Fig. 25, the road edges can be neatly distinguished in the segmented image, allowing a clear estimation in real experiments.
  • The results obtained in presence of other vehicles parked on the left hand side of the road are illustrated in Fig. 27(b).
  • All pixels in the image tend to have similar intensity values, and thus, color differences in the HSI chromatic plane become crucial for segmentation purposes.
  • This situation results in the appearance of dark spots on the road due to wet areas, as depicted in Fig. 31.

3. Implementation and Results

  • The complete navigation system described in the previous sections has been implemented on the so-called Babieca prototype vehicle, depicted in Fig. 1, which has been modified to allow automatic velocity and steering control at a maximum speed of 90 km/h, using the nonlinear control law developed in Sotelo (2001).
  • As stated in Section 1, a live demonstration exhibiting the system capabilities on autonomous driving was also carried out during the IEEE Conference on Intelligent Vehicles 2002, on a private circuit located at Satory (Versailles), France.
  • In order to complete the graphical results depicted in the previous sections, and to illustrate the global behavior of the complete navigation system implemented on Babieca, some general results are shown next.
  • During the tests, the reference vehicle velocity is assumed to be kept constant by the velocity controller.
  • To complete these results, a wide set of video files demonstrating the operational performance of the system in real tests can be retrieved from ftp://www.depeca.uah.es/pub/vision.

4. Conclusions

  • The road segmentation algorithm based on the HSI color space and 2D-spatial constraints, as described in this work, has successfully proved to provide correct estimations for the edges and width of non-structured roads, i.e., roads without lane markers.
  • The practical results discussed above also support the validity of the method under different environmental and weather conditions.
  • Even in the presence of long shadows, the system performs well.
  • The most remarkable feature of the road tracking scheme described in this work is its ability to correctly deal with non-structured roads by performing a non-supervised color-based road segmentation process.
  • Nonetheless, a lot of work still remains to be done until a completely robust and reliable autonomous system can be fully deployed in real conditions.




Autonomous Robots 16, 95–116, 2004
© 2004 Kluwer Academic Publishers. Manufactured in The Netherlands.
A Color Vision-Based Lane Tracking System for Autonomous Driving
on Unmarked Roads
MIGUEL ANGEL SOTELO AND FRANCISCO JAVIER RODRIGUEZ
Department of Electronics, University of Alcala, Alcalá de Henares, Madrid, Spain
michael@depeca.uah.es
fjrs@depeca.uah.es
LUIS MAGDALENA
Department of Applied Mathematics, Technical University, Madrid, Spain
llayos@mat.upm.es
LUIS MIGUEL BERGASA AND LUCIANO BOQUETE
Department of Electronics, University of Alcala, Alcalá de Henares, Madrid, Spain
bergasa@depeca.uah.es
luciano@depeca.uah.es
Abstract. This work describes a color Vision-based System intended to perform stable autonomous driving on
unmarked roads. Accordingly, this implies the development of an accurate road surface detection system that en-
sures vehicle stability. Although this topic has already been documented in the technical literature by different
research groups, the vast majority of the already existing Intelligent Transportation Systems are devoted to as-
sisted driving of vehicles on marked extra urban roads and highways. The complete system was tested on the
BABIECA prototype vehicle, which was autonomously driven for hundreds
of kilometers accomplishing different navigation missions on a private
circuit that emulates an urban quarter. During the tests, the navigation sys-
tem demonstrated its robustness with regard to shadows, road texture, and weather and changing illumination
conditions.
Keywords: color vision-based lane tracker, unmarked roads, unsupervised segmentation
1. Introduction
The main issue addressed in this work deals with the
design of a vision-based algorithm for autonomous ve-
hicle driving on unmarked roads.
1.1. Motivation for Autonomous Driving Systems
The deployment of Autonomous Driving Systems is a
challenging topic that has attracted the interest of re-
search institutions all across the world since the mid-
eighties. Apart from the obvious advantages related to
safety increase, such as accident rate reduction and hu-
man life savings, there are other benefits that could
clearly derive from automatic driving. Thus, on one
hand, vehicles keeping a short but reliable safety dis-
tance by automatic means make it possible to increase
the capacity of roads and highways. This inexorably leads to
an optimal use of infrastructures. On the other hand,
a remarkable saving in fuel expenses can be achieved
by automatically controlling vehicles velocity so as to
keep a soft acceleration profile. Likewise, automatic
cooperative driving of vehicle fleets involved in the

96 Sotelo et al.
transportation of heavy loads can lead to notable in-
dustrial cost reductions.
1.2. Autonomous Driving on Highways
and Extraurban Roads
Although the basic goal of this work is concerned with
the development of an Autonomous Driving System
for unmarked roads, the techniques deployed for lane
tracking in this kind of scenarios are similar to those
developed for road tracking in highways and structured
roads, as long as they face common problems. Nonethe-
less, most of the research groups currently working on
this topic focus their endeavors on autonomously navi-
gating vehicles on structured roads, i.e., marked roads.
This reduces the navigation problem to the localization
of lane markers painted on the road surface.
That is the case of some well-known and prestigious sys-
tems such as RALPH (Pomerleau and Jochem, 1996)
(Rapid Adapting Lateral Position Handler), developed
on the Navlab vehicle at the Robotics Institute of the
Carnegie Mellon University, the impressive unmanned
vehicles developed during the last decade by the re-
search groups at the UBM (Dickmanns et al., 1994;
Lutzeler and Dickmanns, 1998) and Daimler-Benz
(Franke et al., 1998), or the GOLD system (Bertozzi
and Broggi, 1998; Broggi et al., 1999) implemented
on the ARGO autonomous vehicle at the Universita di
Parma. All these systems have widely proved their va-
lidity in extensive tests carried out along thousands of
kilometers of autonomous driving on structured high-
ways and extraurban roads. The effectiveness of these re-
sults on structured roads has led to the commercializa-
tion of some of these systems as driving aid products
that provide warning signals upon lane departure. Some
research groups have also undertaken the problem of
autonomous vision based navigation on completely un-
structured roads. Among them are the SCARF and
UNSCARF systems (Thorpe, 1990) designed to ex-
tract the road shape basing on the study of homoge-
neous regions from a color image. The ALVINN (Au-
tonomous Land Vehicle In a Neural Net) (Pomerleau,
1993) system is also able to follow unmarked roads af-
ter a proper training phase on the particular roads where
the vehicle must navigate. The group at the Universi-
tat der Bundeswehr, Munich, headed by E. Dickmanns
has also developed a remarkable number of works on
this topic since the early 80’s. Thus, autonomous guid-
ance of vehicles on either marked or unmarked roads
demonstrated its first results in Dickmanns and Zapp
(1986) and Dickmanns and Mysliwetz (1992) where
nine road and vehicle parameters were recursively es-
timated following the 4D approach on 3D scenes. More
recently, a combination of on- and off-road driving
was achieved in Gregor et al. (2001) using the EMS-
vision (Expectation-based Multifocal Saccadic vision)
system, showing its wide range of maneuvering capa-
bilities as described in Gregor et al. (2001). Likewise,
another similar system can be found in Lutzeler and
Dickmanns (2000) and Gregor et al. (2002), where a
real autonomous system for Intelligent Navigation in
a network of unmarked roads and intersections is de-
signed and implemented using edge detectors for lane
tracking. The vehicle is equipped with a four camera
vision system, and can be considered as the first com-
pletely autonomous vehicle capable to successfully
perform some kind of global mission in an urban-like
environment, also based on the EMS-vision system. On
the other hand, the work developed by the Department
of Electronics at the University of Alcala (UAH) in the
field of Autonomous Vehicle Driving started in 1993
with the design of a vision based algorithm for outdoor
environments (Rodriguez et al., 1998) that was imple-
mented on an industrial fork lift truck autonomously
operated on the campus of the UAH. After that, the de-
velopment of a vision-based system (Sotelo et al., 2001;
De Pedro et al., 2001) for Autonomous Vehicle Driving
on unmarked roads was undertaken until reaching the
results presented in this paper. The complete naviga-
tion system was implemented on BABIECA, an electric
Citroen Berlingo commercial prototype as depicted in
Fig. 1. The vehicle is equipped with a color camera, a
DGPS receiver, two computers, and the necessary elec-
tronic equipment to allow for automatic actuation on
the steering wheel, brake and acceleration pedals. Thus,
complete lateral and longitudinal automatic actuation
is issued during navigation. Real tests were carried out
on a private circuit emulating an urban quarter, com-
posed of streets and intersections (crossroads), located
at the Instituto de Automática Industrial del CSIC in
Madrid, Spain. Additionally, a live demonstration ex-
hibiting the system capabilities on autonomous driving
was also carried out during the IEEE Conference on
Intelligent Vehicles 2002, in a private circuit located at
Satory (Versailles), France.
The work described in this paper is organized in the
following sections: Section 2 describes the color vision
based algorithm for lane tracking. Section 3 provides
some global results, and finally, concluding remarks
are presented in Section 4.

A Color Vision-Based Lane Tracking System 97
Figure 1. Babieca autonomous vehicle.
2. Lane Tracking
As described in the previous section, the main goal of
this work is to robustly track the lane of any kind of
road (structured or not). This includes the tracking of
non structured roads, i.e., roads without lane markers
painted on them.
2.1. Region of Interest
The original 480 ×512 incoming image acquired by
a color camera is in real time re-scaled to a low res-
olution 60 × 64 image, by making use of the system
hardware capabilities. It inevitably leads to a decrement
in pixel resolution that must necessarily be assessed.
Thus, the maximum resolution of direct measurements
is between 4 cm, at a distance of 10 m, and 8 cm at 20 m.
Nonetheless, the use of temporal filtering techniques
(as described in the following sections) allows finer-
resolution estimates to be obtained. As discussed in Bertozzi
et al. (2000) due to the existence of physical and conti-
nuity constraints derived from vehicle motion and road
design, the analysis of the whole image can be replaced
by the analysis of a specific portion of it, namely the
region of interest. In this region, the probability of find-
ing the most relevant road features is assured to be high
by making use of a priori knowledge on the road shape,
according to the parabolic road model proposed. Thus,
in most cases the region of interest is reduced to some
portion of image surrounding the road edges estimated
in the previous iteration of the algorithm. This is a valid
assumption for road tracking applications heavily rely-
ing on the detection of lane markers that represent the
road edges. This is not the case of the work presented
in this paper, as the main goal is to autonomously navi-
gate on completely unstructured roads (including rural
paths, etc). As will be later described, color and shape
features are the key characteristics used to distinguish
the road from the rest of elements in the image. This
leads to a slightly different concept of region of interest
where the complete road must be entirely contained in
the region under analysis.
On the other hand, the use of a narrow focus of at-
tention surrounding the previous road model is strongly
discarded due to the unstable behavior exhibited by the
segmentation process in practice (more detailed justifi-
cation will be given in the next sections). A rectangular
region of interest of 36 ×64 pixels covering the near-
est 20 m ahead of the vehicle is proposed instead, as
shown in Fig. 2. This restriction makes it possible to
remove nonrelevant elements from the image such as the sky, trees,
buildings, etc.
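In pixel terms, the fixed region of interest is a simple crop of the low-resolution frame. A sketch follows (taking the 36-row window from the bottom of the 60 × 64 image is my assumption about the layout):

```python
import numpy as np

def region_of_interest(frame):
    """Keep only the lower 36 rows of the 60x64 low-resolution
    image: the rectangular window covering roughly the nearest
    20 m of road. Upper rows (sky, trees, buildings) are dropped."""
    assert frame.shape[:2] == (60, 64)
    return frame[-36:, :]

roi = region_of_interest(np.zeros((60, 64, 3)))
```

The crop costs nothing at runtime and guarantees that the whole road, not just a narrow band around the previous model, stays inside the analyzed region.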
2.2. Road Features
The combined use of color and shape restrictions pro-
vides the essential information required to drive on
non structured roads. Prior to the segmentation of the

98 Sotelo et al.
Figure 2. Area of interest.
image, a proper selection of the most suitable color
space becomes an outstanding part of the process. On
one hand, the RGB color space has been extensively
tested and used in previous road tracking applications
on non-structured roads (Thorpe, 1990; Crisman and
Thorpe, 1991; Rodriguez et al., 1998). Nevertheless,
the use of the RGB color space has some well known
disadvantages, as mentioned next. It is non-intuitive
and non-uniform in color separation. This means that
two relatively close colors can be very separated in the
RGB color space. RGB components are slightly cor-
related. A color can not be imagined from its RGB
components. On the other hand, in some applications
the RGB color information is transformed into a differ-
ent color space where the luminance and chrominance
components of the color are clearly separated from each
other. This kind of representation benefits from the fact
that the color description model is quite oriented to hu-
man perception of colors. Additionally, in outdoor en-
vironments the change in luminance is very large due
to the unpredictable and uncontrollable weather con-
ditions, while the change in color or chrominance is
not that relevant. This makes it highly advisable to use
a color space where a clear separation between
intensity (luminance) and color (chrominance)
information can be established.
The HSI (Hue, Saturation and Intensity) color space
constitutes a good example of this kind of representa-
tion, as it permits colors to be described in terms that can be
intuitively understood. A human can easily recognize
basic color attributes: intensity (luminance or bright-
ness), hue or color, and saturation (Ikonomakis et al.,
2000). Hue represents the impression related to the pre-
dominant wavelength in the perceived color stimulus.
Saturation corresponds to the color relative purity, and
thus, non saturated colors are gray scale colors. Inten-
sity is the amount of light in a color. The maximum
intensity is perceived as pure white, while the mini-
mum intensity is pure black. Some of the most relevant
advantages related to the use of the HSI color space
are discussed below. It is closely related to human per-
ception of colors, having a high power to discriminate
colors, specially the hue component. The difference
between colors can be directly quantified by using a
distance measure. Transformation from the RGB color
space to the HSI color space can be made by means
of Eqs. (1) and (2), where V1 and V2 are intermediate
variables containing the chrominance information of
the color.
I  = (R + G + B) / 3
V1 = (−R − G + 2·B) / √6
V2 = (R − 2·G + B) / √6                                  (1)

H = arctan(V2 / V1),        S = √(V1² + V2²)             (2)
This transformation describes a geometrical approx-
imation to map the RGB color cube into the HSI color
space, as depicted in Fig. 4. As can be clearly appreci-
ated from observation of Fig. 3, colors are distributed
in a cylindrical manner in the HSI color space. A sim-
ilar way to proceed is currently under consideration
by performing a change in the coordinate frames so as
to align with the I axis, and compute one component
along the I axis and the other in the plane normal to
the I axis. This could save some computing time by
avoiding going through the trigonometry.
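Eqs. (1) and (2) translate directly into code. In the sketch below, the sign conventions for V1 and V2 follow the usual geometric RGB-to-HSI mapping, since they are not fully legible in the extracted equations:

```python
import math

def rgb_to_hsi(r, g, b):
    """Map an RGB triple to (H, S, I) via the intermediate
    chrominance variables V1 and V2 of Eqs. (1) and (2)."""
    i = (r + g + b) / 3.0
    v1 = (-r - g + 2.0 * b) / math.sqrt(6.0)
    v2 = (r - 2.0 * g + b) / math.sqrt(6.0)
    h = math.atan2(v2, v1)        # hue as an angle in the chromatic plane
    s = math.hypot(v1, v2)        # saturation = distance from the I axis
    return h, s, i
```

Note that (V1, V2) already are the two components in the plane normal to the I axis, so the trigonometry-free variant mentioned above amounts to working with (V1, V2) directly instead of computing H and S.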
Figure 3. Mapping from the RGB cube to the HSI color space.

A Color Vision-Based Lane Tracking System 99
Although the RGB color space has been success-
fully used in previous works dealing with road seg-
mentation (Thorpe, 1990; Rodriguez et al., 1998), the
HSI color space has exhibited superior performance
in image segmentation problems as demonstrated in
Ikonomakis et al. (2000). According to this, we pro-
pose the use of color features in the HSI color space as
the basis to perform the segmentation of non-structured
roads. A more detailed discussion supporting the use
of the HSI color space for image segmentation in
outdoor applications is extensively reported in Sotelo
(2001).
2.3. Road Model
The use of a road model eases the reconstruction of
the road geometry and permits to filter the data com-
puted during the features searching process. Among the
different possibilities found in the literature, models re-
lying on clothoids (Dickmanns et al., 1994) and poly-
nomial expressions have extensively exhibited high
performance in the field of road tracking. More con-
cretely, the use of parabolic functions to model the pro-
jection of the road edges onto the image plane has been
proposed and successfully tested in previous works
(Schneiderman and Nashman, 1994). Parabolic mod-
els do not allow inflection points (curvature changing
sign). This could lead to some problems on roads with a
very winding appearance. Nonetheless, the use of parabolic
models has proved to suffice in practice for autonomous
driving on two different test tracks including bended
roads by using an appropriate lookahead distance as
described in Sotelo (2003). On the other hand, some of
the advantages derived from the use of a second order
polynomial model are described below.
Simplicity: a second order polynomial model has
only three adjustable coefficients.
Physical plausibility: in practice, any real stretch of
road can be reasonably approximated by a parabolic
function in the image plane. Discontinuities in the
road model are only encountered in road intersec-
tions and, particularly, in crossroads.
According to this, we’ve adopted the use of second
order polynomial functions for both the edges and the
center of the road (the skeleton lines will serve as a ref-
erence trajectory from which the steering angle com-
mand will be obtained), as depicted in Fig. 5.
The adjustable parameters of the several parabolic
functions are continuously updated at each iteration of
the algorithm using a well known least squares estima-
tor, as will be described later. Likewise, the road width
is estimated based on the estimated road model under
the slowly varying width and flat terrain assumptions.
The joint use of a polynomial road model and the previ-
ously mentioned constraints allows for simple mapping
between the 2D image plane and the 3D real scene us-
ing one single camera.
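The least-squares update of the parabolic coefficients can be sketched as follows (the helper name and the synthetic edge points are illustrative, not the paper's):

```python
import numpy as np

def fit_parabola(rows, cols):
    """Least-squares fit of a second-order polynomial
    y = a*x^2 + b*x + c to the (row, column) feature points
    of one road edge or skeleton line in the image plane."""
    rows = np.asarray(rows, dtype=float)
    A = np.column_stack([rows ** 2, rows, np.ones(len(rows))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(cols, dtype=float), rcond=None)
    return coeffs  # (a, b, c)

# Synthetic edge measurements lying on y = 0.01*x^2 + 0.5*x + 3
xs = np.arange(0, 36, dtype=float)
ys = 0.01 * xs**2 + 0.5 * xs + 3.0
a, b, c = fit_parabola(xs, ys)
```

Rerunning the fit on each frame's validated measurements is what keeps the three coefficients of each parabola continuously updated as the road curves.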
2.4. Road Segmentation
Image segmentation must be carried out by exploit-
ing the cylindrical distribution of color features in the
HSI color space, bearing in mind that the separation
between road and non-road color characteristics is non-
linear. To better understand the most appropriate dis-
tance measure that should be used in the road segmen-
tation problem consider again the decomposition of a
color vector into its three components in the HSI color
space, as illustrated in Fig. 4. According to the previ-
ous decomposition, the comparison between a pattern
pixel denoted by P_p and any given pixel P_i can be directly
measured in terms of intensity and chrominance
distance, as depicted in Fig. 5.
From the analytical point of view, the difference
between two color vectors in the HSI space can be
Figure 4. Road model.
Figure 5. Color comparison in HSI space.

Citations
Book ChapterDOI
01 Nov 2014
TL;DR: A new method to detect pedestrian lanes that have no painted markers in indoor and outdoor scenes, under different illumination conditions is proposed and an improved method for vanishing point estimation is proposed, which employs local dominant orientations of edge pixels.
Abstract: Automatic lane detection is an essential component for autonomous navigation systems. It is a challenging task in unstructured environments where lanes vary significantly in appearance and are not indicated by painted markers. This paper proposes a new method to detect pedestrian lanes that have no painted markers in indoor and outdoor scenes, under different illumination conditions. Our method detects the walking lane using appearance and shape information. To cope with variations in lane surfaces, an appearance model of the lane region is learned on-the-fly. A sample region for learning the appearance model is automatically selected in the input image using the vanishing point. This paper also proposes an improved method for vanishing point estimation, which employs local dominant orientations of edge pixels. The proposed method is evaluated on a new data set of 1600 images collected from various indoor and outdoor scenes that contain unmarked pedestrian lanes with different types and surface patterns. Experimental results and comparisons with other existing methods on the new data set have demonstrated the efficiency and robustness of the proposed method.

17 citations


Cites methods from "A Color Vision-Based Lane Tracking ..."

  • ...employ the hue-saturation-intensity (HSI) color space [18]....


  • ...In the lane segmentation approach, off-line color models are used for classifying the lane pixels from the background [6, 18, 19, 13]....


Journal ArticleDOI
27 Apr 2017-Sensors
TL;DR: The thresholding strategy is proposed, which determines a coarse upper bound of the intensity for shadow which reduces false positives rates and is promising in terms of detection performance and robustness in day time under different weather conditions and cluttered scenarios.
Abstract: Vehicle detection is a fundamental task in Forward Collision Avoiding Systems (FACS). Generally, vision-based vehicle detection methods consist of two stages: hypotheses generation and hypotheses verification. In this paper, we focus on the former, presenting a feature-based method for on-road vehicle detection in urban traffic. Hypotheses for vehicle candidates are generated according to the shadow under the vehicles by comparing pixel properties across the vertical intensity gradients caused by shadows on the road, and followed by intensity thresholding and morphological discrimination. Unlike methods that identify the shadow under a vehicle as a road region with intensity smaller than a coarse lower bound of the intensity for road, the thresholding strategy we propose determines a coarse upper bound of the intensity for shadow which reduces false positives rates. The experimental results are promising in terms of detection performance and robustness in day time under different weather conditions and cluttered scenarios to enable validation for the first stage of a complete FACS.

17 citations


Cites background from "A Color Vision-Based Lane Tracking ..."

  • ...In [37,38] a pixel is considered achromatic if its intensity is below 10 or above 90, or if its normalized saturation is under 10, where the saturation and intensity values are normalized from 0 to 100....

    [...]
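The achromatic rule quoted above translates directly into code; intensity and saturation are assumed pre-normalized to the 0..100 range, as stated in the excerpt:

```python
def is_achromatic(intensity, saturation):
    """A pixel is achromatic (its hue is unreliable) when it is too dark,
    too bright, or too weakly saturated; all values range over 0..100."""
    return intensity < 10 or intensity > 90 or saturation < 10
```

Such pixels are typically excluded from hue-based road/non-road classification, since hue is numerically unstable near the gray axis and at the extremes of brightness.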

Journal ArticleDOI
TL;DR: A robust real-time road surface and semantic lane marker estimation algorithm is presented, combining a deconvolution neural network with extra trees-based decision forests and multiple regression models indexed by scene labels.
Abstract: In this article, we present a robust real-time road surface and semantic lane marker estimation algorithm using a deconvolution neural network and an extra trees-based decision forest. Our proposed algorithm simultaneously performs three environment perception tasks on colour and depth images, even under challenging conditions: road surface estimation, lane marker localization, and lane marker semantic information estimation. The lane marker semantic information refers to the lane marker type, such as dotted or continuous. The task of road surface estimation is performed with a trained deconvolution neural network. For the lane marker localization task, a scene-based extra trees regression framework is used to localize the lane markers in the given road. To account for variations in the number and characteristics of the lane markers in the road scene, multiple regression models indexed with scene labels are used. The pre-defined scene labels correspond to the lane marker variations in a given scene, and an extra trees-based classification model is trained to estimate them from the road features. The road features, given as input to the extra trees frameworks, are extracted from the road image using the trained filters of the deconvolution network. The proposed algorithm is validated using multiple acquired datasets. A comparative analysis is also conducted with baseline algorithms, and improved accuracy is reported. Moreover, a detailed parameter evaluation is performed. We report a computational time of 90 ms per frame.

16 citations

Proceedings ArticleDOI
09 Nov 2010
TL;DR: A new road detection method is proposed that infers the image areas depicting road surfaces without performing any image segmentation, together with a dynamic background subtraction algorithm that removes vehicles from the transferred road region in the observed frames.
Abstract: Road segmentation is an essential functionality for supporting advanced driver assistance systems (ADAS) such as road following and vehicle and pedestrian detection. Significant efforts have been made to solve this task using vision-based techniques. The major challenge is to deal with lighting variations and the presence of objects on the road surface. In this paper, we propose a new road detection method that infers the areas of the image depicting road surfaces without performing any image segmentation. The idea is to first segment, manually or semi-automatically, the road region in a traffic-free reference video recorded on a first drive, and then to transfer these regions, in an on-line manner, to the frames of a second video sequence acquired later during a second drive along the same road. This is possible because we are able to automatically align the two videos in time and space, that is, to synchronize them and warp each frame of the first video to its corresponding frame in the second one. The geometric transform can thus transfer the road region to the present frame on-line. To cope with the varying lighting conditions of outdoor scenarios, our approach incorporates a shadowless feature space that represents each image in an illuminant-invariant way. Furthermore, we propose a dynamic background subtraction algorithm that removes regions containing vehicles from the transferred road region in the observed frames.

15 citations


Cites background from "A Color Vision-Based Lane Tracking ..."

  • ...The performance of these systems is sometimes improved by including constraints such as temporal coherence [5], [6] or road shape restrictions [1] though they cannot solve the problem completely....

    [...]

  • ...Common vision–based algorithms for road detection adopt a bottom– up approach whereby low–level pixel or region properties such as color [1], [2] or texture [3], [4] are extracted and grouped according to some similarity measure....

    [...]

Proceedings ArticleDOI
03 Jun 2009
TL;DR: An off-line color-segmentation algorithm is proposed that exploits the current and successive images to build reliable models of the color appearance of the road and of its environment at each vehicle position; it performs well and remains stable along the sequence regardless of its length.
Abstract: Long-range detection of the road surface in a sequence of images from a front camera aboard a vehicle is a known unsolved problem. We propose a single-camera algorithm based on color segmentation that performs well and remains stable along the sequence regardless of its length. It is an off-line algorithm that exploits the current and successive images to build reliable models of the color appearance of the road and of its environment at each vehicle position. To apply the proposed algorithm, a radiometric calibration step is required to ensure uniform responses of the pixels over the image. The algorithm consists of three steps: image smoothing consistent with the perspective effect on the road, building of the models of the road and non-road colors, and region growing of the road region. The relevance of the proposed algorithm is illustrated by an application to roadway visibility estimation in stereovision, and its performance is illustrated by experiments in difficult situations.
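The third step mentioned in the abstract, region growing of the road region, can be sketched as follows. The 4-connectivity, the intensity-difference similarity measure, and the tolerance value are illustrative assumptions rather than the cited paper's exact choices:

```python
from collections import deque

def grow_road_region(gray, seed, tol=12):
    """Grow a region from a seed pixel assumed to lie on the road:
    4-connected neighbours are absorbed while their intensity stays
    within `tol` of the seed value (breadth-first expansion)."""
    h, w = len(gray), len(gray[0])
    sr, sc = seed
    ref = gray[sr][sc]
    region = {(sr, sc)}
    queue = deque([(sr, sc)])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region \
                    and abs(gray[nr][nc] - ref) <= tol:
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

Starting from a seed known to lie on the road, the region expands until the similarity test rejects the neighbouring pixels, which is how the road/non-road color models bound the grown region.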

14 citations


Additional excerpts

  • ...• search for features more discriminative than the RGB colors such as texture [6], depth in stereovision [7], color space robust to shadows [8], [9], • and model based approaches using the road shape geometry [10], [11], [12], [13]....

    [...]

References
Journal ArticleDOI
TL;DR: The generic obstacle and lane detection system (GOLD), a stereo vision-based hardware and software architecture to be used on moving vehicles to increase road safety, detects both generic obstacles and the lane position in a structured environment at a rate of 10 Hz.
Abstract: This paper describes the generic obstacle and lane detection system (GOLD), a stereo vision-based hardware and software architecture to be used on moving vehicles to increase road safety. Based on full-custom massively parallel hardware, it detects both generic obstacles (without constraints on symmetry or shape) and the lane position in a structured environment (with painted lane markings) at a rate of 10 Hz. Thanks to a geometrical transform supported by a specific hardware module, the perspective effect is removed from both left and right stereo images; the left image is used to detect lane markings with a series of morphological filters, while both remapped stereo images are used to detect the free space in front of the vehicle. The output of the processing is displayed on both an on-board monitor and a control panel to give visual feedback to the driver. The system was tested on the mobile laboratory (MOB-LAB) experimental land vehicle, which was driven for more than 3000 km along extra-urban roads and freeways at speeds up to 80 km/h, and demonstrated its robustness with respect to shadows and changing illumination conditions, different road textures, and vehicle movement.
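For a planar road, the geometrical transform that removes the perspective effect reduces to a 3x3 plane-to-plane homography. A minimal sketch of applying such a matrix to an image point; in a real system the matrix comes from camera calibration, and the one used below is purely illustrative:

```python
def warp_point(H, u, v):
    """Map image point (u, v) through homography H (3x3, list of rows)
    into ground-plane coordinates, dividing by the projective scale w."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w
```

With the identity matrix the point is unchanged; a calibrated matrix instead stretches rows near the horizon, producing the bird's-eye view in which lane markings become parallel and easier to detect with morphological filters.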

1,088 citations


"A Color Vision-Based Lane Tracking ..." refers background or methods in this paper

  • ...Shadows and brightness on the road are admittedly the greatest difficulty in vision based systems operating in outdoor environments (Bertozzi and Broggi, 1998)....

    [...]

  • ...…during the last decade by the research groups at the UBM (Dickmanns et al., 1994; Lutzeler and Dickmanns, 1998) and Daimler-Benz (Franke et al., 1998), or the GOLD system (Bertozzi and Broggi, 1998; Broggi et al., 1999) implemented on the ARGO autonomous vehicle at the Universita di Parma....

    [...]

  • ...Likewise, automatic cooperative driving of vehicle fleets involved in the transportation of heavy loads can lead to notable industrial cost reductions....

    [...]

Journal ArticleDOI
TL;DR: A distributed architecture articulated around the CODGER (communication database with geometric reasoning) knowledge database is described for a mobile robot system that includes both perception and navigation tools.
Abstract: A distributed architecture articulated around the CODGER (communication database with geometric reasoning) knowledge database is described for a mobile robot system that includes both perception and navigation tools. Results are described for vision and navigation tests using a mobile testbed whose perception and navigation capabilities are based on two types of vision algorithms: color vision for road following, and 3-D vision for obstacle detection and avoidance. The perception modules are integrated into a system that allows the vehicle to drive continuously in an actual outdoor environment. The resulting system is able to navigate continuously on roads while avoiding obstacles.

780 citations


"A Color Vision-Based Lane Tracking ..." refers methods in this paper

  • ...Although the RGB color space has been successfully used in previous works dealing with road segmentation (Thorpe, 1990; Rodriguez et al., 1998), the HSI color space has exhibited superior performance in image segmentation problems as demonstrated in Ikonomakis et al. (2000)....

    [...]

  • ...On one hand, the RGB color space has been extensively tested and used in previous road tracking applications on non-structured roads (Thorpe, 1990; Crisman and Thorpe, 1991; Rodriguez et al., 1998)....

    [...]

  • ...Among them are the SCARF and UNSCARF systems (Thorpe, 1990) designed to extract the road shape based on the study of homogeneous regions from a color image....

    [...]

Journal ArticleDOI
TL;DR: The general problem of recognizing both horizontal and vertical road curvature parameters while driving along the road has been solved recursively and a differential geometry representation decoupled for the two curvature components has been selected.
Abstract: The general problem of recognizing both horizontal and vertical road curvature parameters while driving along the road has been solved recursively. A differential geometry representation decoupled for the two curvature components has been selected. Based on the planar solution of E.D. Dickmanns and A. Zapp (1986) and its refinements, a simple spatio-temporal model of the driving process makes it possible to take both spatial and temporal constraints into account effectively. The estimation process determines nine road and vehicle state parameters recursively at 25 Hz (40 ms) using four Intel 80286 and one 386 microprocessors. Results with the test vehicle (VaMoRs), a 5-ton van, are given for a hilly country road.
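Recursive estimation of this kind is typically realized with a Kalman filter. A one-dimensional predict-correct step is sketched below; the scalar random-walk model is purely illustrative and far simpler than the nine-state estimator described in the abstract:

```python
def kalman_step(x, p, z, r, q):
    """One scalar Kalman iteration: x and p are the state estimate and its
    variance, z is the new measurement with noise variance r, and q is
    the process noise added in the prediction step."""
    p = p + q               # predict: uncertainty grows with process noise
    k = p / (p + r)         # Kalman gain balances prior vs. measurement
    x = x + k * (z - x)     # correct the estimate toward the measurement
    p = (1.0 - k) * p       # uncertainty shrinks after the update
    return x, p
```

Iterating this at frame rate (25 Hz in the cited work) is what makes the road and vehicle parameters available continuously despite noisy per-image measurements.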

648 citations


"A Color Vision-Based Lane Tracking ..." refers background in this paper

  • ...Thus, autonomous guidance of vehicles on either marked or unmarked roads demonstrated its first results in Dickmanns and Zapp (1986) and Dickmanns and Mysliwetz (1992) where nine road and vehicle parameters were recursively estimated following the 4D approach on 3D scenes....

    [...]

Book
31 Jul 1993
TL;DR: This book describes a connectionist system called ALVINN (Autonomous Land Vehicle In a Neural Network) that overcomes difficulties and can learn to control an autonomous van in under 5 minutes by watching a person drive.
Abstract: From the Publisher: Vision based mobile robot guidance has proven difficult for classical machine vision methods because of the diversity and real time constraints inherent in the task. This book describes a connectionist system called ALVINN (Autonomous Land Vehicle In a Neural Network) that overcomes these difficulties. ALVINN learns to guide mobile robots using the back-propagation training algorithm. Because of its ability to learn from example, ALVINN can adapt to new situations and therefore cope with the diversity of the autonomous navigation task. But real world problems like vision based mobile robot guidance present a different set of challenges for the connectionist paradigm. Among them are: how to develop a general representation from a limited amount of real training data, how to understand the internal representations developed by artificial neural networks, how to estimate the reliability of individual networks, how to combine multiple networks trained for different situations into a single system, and how to combine connectionist perception with symbolic reasoning. Neural Network Perception for Mobile Robot Guidance presents novel solutions to each of these problems. Using these techniques, the ALVINN system can learn to control an autonomous van in under 5 minutes by watching a person drive. Once trained, individual ALVINN networks can drive in a variety of circumstances, including single-lane paved and unpaved roads, and multi-lane lined and unlined roads, at speeds of up to 55 miles per hour. The techniques also are shown to generalize to the task of controlling the precise foot placement of a walking robot.

508 citations


"A Color Vision-Based Lane Tracking ..." refers background in this paper

  • ...On the other hand, the normalized blue component is generally predominant over the normalized red and green components, as discussed in Pomerleau (1993)....

    [...]
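The normalized components referred to in this excerpt divide each channel by the overall brightness, discounting illumination changes; a minimal sketch:

```python
def normalized_rgb(r, g, b):
    """Chromaticity coordinates: each channel divided by the sum, so the
    three outputs always add up to 1 regardless of brightness."""
    total = r + g + b
    if total == 0:
        return 1 / 3, 1 / 3, 1 / 3  # pure black: undefined, split evenly
    return r / total, g / total, b / total
```

On shadowed asphalt the normalized blue component tends to dominate the normalized red and green, which is the observation attributed to Pomerleau (1993) above.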

  • ...Likewise, automatic cooperative driving of vehicle fleets involved in the transportation of heavy loads can lead to notable industrial cost reductions....

    [...]

  • ...The ALVINN (Autonomous Land Vehicle In a Neural Net) (Pomerleau, 1993) system is also able to follow unmarked roads after a proper training phase on the particular roads where the vehicle must navigate....

    [...]

Journal ArticleDOI
TL;DR: The most common approaches to the challenging task of Autonomous Road Guidance are surveyed, with the most promising experimental solutions and prototypes developed worldwide using AI techniques to perceive the environmental situation by means of artificial vision.

448 citations


"A Color Vision-Based Lane Tracking ..." refers background in this paper

  • ...As discussed in Bertozzi et al. (2000), due to the existence of physical and continuity constraints derived from vehicle motion and road design, the analysis of the whole image can be replaced by the analysis of a specific portion of it, namely the region of interest....

    [...]

  • ...In order to deal with this problem, some authors propose to improve the dynamic range of visual cameras (Bertozzi et al., 2000) so as to tackle strong luminance changes, for instance when entering or exiting tunnels, or to enhance the sensitivity of cameras to the blue component of colors....

    [...]

Frequently Asked Questions (1)
Q1. What have the authors contributed in "A color vision-based lane tracking system for autonomous driving on unmarked roads" ?

This work describes a color Vision-based System intended to perform stable autonomous driving on unmarked roads. Although this topic has already been documented in the technical literature by different research groups, the vast majority of the already existing Intelligent Transportation Systems are devoted to assisted driving of vehicles on marked extra urban roads and highways.