
A Color Vision-Based Lane Tracking System for Autonomous Driving on Unmarked Roads

01 Jan 2004-Autonomous Robots (Kluwer Academic Publishers)-Vol. 16, Iss: 1, pp 95-116
TL;DR: The complete system was tested on the BABIECA prototype vehicle, which was autonomously driven for hundreds of kilometers accomplishing different navigation missions on a private circuit that emulates an urban quarter.
Abstract: This work describes a color Vision-based System intended to perform stable autonomous driving on unmarked roads. Accordingly, this implies the development of an accurate road surface detection system that ensures vehicle stability. Although this topic has already been documented in the technical literature by different research groups, the vast majority of the already existing Intelligent Transportation Systems are devoted to assisted driving of vehicles on marked extra urban roads and highways. The complete system was tested on the BABIECA prototype vehicle, which was autonomously driven for hundreds of kilometers accomplishing different navigation missions on a private circuit that emulates an urban quarter. During the tests, the navigation system demonstrated its robustness with regard to shadows, road texture, and weather and changing illumination conditions.

Summary

1.1. Motivation for Autonomous Driving Systems

  • The deployment of Autonomous Driving Systems is a challenging topic that has focused the interest of research institutions all across the world since the mid eighties.
  • Apart from the obvious advantages related to safety increase, such as accident rate reduction and human life savings, there are other benefits that could clearly derive from automatic driving.
  • Thus, on one hand, vehicles that keep a short but reliable safety distance by automatic means make it possible to increase the capacity of roads and highways.
  • Likewise, automatic cooperative driving of vehicle fleets involved in the transportation of heavy loads can lead to notable industrial cost reductions.

1.2. Autonomous Driving on Highways and Extraurban Roads

  • The techniques deployed for lane tracking in this kind of scenario are similar to those developed for road tracking on highways and structured roads, since they face common problems.
  • The group at the Universitat der Bundeswehr, Munich, headed by E. Dickmanns has also developed a remarkable number of works on this topic since the early 80’s.
  • Likewise, another similar system can be found in Lutzeler and Dickmanns (2000) and Gregor et al. (2002), where a real autonomous system for Intelligent Navigation in a network of unmarked roads and intersections is designed and implemented using edge detectors for lane tracking.
  • The complete navigation system was implemented on BABIECA, an electric Citroen Berlingo commercial prototype, as depicted in Fig. 1.
  • Additionally, a live demonstration exhibiting the system capabilities on autonomous driving was also carried out during the IEEE Conference on Intelligent Vehicles 2002, on a private circuit located at Satory (Versailles), France.

2.1. Region of Interest

  • Nonetheless, the use of temporal filtering techniques (as described in the following sections) makes it possible to obtain finer-resolution estimates.
  • The probability of finding the most relevant road features is assured to be high by making use of a priori knowledge on the road shape, according to the parabolic road model proposed.
  • Thus, in most cases the region of interest is reduced to some portion of image surrounding the road edges estimated in the previous iteration of the algorithm.
  • This is a valid assumption for road tracking applications heavily relying on the detection of lane markers that represent the road edges.
  • This restriction makes it possible to remove nonrelevant elements from the image, such as the sky, trees, buildings, etc.

2.2. Road Features

  • The combined use of color and shape restrictions provides the essential information required to drive on non structured roads.
  • This makes it highly recommendable to use a color space in which a clear separation between intensity and color information can be established.
  • Hue represents the impression related to the predominant wavelength in the perceived color stimulus.
  • This could save some computing time by avoiding going through the trigonometry.
  • Although the RGB color space has been used in previous road segmentation work (Thorpe, 1990; Rodriguez et al., 1998), the HSI color space has exhibited superior performance in image segmentation problems, as demonstrated in Ikonomakis et al. (2000).

2.3. Road Model

  • The use of a road model eases the reconstruction of the road geometry and makes it possible to filter the data computed during the feature-searching process.
  • More concretely, the use of parabolic functions to model the projection of the road edges onto the image plane has been proposed and successfully tested in previous works (Schneiderman and Nashman, 1994).
  • Simplicity: a second-order polynomial model has only three adjustable coefficients.
  • Discontinuities in the road model are only encountered in road intersections and, particularly, in crossroads.
  • The adjustable parameters of the several parabolic functions are continuously updated at each iteration of the algorithm using a well known least squares estimator, as will be described later.

2.4. Road Segmentation

  • Image segmentation must be carried out by exploiting the cylindrical distribution of color features in the HSI color space, bearing in mind that the separation between road and non-road color characteristics is nonlinear.
  • According to this, pixels are divided into chromatic and achromatic as proposed in Ikonomakis et al. (2000).
  • This turns the segmentation stage into a position-dependent process.
  • Thus, for pixels clearly located outside the road trajectory, the chromatic and luminance distances to the road pattern color features must be very small for the pixels to be segmented as part of the road.
  • Seven a priori road models are utilized for this purpose, as depicted in Fig.
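The position-dependent, cylindrical color comparison described in these bullets can be illustrated with a chrominance-plane distance between two HSI colors. This is a minimal sketch: the function name and the use of the law of cosines in the (S, H) polar plane are illustrative assumptions, not the paper's exact distance measure.

```python
import numpy as np

def chromatic_distance(h1, s1, h2, s2):
    """Euclidean distance between two chrominance vectors given in polar
    form (saturation = radius, hue = angle), via the law of cosines.
    Illustrative sketch; the paper's exact distance measure may differ."""
    return np.sqrt(s1 ** 2 + s2 ** 2 - 2.0 * s1 * s2 * np.cos(h1 - h2))
```

For two colors with the same hue the distance reduces to the difference of saturations, while colors with opposite hues are maximally separated for a given saturation, which captures the nonlinear road/non-road boundary in the chromatic plane.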

2.5. Handling Shadows and Brightness

  • Shadows and brightness on the road are admittedly the greatest difficulty in vision based systems operating in outdoor environments (Bertozzi and Broggi, 1998).
  • Eq. (15) states the shadow condition: b ≥ 1/3 and I ≤ Iroad,avg − 2 · σroad, where b stands for the normalized blue component, Iroad,avg represents the average intensity value of all road pixels, and σroad is the standard deviation of the intensity distribution of road pixels.
  • This technique enhances the road segmentation in the presence of shadows, and contributes markedly to improving the robustness of the color adaptation process, particularly in stretches of road largely covered by shadows.
  • Analytically the condition is formulated in Eq. (16).
  • The improvement achieved by attenuating both brightness and shadows, as described, makes it possible to handle real images in complex situations with extraordinarily high performance, and is one of the strong points of this work.
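The shadow condition of Eq. (15) can be sketched as a per-pixel test. Variable names here are illustrative; the paper only gives the condition analytically.

```python
def is_shadowed_road_pixel(r, g, b, i, i_road_avg, sigma_road):
    """Eq. (15) sketch: a pixel is a shadowed-road candidate when its
    normalized blue component dominates (b_norm >= 1/3) and its intensity
    lies well below the average road intensity (i <= avg - 2*sigma)."""
    total = r + g + b
    if total == 0:
        return False
    b_norm = b / total
    return b_norm >= 1.0 / 3.0 and i <= i_road_avg - 2.0 * sigma_road
```

The blue-dominance test exploits the fact that outdoor shadows are lit mainly by sky light, so shadowed asphalt shifts toward blue while its intensity drops sharply.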

2.6. Estimation of Road Edges and Width

  • The estimation of the skeleton lines of the road and its edges is carried out based on parabolic functions, as previously described.
  • These polynomial functions are the basis to obtain the lateral and orientation error of the vehicle with respect to the center of the lane.
  • On the other hand, ŷl(0), ŷc(0), ŷr(0) are the initial estimates for the left edge, skeleton (center) line, and right edge of the road, respectively, while yl,i, yc,i, yr,i stand for the corresponding lines of basic pattern i.

2.6.2. Estimation of the Skeleton Lines of the Road.

  • The skeleton line of the road at the current time instant, ŷc(t), is estimated based on the segmented low-resolution image and the previously estimated road trajectory, ŷc(t − 1).
  • Thus, the estimation is realized in three steps as described below.
  • The estimation of road edges is realized using the same filtering technique described in the previous section.
  • For each line in the area of interest, the closest measurements to the middle of the left edge validation area, defined by ŷc(t) − Ŵ(t − 1)/2, and of the right edge validation area, defined by ŷc(t) + Ŵ(t − 1)/2, are selected.
  • An individual road width measure wi is obtained for each line in the region of interest, by computing the difference between the left and right edges (ŷl(t)|x=xi and ŷr (t)|x=xi , respectively) as expressed in Eq. (20).
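The per-line width measurement of Eq. (20) can be sketched as follows. Combining the measurements with a median is an assumed choice for illustration; the paper filters them with its least-squares estimator under the slowly-varying-width assumption.

```python
import numpy as np

def road_width_measurements(left_edges, right_edges):
    """Per-line width w_i = y_r,i - y_l,i (Eq. (20) sketch), where the
    inputs are the estimated left- and right-edge columns for each image
    row in the region of interest. Also returns a single robust width
    estimate, obtained here with the median (an illustrative choice)."""
    w = np.asarray(right_edges, dtype=float) - np.asarray(left_edges, dtype=float)
    return w, float(np.median(w))
```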

2.7. Road Color Features Update

  • After completing the road edges and width estimation process, the HSI color features of the road pattern are consequently updated so as to account for changes in road appearance and illumination.
  • Intuitively, pixels close to the skeleton line of the road present color features that are highly representative of the road color pattern.
  • Obviously, the selected pixels are only validated if they have been segmented as road pixels at the current iteration.
  • The adaptation process described in this section proves to be crucial in practice to keep the segmentation algorithm under stable performance upon illumination changing conditions and color varying asphalt.
  • The complete road tracking scheme is graphically summarised in the flow diagram depicted in Fig. 24.
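The color-features update can be sketched as a smoothed re-estimation of the road HSI pattern from pixels near the skeleton line that were validated as road. The exponential smoothing and the factor alpha are assumptions made for illustration; the paper describes its own adaptation rule.

```python
def update_road_pattern(pattern, road_samples, alpha=0.1):
    """Blend the current (H, S, I) road pattern with the mean color of
    validated road pixels close to the skeleton line.
    alpha is an assumed smoothing factor, not taken from the paper."""
    h, s, i = (sum(c) / len(c) for c in zip(*road_samples))
    return (
        (1.0 - alpha) * pattern[0] + alpha * h,
        (1.0 - alpha) * pattern[1] + alpha * s,
        (1.0 - alpha) * pattern[2] + alpha * i,
    )
```

A small alpha keeps the pattern stable frame to frame while still tracking gradual illumination and asphalt-color changes.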

2.8. Discussion of the Method

  • The global objective of this section is to put the road tracking algorithm under test in varied real circumstances.
  • As appreciated from observation of Fig. 25, the road edges can be neatly distinguished in the segmented image, allowing a clear estimation in real experiments.
  • The results obtained in presence of other vehicles parked on the left hand side of the road are illustrated in Fig. 27(b).
  • All pixels in the image tend to have similar intensity values, and thus, color differences in the HSI chromatic plane become crucial for segmentation purposes.
  • This situation results in the appearance of dark spots on the road due to wet areas, as depicted in Fig. 31.

3. Implementation and Results

  • The complete navigation system described in the previous sections has been implemented on the so-called Babieca prototype vehicle, depicted in Fig. 1, which has been modified to allow for automatic velocity and steering control at a maximum speed of 90 km/h, using the nonlinear control law developed in Sotelo (2001).
  • As stated in Section 1, a live demonstration exhibiting the system capabilities on autonomous driving was also carried out during the IEEE Conference on Intelligent Vehicles 2002, on a private circuit located at Satory (Versailles), France.
  • In order to complete the graphical results depicted in the previous sections, and to illustrate the global behavior of the complete navigation system implemented on Babieca, some general results are shown next.
  • During the tests, the reference vehicle velocity is assumed to be kept constant by the velocity controller.
  • To complete these results, a wide set of video files demonstrating the operational performance of the system in real tests can be retrieved from ftp://www.depeca.uah.es/pub/vision.

4. Conclusions

  • The road segmentation algorithm based on the HSI color space and 2D-spatial constraints, as described in this work, has successfully proved to provide correct estimations for the edges and width of non-structured roads, i.e., roads without lane markers.
  • The practical results discussed above also support the validity of the method for different environmental and weather conditions.
  • Even in the presence of long shadows, the system performs well.
  • The most remarkable feature of the road tracking scheme described in this work is its ability to correctly deal with non-structured roads by performing a non-supervised color-based road segmentation process.
  • Nonetheless, a lot of work still remains to be done until a completely robust and reliable autonomous system can be fully deployed in real conditions.


Autonomous Robots 16, 95–116, 2004
© 2004 Kluwer Academic Publishers. Manufactured in The Netherlands.
A Color Vision-Based Lane Tracking System for Autonomous Driving
on Unmarked Roads
MIGUEL ANGEL SOTELO AND FRANCISCO JAVIER RODRIGUEZ
Department of Electronics, University of Alcala, Alcalá de Henares, Madrid, Spain
michael@depeca.uah.es
fjrs@depeca.uah.es
LUIS MAGDALENA
Department of Applied Mathematics, Technical University, Madrid, Spain
llayos@mat.upm.es
LUIS MIGUEL BERGASA AND LUCIANO BOQUETE
Department of Electronics, University of Alcala, Alcalá de Henares, Madrid, Spain
bergasa@depeca.uah.es
luciano@depeca.uah.es
Abstract. This work describes a color Vision-based System intended to perform stable autonomous driving on
unmarked roads. Accordingly, this implies the development of an accurate road surface detection system that en-
sures vehicle stability. Although this topic has already been documented in the technical literature by different
research groups, the vast majority of the already existing Intelligent Transportation Systems are devoted to as-
sisted driving of vehicles on marked extra urban roads and highways. The complete system was tested on the
BABIECA prototype vehicle, which was autonomously driven for hundreds of kilometers accomplishing different navigation missions on a private circuit that emulates an urban quarter. During the tests, the navigation system demonstrated its robustness with regard to shadows, road texture, and weather and changing illumination
conditions.
Keywords: color vision-based lane tracker, unmarked roads, unsupervised segmentation
1. Introduction
The main issue addressed in this work deals with the
design of a vision-based algorithm for autonomous ve-
hicle driving on unmarked roads.
1.1. Motivation for Autonomous Driving Systems
The deployment of Autonomous Driving Systems is a
challenging topic that has focused the interest of re-
search institutions all across the world since the mid
eighties. Apart from the obvious advantages related to
safety increase, such as accident rate reduction and hu-
man life savings, there are other benefits that could
clearly derive from automatic driving. Thus, on one hand, vehicles that keep a short but reliable safety distance by automatic means make it possible to increase the capacity of roads and highways. This inexorably leads to an optimal use of infrastructures. On the other hand,
a remarkable saving in fuel expenses can be achieved
by automatically controlling vehicles velocity so as to
keep a soft acceleration profile. Likewise, automatic
cooperative driving of vehicle fleets involved in the

96 Sotelo et al.
transportation of heavy loads can lead to notable in-
dustrial cost reductions.
1.2. Autonomous Driving on Highways
and Extraurban Roads
Although the basic goal of this work is concerned with
the development of an Autonomous Driving System
for unmarked roads, the techniques deployed for lane
tracking in this kind of scenarios are similar to those
developed for road tracking in highways and structured
roads, as long as they face common problems. Nonethe-
less, most of the research groups currently working on
this topic focus their endeavors on autonomously navi-
gating vehicles on structured roads, i.e., marked roads.
This reduces the navigation problem to the localization of lane markers painted on the road surface.
That’s the case of some well known and prestigious sys-
tems such as RALPH (Pomerleau and Jockem, 1996)
(Rapid Adapting Lateral Position Handler), developed
on the Navlab vehicle at the Robotics Institute of the
Carnegie Mellon University, the impressive unmanned
vehicles developed during the last decade by the re-
search groups at the UBM (Dickmanns et al., 1994;
Lutzeler and Dickmanns, 1998) and Daimler-Benz
(Franke et al., 1998), or the GOLD system (Bertozzi
and Broggi, 1998; Broggi et al., 1999) implemented
on the ARGO autonomous vehicle at the Universita di
Parma. All these systems have widely proved their validity in extensive tests carried out along thousands of kilometers of autonomous driving on structured highways and extraurban roads. The effectiveness of these results on structured roads has led to the commercialization of some of these systems as driving-aid products that provide warning signals upon lane departure. Some
research groups have also undertaken the problem of
autonomous vision based navigation on completely un-
structured roads. Among them are the SCARF and
UNSCARF systems (Thorpe, 1990) designed to ex-
tract the road shape basing on the study of homoge-
neous regions from a color image. The ALVINN (Au-
tonomous Land Vehicle In a Neural Net) (Pomerleau,
1993) system is also able to follow unmarked roads af-
ter a proper training phase on the particular roads where
the vehicle must navigate. The group at the Universi-
tat der Bundeswehr, Munich, headed by E. Dickmanns
has also developed a remarkable number of works on
this topic since the early 80’s. Thus, autonomous guid-
ance of vehicles on either marked or unmarked roads
demonstrated its first results in Dickmanns and Zapp
(1986) and Dickmanns and Mysliwetz (1992) where
nine road and vehicle parameters were recursively es-
timated following the 4D approach on 3D scenes. More
recently, a combination of on- and off-road driving
was achieved in Gregor et al. (2001) using the EMS-
vision (Expectation-based Multifocal Saccadic vision)
system, showing its wide range of maneuvering capa-
bilities as described in Gregor et al. (2001). Likewise,
another similar system can be found in Lutzeler and
Dickmanns (2000) and Gregor et al. (2002), where a
real autonomous system for Intelligent Navigation in
a network of unmarked roads and intersections is de-
signed and implemented using edge detectors for lane
tracking. The vehicle is equipped with a four camera
vision system, and can be considered the first completely autonomous vehicle capable of successfully performing some kind of global mission in an urban-like
environment, also based on the EMS-vision system. On
the other hand, the work developed by the Department
of Electronics at the University of Alcala (UAH) in the
field of Autonomous Vehicle Driving started in 1993
with the design of a vision based algorithm for outdoor
environments (Rodriguez et al., 1998) that was imple-
mented on an industrial fork lift truck autonomously
operated on the campus of the UAH. After that, the de-
velopment of a vision-based system (Sotelo et al., 2001;
De Pedro et al., 2001) for Autonomous Vehicle Driving
on unmarked roads was undertaken until reaching the
results presented in this paper. The complete naviga-
tion system was implemented on BABIECA, an electric
Citroen Berlingo commercial prototype as depicted in
Fig. 1. The vehicle is equipped with a color camera, a
DGPS receiver, two computers, and the necessary elec-
tronic equipment to allow for automatic actuation on
the steering wheel, brake and acceleration pedals. Thus,
complete lateral and longitudinal automatic actuation
is issued during navigation. Real tests were carried out
on a private circuit emulating an urban quarter, com-
posed of streets and intersections (crossroads), located
at the Instituto de Automática Industrial del CSIC in
Madrid, Spain. Additionally, a live demonstration ex-
hibiting the system capabilities on autonomous driving
was also carried out during the IEEE Conference on
Intelligent Vehicles 2002, in a private circuit located at
Satory (Versailles), France.
The work described in this paper is organized in the
following sections: Section 2 describes the color vision
based algorithm for lane tracking. Section 3 provides
some global results, and finally, concluding remarks
are presented in Section 4.

A Color Vision-Based Lane Tracking System 97
Figure 1. Babieca autonomous vehicle.
2. Lane Tracking
As described in the previous section, the main goal of
this work is to robustly track the lane of any kind of
road (structured or not). This includes the tracking of
non structured roads, i.e., roads without lane markers
painted on them.
2.1. Region of Interest
The original 480 × 512 incoming image acquired by a color camera is re-scaled in real time to a low-resolution 60 × 64 image, by making use of the system
hardware capabilities. It inevitably leads to a decrement
in pixel resolution that must necessarily be assessed.
Thus, the maximum resolution of direct measurements
is between 4 cm, at a distance of 10 m, and 8 cm at 20 m.
Nonetheless, the use of temporal filtering techniques (as described in the following sections) makes it possible to obtain finer-resolution estimates. As discussed in Bertozzi
et al. (2000) due to the existence of physical and conti-
nuity constraints derived from vehicle motion and road
design, the analysis of the whole image can be replaced
by the analysis of a specific portion of it, namely the
region of interest. In this region, the probability of find-
ing the most relevant road features is assured to be high
by making use of a priori knowledge on the road shape,
according to the parabolic road model proposed. Thus,
in most cases the region of interest is reduced to some
portion of image surrounding the road edges estimated
in the previous iteration of the algorithm. This is a valid
assumption for road tracking applications heavily rely-
ing on the detection of lane markers that represent the
road edges. This is not the case of the work presented
in this paper, as the main goal is to autonomously navi-
gate on completely unstructured roads (including rural
paths, etc). As will be later described, color and shape
features are the key characteristics used to distinguish
the road from the rest of elements in the image. This
leads to a slightly different concept of region of interest
where the complete road must be entirely contained in
the region under analysis.
On the other hand, the use of a narrow focus of at-
tention surrounding the previous road model is strongly
discarded due to the unstable behavior exhibited by the
segmentation process in practice (more detailed justifi-
cation will be given in the next sections). A rectangular
region of interest of 36 ×64 pixels covering the near-
est 20 m ahead of the vehicle is proposed instead, as
shown in Fig. 2. This restriction makes it possible to remove nonrelevant elements from the image, such as the sky, trees,
buildings, etc.
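The rescaling and region-of-interest selection described above can be sketched in software as follows. The paper performs the rescaling with the acquisition hardware; block averaging over a single-channel image is an assumed software equivalent.

```python
import numpy as np

def extract_roi(image, scale=8, roi_rows=36, roi_cols=64):
    """Downscale a 480 x 512 single-channel frame to 60 x 64 by block
    averaging, then keep the bottom 36 x 64 region of interest covering
    the nearest 20 m of road ahead of the vehicle."""
    h, w = image.shape
    # Crop to multiples of the scale factor, then average scale x scale blocks.
    small = image[: h - h % scale, : w - w % scale]
    small = small.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    # Bottom rows of the image correspond to the road nearest the vehicle.
    return small[-roi_rows:, :roi_cols]
```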
2.2. Road Features
The combined use of color and shape restrictions pro-
vides the essential information required to drive on
non structured roads. Prior to the segmentation of the

98 Sotelo et al.
Figure 2. Area of interest.
image, a proper selection of the most suitable color
space becomes an outstanding part of the process. On
one hand, the RGB color space has been extensively
tested and used in previous road tracking applications
on non-structured roads (Thorpe, 1990; Crisman and
Thorpe, 1991; Rodriguez et al., 1998). Nevertheless,
the use of the RGB color space has some well known
disadvantages, as mentioned next. It is non-intuitive
and non-uniform in color separation. This means that
two relatively close colors can be very separated in the
RGB color space. RGB components are highly correlated, and a color cannot easily be imagined from its RGB components. On the other hand, in some applications
the RGB color information is transformed into a differ-
ent color space where the luminance and chrominance
components of the color are clearly separated from each
other. This kind of representation benefits from the fact
that the color description model is quite oriented to hu-
man perception of colors. Additionally, in outdoor en-
vironments the change in luminance is very large due
to the unpredictable and uncontrollable weather con-
ditions, while the change in color or chrominance is
not that relevant. This makes highly recommendable
the use of a color space where a clear separation be-
tween intensity (luminance) and color (chrominance)
information can be established.
The HSI (Hue, Saturation and Intensity) color space
constitutes a good example of this kind of representation, as it describes colors in terms that can be intuitively understood. A human can easily recognize
basic color attributes: intensity (luminance or bright-
ness), hue or color, and saturation (Ikonomakis et al.,
2000). Hue represents the impression related to the pre-
dominant wavelength in the perceived color stimulus.
Saturation corresponds to the color relative purity, and
thus, non saturated colors are gray scale colors. Inten-
sity is the amount of light in a color. The maximum
intensity is perceived as pure white, while the mini-
mum intensity is pure black. Some of the most relevant
advantages related to the use of the HSI color space
are discussed below. It is closely related to human per-
ception of colors, having a high power to discriminate
colors, specially the hue component. The difference
between colors can be directly quantified by using a
distance measure. Transformation from the RGB color
space to the HSI color space can be made by means
of Eqs. (1) and (2), where V1 and V2 are intermediate
variables containing the chrominance information of
the color.
$$
\begin{pmatrix} I \\ V_1 \\ V_2 \end{pmatrix}
=
\begin{pmatrix}
\tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} \\[2pt]
-\tfrac{1}{\sqrt{6}} & -\tfrac{1}{\sqrt{6}} & \tfrac{2}{\sqrt{6}} \\[2pt]
\tfrac{1}{\sqrt{6}} & -\tfrac{2}{\sqrt{6}} & \tfrac{1}{\sqrt{6}}
\end{pmatrix}
\cdot
\begin{pmatrix} R \\ G \\ B \end{pmatrix}
\qquad (1)
$$

$$
H = \arctan\!\left(\frac{V_2}{V_1}\right), \qquad S = \sqrt{V_1^2 + V_2^2} \qquad (2)
$$
This transformation describes a geometrical approx-
imation to map the RGB color cube into the HSI color
space, as depicted in Fig. 4. As can be clearly appreci-
ated from observation of Fig. 3, colors are distributed
in a cylindrical manner in the HSI color space. A sim-
ilar way to proceed is currently under consideration
by performing a change in the coordinate frames so as
to align with the I axis, and compute one component
along the I axis and the other in the plane normal to
the I axis. This could save some computing time by
avoiding going through the trigonometry.
Figure 3. Mapping from the RGB cube to the HSI color space.
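The transformation of Eqs. (1) and (2) can be sketched as follows. The sign conventions chosen for the chrominance axes V1 and V2 follow the common geometric HSI derivation and are an assumption here; they do not affect the structure of the mapping.

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """Geometric RGB-to-HSI mapping (sketch of Eqs. (1) and (2)).
    Inputs are floats in [0, 1]; returns (H, S, I). The signs of the
    chrominance axes V1, V2 are assumed from the standard derivation."""
    i = (r + g + b) / 3.0                     # intensity
    v1 = (-r - g + 2.0 * b) / np.sqrt(6.0)    # chrominance axis 1
    v2 = (r - 2.0 * g + b) / np.sqrt(6.0)     # chrominance axis 2
    h = np.arctan2(v2, v1)                    # hue, arctan(V2/V1)
    s = np.hypot(v1, v2)                      # saturation
    return h, s, i
```

Achromatic (gray) pixels map to S = 0, which is consistent with the chromatic/achromatic split used later for segmentation.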

A Color Vision-Based Lane Tracking System 99
Although the RGB color space has been success-
fully used in previous works dealing with road seg-
mentation (Thorpe, 1990; Rodriguez et al., 1998), the
HSI color space has exhibited superior performance
in image segmentation problems as demonstrated in
Ikonomakis et al. (2000). According to this, we pro-
pose the use of color features in the HSI color space as
the basis to perform the segmentation of non-structured
roads. A more detailed discussion supporting the use
of the HSI color space for image segmentation in
outdoor applications is extensively reported in Sotelo
(2001).
2.3. Road Model
The use of a road model eases the reconstruction of
the road geometry and permits to filter the data com-
puted during the features searching process. Among the
different possibilities found in the literature, models relying on clothoids (Dickmanns et al., 1994) and polynomial expressions have extensively exhibited high
performance in the field of road tracking. More con-
cretely, the use of parabolic functions to model the pro-
jection of the road edges onto the image plane has been
proposed and successfully tested in previous works
(Schneiderman and Nashman, 1994). Parabolic mod-
els do not allow inflection points (curvature changing
sign). This could lead to problems on roads with a very winding appearance. Nonetheless, the use of parabolic
models has proved to suffice in practice for autonomous
driving on two different test tracks including bended
roads by using an appropriate lookahead distance as
described in Sotelo (2003). On the other hand, some of
the advantages derived from the use of a second order
polynomial model are described below.
Simplicity: a second order polynomial model has
only three adjustable coefficients.
Physical plausibility: in practice, any real stretch of
road can be reasonably approximated by a parabolic
function in the image plane. Discontinuities in the
road model are only encountered in road intersec-
tions and, particularly, in crossroads.
According to this, we’ve adopted the use of second
order polynomial functions for both the edges and the
center of the road (the skeleton lines will serve as a ref-
erence trajectory from which the steering angle com-
mand will be obtained), as depicted in Fig. 5.
The adjustable parameters of the several parabolic
functions are continuously updated at each iteration of
the algorithm using a well known least squares estima-
tor, as will be described later. Likewise, the road width
is estimated based on the estimated road model under
the slowly varying width and flat terrain assumptions.
The joint use of a polynomial road model and the previ-
ously mentioned constraints allows for simple mapping
between the 2D image plane and the 3D real scene us-
ing one single camera.
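The least-squares update of the parabolic coefficients can be sketched as follows. This is a minimal batch fit; the paper's iterative estimator and its validation gating of measurements are not reproduced.

```python
import numpy as np

def fit_parabolic_edge(rows, cols):
    """Fit y = a*x^2 + b*x + c to edge measurements, where x is the image
    row and y the measured column of a road edge or skeleton line."""
    x = np.asarray(rows, dtype=float)
    y = np.asarray(cols, dtype=float)
    A = np.column_stack([x ** 2, x, np.ones_like(x)])  # design matrix
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs  # (a, b, c)
```

With three coefficients per curve and one fit per line (left edge, right edge, skeleton), the model stays cheap enough to update at every iteration of the tracker.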
2.4. Road Segmentation
Image segmentation must be carried out by exploit-
ing the cylindrical distribution of color features in the
HSI color space, bearing in mind that the separation
between road and non-road color characteristics is non-
linear. To better understand the most appropriate dis-
tance measure that should be used in the road segmen-
tation problem consider again the decomposition of a
color vector into its three components in the HSI color
space, as illustrated in Fig. 4. According to the previ-
ous decomposition, the comparison between a pattern
pixel, denoted by P_p, and any given pixel, P_i, can be directly measured in terms of intensity and chrominance distance, as depicted in Fig. 5.
From the analytical point of view, the difference
between two color vectors in the HSI space can be
Figure 4. Road model.
Figure 5. Color comparison in HSI space.

Citations
More filters
Journal ArticleDOI
TL;DR: In this article, a shadow-invariant feature space combined with a model-based classifier is used to detect the free road surface ahead of the ego-vehicle.
Abstract: By using an onboard camera, it is possible to detect the free road surface ahead of the ego-vehicle. Road detection is of high relevance for autonomous driving, road departure warning, and supporting driver-assistance systems such as vehicle and pedestrian detection. The key for vision-based road detection is the ability to classify image pixels as belonging or not to the road surface. Identifying road pixels is a major challenge due to the intraclass variability caused by lighting conditions. A particularly difficult scenario appears when the road surface has both shadowed and nonshadowed areas. Accordingly, we propose a novel approach to vision-based road detection that is robust to shadows. The novelty of our approach relies on using a shadow-invariant feature space combined with a model-based classifier. The model is built online to improve the adaptability of the algorithm to the current lighting and the presence of other vehicles in the scene. The proposed algorithm works in still images and does not depend on either road shape or temporal restrictions. Quantitative and qualitative experiments on real-world road sequences with heavy traffic and shadows show that the method is robust to shadows and lighting variations. Moreover, the proposed method provides the highest performance when compared with hue-saturation-intensity (HSI)-based algorithms.

327 citations


Cites background or methods from "A Color Vision-Based Lane Tracking ..."

  • ...Thus, additional constraints must be included to improve their performance, e.g., in [ 13 ], Sotelo et al. use road shape restrictions and ad hoc postprocessing to recover undetected shadowed areas....

    [...]

  • ...For comparison, the image sequence SRD is processed using an HSI-based road-detection algorithm inspired in [7] and [ 13 ]....

    [...]

  • ...Other works in the field [6], [ 13 ] follow similar assumptions....

    [...]

Journal ArticleDOI
TL;DR: This paper proposes a real-time, illumination-invariant lane detection method for lane departure warning systems that works well in various illumination conditions, such as bad weather and at night.
Abstract: The invariant property of lane color under various illuminations is utilized for lane detection. Computational complexity is reduced using vanishing point detection and an adaptive ROI. Datasets for evaluation include various environments captured from several devices. Simulation demos demonstrate fast and powerful performance for real-time applications. Lane detection is an important element in improving driving safety. In this paper, we propose a real-time and illumination-invariant lane detection method for a lane departure warning system. The proposed method works well in various illumination conditions, such as bad weather and at night. It includes three major components: First, we detect a vanishing point based on a voting map and define an adaptive region of interest (ROI) to reduce computational complexity. Second, we utilize the distinct property of lane colors to achieve illumination-invariant lane marker candidate detection. Finally, we find the main lane using a clustering method applied to the lane marker candidates. In the case of a lane departure, our system sends the driver an alarm signal. Experimental results show satisfactory performance, with an average detection rate of 93% under various illumination conditions. Moreover, the overall process takes only 33 ms per frame.
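As a hedged illustration of the illumination-invariant candidate step described above (the local ratio test, window size, and threshold below are assumptions for the sketch, not the authors' actual method):

```python
import numpy as np

def lane_marker_candidates(gray, win=15, k=1.5):
    """Flag pixels noticeably brighter than their local row neighborhood.

    A ratio test against a local mean is far less sensitive to global
    illumination changes than a fixed intensity threshold; `win` (half
    window in pixels) and `k` (brightness ratio) are illustrative values.
    """
    g = gray.astype(np.float64)
    # Pad each row at both ends so edge pixels have a full window.
    pad = np.pad(g, ((0, 0), (win, win)), mode="edge")
    windows = [pad[:, i:i + g.shape[1]] for i in range(2 * win + 1)]
    local_mean = np.mean(windows, axis=0)
    return g > k * local_mean
```

A bright, narrow stripe passes the test regardless of the overall scene brightness, because both sides of the comparison scale together under a global illumination change.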

194 citations

Journal ArticleDOI
TL;DR: A components-based learning approach is proposed in order to better deal with pedestrian variability, illumination conditions, partial occlusions, and rotations and suggest a combination of feature extraction methods as an essential clue for enhanced detection performance.
Abstract: This paper describes a comprehensive combination of feature extraction methods for vision-based pedestrian detection in Intelligent Transportation Systems. The basic components of pedestrians are first located in the image and then combined with a support-vector-machine-based classifier. This poses the problem of pedestrian detection in real cluttered road images. Candidate pedestrians are located using a subtractive clustering attention mechanism based on stereo vision. A components-based learning approach is proposed in order to better deal with pedestrian variability, illumination conditions, partial occlusions, and rotations. Extensive comparisons have been carried out using different feature extraction methods as a key to image understanding in real traffic conditions. A database containing thousands of pedestrian samples extracted from real traffic images has been created for learning purposes at either daytime or nighttime. The results achieved to date show interesting conclusions that suggest a combination of feature extraction methods as an essential clue for enhanced detection performance.

184 citations

Proceedings ArticleDOI
13 Jun 2010
TL;DR: The low-level, contextual, and temporal cues are combined in a Bayesian framework to classify road sequences, and the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods.
Abstract: Vision-based road detection is important in different areas of computer vision such as autonomous driving, car collision warning, and pedestrian crossing detection. However, current vision-based road detection methods are usually based on low-level features, and they assume structured roads, road homogeneity, and uniform lighting conditions. Therefore, in this paper, contextual 3D information is used in addition to low-level cues. Low-level photometric invariant cues are derived from the appearance of roads. Contextual cues used include horizon lines, vanishing points, 3D scene layout, and 3D road stages. Moreover, temporal road cues are included. All these cues are sensitive to different imaging conditions and hence are considered weak cues. Therefore, they are combined to improve the overall performance of the algorithm. To this end, the low-level, contextual, and temporal cues are combined in a Bayesian framework to classify road sequences. Large-scale experiments on road sequences show that the road detection method is robust to varying imaging conditions, road types, and scenarios (tunnels, urban, and highway). Further, using the combined cues outperforms all individual cues. Finally, the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods.
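The Bayesian combination of weak cues described above can be sketched as follows, assuming per-pixel road likelihoods from each cue and naive conditional independence between cues (a simplification for illustration; the paper's exact model is not reproduced here):

```python
import numpy as np

def fuse_cues(likelihood_maps, prior=0.5):
    """Per-pixel posterior P(road | cues) from several weak-cue likelihoods.

    Each map holds P(cue response | road) per pixel; assuming the cues
    are conditionally independent given the class, the posterior is the
    normalized product of the likelihoods times the prior.
    """
    p_road = np.full_like(likelihood_maps[0], prior, dtype=np.float64)
    p_bg = 1.0 - p_road
    for m in likelihood_maps:
        m = np.clip(np.asarray(m, dtype=np.float64), 1e-6, 1.0 - 1e-6)
        p_road = p_road * m          # evidence for road
        p_bg = p_bg * (1.0 - m)      # evidence for background
    return p_road / (p_road + p_bg)
```

Each weak cue shifts the posterior only slightly on its own, but the product of several agreeing cues sharpens it, which is why the combination outperforms any individual cue.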

109 citations


Cites background or methods from "A Color Vision-Based Lane Tracking ..."

  • ...The last two algorithms are photometric–based approaches: the illuminant–invariant algorithm in [18] and the HSI based algorithm proposed in [1] and used in [5]....

  • ...The two most popular color spaces, that have proved to be robust to minor illuminant changes, are HSV [1, 5] and normalized RGB [2]....

  • ...In general, vision–based methods use low-level features for road detection [1, 2, 3, 4]....

  • ...Algorithms based on transformed color spaces (i.e., HSV [5, 1] or rg [2]) are, to a certain degree still sensitive to lighting variations such as strong shadows and highlights....

  • ..., HSV [5, 1] or rg [2]) are, to a certain degree still sensitive to lighting variations such as strong shadows and highlights....

Proceedings ArticleDOI
10 Jun 2018
TL;DR: It is shown that AVR is feasible using off-the-shelf wireless technologies, and it can qualitatively change the decisions made by autonomous vehicle path planning algorithms.
Abstract: Autonomous vehicle prototypes today come with line-of-sight depth perception sensors like 3D cameras. These 3D sensors are used for improving vehicular safety in autonomous driving, but have fundamentally limited visibility due to occlusions, sensing range, and extreme weather and lighting conditions. To improve visibility and performance, not just for autonomous vehicles but for other Advanced Driving Assistance Systems (ADAS), we explore a capability called Augmented Vehicular Reality (AVR). AVR broadens the vehicle's visual horizon by enabling it to wirelessly share visual information with other nearby vehicles, but requires the design of novel relative positioning techniques, new perspective transformation methods, approaches to isolate and predict the motion of dynamic objects in order to hide latency, and adaptive transmission strategies to cope with wireless bandwidth variability. We show that AVR is feasible using off-the-shelf wireless technologies, and it can qualitatively change the decisions made by autonomous vehicle path planning algorithms. Our AVR prototype achieves positioning accuracies that are within a few percent of car lengths and lane widths, and is optimized to process frames at 30fps.

93 citations

References
More filters
Journal ArticleDOI
01 Jun 1987
TL;DR: A distributed architecture built around the CODGER knowledge database integrates color-vision road following with 3-D obstacle detection, allowing the Carnegie Mellon Navlab to drive continuously in an outdoor environment.
Abstract: A distributed architecture articulated around the CODGER (communication database with geometric reasoning) knowledge database is described for a mobile robot system that includes both perception and navigation tools. Results are described for vision and navigation tests using a mobile testbed that integrates perception and navigation capabilities that are based on two types of vision algorithms: color vision for road following, and 3-D vision for obstacle detection and avoidance. The perception modules are integrated into a system that allows the vehicle to drive continuously in an actual outdoor environment. The resulting system is able to navigate continuously on roads while avoiding obstacles.

445 citations


"A Color Vision-Based Lane Tracking ..." refers background or methods in this paper

  • ...On one hand, the RGB color space has been extensively tested and used in previous road tracking applications on non-structured roads (Thorpe, 1990; Crisman and Thorpe, 1991; Rodriguez et al., 1998)....

  • ...Although the RGB color space has been successfully used in previous works dealing with road segmentation (Thorpe, 1990; Rodriguez et al., 1998), the HSI color space has exhibited superior performance in image segmentation problems as demonstrated in Ikonomakis et al. (2000)....

  • ...Likewise, automatic cooperative driving of vehicle fleets involved in the transportation of heavy loads can lead to notable industrial cost reductions....

  • ...Among them are the SCARF and UNSCARF systems (Thorpe, 1990) designed to extract the road shape basing on the study of homogeneous regions from a color image....

Journal ArticleDOI
TL;DR: The authors introduce their intelligent Stop&Go system and discuss appropriate algorithms and approaches for vision-module control.
Abstract: Most computer-vision systems for vehicle guidance are for highway scenarios. Developing autonomous or driver-assistance systems for complex urban traffic poses new algorithmic and system-architecture challenges. To address these issues, the authors introduce their intelligent Stop&Go system and discuss appropriate algorithms and approaches for vision-module control.

432 citations


"A Color Vision-Based Lane Tracking ..." refers background or methods in this paper

  • ...Likewise, automatic cooperative driving of vehicle fleets involved in the transportation of heavy loads can lead to notable industrial cost reductions....

  • ...That’s the case of some well known and prestigious systems such as RALPH (Pomerleau and Jockem, 1996) (Rapid Adapting Lateral Position Handler), developed on the Navlab vehicle at the Robotics Institute of the Carnegie Mellon University, the impressive unmanned vehicles developed during the last decade by the research groups at the UBM (Dickmanns et al., 1994; Lutzeler and Dickmanns, 1998) and Daimler-Benz (Franke et al., 1998), or the GOLD system (Bertozzi and Broggi, 1998; Broggi et al., 1999) implemented on the ARGO autonomous vehicle at the Universita di Parma....

  • ...…vehicles developed during the last decade by the research groups at the UBM (Dickmanns et al., 1994; Lutzeler and Dickmanns, 1998) and Daimler-Benz (Franke et al., 1998), or the GOLD system (Bertozzi and Broggi, 1998; Broggi et al., 1999) implemented on the ARGO autonomous vehicle at the…...

Proceedings ArticleDOI
25 Feb 1987
TL;DR: An efficient method for guiding high-speed land vehicles along roadways by computer vision has been developed and demonstrated with image sequence processing hardware in a real-time simulation loop.
Abstract: An efficient method for guiding high-speed land vehicles along roadways by computer vision has been developed and demonstrated with image sequence processing hardware in a real-time simulation loop. The approach is tailored to a well-structured highway environment with good lane markings. Contour correlation and high-order world models are the basic elements of the method, realised in a special multi-microprocessor (on-board) computer system. Perspective projection and dynamical models (Kalman filter) are used in an integrated approach for the design of the visual feedback control system. By determining road curvature explicitly from the visual input, previously encountered steady-state errors in curves are eliminated. The performance of the system will be demonstrated by a video film. The operation of the image sequence processing system has been tested on a typical Autobahn scene at velocities up to 100 km/h.
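The Kalman-filter element of the approach can be illustrated with a minimal scalar filter smoothing noisy frame-by-frame curvature measurements; the random-walk model and the noise values `q` and `r` below are placeholders for the sketch, not parameters from the paper:

```python
def kalman_1d(measurements, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state x_k = x_{k-1} + w_k.

    q: process noise variance, r: measurement noise variance,
    x0/p0: initial state estimate and its variance (illustrative values).
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: uncertainty grows
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update toward the measurement z
        p = (1.0 - k) * p        # posterior uncertainty shrinks
        estimates.append(x)
    return estimates
```

The filter trades off the model against each new measurement via the gain `k`, which is how recursive estimation eliminates the steady-state errors a memoryless per-frame measurement would retain.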

332 citations


"A Color Vision-Based Lane Tracking ..." refers background in this paper

  • ...Thus, autonomous guidance of vehicles on either marked or unmarked roads demonstrated its first results in Dickmanns and Zapp (1986) and Dickmanns and Mysliwetz (1992) where nine road and vehicle parameters were recursively estimated following the 4D approach on 3D scenes....

Journal ArticleDOI
TL;DR: The Ralph vision system helps automobile drivers steer, by sampling an image, assessing the road curvature, and determining the lateral offset of the vehicle relative to the lane center.
Abstract: The Ralph vision system helps automobile drivers steer by sampling an image, assessing the road curvature, and determining the lateral offset of the vehicle relative to the lane center. Ralph has performed well under extensive tests, including a coast-to-coast, 2,850-mile drive.

324 citations


"A Color Vision-Based Lane Tracking ..." refers background in this paper

  • ...The last statement can be regarded as the slow varying road width assumption, widely used in previous works on road tracking (Kim et al., 1995; Pomerleau and Jockem, 1996)....

  • ...On the other hand, a remarkable saving in fuel expenses can be achieved by automatically controlling vehicles velocity so as to keep a soft acceleration profile....

  • ...That’s the case of some well known and prestigious systems such as RALPH (Pomerleau and Jockem, 1996) (Rapid Adapting Lateral Position Handler), developed on the Navlab vehicle at the Robotics Institute of the Carnegie Mellon University, the impressive unmanned vehicles developed during the last decade by the research groups at the UBM (Dickmanns et al., 1994; Lutzeler and Dickmanns, 1998) and Daimler-Benz (Franke et al., 1998), or the GOLD system (Bertozzi and Broggi, 1998; Broggi et al., 1999) implemented on the ARGO autonomous vehicle at the Universita di Parma....

  • ...That’s the case of some well known and prestigious systems such as RALPH (Pomerleau and Jockem, 1996) (Rapid Adapting Lateral Position Handler), developed on the Navlab vehicle at the Robotics Institute of the Carnegie Mellon University, the impressive unmanned vehicles developed during the last…...

Proceedings ArticleDOI
24 Oct 1994
TL;DR: A passenger car Mercedes 500 SEL has been equipped with vision in the framework of the EUREKA project 'Prometheus III', which allows an internal servo-maintained representation of the entire situation around the vehicle using the 4D approach to dynamic machine vision.
Abstract: A passenger car Mercedes 500 SEL has been equipped with vision in the framework of the EUREKA project 'Prometheus III'. Road and object recognition is performed both in a look-ahead and in a look-back region; this allows an internal servo-maintained representation of the entire situation around the vehicle using the 4D approach to dynamic machine vision. Obstacles are detected and tracked both in the forward and in the backward viewing range up to about 100 meters distance; depending on the computing power available for this purpose, up to 4 or 5 objects may be tracked in parallel in each hemisphere. A fixation-type viewing direction control with the capability of saccadic shifts of viewing direction for attention focussing has been developed. The overall system comprises about 60 transputers of type T-222 and T-800. Besides a PC as transputer host, all other processors in VaMoRs-P are transputers. A description of the parallel processing architecture is given; the system integration follows the well-proven paradigm of orientation towards 4D physical objects and expectations with prediction-error feedback. This allows frequent data-driven bottom-up and model-driven top-down integration steps for efficient and robust object tracking.

244 citations


"A Color Vision-Based Lane Tracking ..." refers background or methods in this paper

  • ...…Institute of the Carnegie Mellon University, the impressive unmanned vehicles developed during the last decade by the research groups at the UBM (Dickmanns et al., 1994; Lutzeler and Dickmanns, 1998) and Daimler-Benz (Franke et al., 1998), or the GOLD system (Bertozzi and Broggi, 1998; Broggi…...

  • ...That’s the case of some well known and prestigious systems such as RALPH (Pomerleau and Jockem, 1996) (Rapid Adapting Lateral Position Handler), developed on the Navlab vehicle at the Robotics Institute of the Carnegie Mellon University, the impressive unmanned vehicles developed during the last decade by the research groups at the UBM (Dickmanns et al., 1994; Lutzeler and Dickmanns, 1998) and Daimler-Benz (Franke et al....

  • ...Among the different possibilities found in the literature, models relaying on clothoids (Dickmanns et al., 1994) and polynomial expressions have extensively exhibited high performance in the field of road tracking....

  • ...That’s the case of some well known and prestigious systems such as RALPH (Pomerleau and Jockem, 1996) (Rapid Adapting Lateral Position Handler), developed on the Navlab vehicle at the Robotics Institute of the Carnegie Mellon University, the impressive unmanned vehicles developed during the last decade by the research groups at the UBM (Dickmanns et al., 1994; Lutzeler and Dickmanns, 1998) and Daimler-Benz (Franke et al., 1998), or the GOLD system (Bertozzi and Broggi, 1998; Broggi et al., 1999) implemented on the ARGO autonomous vehicle at the Universita di Parma....

Frequently Asked Questions (1)
Q1. What have the authors contributed in "A color vision-based lane tracking system for autonomous driving on unmarked roads" ?

This work describes a color Vision-based System intended to perform stable autonomous driving on unmarked roads. Although this topic has already been documented in the technical literature by different research groups, the vast majority of the already existing Intelligent Transportation Systems are devoted to assisted driving of vehicles on marked extra urban roads and highways.