Proceedings article, 20 Nov 2016, pp. 134-140
Research on Vehicle Detection and Tracking
Algorithm Based on the Methods of Frame Difference
and Adaptive Background Subtraction Difference
Yiqin Cao*, Xiao Yun, Tao Zhong and Xiaosheng Huang
School of Software, East China Jiaotong University, Nanchang, 330013, China
*Corresponding author
Abstract—This paper proposes a vehicle detection and tracking algorithm for real-time traffic. For detection, the vehicle running area is first determined through road line detection. The main color information of the moving and non-moving areas is then obtained through frame differencing, and filling the moving area with the main color of the non-moving area yields an approximate background image. Finally, moving vehicles are extracted through adaptive background subtraction difference. For tracking, feature corners are first obtained with Harris detection, and the corner sets of the separate moving areas are collected through cluster analysis. Each corner set generates a feature circle enclosing all its corners, whose radius is used to analyze problems such as vehicle occlusion; matching and tracking are then performed on the circle centers. Experimental results show that the improved algorithm extracts all moving objects with strong background adaptability and good real-time performance.
Keywords-target detection; frame difference method; background subtraction difference method; Harris corner detection; clustering analysis
I. INTRODUCTION
Video-based detection and tracking of moving vehicles has been a hot spot in computer vision research and is an important part of intelligent transportation systems. Its key techniques include vehicle detection, image pre-processing, vehicle tracking and identification [1-2]. At present, the major approaches to detecting moving vehicles are the inter-frame difference method, the background subtraction difference method and the optical flow method [3-4].
The background subtraction difference method [5-6] subtracts a reference background model from the current image sequence to detect moving vehicles. It is fast and simple, but moving targets can only be detected against a clean background image; when test conditions fluctuate, for example under illumination change, detection accuracy suffers. The inter-frame difference method [7-8] differences two or three adjacent images to obtain the moving target area and adapts well to moving targets, but residual detections may appear for slow-moving or stationary targets. The optical flow method [9-10] examines the motion of every pixel in each frame of the image sequence; its heavy computation, poor real-time performance and poor noise robustness hurt detection. Approaches that integrate inter-frame difference and background subtraction difference update the background frame, but they do not update the complete background image, so the traditional method never obtains the actual background, which harms vehicle tracking. A vehicle moving normally can be detected and tracked well by traditional methods, but if it stops suddenly, traditional methods absorb it into the background and lose the target; when the stationary vehicle moves again it is treated as a new target, so such tracking cannot meet the requirements. Wang et al. [10] proposed an algorithm that combines frame difference and optical flow: the inter-frame difference detects the moving object's movement area, the non-zero optical flow in the difference map is calculated, and the optical flow field tracks the moving targets; however, this method requires a large amount of complex calculation. Based on the background subtraction difference and frame difference methods, Li et al. [11] proposed a method for detecting moving targets under a static background that combines the object-monitoring accuracy of background subtraction difference with the illumination adaptability of frame difference; it improves stability, but it is not ideal and may produce errors on complex motion.
This paper proposes a method that obtains the vehicle driving area through road line detection, captures the main color information of the motion and non-motion areas through inter-frame differencing, fills the motion area with the main color of the non-motion area to obtain an approximate background image, and extracts the moving vehicle through background subtraction difference. All feature corners of the moving vehicle are then obtained by Harris detection, the corner sets of the separated motion areas are obtained by cluster analysis, and each corner set generates a feature circle containing all its feature points, which is used to analyze vehicle
2nd International Conference on Artificial Intelligence and Industrial Engineering (AIIE2016)
Copyright © 2016, the Authors. Published by Atlantis Press.
This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
Advances in Intelligent Systems Research, volume 133

occlusion. The centers of the feature circles are then used for feature matching and tracking. This method solves the dynamic tracking and occlusion problems of vehicles in complicated situations and improves the real-time performance and accuracy of vehicle detection and tracking.
II. MOVING VEHICLE DETECTION METHOD BASED ON TECHNIQUES OF INTER-FRAME DIFFERENCE AND BACKGROUND SUBTRACTION DIFFERENCE
According to the actual road condition, target vehicles are determined on areas that differ from the road by analyzing the color information of the road. In this paper, frame difference, background subtraction difference and color information contrast are applied to detect moving target vehicles. First, road lines are detected on the first frame of the video and the vehicle running area is determined from them. Then, the moving and non-moving areas of the picture are obtained with the frame difference method, and the non-moving area is treated as the background area. Because the road information of the moving area shares similar features, an approximate background image can be obtained quickly by filling the basic color information of the non-moving region into the moving region. Finally, background subtraction difference is used to determine the moving vehicle.
A. Determination of Vehicle Running Area
According to the actual road condition, the vehicle running area has the particularity that the main color information between the road lines is basically uniform, with only small nuances. Delimiting the vehicle running area reduces the impact of the surrounding environment and greatly improves detection accuracy. In this method, the main color information of the vehicle running area is used to detect the moving vehicles.
Because light has a certain influence on road information, and trees and obstacles around the road block the light, it is somewhat difficult to extract road information from lines. However, the greatest difference between road markings and other obstacles is that the former are straight lines. Therefore, in this paper road line detection is carried out through the Hough transform [12-13], and the main vehicle running area is determined by connecting the detected lines. First, the image is converted to grayscale and noise in the video information is removed by median filtering. Then, edges are obtained with the Prewitt edge operator. Finally, lines are extracted from the edge map with the Hough algorithm. When filming a road, the road lines generally appear as slanted lines, so vertical and horizontal lines can be removed to improve the accuracy of road detection. The marking lines of the same road may be split into several shorter segments during detection because of occlusion, breakage or detection sensitivity; the divided segments can be restored into one whole line by fitting, which improves the reliability of detection. Finally, the longest road marking lines on both sides are linked in order to determine the vehicle running area.
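The pipeline above (grayscale image, median filter, Prewitt edge operator, Hough transform) can be sketched in plain numpy. This is a minimal sketch, not the paper's implementation: the function names, kernel choices and thresholds are all this sketch's assumptions.

```python
import numpy as np

def median3(img):
    """3x3 median filter for noise removal (border pixels left unchanged)."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

def prewitt_edges(gray, thresh=0.25):
    """Binary edge map from the Prewitt gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)
    ky = kx.T
    mag = np.zeros_like(gray)
    for i in range(1, gray.shape[0] - 1):
        for j in range(1, gray.shape[1] - 1):
            win = gray[i - 1:i + 2, j - 1:j + 2]
            mag[i, j] = np.hypot((win * kx).sum(), (win * ky).sum())
    if mag.max() == 0:
        return np.zeros_like(mag, dtype=bool)
    return mag > thresh * mag.max()

def hough_lines(edges, n_theta=180, top_k=2):
    """Top-k (rho, theta) peaks of a standard Hough accumulator."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # line model: rho = x*cos(theta) + y*sin(theta); vote in every theta bin
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    peaks = acc.ravel().argsort()[::-1][:top_k]
    return [(int(r) - diag, float(thetas[t]))
            for r, t in (np.unravel_index(p, acc.shape) for p in peaks)]
```

Slanted road markings correspond to peaks away from theta = 0 and theta = pi/2, so the vertical/horizontal rejection described above amounts to discarding peaks near those two angles.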
B. Quickly Obtaining Background through Frame Subtraction
The traditional way to obtain the background is multi-frame averaging. In practice, that method provides a fairly realistic background model, but its real-time performance is poor. This paper presents a frame difference method for quickly obtaining the background model that is simple and has good real-time performance.
First, the vehicle running area is determined by the method of Section II.A. Assume that the images of two consecutive frames are I_t(x, y) and I_{t+1}(x, y). The two frames are processed at the same time to obtain the actual trajectory of the vehicle. A difference is then taken between the two images to determine the running area of the moving vehicle:

    CarR_t(x, y) = I_{t+1}(x, y) - I_t(x, y)    (1)
Thresholding the vehicle region gives a binary image:

    BiCarR_t(x, y) = 1, if |CarR_t(x, y)| > CarTh; 0, else    (2)

where CarTh is the threshold value. The binary image obtained by thresholding has region-segmentation defects; a moving vehicle region BiCarR_t^{new}(x, y) with better connectivity is obtained in this paper by a morphological closing operation.
Then, I_t(x, y) is processed according to the vehicle region: pixels in the moving region are set to 0, while the others keep their original values, giving the non-moving part of the road area:

    RrodR_t(x, y) = I_t(x, y), if BiCarR_t^{new}(x, y) = 0; 0, else    (3)
The gray information of the main road area RrodR_t(x, y) in the non-moving region is analyzed: its gray distribution is obtained from the color histogram according to the color information of the road surface, and the main gray value MainGray_t is obtained as a weighted mean. Finally, the main gray value is filled into the moving area to complete the background acquisition:

    Bg_t(x, y) = MainGray_t, if BiCarR_t^{new}(x, y) = 1; I_t(x, y), else    (4)
Compared with the traditional multi-frame averaging method, the frame difference method applied in this paper does not require prior knowledge and has stronger real-time performance. Compared with the Gaussian model, it is simpler and more efficient.
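A minimal numpy sketch of the quick background construction of formulas (1)-(4), assuming grayscale frames normalized to [0, 1]. The simple mean used for MainGray (in place of the paper's histogram-weighted mean) and the omission of the morphological closing step are simplifications of this sketch.

```python
import numpy as np

def quick_background(I_t, I_t1, car_th=0.156):
    """Approximate background from two consecutive grayscale frames.

    I_t, I_t1: frames t and t+1 in [0, 1]; car_th is the frame-difference
    threshold CarTh (0.156 is the value from the experimental section).
    Returns (background, moving mask, main gray value).
    """
    car_r = I_t1 - I_t                       # (1) frame difference
    bi_car = np.abs(car_r) > car_th          # (2) binary moving-region mask
    # (3) non-moving road region keeps its gray values, moving region -> 0
    rrod = np.where(bi_car, 0.0, I_t)
    # main gray of the non-moving area (paper: weighted mean of histogram)
    nonmoving = I_t[~bi_car]
    main_gray = nonmoving.mean() if nonmoving.size else 0.0
    # (4) fill the moving region with the main gray value
    bg = np.where(bi_car, main_gray, I_t)
    return bg, bi_car, main_gray
```

Note that the region where the vehicle overlaps itself in both frames is not flagged by the difference, so it survives in the approximate background; this is exactly why the adaptive update of Section II.C is needed.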
C. Self-Adaptive Background Subtraction Difference
The background subtraction difference method [3-4] first builds a Gaussian background model as the background image, then computes the absolute difference in pixel brightness between the current video frame and the known background image, obtaining the pixels of the current frame that differ from the background. The frame is thereby divided into a moving area and a background area: in the difference image, low gray values correspond to the background of the current frame and high gray values correspond to the moving part. The two areas are distinguished by a threshold value, which also determines the sensitivity of target detection, so selecting an appropriate threshold has a large impact on the result.
Background subtraction difference captures moving vehicle information well but is not robust to illumination. Adaptive background subtraction difference solves this problem: in this paper, the background is updated adaptively based on parameter information obtained from the frame difference, so the background is robust to illumination and the target vehicle is detected better.
Light affects the ground information, hence the road area of formula (3), and finally the MainGray value. Therefore it suffices to observe changes in MainGray to judge whether the background needs to be renewed: if the gray-level change exceeds a certain threshold, the background is rebuilt according to formula (4), which solves the illumination problem.
Assume the road region of frame t + 1 has been obtained according to formula (3) and its main gray value MainGray_{t+1} has been obtained from the color histogram. The background adaptive updating rule is:

    Bg_{t+1} = Bg_{t+1}^{new}, if |MainGray_{t+1} - MainGray_{main}| > MGTh; Bg_{main}, otherwise    (5)
Here, Bg_{main} and MainGray_{main} are the background used for the background subtraction difference in the preceding frames and its main gray value, and MGTh is the threshold of gray-level change. As long as the gray-level change stays within the threshold, the background is not updated; otherwise, the new background Bg_{t+1}^{new} is regained with the method of Section II.B, and Bg_{main} = Bg_{t+1}^{new} and MainGray_{main} = MainGray_{t+1} are set. The background adaptive updating of the next frame is then performed with formula (5).
Let the obtained background be Bg_t(x, y). The background subtraction difference image BF_t(x, y) is obtained by subtracting the background image from the current frame:

    BF_t(x, y) = I_t(x, y), if |I_t(x, y) - Bg_t(x, y)| > BFTh; 0, otherwise    (6)

Here, BFTh is the threshold of the moving area, and BF_t(x, y) is the moving area of the foreground.
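The adaptive update of formula (5) and the subtraction of formula (6) can be sketched as follows. The default thresholds are taken from the experimental section (gray threshold 0.03, background subtraction threshold 0.15); the function names are this sketch's own.

```python
import numpy as np

def update_background(bg_main, mg_main, bg_new, mg_new, mg_th=0.03):
    """Adaptive background update, formula (5): rebuild the background only
    when the main gray value has drifted more than mg_th (e.g. a lighting
    change); otherwise keep the stored background. Returns (bg, main_gray)."""
    if abs(mg_new - mg_main) > mg_th:
        return bg_new, mg_new      # illumination changed: adopt new background
    return bg_main, mg_main        # keep the existing background

def foreground(I_t, bg_t, bf_th=0.15):
    """Background subtraction, formula (6): keep the pixels of the current
    frame that differ from the background by more than bf_th."""
    moving = np.abs(I_t - bg_t) > bf_th
    return np.where(moving, I_t, 0.0)
```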
III. A FAST MATCHING METHOD
A feature-based tracking algorithm must overcome point-to-point matching and the occlusion of extracted regions during motion. This paper presents a fast matching method that addresses these two issues. The first is the real-time issue. Owing to the particularity of vehicle appearance, a vehicle is approximately rectangular or rhombic in a video sequence. After Harris corner detection [14], the center of gravity of the vehicle can be obtained by analyzing the detected corner set. Because the position of the vehicle changes little between frames, tracking can be achieved by finding the nearest gravity point of the vehicle in the next frame, so corner tracking is converted into gravity-point tracking. The second is the occlusion issue. This paper introduces the minimum circle centered at the gravity center that contains the feature corner set, called the feature circle. As a target vehicle moves, its distance to the camera and hence its shape change, so the radius of its feature circle is updated adaptively: when the vehicle stops, the radius stays the same; when the vehicle moves away from the camera, the radius shrinks, and vice versa. By calculation, the radius of the next feature circle can be predicted, called the forecast radius. Comparing the forecast radius with the true radius makes it easy to recognize whether the vehicle is occluded: when occlusion happens, the original two motion areas merge into one and the real radius suddenly becomes larger. At that moment, the center of gravity is predicted through particle filtering [15] and the adaptive forecast radius is maintained to keep tracking the vehicle.
A. Matching Method Based on the Center of Gravity
Because corners carry a large amount of vehicle information, and corner information changes little across two consecutive frames when detection is accurate, vehicle tracking can be achieved by corner matching. Corner matching methods fall into two branches [16]. One is corresponding-point matching: the corner sets of two consecutive images are obtained, and the similarity of corner points is used to determine the correspondence between each corner of the two sets; when a sufficient number of matches is satisfied, the matching relationship of the two point sets is determined. The other is corner-set matching, which does not establish corresponding corners but only computes the distance between the corner sets, i.e., matches by the degree of similarity of the sets. Because corner-set matching is relatively sensitive to rotation and scale changes of the object, this paper adopts the corresponding-point matching algorithm. Although corresponding-point matching has relatively high accuracy for simply shaped objects, a vehicle's shape is more complex than ordinary objects and its motion is unpredictable: when a vehicle turns or is occluded, the detected corner set may change and corner matching is greatly disturbed. Moreover, corresponding-point matching must match every corner in the set, which is inefficient and cannot meet the real-time requirement of tracking. Inspired by the tracking of corresponding points in region tracking, a gravity-center matching algorithm is proposed based on the vehicle shape information.
First, the gravity center of each motion area is obtained from its feature corners and pre-matched with the gravity centers of the next frame. The feature point set of frame t is obtained by Harris corner detection, the feature point sets M_t^i(x, y) of the i separate motion regions are obtained by clustering, and the gravity point C_t^i(x, y) of each motion region is obtained by analyzing those sets. Since the moving vehicle area is roughly rectangular or diamond-shaped, four extreme points determine the center of gravity:

    C_t^i(x) = (max(M_t^i(x)) + min(M_t^i(x))) / 2    (7)

    C_t^i(y) = (max(M_t^i(y)) + min(M_t^i(y))) / 2    (8)

Through these two equations, the gravity point C_t^i(x, y) of the i-th moving area of frame t is obtained; in the same way, the gravity center C_{t+1}^i(x, y) of each moving area of the next frame is obtained. The position of a moving vehicle changes little across two consecutive frames, so the gravity points of the two frames are pre-matched by selecting the nearest points, after which occlusion is judged. This avoids matching every feature corner, reduces the complexity and computing time of the operation, and improves the tracking effect.
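A sketch of the gravity-center computation of formulas (7)-(8) and of the nearest-center pre-matching. Corner sets are assumed here to be arrays of shape (N, 2) holding (x, y) points, and the brute-force nearest search is illustrative.

```python
import numpy as np

def gravity_center(corners):
    """Formulas (7)-(8): the gravity point is the center of the bounding
    box of a region's feature corner set (corners: array of shape (N, 2))."""
    xs, ys = corners[:, 0], corners[:, 1]
    return np.array([(xs.max() + xs.min()) / 2.0,
                     (ys.max() + ys.min()) / 2.0])

def match_centers(centers_t, centers_t1):
    """Pre-match each region of frame t to the nearest gravity center of
    frame t+1 (vehicle positions change little between frames).
    Returns a list of (index in frame t, index in frame t+1) pairs."""
    pairs = []
    for i, c in enumerate(centers_t):
        d = np.linalg.norm(np.asarray(centers_t1) - c, axis=1)
        pairs.append((i, int(d.argmin())))
    return pairs
```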
B. Vehicle Occlusion Judgments
When vehicle occlusion happens, the radius of the feature circle generated by the feature corners is used to judge the occlusion, and the final accurate matching is made. From the feature point set M_t^i(x, y), a minimum circle centered at C_t^i(x, y) that contains all the feature points of the motion region is obtained, called the feature circle; its radius is R_t^i. In the video sequence, when the target vehicle moves away from the camera the radius shrinks, and vice versa. The radius of the feature circle is updated as:

    R̂_{t+1}^i = λ R_t^i    (9)
where λ is obtained through experiment, with a typical value of about 1.15: when the vehicle approaches the camera λ > 1, and when it moves away λ < 1. R̂_t^i is the estimated radius of the feature circle and R_t^i is the actual radius obtained from analysis; comparing the two radii makes it relatively simple to determine occlusion and separation. When vehicle occlusion occurs, the original two motion areas merge into one, i.e., the radius of the feature circle suddenly becomes very large. When vehicle separation occurs, the original single moving area splits into two parts, i.e., the radius suddenly becomes smaller. The specific judgments are as follows:
1) Normal driving: |R̂_t^i - R_t^i| / R̂_t^i ≤ rTh;
2) Vehicle occlusion: |R̂_t^i - R_t^i| / R̂_t^i > rTh and R_t^i > R̂_t^i;
3) Vehicle separation: |R̂_t^i - R_t^i| / R̂_t^i > rTh and R_t^i < R̂_t^i.
Here, rTh is the radius decision threshold. When the vehicle is moving normally, the gravity-center matching method is used directly: tracking is achieved by matching the nearest points among C_t^i(x, y) and C_{t+1}^i(x, y). When the vehicle is occluded, C_t^i(x, y) is processed through particle filtering, the position in the next frame C_{t+1}^i(x, y) is substituted by the predicted center, and the predicted radius substitutes the actually obtained one, i.e., R_{t+1}^i = R̂_{t+1}^i. When the vehicles separate, the gravity center predicted at the moment of occlusion is taken as the gravity center, and the next frame is tracked through the optimized matching method.
IV. EXPERIMENTAL RESULTS AND DISCUSSION
The simulations were programmed in MATLAB R2012b on a computer with an Intel Core i5 (3.50 GHz) and 4.00 GB of memory. In the simulation, the inter-frame difference threshold is 0.156, the gray threshold is 0.03 and the background subtraction difference threshold is 0.15. A highway vehicle motion video with a resolution of 320×224 is used as the experimental object.
The proposed method is based on the fusion of inter-frame difference and background subtraction difference. First, Hough road line detection is applied to the first image; second, the main driving area of the vehicle is determined from the road lines. Two long road lines on the two sides are obtained through the experiment, the two road routes are connected in an orderly manner, and the vehicle driving area is finally determined, as shown in Figure I.
(a) The first frame image
(b) Vehicle driving area
FIGURE I. DETERMINE VEHICLE DRIVING AREA THROUGH HOUGH ROAD LINE DETECTION
It is found that the vehicle driving area removes the influence of buildings and trees beside the road on target detection, and it is the basis of the background model built through the inter-frame difference.
After the vehicle running area is determined, the moving and non-moving areas of the image are obtained by frame differencing the 13th and 14th frames, as shown in Figure II (a); the black part of the vehicle running area is the moving area, and the rest is the non-moving area. The color histogram is then used to analyze the color information of the non-moving area, as shown in Figure II (b). The main gray information of the non-moving area is obtained by taking a weighted average of the gray levels whose histogram count is greater than 1000. The background image is then obtained by filling this gray information into the moving area of the image, as shown in Figure II (c).
(a) Non-moving area obtained through frame difference
(b) Gray histogram of the non-moving area
(c) Background image
(d) Background obtained through frame averaging
FIGURE II. THE BACKGROUND IMAGE OBTAINED THROUGH INTER-FRAME DIFFERENCE
Figures II (c) and II (d) show the backgrounds obtained through the proposed method and the traditional multi-frame averaging method, respectively. Visually, although the background image obtained through frame difference is less detailed and smooth than that obtained through averaging, the algorithm is simpler and has better real-time performance.
After the background image is obtained through the inter-frame difference, the moving object can be detected with the background subtraction difference, as shown in Figure III (b). Compared with the traditional background subtraction difference, the target obtained by the proposed method carries more details of the vehicle information, which is valuable for describing the target and rebuilding the vehicle model, and provides a good basis for the follow-up vehicle tracking.

Citations
More filters
Book ChapterDOI
16 Dec 2019
TL;DR: A detailed review of vehicle detection and classification techniques is presented and also discusses about different approaches detecting the vehicles in bad weather conditions and about the datasets used for evaluating the proposed techniques in various studies.
Abstract: Smart traffic and information systems require the collection of traffic data from respective sensors for regulation of traffic. In this regard, surveillance cameras have been installed in monitoring and control of traffic in the last few years. Several studies are carried out in video surveillance technologies using image processing techniques for traffic management. Video processing of a traffic data obtained through surveillance cameras is an instance of applications for advance cautioning or data extraction for real-time analysis of vehicles. This paper presents a detailed review of vehicle detection and classification techniques and also discusses about different approaches detecting the vehicles in bad weather conditions. It also discusses about the datasets used for evaluating the proposed techniques in various studies.

10 citations

Journal Article
TL;DR: A novel image retrieval method based on improved K-means algorithm was presented, which took the two feature vectors which the distance between them is the maximum in the database, and found all correct initial centroids, and clustered according to the initial class Centroids.
Abstract: Having analyzed the drawbacks of image retrieval based on K-means algorithm,a novel image retrieval method based on improved K-means algorithm was presented in this paper.Firstly,computed the distance of every two color histograms of all color histograms in the image feature database.Then,took the two feature vectors which the distance between them is the maximum in the database,as the first two initial centroids,and found all correct initial centroids,and clustered according to the initial class centroids.Finally,started image retrieval.Experimental results demonstrate that the proposed method is efficient.

1 citations

References
More filters
Journal ArticleDOI
01 Mar 2011
TL;DR: A vision-based system that, coupling them in a rule-based fashion, is able to detect and track vehicles and allows the generation of an interface that informs a driver of the relative distance and velocity of other vehicles in real time and triggers a warning when a potentially dangerous situation arises.
Abstract: Detecting car taillights at night is a task which can nowadays be accomplished very fast on cheap hardware. We rely on such detections to build a vision-based system that, coupling them in a rule-based fashion, is able to detect and track vehicles. This allows the generation of an interface that informs a driver of the relative distance and velocity of other vehicles in real time and triggers a warning when a potentially dangerous situation arises. We demonstrate the system using sequences shot using a camera mounted behind a car’s windshield.

55 citations


"Research on Vehicle Detection and T..." refers methods in this paper

  • ...The key techniques of that includes vehicle detection, image pre-process, vehicle tracking, identification and etc [1-2]....

    [...]

Journal ArticleDOI
TL;DR: In this article, the L-shaped corner points were selectively recognized from all kinds of corner points in the detecting image by improving the traditional Harris corner detection algorithm, and the position accuracy of corners was promoted by sub-pixel post-processing.
Abstract: To recognize and detect a rectangle rapidly and accurately,an image collection system was established and a rapid detection algorithm for rectangles was proposed based on Harris corner detection algorithm.First,the L-shaped corner points were selectively recognized from all kinds of corner points in the detecting image by improving the traditional Harris corner detection algorithm,and the position accuracy of corner points was promoted by sub-pixel post-processing.Then,according to the obtained high-precision angular position information,some parallel straight line segment pairs with equal length were grouped and the perpendicular parallel line segment pairs with four overlap corner points were matched,by which the four sides of a rectangle were obtained.Furthermore,all the rectangle elements were detected in the processed image.In order to improve the accuracy and reliability of the rectangular recognition algorithm,the identifying criterion of the pseudo rectangular graphic elements was provided.Finally,the sensor performance testing experiments were carried out.Experimental results indicate that the rectangular recognition speed of Harris corner point algorithm is 8.5times faster than that of Hough algorithm,and the rectangular image recognition maximum position error is 0.4pixel.The Harris corner rectangle detection method has strong anti-interference capability and stability and can meet thehigh real-time and precision detection requirements of industrial application.

11 citations


"Research on Vehicle Detection and T..." refers methods in this paper

  • ...After Harris corner detection [14], the center of gravity of the vehicle can be obtained by analyzing the detected corner set....


Proceedings ArticleDOI
10 Aug 2010
TL;DR: The algorithm is simple and effective, with low computational cost; it locates vehicles in high-definition video accurately, achieves vehicle location and tracking, and meets real-time processing requirements.
Abstract: High-definition video detection technology has the virtue of high resolution, allowing vehicle information, license plates, etc. to be detected clearly. In this paper, according to the characteristics of high-definition video detection technology, we propose a new vehicle location and tracking method based on the brightness curve. First, the image is divided into regions, and the brightness curve of each lane is obtained by horizontal projection. Background modeling is then applied to the brightness curve, completing the horizontal division of vehicles. Adaptive edge detection is applied to the lane-divided regions, and the vertical location of vehicles is obtained by vertical projection and adaptive filtering. After vehicle location is completed, vehicles are predicted and tracked using the brightness-curve-based algorithm presented in this paper. Experiments show that the algorithm is simple and effective, with low computational cost; it locates vehicles in high-definition video accurately and achieves vehicle location and tracking. The algorithm can basically meet real-time processing requirements.
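The first two steps of the pipeline above (horizontal projection of each lane into a 1-D brightness curve, then comparison against a background model of that curve) can be sketched roughly as follows; the threshold and the run-finding logic are illustrative assumptions, not the authors' exact scheme:

```python
import numpy as np

def lane_brightness_curve(gray, lane_cols):
    """Horizontal projection: mean brightness of each row within a lane's
    column range, giving a 1-D brightness curve along the road direction."""
    c0, c1 = lane_cols
    return gray[:, c0:c1].mean(axis=1)

def segment_vehicles(curve, background, thresh=20.0):
    """Rows whose brightness departs from the background curve by more than
    `thresh` are flagged; contiguous runs of flagged rows give the
    horizontal (along-road) division of candidate vehicles."""
    mask = np.abs(curve - background) > thresh
    runs, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(mask)))
    return runs
```

The paper then refines each run with edge detection and vertical projection to fix the across-road extent of the vehicle.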

6 citations


"Research on Vehicle Detection and T..." refers methods in this paper

  • ...The inter-frame difference method [7-8] uses the difference between two or three adjacent frames to obtain the moving-target area, and is strongly adaptive and robust to moving targets....

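As a simplified illustration of the inter-frame difference idea quoted above, here is the common three-frame variant in NumPy; the threshold value and the AND-combination of the two differences are standard choices, not necessarily those of the cited work:

```python
import numpy as np

def three_frame_difference(f_prev, f_cur, f_next, thresh=25):
    """Three-frame difference: AND of two absolute inter-frame differences.

    ANDing suppresses the 'ghost' region that the simple two-frame
    difference leaves at the object's previous position."""
    d1 = np.abs(f_cur.astype(int) - f_prev.astype(int)) > thresh
    d2 = np.abs(f_next.astype(int) - f_cur.astype(int)) > thresh
    return d1 & d2  # True only where the middle frame differs from both neighbors
```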

Proceedings ArticleDOI
01 Dec 2014
TL;DR: Experiments show that the proposed discrete-wavelet-transform-based method has a high capability to detect and track non-rigid moving objects, even when light intensities change abruptly.
Abstract: A robust, meticulous, and high-performance approach remains a great challenge in tracking. Object tracking faces various difficulties such as noise in the scene, illumination changes, occlusion effects, and pose variation. As an object moves, it changes its orientation relative to the light sources that illuminate it, and such illumination variation can cause a tracking algorithm to lose the target in the scene. This paper presents a discrete-wavelet-transform-based method for detecting and tracking a moving object under varying illumination conditions with a stationary camera. The discrete wavelet transform provides an illumination-invariant feature extraction method using a Gaussian smoothing function and thresholding. We tested the tracking results on a number of video sequences in indoor and outdoor environments and demonstrated the effectiveness of the proposed method. Experiments show that the proposed method has a high capability to detect and track non-rigid moving objects, even when light intensities change abruptly.
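A rough sketch of why wavelet detail sub-bands give illumination-invariant features: a uniform brightness offset is absorbed entirely by the approximation (LL) band, so thresholding detail magnitude is unaffected by it. The one-level Haar transform below stands in for whatever wavelet the authors used, and the Gaussian smoothing step is omitted; both are assumptions for illustration only:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: approximation (LL) plus the three
    detail sub-bands (LH, HL, HH)."""
    a = img.astype(float)
    # transform along columns (pairs of adjacent pixels in each row)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # transform along rows
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def illumination_invariant_edges(img, thresh=10.0):
    """Detail sub-bands respond to edges/texture but cancel any uniform
    brightness offset, so thresholding their magnitude yields a feature
    map that survives global illumination shifts."""
    _, lh, hl, hh = haar_dwt2(img)
    return np.sqrt(lh**2 + hl**2 + hh**2) > thresh
```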

3 citations


"Research on Vehicle Detection and T..." refers methods in this paper

  • ...At present, the major approaches to detecting moving vehicles are the inter-frame difference method, the background subtraction difference method, the optical flow method, etc. [3-4]....


  • ...The background subtraction difference method [3-4] first requires setting up a Gaussian background model as the background image, then calculating the difference in pixel brightness between the current video-sequence image and the known background image and taking the absolute value....

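The Gaussian background model mentioned in the quote can be sketched as a per-pixel running mean and variance; the learning rate, the deviation factor, and the selective update of background-looking pixels are illustrative assumptions rather than the paper's exact scheme:

```python
import numpy as np

class AdaptiveBackground:
    """Per-pixel Gaussian background model.

    Mean and variance are updated with learning rate `alpha`; a pixel is
    foreground when its brightness deviates from the mean by more than
    `k` standard deviations."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 100.0)  # assumed initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        diff = np.abs(frame - self.mean)
        fg = diff > self.k * np.sqrt(self.var)
        # update the model only where the pixel looks like background,
        # so stopped vehicles are not absorbed too quickly
        upd = ~fg
        self.mean[upd] += self.alpha * (frame - self.mean)[upd]
        self.var[upd] += self.alpha * (diff**2 - self.var)[upd]
        return fg
```

Calling `apply` on each incoming frame returns the foreground mask and slowly adapts the background to gradual lighting change.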

Journal Article
TL;DR: The experimental results confirm that the moving target can be detected effectively through analysis of the IR detector work states and intelligent switching of the motion detection algorithms.
Abstract: In order to detect moving targets in the complex infrared image sequences obtained by an IR detector under different work states, a combined motion detection algorithm was proposed using image subtraction and the Lucas-Kanade optical flow method. According to the work state of the IR detector, the relative motion relationship between the IR detector and the target was established. The image difference method and the Lucas-Kanade optical flow method were adopted to detect target motion when the IR detector was still or moving, respectively. Emulation and analysis were carried out on the infrared image sequences obtained by IR detectors. The experimental results confirm that the moving target can be detected effectively through analysis of the IR detector work states and intelligent switching of the motion detection algorithms.
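A minimal single-point Lucas-Kanade solver, illustrating the optical-flow half of the combined algorithm above: it solves the least-squares system built from the windowed gradient products for the flow vector at one pixel. The window size and the test pattern are assumptions; practical implementations add image pyramids and restrict estimation to well-textured (e.g. corner) points:

```python
import numpy as np

def lucas_kanade_flow(f0, f1, center, win=7):
    """Lucas-Kanade at one point: solve
        [sum Ix^2   sum IxIy] [vx]   [sum IxIt]
        [sum IxIy   sum Iy^2] [vy] = -[sum IyIt]
    over a win x win window for the flow vector (vx, vy)."""
    f0 = f0.astype(float)
    f1 = f1.astype(float)
    Iy, Ix = np.gradient(f0)          # spatial gradients of the first frame
    It = f1 - f0                      # temporal gradient
    r, c = center
    h = win // 2
    sl = (slice(r - h, r + h + 1), slice(c - h, c + h + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    A = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])
    b = -np.array([ix @ it, iy @ it])
    vx, vy = np.linalg.solve(A, b)
    return vx, vy  # pixels/frame along columns (x) and rows (y)
```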

3 citations


"Research on Vehicle Detection and T..." refers methods in this paper

  • ...The optical flow method [9-10] detects the motion characteristics of every pixel in each frame of the image sequence....
