




Examples include visual surveillance (people counting, crowd monitoring, action recognition, anomaly detection, forensic retrieval, etc.), smart environments (occupancy analysis, parking lot management, etc.), and content retrieval (video annotation, event detection, object tracking).
The success of the top-ranked method [10] can be attributed to its use of a dynamic control algorithm that automatically adapts thresholds and other parameter values.
Since most change detection methods incur a delay before their background model stabilizes, the authors labeled the first few hundred frames of each video sequence as Non-ROI.
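As a rough illustration of what such a dynamic control scheme might look like (the controller of [10] is not reproduced here), the sketch below nudges a per-pixel decision threshold toward a target driven by an estimate of local background dynamics. The function name, constants, and the dynamics estimate d_min are illustrative assumptions, not values from the paper.

```python
import numpy as np

def update_threshold(R, d_min, lr=0.05, r_floor=1.0):
    """Minimal sketch of feedback-based parameter control: thresholds R drift
    toward a target derived from a running estimate of background dynamics
    d_min (large d_min -> more dynamic background -> higher threshold)."""
    target = r_floor + 2.0 * d_min       # heuristic target; constants are illustrative
    R = R + lr * np.sign(target - R)     # nudge each threshold toward its target
    return np.maximum(R, r_floor)        # never let the threshold drop below the floor
```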
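A minimal sketch of how that exclusion might be applied during scoring, assuming boolean foreground masks and a hypothetical warm-up length of 500 frames (the text only says "a few hundred"):

```python
import numpy as np

NON_ROI_FRAMES = 500  # assumed warm-up length; the paper says "first few hundred frames"

def score_sequence(predictions, ground_truth):
    """Accumulate per-pixel TP/FP/FN over a sequence while skipping the
    warm-up frames labeled Non-ROI, so the stabilization delay of the
    background model is not penalized."""
    tp = fp = fn = 0
    for idx, (pred, gt) in enumerate(zip(predictions, ground_truth)):
        if idx < NON_ROI_FRAMES:
            continue                      # Non-ROI frames are excluded from scoring
        tp += np.sum(pred & gt)
        fp += np.sum(pred & ~gt)
        fn += np.sum(~pred & gt)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```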
The videos have been obtained with different cameras ranging from low-resolution IP cameras, through mid-resolution camcorders and PTZ cameras, to thermal cameras.
Due to motion blur and partially opaque objects (e.g., sparse bushes, dirty windows, fountains), pixels in these areas may contain a mixture of moving object and background.
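One common way to handle such ambiguous pixels is to give them a dedicated "unknown" label in the ground truth and exclude them from scoring. The sketch below assumes a hypothetical label convention (0 = background, 170 = unknown, 255 = foreground) that is not specified in this excerpt.

```python
import numpy as np

# Assumed label convention (not stated in the excerpt): ambiguous boundary
# pixels carry a distinct "unknown" value and are ignored during scoring.
BACKGROUND, UNKNOWN, FOREGROUND = 0, 170, 255

def confusion_counts(pred_mask, gt_mask):
    """Count TP/FP/FN/TN only over pixels whose ground-truth label is unambiguous."""
    valid = gt_mask != UNKNOWN
    p = pred_mask[valid] == FOREGROUND
    g = gt_mask[valid] == FOREGROUND
    tp = int(np.sum(p & g))
    fp = int(np.sum(p & ~g))
    fn = int(np.sum(~p & g))
    tn = int(np.sum(~p & ~g))
    return tp, fp, fn, tn
```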
Although camouflage, caused by moving objects whose color/texture is very similar to the background, is among the most glaring change detection issues, the authors have not created a dedicated camouflage category.
The CDnet undertaking aims to provide the research community with a rigorous and comprehensive scientific benchmarking facility, a rich dataset of videos, a set of utilities, and access to author-approved algorithm implementations for testing and ranking existing and new motion and change detection algorithms.
Benezeth et al., 2010 [4] use a collection of 29 videos (15 camera-captured, 10 semi-synthetic, and 4 synthetic) taken from PETS 2001, the IBM dataset, and the VSSN 2006 dataset.