Author
Thanh Tuan Nguyen
Other affiliations: Centre national de la recherche scientifique, University of the South, Toulon-Var, Ho Chi Minh City University of Technology
Bio: Thanh Tuan Nguyen is an academic researcher from Aix-Marseille University whose work focuses on Gaussian models and discriminative models. The author has an h-index of 6 and has co-authored 12 publications receiving 76 citations. Previous affiliations of Thanh Tuan Nguyen include the Centre national de la recherche scientifique and the University of the South, Toulon-Var.
Papers
TL;DR: A new framework, called Momental Directional Patterns, is presented, taking into account the advantages of filtering and local-feature-based approaches to form effective DT descriptors; motivated by convolutional neural networks, the framework is further boosted with global features extracted from max-pooled videos.
Abstract: Understanding the chaotic motions of dynamic textures (DTs) is a challenging problem of video representation for different tasks in computer vision. This paper presents a new approach for an efficient DT representation by addressing the following novel concepts. First, a model of moment volumes is introduced as an effective pre-processing technique for enriching the robust and discriminative information of dynamic voxels at low computational cost. Second, two important extensions of the Local Derivative Pattern operator are proposed to improve its performance in capturing directional features. Third, we present a new framework, called Momental Directional Patterns, taking into account the advantages of filtering and local-feature-based approaches to form effective DT descriptors. Furthermore, motivated by convolutional neural networks, the proposed framework is boosted by utilizing more global features extracted from max-pooled videos to improve the discrimination power of the descriptors. Our proposal is verified on benchmark datasets, i.e., UCLA, DynTex, and DynTex++, for the DT classification task. The experimental results substantiate the interest of our method.
18 citations
24 Sep 2018
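The moment-volume pre-processing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the moment volumes are simply the local mean and variance computed over a cubic neighborhood of each voxel, so that every voxel carries regional statistics before local-pattern encoding.

```python
def local_moments(video, radius=1):
    """Compute first two local moments (mean, variance) of each voxel
    over a (2*radius+1)**3 cubic neighborhood, clipped at the borders.
    video is a nested list indexed as video[t][y][x]."""
    T, H, W = len(video), len(video[0]), len(video[0][0])
    mean = [[[0.0] * W for _ in range(H)] for _ in range(T)]
    var = [[[0.0] * W for _ in range(H)] for _ in range(T)]
    for t in range(T):
        for y in range(H):
            for x in range(W):
                # gather the neighborhood, clipped to the volume bounds
                vals = [video[k][i][j]
                        for k in range(max(0, t - radius), min(T, t + radius + 1))
                        for i in range(max(0, y - radius), min(H, y + radius + 1))
                        for j in range(max(0, x - radius), min(W, x + radius + 1))]
                m = sum(vals) / len(vals)
                mean[t][y][x] = m
                var[t][y][x] = sum((v - m) ** 2 for v in vals) / len(vals)
    return mean, var
```

A local-pattern operator is then applied to these moment volumes instead of the raw intensities, which is what makes the resulting codes more robust to noisy voxels.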
TL;DR: An effective framework for dynamic texture recognition is introduced by exploiting local features and chaotic motions along beams of dense trajectories in which their motion points are encoded by using a new operator, named LVP_full-TOP, based on local vector patterns (LVP) in full-direction on three orthogonal planes.
Abstract: An effective framework for dynamic texture recognition is introduced by exploiting local features and chaotic motions along beams of dense trajectories in which their motion points are encoded by using a new operator, named LVP_full-TOP, based on local vector patterns (LVP) in full-direction on three orthogonal planes. Furthermore, we also exploit motion information from dense trajectories to boost the discriminative power of the proposed descriptor. Experiments on various benchmarks validate the interest of our approach.
18 citations
28 Nov 2017
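The "-TOP" (three orthogonal planes) construction used by this operator, and by several of the descriptors below, can be sketched in a few lines. This is a simplified illustration under the usual TOP assumption: the three planes through a voxel are the spatial XY plane and the two spatio-temporal XT and YT planes, each of which is encoded separately and the resulting histograms concatenated.

```python
def orthogonal_planes(video, t, y, x):
    """Extract the three orthogonal planes (XY, XT, YT) passing through
    voxel (t, y, x) of a video volume given as nested lists video[t][y][x]."""
    T, H, W = len(video), len(video[0]), len(video[0][0])
    xy = video[t]                                                  # spatial plane
    xt = [[video[k][y][j] for j in range(W)] for k in range(T)]    # x vs. time
    yt = [[video[k][i][x] for i in range(H)] for k in range(T)]    # y vs. time
    return xy, xt, yt
```

A local-pattern code (LVP, LBP, etc.) computed on the XY plane captures appearance, while the codes on XT and YT capture motion dynamics.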
TL;DR: This paper addresses a new dynamic texture operator by considering local structure patterns (LSP) and completed local binary patterns (CLBP) for static images in three orthogonal planes to capture spatial-temporal texture structures.
Abstract: Dynamic texture (DT) is a challenging problem in computer vision because of the chaotic motion of textures. We address in this paper a new dynamic texture operator by considering local structure patterns (LSP) and completed local binary patterns (CLBP) for static images in three orthogonal planes to capture spatio-temporal texture structures. Since the typical operator of local binary patterns (LBP), which uses the center pixel for thresholding, has limitations such as sensitivity to noise and near-uniform regions, the proposed approach deals with these drawbacks by using global and local texture information for adaptive thresholding, and CLBP for exploiting complementary texture information in three orthogonal planes. Evaluations on different datasets of dynamic textures (UCLA, DynTex, DynTex++) show that our proposal significantly outperforms recent state-of-the-art approaches.
17 citations
02 Sep 2019
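The contrast between center-pixel thresholding and the adaptive thresholding motivated above can be sketched on a single 3x3 patch. This is a minimal illustration of the general idea, not the paper's exact operator: the adaptive variant here simply thresholds against the local mean instead of the center pixel, which is one common way to reduce sensitivity to noise in near-uniform regions.

```python
def lbp_code(patch):
    """Classic LBP: threshold the 8 neighbors of a 3x3 patch against the
    center pixel and pack the results into one 8-bit code."""
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << i for i, p in enumerate(neighbors) if p >= c)

def adaptive_lbp_code(patch):
    """Adaptive variant: threshold against the local mean of the patch
    instead of the center pixel (illustrative, not the paper's scheme)."""
    flat = [v for row in patch for v in row]
    mean = sum(flat) / len(flat)
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << i for i, p in enumerate(neighbors) if p >= mean)

patch = [[5, 9, 1],
         [4, 6, 7],
         [2, 8, 3]]
print(lbp_code(patch))           # 42: neighbors 9, 7, 8 exceed the center 6
print(adaptive_lbp_code(patch))  # 43: with mean 5, neighbor 5 also sets a bit
```

On a patch perturbed by small noise around a uniform value, the center-pixel code flips almost arbitrarily, while the mean-based threshold is more stable.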
TL;DR: An effective model, which jointly captures shape and motion cues, for dynamic texture (DT) description is introduced by taking advantage of volumes of blur-invariant features in three main stages.
Abstract: An effective model, which jointly captures shape and motion cues, for dynamic texture (DT) description is introduced by taking advantage of volumes of blur-invariant features in the following three main stages. First, a 3-dimensional Gaussian kernel is used to form smoothed sequences that help deal with well-known limitations of local encoding, such as near-uniform regions and sensitivity to noise. Second, a receptive volume of the Difference of Gaussians (DoG) is computed to mitigate the negative impacts of environmental and illumination changes, which are major challenges in DT understanding. Finally, a local encoding operator is applied to construct a discriminative descriptor from the enhanced patterns extracted from the filtered volumes. Evaluations on benchmark datasets (i.e., UCLA, DynTex, and DynTex++) for the DT classification task validate our contributions.
14 citations
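The Gaussian smoothing and DoG filtering in the first two stages can be sketched in one dimension. This is an illustrative assumption, not the paper's code: a 3D Gaussian is separable, so applying the 1D convolution below along each of the three axes in turn yields the smoothed volume, and the DoG is the difference of two such smoothings at different scales (the sigmas and radius here are placeholder values).

```python
import math

def gaussian_kernel(sigma, radius):
    """Discrete, normalized 1D Gaussian kernel of length 2*radius + 1."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve1d(signal, kernel):
    """'Same'-size convolution with edge replication at the borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog(signal, sigma1=1.0, sigma2=2.0, radius=4):
    """Difference of Gaussians: a band-pass response that suppresses slow
    illumination drift while preserving local structure."""
    g1 = convolve1d(signal, gaussian_kernel(sigma1, radius))
    g2 = convolve1d(signal, gaussian_kernel(sigma2, radius))
    return [a - b for a, b in zip(g1, g2)]
```

A flat (constant) signal maps to a near-zero DoG response, which is exactly why slowly varying illumination changes are attenuated before local encoding.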
TL;DR: An efficient framework, called completed and statistical adaptive patterns on three orthogonal planes (CSAP-TOP), for the representation of dynamic textures and scenes is addressed; it significantly outperforms recent state-of-the-art results.
Abstract: Dynamic texture (DT) and scene description are challenging problems in video understanding that play a crucial role in many important applications of computer vision. An efficient framework, called completed and statistical adaptive patterns on three orthogonal planes (CSAP-TOP), for the representation of dynamic textures and scenes is addressed in this work. It applies adaptive thresholding in a completed scheme to high-order moment images of three orthogonal planes. It inherits several beneficial properties from its complementary features: taking into account information from local variations of magnitudes, robustness against noise and near-uniform regions, and exploiting more useful and stable textural information at local and regional scales. In addition, we also consider the impact of high-order filtered images and adaptive thresholding in volume statistical adaptive patterns (VSAP) to investigate their influence and efficiency in describing DT and scene sequences. Recognition evaluations on different datasets of dynamic textures and scenes (UCLA, DynTex, YUPENN, YUP++) show that our proposal, without utilizing sophisticated learning techniques, significantly outperforms recent state-of-the-art results.
14 citations
Cited by
TL;DR: A general framework for image-based fire detection that works under realistic conditions is proposed in this paper; its detection rate exceeds previous studies while reducing false-alarm rates across various environments.
Abstract: Fire is a destructive hazard that damages property and destroys forests. Many researchers work on early-warning systems, which considerably minimize the consequences of fire damage. However, many existing image-based fire detection systems perform well only in a particular setting. A general framework that works under realistic conditions is proposed in this paper. The approach filters out image blocks based on thresholds over different temporal and spatial features: the image is divided into blocks, flame blocks are extracted from the image foreground and background, and candidate blocks are analyzed for local features of color, source immobility, and flame flickering. Each local feature filter resolves different false-positive fire cases. Filtered blocks are further examined by a global analysis that extracts flame texture and flame reflection in surrounding blocks. Sequences of successful detections are buffered by a decision-alarm system to reduce errors due to external camera influences. The proposed algorithms have low computation time. Through a sequence of experiments, the results are consistent with the empirical evidence and show that the detection rate of the proposed system exceeds previous studies while reducing false-alarm rates across various environments.
14 citations
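The block-level color filtering stage of such a pipeline can be sketched as follows. This is a hedged illustration, not the paper's actual criteria: it assumes a common red-dominance chromatic rule for flame-pixel candidates, and the threshold values (`r_min`, `ratio`) are hypothetical placeholders.

```python
def is_flame_colored(r, g, b, r_min=150):
    """Common chromatic rule for flame candidates (illustrative thresholds):
    flames are red-dominant, so require R >= G >= B with a bright red channel."""
    return r >= g >= b and r >= r_min

def candidate_blocks(blocks, ratio=0.3):
    """Keep only blocks where at least `ratio` of the pixels pass the color
    rule; later filters (immobility, flicker, texture) would prune further.
    Each block is a list of (r, g, b) pixel tuples."""
    kept = []
    for idx, pixels in enumerate(blocks):
        hits = sum(is_flame_colored(*p) for p in pixels)
        if hits >= ratio * len(pixels):
            kept.append(idx)
    return kept

blocks = [[(255, 120, 40)] * 10,   # fire-like block
          [(30, 120, 200)] * 10]   # sky-like block
print(candidate_blocks(blocks))    # [0]
```

Chaining several such cheap per-block filters before the more expensive global texture analysis is what keeps the overall computation time low.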