Author

Nikolaos V. Boulgouris

Bio: Nikolaos V. Boulgouris is an academic researcher from Brunel University London. The author has contributed to research in topics: Watermark & Digital watermarking. The author has an h-index of 24, has co-authored 120 publications, and has received 2,717 citations. Previous affiliations of Nikolaos V. Boulgouris include the University of Toronto and King's College London.


Papers
Journal ArticleDOI
TL;DR: The gait analysis and recognition problem is exposed to the signal processing community, with the aim of stimulating the involvement of more researchers in gait research in the future.
Abstract: This article provides an overview of the basic research directions in the field of gait analysis and recognition. The recent developments in gait research indicate that gait technologies still need to mature and that limited practical applications should be expected in the immediate future. At present, there is a potential for initial deployment of gait for recognition in conjunction with other biometrics. However, future advances in gait analysis and recognition - an open, challenging research area - are expected to result in wide deployment of gait technologies not only in surveillance, but in many other applications as well. This article exposes the gait analysis and recognition problem to the signal processing community and is expected to stimulate the involvement of more researchers in gait research in the future.

380 citations

Journal ArticleDOI
TL;DR: The transmission of JPEG2000 images over wireless channels is examined using reorganization of the compressed images into error-resilient, product-coded streams, which are shown to outperform other recently proposed algorithms for the wireless transmission of images.
Abstract: The transmission of JPEG2000 images over wireless channels is examined using reorganization of the compressed images into error-resilient, product-coded streams. The product-code consists of Turbo-codes and Reed-Solomon codes which are optimized using an iterative process. The generation of the stream to be transmitted is performed directly using compressed JPEG2000 streams. The resulting scheme is tested for the transmission of compressed JPEG2000 images over wireless channels and is shown to outperform other algorithms which were recently proposed for the wireless transmission of images.
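The product-code idea above can be illustrated with a toy sketch. The example below is an assumption for illustration, not the paper's construction: instead of Turbo and Reed-Solomon component codes, it uses simple XOR parity along the rows and columns of a byte matrix, which is already enough to locate and correct a single corrupted byte at the intersection of the failing row and column checks.

```python
import numpy as np

def product_encode(data: np.ndarray) -> np.ndarray:
    """Append XOR parity along rows and columns of a byte matrix
    (a toy stand-in for the row/column component codes of a product code)."""
    rows, cols = data.shape
    coded = np.zeros((rows + 1, cols + 1), dtype=np.uint8)
    coded[:rows, :cols] = data
    coded[:rows, cols] = np.bitwise_xor.reduce(data, axis=1)   # row parity
    coded[rows, :cols] = np.bitwise_xor.reduce(data, axis=0)   # column parity
    coded[rows, cols] = np.bitwise_xor.reduce(coded[rows, :cols])  # corner check
    return coded

def correct_single_error(coded: np.ndarray) -> np.ndarray:
    """Locate and fix one corrupted byte: a clean row/column XORs to zero,
    so a single error shows up as exactly one nonzero row and column syndrome."""
    row_syn = np.bitwise_xor.reduce(coded, axis=1)
    col_syn = np.bitwise_xor.reduce(coded, axis=0)
    bad_rows = np.flatnonzero(row_syn)
    bad_cols = np.flatnonzero(col_syn)
    if bad_rows.size == 1 and bad_cols.size == 1:
        r, c = bad_rows[0], bad_cols[0]
        coded[r, c] ^= row_syn[r]  # row_syn[r] equals the error pattern
    return coded
```

A real product-coded stream would protect each dimension with a much stronger code and optimize the rate allocation iteratively, as the abstract describes; the decoding-by-intersection structure is the shared idea.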

258 citations

Journal ArticleDOI
TL;DR: A new feature extraction process is proposed for gait representation and recognition based on the Radon transform of binary silhouettes, using a low-dimensional feature vector consisting of selected Radon template coefficients for each gait sequence.
Abstract: A new feature extraction process is proposed for gait representation and recognition. The new system is based on the Radon transform of binary silhouettes. For each gait sequence, the transformed silhouettes are used for the computation of a template. The set of all templates is subsequently subjected to linear discriminant analysis and subspace projection. In this manner, each gait sequence is described using a low-dimensional feature vector consisting of selected Radon template coefficients. Given a test feature vector, gait recognition and verification is achieved by appropriately comparing it to feature vectors in a reference gait database. By using the new system on the Gait Challenge database, considerable improvements in recognition performance are seen in comparison to state-of-the-art methods for gait recognition.
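The template pipeline above can be sketched as follows. This is a simplified, assumption-laden illustration: the Radon transform is approximated by rotating the silhouette and summing columns, and the per-frame projections are plainly averaged into a template, omitting the paper's linear discriminant analysis and subspace projection steps.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_projections(silhouette, angles):
    """Approximate Radon transform of a binary silhouette: rotate the
    image by each angle and sum along columns to get one projection per angle."""
    return np.stack([
        rotate(silhouette.astype(float), ang, reshape=False, order=1).sum(axis=0)
        for ang in angles
    ])

def gait_template(frames, angles):
    """Average the per-frame Radon projections over a gait sequence,
    yielding one template (angles x positions) per sequence."""
    return np.mean([radon_projections(f, angles) for f in frames], axis=0)
```

Recognition would then compare a flattened test template against the templates of a reference database (e.g. by nearest neighbor), after the dimensionality reduction the paper applies.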

173 citations

Journal ArticleDOI
TL;DR: Experimental results of the application of the segmentation algorithm to known sequences demonstrate the efficiency of the proposed segmentation approach and reveal the potential of employing this segmentation algorithms as part of an object-based video indexing and retrieval scheme.
Abstract: In this paper, a novel algorithm is presented for the real-time, compressed-domain, unsupervised segmentation of image sequences and is applied to video indexing and retrieval. The segmentation algorithm uses motion and color information directly extracted from the MPEG-2 compressed stream. An iterative rejection scheme based on the bilinear motion model is used to effect foreground/background segmentation. Following that, meaningful foreground spatiotemporal objects are formed by initially examining the temporal consistency of the output of iterative rejection, clustering the resulting foreground macroblocks to connected regions and finally performing region tracking. Background segmentation to spatiotemporal objects is additionally performed. MPEG-7 compliant low-level descriptors describing the color, shape, position, and motion of the resulting spatiotemporal objects are extracted and are automatically mapped to appropriate intermediate-level descriptors forming a simple vocabulary termed object ontology. This, combined with a relevance feedback mechanism, allows the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) and the retrieval of relevant video segments. Desired spatial and temporal relationships between the objects in multiple-keyword queries can also be expressed, using the shot ontology. Experimental results of the application of the segmentation algorithm to known sequences demonstrate the efficiency of the proposed segmentation approach. Sample queries reveal the potential of employing this segmentation algorithm as part of an object-based video indexing and retrieval scheme.
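The iterative-rejection step for foreground/background separation can be sketched as below. The bilinear basis [1, x, y, xy] follows the bilinear motion model named in the abstract; the least-squares fit, the residual threshold, and the synthetic setup are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def iterative_rejection(positions, motion_vectors, thresh=1.0, max_iter=10):
    """Fit a global bilinear motion model to macroblock motion vectors,
    iteratively rejecting blocks that deviate from it; the rejected
    blocks are declared foreground."""
    x, y = positions[:, 0], positions[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y])  # bilinear basis
    inliers = np.ones(len(x), dtype=bool)
    for _ in range(max_iter):
        coef, *_ = np.linalg.lstsq(A[inliers], motion_vectors[inliers], rcond=None)
        residual = np.linalg.norm(A @ coef - motion_vectors, axis=1)
        new_inliers = residual < thresh  # re-test every block each pass
        if np.array_equal(new_inliers, inliers):
            break
        inliers = new_inliers
    return ~inliers  # foreground mask
```

In the full system these foreground macroblocks would then be checked for temporal consistency, clustered into connected regions, and tracked to form the spatiotemporal objects the abstract describes.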

171 citations

Journal ArticleDOI
TL;DR: The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling, and the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
Abstract: The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
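A minimal 1D integer lifting step (predict the odd samples from their even neighbors, then update the evens) illustrates why lifting is attractive for lossless coding: the integer rounding is applied identically in the forward and inverse directions, so reconstruction is exact. The particular predict/update weights and the periodic (np.roll) boundary handling below are illustrative assumptions, not the paper's optimal n-dimensional predictors.

```python
import numpy as np

def lifting_forward(signal):
    """One level of integer lifting on an even-length 1D signal:
    split into even/odd samples, predict each odd sample from the mean
    of its even neighbors, then update the even (approximation) band."""
    even, odd = signal[0::2].copy(), signal[1::2].copy()
    pred = (even + np.roll(even, -1)) // 2   # predict step (periodic boundary)
    detail = odd - pred                      # detail (high-pass) band
    approx = even + (detail + np.roll(detail, 1)) // 4  # update step
    return approx, detail

def lifting_inverse(approx, detail):
    """Undo the update and predict steps in reverse order; because the
    same floored quantities are subtracted and added, inversion is exact."""
    even = approx - (detail + np.roll(detail, 1)) // 4
    odd = detail + (even + np.roll(even, -1)) // 2
    out = np.empty(even.size + odd.size, dtype=even.dtype)
    out[0::2], out[1::2] = even, odd
    return out
```

The paper derives the predictor coefficients optimally (rather than fixing them as above), applies the scheme in n dimensions with quincunx and row-column sampling, and follows it with context modeling and adaptive arithmetic coding of the resulting coefficients.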

145 citations


Cited by
01 Jan 2006

3,012 citations

Journal ArticleDOI
The Perception of the Visual World

2,250 citations

Reference EntryDOI
15 Oct 2004

2,118 citations

Journal ArticleDOI

1,008 citations