
Showing papers by "Ioannis Pitas published in 1996"


Proceedings ArticleDOI
16 Sep 1996
TL;DR: A method for casting digital watermarks on images is proposed and its effectiveness is analyzed; immunity to subsampling is also examined, and simulation results are provided for verification.
Abstract: Signature (watermark) casting on digital images is an important problem, since it affects many aspects of the information market. We propose a method for casting digital watermarks on images and we analyze its effectiveness. The satisfaction of some basic demands in this area is examined and a method for producing digital watermarks is proposed. Moreover, immunity to subsampling is examined and simulation results are provided for the verification of the above mentioned topics.

305 citations
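The abstract does not spell out the casting scheme, so the following is only a minimal sketch of the general idea: a key-seeded pseudo-random +/-1 pattern is added to the image, detection is done by correlation, and a crude check of immunity to 2x subsampling is included. The function names, the correlation detector and all parameter values are illustrative assumptions, not the method of the paper.

```python
import numpy as np

def cast_watermark(image, strength=3.0, key=42):
    # the key seeds the pseudo-random +/-1 watermark pattern (owner's secret)
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=image.shape)
    marked = np.clip(image + strength * w, 0, 255)
    return marked, w

def detect_watermark(image, w):
    # correlation detector: a clearly positive response indicates the watermark
    return float(np.mean((image - image.mean()) * w))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256)).astype(float)    # stand-in test image
marked, w = cast_watermark(img)

print(detect_watermark(marked, w))                       # approximately the casting strength
print(detect_watermark(marked[::2, ::2], w[::2, ::2]))   # still positive after 2x subsampling
print(detect_watermark(img[::2, ::2], w[::2, ::2]))      # near zero for the unmarked image
```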


Proceedings ArticleDOI
16 Sep 1996
TL;DR: The proposed algorithms select certain blocks in the image with a Gaussian network classifier and modify their pixel values so that the discrete cosine transform (DCT) coefficients of the blocks fulfil a constraint imposed by the watermark code.
Abstract: Watermarking algorithms are used for image copyright protection. The algorithms proposed select certain blocks in the image based on a Gaussian network classifier. The pixel values of the selected blocks are modified such that their discrete cosine transform (DCT) coefficients fulfil a constraint imposed by the watermark code. Two different constraints are considered. The first approach consists of embedding a linear constraint among selected DCT coefficients and the second one defines circular detection regions in the DCT domain. A rule for generating the DCT parameters of distinct watermarks is provided. The watermarks embedded by the proposed algorithms are resistant to JPEG compression.

283 citations
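A hedged sketch of the first embedding rule mentioned above: a linear constraint between two selected DCT coefficients of a block, with the sign of the difference encoding one watermark bit. The block size, coefficient positions, margin and helper names are assumptions for illustration; the Gaussian-network block selection and the circular-detection-region variant are omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, margin=10.0, pos_a=(2, 3), pos_b=(3, 2)):
    # force c[pos_a] - c[pos_b] to a signed margin whose sign encodes the bit
    c = dctn(block, norm='ortho')
    target = margin if bit else -margin
    adjust = (target - (c[pos_a] - c[pos_b])) / 2.0
    c[pos_a] += adjust
    c[pos_b] -= adjust
    return idctn(c, norm='ortho')

def detect_bit(block, pos_a=(2, 3), pos_b=(3, 2)):
    c = dctn(block, norm='ortho')
    return c[pos_a] - c[pos_b] > 0

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, size=(8, 8))
marked = embed_bit(block, bit=1)
print(detect_bit(marked))   # True: the constraint survives the inverse/forward DCT
```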


Proceedings ArticleDOI
07 May 1996
TL;DR: A method for copyright protection of digital images is presented that embeds an "invisible" signal, known as a digital signature, in the image; robustness to JPEG compression and lowpass filtering is achieved by minimizing the energy content of the signature signal at higher frequencies.
Abstract: A method for copyright protection of digital images is presented. Copyright protection is achieved by embedding an "invisible" signal, known as digital signature, in the digital image. Signature casting is performed in the spatial domain by slightly modifying the intensity level of randomly selected image pixels. Signature detection is done by comparing the mean intensity value of the marked pixels against that of the not marked pixels. Statistical hypothesis testing is used for this purpose. The signature can be designed in such a way that it is resistant to JPEG compression and lowpass filtering. This is done by minimizing the energy content of the signature signal in higher frequencies. Experiments on real image data verify the effectiveness of the method.

251 citations
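A minimal sketch of the spatial-domain scheme described above: a key-selected pixel subset is brightened by a small constant and the signature is detected with a statistical test comparing the mean of the marked pixels against the rest. The delta value, the fraction of marked pixels and the use of a Welch t-test are illustrative assumptions; the frequency shaping that gives JPEG/lowpass robustness is not reproduced.

```python
import numpy as np
from scipy import stats

def cast_signature(image, key=7, delta=8.0, fraction=0.5):
    rng = np.random.default_rng(key)
    mask = rng.random(image.shape) < fraction          # key-dependent pixel selection
    marked = image.astype(float)
    marked[mask] += delta                              # slightly raise the marked pixels
    return np.clip(marked, 0, 255), mask

def detect_signature(image, mask, alpha=0.01):
    # one-sided two-sample test: marked pixels should have a larger mean intensity
    t, p = stats.ttest_ind(image[mask], image[~mask], equal_var=False)
    return bool(t > 0 and p / 2 < alpha)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(128, 128)).astype(float)
marked, mask = cast_signature(img)
print(detect_signature(marked, mask))   # True
print(detect_signature(img, mask))      # False for the unmarked image
```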


Proceedings ArticleDOI
14 Oct 1996
TL;DR: A new approach for automatic segmentation and tracking of faces in color images is presented; it evaluates color and shape information and selects regions with elliptical shape as face hypotheses.
Abstract: The authors present a new approach for automatic segmentation and tracking of faces in color images. Segmentation of faces is performed by evaluating color and shape information. First, skin-like regions are determined based on the color attributes hue and saturation. Then regions with elliptical shape are selected as face hypotheses. They are verified by searching for facial features in their interior. After a face is reliably detected it is tracked over time. Tracking is realized by using an active contour model. The exterior forces of the snake are defined based on color features. They push or pull snaxels perpendicular to the snake. Results for tracking are shown for an image sequence consisting of 150 frames.

208 citations
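A rough sketch of the first stage only (detecting skin-like regions from hue and saturation). The threshold values are illustrative assumptions, not the ones used in the paper; the elliptical-shape hypothesis selection, feature verification and snake tracking stages are omitted.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def skin_mask(rgb_image, hue_max=0.11, sat_min=0.2, sat_max=0.6):
    """rgb_image: float array in [0, 1] with shape (H, W, 3)."""
    hsv = rgb_to_hsv(rgb_image)
    hue, sat = hsv[..., 0], hsv[..., 1]
    # skin tones cluster at low (reddish) hue with moderate saturation
    return (hue <= hue_max) & (sat >= sat_min) & (sat <= sat_max)

# usage: connected components of skin_mask(img) with a roughly elliptical shape
# would then be kept as face hypotheses and verified by searching for facial features.
```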


Proceedings ArticleDOI
16 Sep 1996
TL;DR: Toral automorphisms are used as chaotic 2-D integer vector generators for digital image watermarking, and an embedding algorithm is proposed that provides robustness under filtering and compression.
Abstract: Digital watermarking methods have been proposed for various purposes and especially for copyright protection of multimedia data. The digital watermark is embedded in a digital signal or an image and must be unrecognizable by unauthorized persons and detectable only by the legal copyright owner. We use toral automorphisms as chaotic 2-D integer vector generators in digital image watermarking. We also propose an embedding algorithm which provides robustness under filtering and compression.

192 citations
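A small sketch of a toral automorphism used as a chaotic 2-D integer vector generator: a unimodular integer matrix applied modulo N scatters a starting point over the N x N grid, yielding a key-dependent sequence of positions. The particular matrix, the parameter k and how the generated positions drive the embedding are assumptions for illustration.

```python
import numpy as np

def toral_orbit(start, n_points, N=256, k=1):
    # A = [[1, 1], [k, k + 1]] has determinant 1, so the map is an automorphism of the torus
    A = np.array([[1, 1], [k, k + 1]])
    p = np.array(start)
    orbit = []
    for _ in range(n_points):
        p = (A @ p) % N
        orbit.append(tuple(int(v) for v in p))
    return orbit

# the starting point and k act as the key; the orbit visits grid positions chaotically
print(toral_orbit(start=(13, 57), n_points=10, N=256, k=3))
```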


Proceedings ArticleDOI
16 Sep 1996
TL;DR: This paper performs face localization based on the observation that human faces are characterized by their oval shape and skin color, even under varying lighting conditions; faces are segmented by evaluating shape and color information.
Abstract: Recognition of human faces from still images or image sequences is a research field of fast increasing interest. At first, facial regions and facial features like eyes and mouth have to be extracted. In the present paper we propose an approach that copes with problems of these first two steps. We perform face localization based on the observation that human faces are characterized by their oval shape and skin color, even in the case of varying lighting conditions. To that end, we segment faces by evaluating shape and color (HSV) information. Then face hypotheses are verified by searching for facial features inside the face-like regions. This is done by applying morphological operations and minima localization to intensity images.

179 citations


Journal ArticleDOI
TL;DR: The median radial basis function (MRBF) algorithm is introduced, based on robust estimation of the hidden unit parameters; it employs the marginal median for kernel location estimation and the median of the absolute deviations for the scale parameter estimation.
Abstract: Radial basis functions (RBFs) consist of a two-layer neural network, where each hidden unit implements a kernel function. Each kernel is associated with an activation region from the input space and its output is fed to an output unit. In order to find the parameters of a neural network which embeds this structure we take into consideration two different statistical approaches. The first approach uses classical estimation in the learning stage and it is based on the learning vector quantization algorithm and its second-order statistics extension. After the presentation of this approach, we introduce the median radial basis function (MRBF) algorithm based on robust estimation of the hidden unit parameters. The proposed algorithm employs the marginal median for kernel location estimation and the median of the absolute deviations for the scale parameter estimation. A histogram-based fast implementation is provided for the MRBF algorithm. The theoretical performance of the two training algorithms is comparatively evaluated when estimating the network weights. The network is applied in pattern classification problems and in optical flow segmentation.

148 citations
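A small sketch of the robust estimators behind the MRBF idea: for the samples assigned to one hidden unit, the kernel location is the marginal (component-wise) median and the scale is the median of absolute deviations. The clustering/assignment step, the histogram-based fast implementation and the output-layer training are not shown, and the Gaussian kernel form below is an assumption.

```python
import numpy as np

def robust_kernel_parameters(samples):
    """samples: (n, d) array of the input vectors assigned to one hidden unit."""
    center = np.median(samples, axis=0)                    # marginal median (location)
    scale = np.median(np.abs(samples - center), axis=0)    # median of absolute deviations
    return center, scale

def kernel_output(x, center, scale, eps=1e-9):
    # Gaussian hidden-unit activation for input x
    return np.exp(-0.5 * np.sum(((x - center) / (scale + eps)) ** 2))

rng = np.random.default_rng(0)
clean = rng.normal(loc=[1.0, 2.0], scale=0.1, size=(100, 2))
outliers = rng.normal(loc=[8.0, -5.0], scale=0.1, size=(10, 2))
center, scale = robust_kernel_parameters(np.vstack([clean, outliers]))
print(center)   # close to [1, 2] despite the outliers
```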


Proceedings ArticleDOI
25 Aug 1996
TL;DR: This paper presents a robust approach for the extraction of facial regions and features out of color images based on the color and shape information and results are shown for two example scenes.
Abstract: There are many applications for systems coping with the problem of face localization and recognition, e.g. model-based video coding, security systems and mug shot matching. Due to variations in illumination, background, visual angle and facial expressions, the problem of machine face recognition is complex. In this paper we present a robust approach for the extraction of facial regions and features from color images. First, face candidates are located based on the color and shape information. Then the topographic grey-level relief of facial regions is evaluated to determine the position of facial features such as eyes and mouth. Results are shown for two example scenes.

144 citations



Journal ArticleDOI
TL;DR: Several adaptive least mean squares (LMS) L-filters, both constrained and unconstrained, are developed for noise suppression in images and compared; it is demonstrated that the location-invariant LMS L-filter can be described in terms of the generalized linearly constrained adaptive processing structure proposed by Griffiths and Jim (1982).
Abstract: Several adaptive least mean squares (LMS) L-filters, both constrained and unconstrained ones, are developed for noise suppression in images and compared in this paper. First, the location-invariant LMS L-filter for a nonconstant signal corrupted by zero-mean additive white noise is derived. It is demonstrated that the location-invariant LMS L-filter can be described in terms of the generalized linearly constrained adaptive processing structure proposed by Griffiths and Jim (1982). Subsequently, the normalized and the signed error LMS L-filters are studied. A modified LMS L-filter with nonhomogeneous step-sizes is also proposed in order to accelerate the rate of convergence of the adaptive L-filter. Finally, a signal-dependent adaptive filter structure is developed to allow a separate treatment of the pixels that are close to the edges from the pixels that belong to homogeneous image regions.

75 citations
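A minimal 1-D sketch of an unconstrained adaptive LMS L-filter: the output is a weighted sum of the ordered samples in a sliding window, and the weights are adapted with the LMS rule against a reference signal. The window length, step size and test signal are assumptions; the constrained (location-invariant), normalized, signed-error and signal-dependent variants of the paper are not reproduced.

```python
import numpy as np

def lms_l_filter(noisy, reference, window=5, mu=1e-4):
    w = np.ones(window) / window                    # start from the moving-average L-filter
    half = window // 2
    output = noisy.copy()
    for n in range(half, len(noisy) - half):
        x = np.sort(noisy[n - half:n + half + 1])   # order statistics of the window
        y = w @ x                                   # L-filter output
        e = reference[n] - y                        # error against the training signal
        w += mu * e * x                             # LMS update of the L-filter weights
        output[n] = y
    return output, w

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 8 * np.pi, 2000))
noisy = clean + rng.laplace(scale=0.3, size=clean.shape)
filtered, weights = lms_l_filter(noisy, clean)
print(np.mean((noisy - clean) ** 2), np.mean((filtered - clean) ** 2))   # the MSE drops
```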


Journal ArticleDOI
TL;DR: Novel multichannel methods are presented in two target research areas, color image modeling and color image equalization; the latter is performed on the three RGB channels simultaneously, using the joint PDF.
Abstract: We present novel multichannel methods in two target research areas. The first area is color image modeling. Multichannel AR models have been developed and applied to color texture segmentation and synthesis. The second area is color image equalization, which is performed on the three RGB channels simultaneously, using the joint PDF. Alternatively, equalization at the HSI domain is performed in order to avoid changes in digital image hue. A parallel algorithm is proposed for color image histogram calculation and equalization.
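A hedged sketch of the hue-preserving alternative mentioned in the abstract: only an intensity component is equalized and the colour ratios are kept, instead of equalizing R, G and B independently. The simple I = (R+G+B)/3 intensity and the interpolation details are assumptions; the joint-PDF RGB equalization and the parallel histogram algorithm are not reproduced.

```python
import numpy as np

def equalize_intensity(rgb, bins=256):
    """rgb: float array in [0, 1] with shape (H, W, 3)."""
    intensity = rgb.mean(axis=2)                            # simple I = (R + G + B) / 3
    hist, edges = np.histogram(intensity, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / intensity.size                  # empirical CDF of the intensity
    new_i = np.interp(intensity, edges[:-1], cdf)           # equalized intensity
    gain = new_i / np.maximum(intensity, 1e-6)              # per-pixel gain
    return np.clip(rgb * gain[..., None], 0, 1)             # colour ratios (hue) preserved
```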

Journal ArticleDOI
TL;DR: Results from both simulated and real B-mode ultrasonic images are presented, which verify the (qualitative and quantitative) superiority of the technique over a number of commonly used speckle filters.

Journal ArticleDOI
TL;DR: Different approaches to the analysis of biological signals based on non-linear methods are presented; despite their greater methodological and computational complexity, they are, in many instances, more successful than linear approaches in enhancing parameters important for both physiological studies and clinical protocols.

Journal ArticleDOI
TL;DR: A special case of the novel LVQ class is the median LVQ, which uses either the marginal median or the vector median as a multivariate estimator of location.
Abstract: We propose a novel class of learning vector quantizers (LVQs) based on multivariate data ordering principles. A special case of the novel LVQ class is the median LVQ, which uses either the marginal median or the vector median as a multivariate estimator of location. The performance of the proposed marginal median LVQ in color image quantization is demonstrated by experiments.
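A sketch of the marginal median LVQ idea applied to colour quantization: pixels are assigned to the nearest codebook colour and each codebook entry is re-estimated as the marginal (component-wise) median of its assigned pixels, which is robust to impulsive colour noise. Batch updates and the parameter values are simplifying assumptions; the on-line LVQ update and the vector-median variant are not shown.

```python
import numpy as np

def marginal_median_lvq(pixels, n_colors=8, n_iter=10, seed=0):
    """pixels: (n, 3) array of RGB vectors."""
    rng = np.random.default_rng(seed)
    codebook = pixels[rng.choice(len(pixels), n_colors, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign every pixel to its nearest codebook colour
        d = np.linalg.norm(pixels[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_colors):
            members = pixels[labels == k]
            if len(members):
                codebook[k] = np.median(members, axis=0)   # marginal median update
    return codebook, labels
```

Quantizing an image would then amount to replacing each pixel with its assigned codebook colour, i.e. codebook[labels] reshaped back to the image dimensions.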


Journal ArticleDOI
TL;DR: The so-called reduced ordering (R-ordering) principle is used to introduce a new family of L filters for vector-valued observations; the coefficients of the proposed filters can be deduced so that the filters are optimal with respect to the output mean squared error.
Abstract: Nonlinear multichannel signal processing is an emerging research topic with numerous applications. In this paper we use the so-called reduced ordering (R-ordering) principle to introduce a new family of L filters for vector-valued observations. The coefficients of the proposed filters can be deduced so that the filters are optimal with respect to the output mean squared error. Expressions for the unconstrained, unbiased and location invariant optimal filter coefficients are derived. The calculation of moments of the R-ordered vectors that are involved in these expressions is also discussed. Experiments with noisy two-channel vector fields and noisy color images are presented in order to demonstrate the superiority of the proposed filters over other multichannel filters.
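A minimal sketch of the reduced-ordering (R-ordering) principle behind these vector L-filters: the samples in a window are ranked by their distance to a reference point (here the marginal median of the window) and the output is a weighted sum of the ranked vectors. The reference point and the decreasing weights are illustrative assumptions; the optimal MSE weights derived in the paper are not computed.

```python
import numpy as np

def r_ordering_l_filter(window_vectors, weights=None):
    """window_vectors: (n, c) array of n c-channel samples from one filter window."""
    reference = np.median(window_vectors, axis=0)
    dist = np.linalg.norm(window_vectors - reference, axis=1)
    ranked = window_vectors[np.argsort(dist)]        # R-ordered samples
    if weights is None:
        weights = np.linspace(1.0, 0.0, len(ranked))
        weights /= weights.sum()                     # down-weight probable outliers
    return weights @ ranked

window = np.array([[100, 98, 101], [102, 99, 100], [99, 100, 102],
                   [101, 101, 99], [255, 0, 255]], dtype=float)   # last sample is an impulse
print(r_ordering_l_filter(window))   # close to the clean samples, impulse suppressed
```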

Proceedings ArticleDOI
01 Sep 1996
TL;DR: A new approach to digital image signatures (watermarks) is proposed in this study; two embedding rules are considered, one employing a linear type constraint among the selected DCT coefficients and the other assigning circular detection regions, similar to vector quantization techniques.
Abstract: A new approach to digital image signatures (watermarks) is proposed in this study. An image signature algorithm consists of two stages: signature casting and signature detection. In the first stage, small changes are embedded in the image, which are afterwards identified in the second stage. After choosing certain pixel blocks from the image, a constraint is embedded among their Discrete Cosine Transform (DCT) coefficients. Two different embedding rules are proposed. The first one employs a linear type constraint among the selected DCT coefficients and the second assigns circular detection regions, similar to vector quantization techniques. The resistance of the digital signature to JPEG compression and to filtering is analyzed.

Journal ArticleDOI
TL;DR: The discrete-time one-dimensional multichannel transforms proposed in this paper are related to two-dimensional single-channel transforms, notably to the discrete Fourier transform (DFT) and to the DHT.
Abstract: This paper presents a novel approach to the Fourier analysis of multichannel time series. Orthogonal matrix functions are introduced and are used in the definition of multichannel Fourier series of continuous-time periodic multichannel functions. Orthogonal transforms are proposed for discrete-time multichannel signals as well. It is proven that the orthogonal matrix functions are related to unitary transforms (e.g., discrete Hartley transform (DHT), Walsh-Hadamard transform), which are used for single-channel signal transformations. The discrete-time one-dimensional multichannel transforms proposed in this paper are related to two-dimensional single-channel transforms, notably to the discrete Fourier transform (DFT) and to the DHT. Therefore, fast algorithms for their computation can be easily constructed. Simulations on the use of discrete multichannel transforms on color image compression have also been performed.

Proceedings ArticleDOI
02 Dec 1996
TL;DR: An approach is presented for reconstructing images painted on curved surfaces; a priori knowledge about the support surface of the picture is used to derive the surface localization in the camera coordinate system.
Abstract: The paper presents an approach for reconstructing images painted on curved surfaces. A set of monocular images is taken from different viewpoints in order to mosaic and represent the entire scene. By using a priori knowledge about the support surface of the picture, we derive the surface localization in the camera coordinate system. An automatic mosaicing method is applied on the patterned images in order to obtain the complete scene. The mosaiced scene is visualized on a new synthetic surface by a mapping procedure.

Journal ArticleDOI
TL;DR: A novel extension of the barrel shifter networks for 2-D signals is introduced, and the implementation of the proposed algorithms on them is also discussed.
Abstract: Parallel algorithms on barrel shifter computers for a broad class of 1-D and 2-D signal operators are presented. The max/min selection filter, the moving average filter, and the sorting and sliding window fast Fourier transform algorithms are examined. The proposed algorithms require a significantly smaller number of comparisons/computations than the conventional ones. A novel extension of the barrel shifter networks for 2-D signals is introduced, and the implementation of the proposed algorithms on them is also discussed.

Proceedings ArticleDOI
08 Sep 1996
TL;DR: A new variation of the Hough transform is proposed whose parameter space is iteratively split into fuzzy cells defined as fuzzy numbers; it gives better accuracy in curve estimation than the classical Hough transform.
Abstract: In this paper a new variation of the Hough transform is proposed. The parameter space of the Hough transform is iteratively split into fuzzy cells which are defined as fuzzy numbers. Each fuzzy cell corresponds to a fuzzy curve in the spatial domain. After each iteration the fuzziness of the cells is reduced and the curves are estimated with better accuracy. The uncertainty of the contour point locations is transferred to the parameter space and gives better accuracy in curve estimation than the classical Hough transform, especially when noisy images have to be used. Moreover, the computation time is significantly decreased, since the regions of the parameter space to which no contours correspond are rejected during the iterations.
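A simplified sketch of the iterative parameter-space splitting idea for straight lines: the (theta, rho) space is repeatedly split into cells and only a window around the best-supported cell is refined. The fuzzy membership functions of the paper are replaced here by crisp cells, so this shows only the coarse-to-fine mechanism; the grid sizes and the one-cell window slack are assumptions.

```python
import numpy as np

def coarse_to_fine_line_hough(points, n_levels=5, grid=8):
    """points: (n, 2) array of contour point coordinates (x, y)."""
    theta_lo, theta_hi = 0.0, np.pi
    r = np.linalg.norm(points, axis=1).max()
    rho_lo, rho_hi = -r, r
    for _ in range(n_levels):
        thetas = np.linspace(theta_lo, theta_hi, grid)
        rho_step = (rho_hi - rho_lo) / grid
        acc = np.zeros((grid, grid))
        for ti, theta in enumerate(thetas):
            # vote: rho = x cos(theta) + y sin(theta), binned into the current rho cells
            rho = points[:, 0] * np.cos(theta) + points[:, 1] * np.sin(theta)
            idx = np.clip(((rho - rho_lo) / rho_step).astype(int), 0, grid - 1)
            acc[ti] = np.bincount(idx, minlength=grid)
        ti, ri = np.unravel_index(acc.argmax(), acc.shape)
        # shrink the parameter window around the winning cell (with one cell of slack)
        dt = (theta_hi - theta_lo) / grid
        theta_lo, theta_hi = thetas[ti] - dt, thetas[ti] + dt
        rho_lo, rho_hi = rho_lo + (ri - 1) * rho_step, rho_lo + (ri + 2) * rho_step
    return (theta_lo + theta_hi) / 2, (rho_lo + rho_hi) / 2

# usage: points on the line x + y = 50 (true parameters: theta = pi/4, rho = 50 / sqrt(2))
xs = np.linspace(0, 50, 100)
print(coarse_to_fine_line_hough(np.stack([xs, 50 - xs], axis=1)))
```

Because only the winning window is refined at each level, regions of the parameter space with no support are discarded early, which is the source of the speed-up the abstract describes.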

Journal ArticleDOI
TL;DR: A general approach for residual representation of one-dimensional and two-dimensional signals is defined and several morphological constructive transforms are proposed and their corresponding representations are discussed.
Abstract: A general approach for residual representation of one-dimensional (1-D) and two-dimensional (2-D) signals is defined. Signals are reconstructed as a sum of components that are recursively determined by using a constructive transform. Several morphological constructive transforms are proposed and their corresponding representations are discussed. The use of residual representations in signal and image compression is investigated with promising results.
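A hedged sketch of a residual representation using morphological openings as the constructive transform: the signal is written as a telescoping sum of residuals between openings of increasing size plus the coarsest opening, so reconstruction is exact by construction. The choice of openings and structuring-element sizes is an assumption; the specific constructive transforms proposed in the paper may differ.

```python
import numpy as np
from scipy.ndimage import grey_opening

def opening_residual_pyramid(signal, sizes=(3, 5, 9, 17)):
    components = []
    current = np.asarray(signal, dtype=float)
    for s in sizes:
        opened = grey_opening(current, size=s)
        components.append(current - opened)   # residual: detail removed at this scale
        current = opened
    components.append(current)                # coarsest remaining component
    return components

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200))           # arbitrary 1-D test signal
parts = opening_residual_pyramid(x)
print(np.allclose(sum(parts), x))             # True: the representation is exact
```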

Book ChapterDOI
01 Jan 1996
TL;DR: It is shown that the nonlinear mean filters perform as well as or better than the grayscale morphological transformations for certain types of noise.
Abstract: The classical grayscale morphological operators erosion and dilation are characterized by strong nonlinearities, a feature that is not desirable in certain image filtering applications. This paper investigates the use of a subclass of nonlinear mean filters, instead of the classical morphological transformations. The nonlinearities of these filters can be hardened or softened progressively, by means of a controlling parameter. The statistical properties of these “soft” morphological filters are investigated. It is shown that the nonlinear mean filters perform as well as or better than the grayscale morphological transformations, for certain types of noise.

Proceedings ArticleDOI
01 Sep 1996
TL;DR: A technique for estimating the invariant motion parameters of non-translational motion fields is proposed, which leads to more efficient estimation, smoothing or coding of the motion field.
Abstract: Motion estimation is a very important topic in computer vision and image sequence compression. However, most commonly used motion estimation algorithms do not take into consideration any motion invariances that a certain local motion might possess. In this paper, a technique for estimating the invariant motion parameters of non-translational motion fields is proposed, which leads to more efficient estimation, smoothing or coding of the motion field. It is shown that the algorithm performs well, even in high noise levels, i.e., in the case of noisy output of the motion estimator.


Proceedings ArticleDOI
01 Sep 1996
TL;DR: A new variation of Hough Transform is proposed that can be used to detect shapes or curves in an image, with better accuracy, especially in noisy images, based on a fuzzy split of the Hough transform parameter space.
Abstract: In this paper a new variation of the Hough transform is proposed. It can be used to detect shapes or curves in an image with better accuracy, especially in noisy images. It is based on a fuzzy split of the Hough transform parameter space. The parameter space is split into fuzzy cells which are defined as fuzzy numbers. This fuzzy split of the parameter space provides the advantage of using the uncertainty of the contour point locations, which is increased when noisy images have to be used. Moreover, the computation time is slightly increased by this method, in comparison with the classical Hough transform.

Proceedings ArticleDOI
12 May 1996
TL;DR: The L_p comparators, which are based on the theory of nonlinear mean filters, are introduced, and it is shown that the disadvantage of introducing errors is counter-balanced by their faster performance compared to that of classical comparators.
Abstract: In certain signal processing applications there is a need for fast hardware implementations of sorting algorithms and networks. So far, classical minimum/maximum comparators have been utilized in various sorting network topologies. However, these comparators cannot attain high speeds in operation, due to limitations in digital technology. This paper introduces the L_p comparators, which are based on the theory of nonlinear mean filters. It is shown that the disadvantage of introducing errors is counter-balanced by their faster performance, when compared to the performance of classical comparators. A novel L_p comparator-based sorting network is also presented, for the fast calculation of the median of a data set. In this implementation, the number of steps required to produce the ordered output is not related to the number of inputs.
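A small sketch of the L_p comparator idea: the maximum (or minimum) of two non-negative values is approximated by an L_p power mean, which approaches the true extremum as p grows, and the gap for finite p is the error the abstract refers to. The exact comparator definition and the sorting-network construction of the paper are not reproduced here.

```python
import numpy as np

def lp_max(a, b, p=8):
    # L_p "maximum": (a^p + b^p)^(1/p) >= max(a, b), tending to it as p -> infinity
    return (a ** p + b ** p) ** (1.0 / p)

def lp_min(a, b, p=8):
    # dual form for the minimum, using negative exponents
    return (a ** -p + b ** -p) ** (-1.0 / p)

a, b = 12.0, 30.0
print(lp_max(a, b), max(a, b))   # slightly above 30.0 vs 30.0
print(lp_min(a, b), min(a, b))   # slightly below 12.0 vs 12.0
```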

Proceedings ArticleDOI
13 Oct 1996
TL;DR: A simulation of an access control system is presented that deals with the problem of access control and copyright protection for broadcast image-related services; these include mainly Pay-TV, but also existing and forthcoming multimedia services expected to share the same channels as digital TV in the future.
Abstract: This paper presents a simulation of an access control system that deals with the problem of access control and copyright protection for broadcast image-related services. The services of interest include mainly Pay-TV, but also the existing and forthcoming multimedia services which are expected to share the same channels as digital TV for their diffusion in the future. We assume that these services have the basic characteristic of interactivity, which implies the existence of a return channel from the users to the service providers. The simulation concentrates on the basic operations provided by the system, which make it flexible and, at the same time, enable or support the access control and billing of the services.

Proceedings ArticleDOI
25 Aug 1996
TL;DR: A new variation of the Hough transform is proposed in which the parameter space is recursively split into fuzzy cells defined as fuzzy numbers; it can be used to detect contours in an image with better accuracy, especially in noisy images.
Abstract: In this paper a new variation of the Hough transform is proposed. The parameter space of the Hough transform is recursively and coarsely split into fuzzy cells which are defined as fuzzy numbers. This fuzzy partition of the parameter space provides the advantage of using the uncertainty of the contour point locations, which is increased when noisy images have to be used. It can be used to detect contours in an image with better accuracy, especially in noisy images. Moreover, the regions of the parameter space having no contours are rejected during the iterations. As a result the computation time is significantly decreased.

Proceedings ArticleDOI
16 Sep 1996
TL;DR: Three novel adaptive multichannel L-filters based on marginal data ordering that outperform the other candidates in noise suppression for color images corrupted by mixed impulsive and additive white contaminated Gaussian noise are proposed.
Abstract: Three novel adaptive multichannel L-filters based on marginal data ordering are proposed. They rely on well-known algorithms for the unconstrained minimization of the mean squared error (MSE), namely, the least mean squares (LMS), the normalized LMS (NLMS) and the LMS-Newton (LMSN) algorithm. Performance comparisons in color image filtering have been made both in RGB and U*V*W* color spaces. The proposed adaptive multichannel L-filters outperform the other candidates in noise suppression for color images corrupted by mixed impulsive and additive white contaminated Gaussian noise.