Author

Dimitrios Hatzinakos

Bio: Dimitrios Hatzinakos is an academic researcher from University of Toronto. The author has contributed to research in topics: Biometrics & Wireless sensor network. The author has an h-index of 46 and has co-authored 257 publications receiving 10,061 citations.


Papers
Journal ArticleDOI
TL;DR: The problem of blind deconvolution for images is introduced, the basic principles and methodologies behind the existing algorithms are provided, and the current trends and the potential of this difficult signal processing problem are examined.
Abstract: The goal of image restoration is to reconstruct the original scene from a degraded observation. This recovery process is critical to many image processing applications. Although classical linear image restoration has been thoroughly studied, the more difficult problem of blind image restoration has numerous research possibilities. We introduce the problem of blind deconvolution for images, provide an overview of the basic principles and methodologies behind the existing algorithms, and examine the current trends and the potential of this difficult signal processing problem. A broad review of blind deconvolution methods for images is given to portray the experience of the authors and of the many other researchers in this area. We first introduce the blind deconvolution problem for general signal processing applications. The specific challenges encountered in image-related restoration applications are explained. Analytic descriptions of the structure of the major blind deconvolution approaches for images then follow. The application areas, convergence properties, complexity, and other implementation issues are addressed for each approach. We then discuss the strengths and limitations of various approaches based on theoretical expectations and computer simulations.
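The survey covers families of algorithms rather than a single method, but the flavor of iterative blind restoration can be conveyed with a toy alternating Richardson-Lucy sketch. The function name, kernel size, and iteration counts below are illustrative assumptions, not anything prescribed by the paper:

```python
# Minimal sketch of alternating (blind) Richardson-Lucy deconvolution.
# An illustrative toy, not a specific algorithm from the survey.
import numpy as np
from scipy.signal import fftconvolve

def blind_richardson_lucy(observed, psf_size=7, n_outer=10, n_inner=5, eps=1e-12):
    """Alternately update the latent image and the blur kernel (PSF)."""
    image = np.full_like(observed, observed.mean(), dtype=float)
    psf = np.full((psf_size, psf_size), 1.0 / psf_size**2)
    for _ in range(n_outer):
        # Refine the image with the current PSF estimate.
        for _ in range(n_inner):
            blurred = fftconvolve(image, psf, mode="same")
            ratio = observed / (blurred + eps)
            image *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
        # Refine the PSF with the current image estimate.
        blurred = fftconvolve(image, psf, mode="same")
        ratio = observed / (blurred + eps)
        corr = fftconvolve(ratio, image[::-1, ::-1], mode="same")
        cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
        half = psf_size // 2
        psf *= corr[cy - half: cy + half + 1, cx - half: cx + half + 1]
        psf /= psf.sum() + eps
    return image, psf
```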

1,332 citations

Journal ArticleDOI
01 Jul 1999
TL;DR: In this article, the authors proposed a fragile watermarking approach which embeds a watermark in the discrete wavelet domain of the image by quantizing the corresponding coefficients, allowing the user to make application-dependent decisions concerning whether an image that has, for instance, been JPEG compressed still has credibility.
Abstract: In this paper, we consider the problem of digital watermarking to ensure the credibility of multimedia. We specifically address the problem of fragile digital watermarking for the tamper proofing of still images. Applications of our problem include authentication for courtroom evidence, insurance claims, and journalistic photography. We present a novel fragile watermarking approach which embeds a watermark in the discrete wavelet domain of the image by quantizing the corresponding coefficients. Tamper detection is possible in localized spatial and frequency regions. Unlike previously proposed techniques, this novel approach provides information on specific frequencies of the image that have been modified. This allows the user to make application-dependent decisions concerning whether an image, which is JPEG compressed for instance, still has credibility. Analysis is provided to evaluate the performance of the technique to varying system parameters. In addition, we compare the performance of the proposed method to existing fragile watermarking techniques to demonstrate the success and potential of the method for practical multimedia tamper proofing and authentication.
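As a rough illustration of quantization-based embedding in the wavelet domain (not the paper's exact scheme), one can force detail coefficients to even or odd multiples of a quantization step and later check whether the parity still matches the watermark. The wavelet choice, step size, and function names below are assumptions:

```python
# Toy sketch of fragile watermarking by quantizing DWT detail coefficients.
# Requires PyWavelets (pywt); illustrates the general idea only.
import numpy as np
import pywt

def embed_fragile(image, bits, step=8.0):
    """Embed binary watermark bits by rounding diagonal-detail coefficients
    to even (bit 0) or odd (bit 1) multiples of `step`."""
    cA, (cH, cV, cD) = pywt.wavedec2(image.astype(float), "haar", level=1)
    flat = cD.ravel().copy()
    for i, bit in enumerate(bits[: flat.size]):
        q = int(np.round(flat[i] / step))
        if q % 2 != bit:            # force the parity to encode the bit
            q += 1
        flat[i] = q * step
    return pywt.waverec2([cA, (cH, cV, flat.reshape(cD.shape))], "haar")

def detect_tamper(image, bits, step=8.0):
    """Return a boolean array flagging embedded positions whose parity no
    longer matches the watermark (i.e. locally tampered regions)."""
    _, (_, _, cD) = pywt.wavedec2(image.astype(float), "haar", level=1)
    q = np.round(cD.ravel()[: len(bits)] / step).astype(int)
    return (q % 2) != np.asarray(bits)
```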

554 citations

Proceedings ArticleDOI
12 May 1998
TL;DR: In this article, a multiresolution wavelet fusion technique is proposed for the digital watermarking of still images; the method is robust to a variety of signal distortions and does not require the original unmarked image for watermark extraction.
Abstract: We present a novel technique for the digital watermarking of still images based on the concept of multiresolution wavelet fusion. The algorithm is robust to a variety of signal distortions. The original unmarked image is not required for watermark extraction. We provide analysis to describe the behaviour of the method for varying system parameter values. We compare our approach with another transform domain watermarking method. Simulation results show the superior performance of the technique and demonstrate its potential for the robust watermarking of photographic imagery.
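A heavily simplified stand-in for robust wavelet-domain watermarking (not the fusion algorithm itself) adds a keyed pseudo-random pattern to detail subbands at several resolution levels and detects it blindly by correlation, without the original image. All names and parameter values are illustrative:

```python
# Rough sketch of robust, blind wavelet-domain watermarking.
import numpy as np
import pywt

def embed_robust(image, key=0, strength=2.0, levels=3):
    rng = np.random.default_rng(key)
    coeffs = pywt.wavedec2(image.astype(float), "db2", level=levels)
    marked = [coeffs[0]]
    for (cH, cV, cD) in coeffs[1:]:
        # Keyed +-1 pattern per subband, scaled by local coefficient magnitude.
        pat = rng.choice([-1.0, 1.0], size=cD.shape)
        marked.append((cH, cV, cD + strength * np.abs(cD) * pat))
    return pywt.waverec2(marked, "db2")

def detect_robust(image, key=0, levels=3):
    """Blind detector: correlate detail subbands with the keyed pattern;
    a large positive score suggests the watermark is present."""
    rng = np.random.default_rng(key)
    coeffs = pywt.wavedec2(image.astype(float), "db2", level=levels)
    score = 0.0
    for (_, _, cD) in coeffs[1:]:
        pat = rng.choice([-1.0, 1.0], size=cD.shape)
        score += np.sum(cD * pat) / np.sqrt(cD.size)
    return score
```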

466 citations

Journal ArticleDOI
TL;DR: The gait analysis and recognition problem is exposed to the signal processing community, with the aim of stimulating the involvement of more researchers in gait research in the future.
Abstract: This article provides an overview of the basic research directions in the field of gait analysis and recognition. The recent developments in gait research indicate that gait technologies still need to mature and that limited practical applications should be expected in the immediate future. At present, there is a potential for initial deployment of gait for recognition in conjunction with other biometrics. However, future advances in gait analysis and recognition - an open, challenging research area - are expected to result in wide deployment of gait technologies not only in surveillance, but in many other applications as well. This article exposes the gait analysis and recognition problem to the signal processing community and is intended to stimulate the involvement of more researchers in gait research in the future.

380 citations

Journal ArticleDOI
TL;DR: A fiducial-detection-based framework that incorporates analytic and appearance attributes is first introduced and a new approach based on autocorrelation (AC) in conjunction with discrete cosine transform (DCT) is proposed.
Abstract: Security concerns increase as the technology for falsification advances. There is strong evidence that a difficult-to-falsify biometric trait, the human heartbeat, can be used for identity recognition. Existing solutions for biometric recognition from electrocardiogram (ECG) signals are based on temporal and amplitude distances between detected fiducial points. Such methods rely heavily on the accuracy of fiducial detection, which is still an open problem due to the difficulty in exact localization of wave boundaries. This paper presents a systematic analysis for human identification from ECG data. A fiducial-detection-based framework that incorporates analytic and appearance attributes is first introduced. The appearance-based approach needs detection of one fiducial point only. Further, to completely relax the detection of fiducial points, a new approach based on autocorrelation (AC) in conjunction with discrete cosine transform (DCT) is proposed. Experimentation demonstrates that the AC/DCT method produces recognition accuracy comparable to that of the fiducial-detection-based approach.
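Reduced to its essentials, the AC/DCT idea can be sketched as follows: compute the autocorrelation of an ECG segment over a window of lags, keep the low-order DCT coefficients as a feature vector, and match by nearest neighbor. The lag and coefficient counts, and the function names, are illustrative assumptions:

```python
# Minimal AC/DCT sketch: no fiducial points need to be detected.
import numpy as np
from scipy.fft import dct

def ac_dct_features(ecg_segment, num_lags=100, num_coeffs=30):
    x = ecg_segment - np.mean(ecg_segment)
    # Normalized autocorrelation over the first `num_lags` lags.
    ac = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) - 1 + num_lags]
    ac /= ac[0]
    # Low-order DCT coefficients form the identity feature vector.
    return dct(ac, norm="ortho")[:num_coeffs]

def identify(probe, gallery):
    """Nearest-neighbor match of a probe feature vector against a gallery
    of {subject_id: feature_vector} templates."""
    return min(gallery, key=lambda s: np.linalg.norm(probe - gallery[s]))
```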

361 citations


Cited by
Book
24 Oct 2001
TL;DR: Digital Watermarking covers the crucial research findings in the field: it explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied.
Abstract: Digital watermarking is a key ingredient to copyright protection. It provides a solution to illegal copying of digital material and has many other useful applications such as broadcast monitoring and the recording of electronic transactions. Now, for the first time, there is a book that focuses exclusively on this exciting technology. Digital Watermarking covers the crucial research findings in the field: it explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied. As a result, additional groundwork is laid for future developments in this field, helping the reader understand and anticipate new approaches and applications.

2,849 citations

Journal ArticleDOI
01 Jul 1999
TL;DR: An overview of the field of information-hiding techniques is given: what the authors know, what works, what does not, and what the interesting topics for research are.
Abstract: Information-hiding techniques have recently become important in a number of application areas. Digital audio, video, and pictures are increasingly furnished with distinguishing but imperceptible marks, which may contain a hidden copyright notice or serial number or even help to prevent unauthorized copying directly. Military communications systems make increasing use of traffic security techniques which, rather than merely concealing the content of a message using encryption, seek to conceal its sender, its receiver, or its very existence. Similar techniques are used in some mobile phone systems and schemes proposed for digital elections. Criminals try to use whatever traffic security properties are provided intentionally or otherwise in the available communications systems, and police forces try to restrict their use. However, many of the techniques proposed in this young and rapidly evolving field can trace their history back to antiquity, and many of them are surprisingly easy to circumvent. In this article, we try to give an overview of the field, of what we know, what works, what does not, and what are the interesting topics for research.

2,561 citations

Book
18 Oct 2012
TL;DR: This rigorous introduction to stochastic geometry will enable you to obtain powerful, general estimates and bounds of wireless network performance and make good design choices for future wireless architectures and protocols that efficiently manage interference effects.
Abstract: Covering point process theory, random geometric graphs and coverage processes, this rigorous introduction to stochastic geometry will enable you to obtain powerful, general estimates and bounds of wireless network performance and make good design choices for future wireless architectures and protocols that efficiently manage interference effects. Practical engineering applications are integrated with mathematical theory, with an understanding of probability the only prerequisite. At the same time, stochastic geometry is connected to percolation theory and the theory of random geometric graphs and accompanied by a brief introduction to the R statistical computing language. Combining theory and hands-on analytical techniques with practical examples and exercises, this is a comprehensive guide to the spatial stochastic models essential for modelling and analysis of wireless network performance.
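In the spirit of the book, a few lines of simulation already give baseline numbers: drop interferers as a Poisson point process (PPP) on a disc and estimate the signal-to-interference ratio (SIR) at a receiver at the origin. The density, path-loss exponent, and link distance below are arbitrary illustrative values:

```python
# Sketch: empirical SIR / coverage probability under a PPP of interferers.
import numpy as np

def simulate_sir(density=1e-4, radius=2000.0, path_loss_exp=4.0,
                 link_dist=50.0, trials=10000, seed=0):
    rng = np.random.default_rng(seed)
    area = np.pi * radius**2
    sirs = np.empty(trials)
    for t in range(trials):
        n = rng.poisson(density * area)            # number of interferers
        r = radius * np.sqrt(rng.uniform(size=n))  # uniform distances on the disc
        interference = np.sum(r ** (-path_loss_exp)) if n else 0.0
        signal = link_dist ** (-path_loss_exp)
        sirs[t] = signal / interference if interference > 0 else np.inf
    return sirs

sirs = simulate_sir()
print("P(SIR > 0 dB) ~", np.mean(sirs > 1.0))      # empirical coverage probability
```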

2,327 citations

Journal ArticleDOI
TL;DR: This paper proposes an alternate approach using L1-norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models and demonstrates its superiority to other super-resolution methods.
Abstract: Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. We propose an alternate approach using L1-norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.
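A heavily simplified sketch of this kind of approach, assuming known integer shifts and no blur, is steepest descent on an L1 data-fit term plus a bilateral total-variation prior; the step size, regularization weight, and helper names are illustrative, not the paper's implementation:

```python
# Simplified robust super-resolution: L1 data term + bilateral-TV prior.
import numpy as np

def shift(img, dx, dy):
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def btv_gradient(x, p=2, alpha=0.7):
    """Gradient of the bilateral total-variation regularizer."""
    g = np.zeros_like(x)
    for l in range(-p, p + 1):
        for m in range(-p, p + 1):
            if l == 0 and m == 0:
                continue
            d = np.sign(x - shift(x, l, m))
            g += alpha ** (abs(l) + abs(m)) * (d - shift(d, -l, -m))
    return g

def super_resolve(lr_frames, shifts, factor=2, lam=0.05, step=0.5, iters=50):
    # Nearest-neighbor upscaling of the first frame as the initial HR guess.
    x = np.kron(lr_frames[0].astype(float), np.ones((factor, factor)))
    for _ in range(iters):
        grad = np.zeros_like(x)
        for y, (dx, dy) in zip(lr_frames, shifts):
            sim = shift(x, -dx, -dy)[::factor, ::factor]  # forward model: shift + decimate
            err = np.sign(sim - y)                         # L1 data-fit gradient
            up = np.zeros_like(x)
            up[::factor, ::factor] = err                   # adjoint: zero-insertion upsample
            grad += shift(up, dx, dy)                      # adjoint of the shift
        x -= step * (grad + lam * btv_gradient(x))
    return x
```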

2,175 citations