Author

Jeng-Shyang Pan

Bio: Jeng-Shyang Pan is an academic researcher from Shandong University of Science and Technology. The author has contributed to research in topics: Digital watermarking & Watermark. The author has an h-index of 50 and has co-authored 789 publications receiving 11,645 citations. Previous affiliations of Jeng-Shyang Pan include National Kaohsiung Normal University & Technical University of Ostrava.


Papers
Proceedings ArticleDOI
21 May 2006
TL;DR: This paper presents a novel face recognition method based on complete Kernel Fisher discriminant (CKFD) analysis of Gabor features with power polynomial models, which gives superior results in the ORL and Yale face databases.
Abstract: This paper presents a novel face recognition method based on complete Kernel Fisher discriminant (CKFD) analysis of Gabor features with power polynomial models. By integrating the Gabor wavelet representation of face images with the enhanced, powerful discriminator known as CKFD analysis, the method is robust to changes in illumination, facial expression, and pose. In addition, extended polynomial kernels, namely fractional power polynomial (FPP) models, are employed in the CKFD analysis, which further enhances face recognition performance. Compared with the existing PCA, LDA, KPCA, KFD, and CKFD methods, the proposed method gives superior results on the ORL and Yale face databases. Its good performance on these two face databases suggests a promising direction for solving the pose, illumination, and expression (PIE) problem of face recognition.
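As a concrete illustration of the kernel mentioned above, the following minimal Python sketch shows a fractional power polynomial (FPP) kernel of the form k(x, y) = sign(⟨x, y⟩)·|⟨x, y⟩|^d with 0 < d < 1, as it might be plugged into a kernel discriminant method. The exponent value and the sign-preserving safeguard are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def fpp_kernel(x, y, d=0.8):
    """Fractional power polynomial (FPP) kernel:
    k(x, y) = sign(<x, y>) * |<x, y>|**d with 0 < d < 1.
    The sign term is a common safeguard when the inner product can be
    negative; for non-negative Gabor magnitude features it has no effect."""
    s = np.dot(x, y)
    return np.sign(s) * np.abs(s) ** d

def gram_matrix(X, d=0.8):
    """Kernel (Gram) matrix for row-wise samples X, of the kind consumed by
    kernel methods such as KPCA or kernel Fisher discriminant analysis."""
    G = X @ X.T
    return np.sign(G) * np.abs(G) ** d
```

In a pipeline like the one described above, the Gram matrix of Gabor feature vectors would feed the subsequent kernel discriminant step.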
Journal Article
TL;DR: It is shown that the main problem when employing 2-D non-separable wavelet transforms for texture classification is the determination of the suitable features that yield the best classification results.
Abstract: In this paper, the performance of texture classification based on pyramidal and uniform decomposition is comparatively studied with and without feature selection. The comparison, which uses the subband variance as the feature, explores the dependence among features. It is shown that the main problem when employing 2-D non-separable wavelet transforms for texture classification is the determination of the suitable features that yield the best classification results. A Max-Max algorithm, a novel evaluation function based on genetic algorithms, is presented to evaluate the classification performance of each subset of selected features. Experimental results show the selectivity of the proposed approach and confirm that the selected features capture the texture characteristics.
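The subband-variance feature described above can be sketched in a few lines. This illustrative Python snippet uses PyWavelets with a separable wavelet purely as a stand-in (the paper employs 2-D non-separable wavelets); the wavelet name, decomposition level, and pyramidal structure are assumptions made for illustration. A genetic-algorithm search over subsets of these features, as the abstract describes, would then run on top of this feature extractor.

```python
import numpy as np
import pywt  # PyWavelets; separable wavelets used here only as a stand-in

def subband_variance_features(patch, wavelet="db2", level=3):
    """Pyramidal wavelet decomposition of a texture patch; the variance of the
    coefficients in each subband is taken as one feature, following the
    subband-variance idea described in the abstract above."""
    coeffs = pywt.wavedec2(np.asarray(patch, dtype=float), wavelet, level=level)
    feats = [np.var(coeffs[0])]            # approximation subband
    for (cH, cV, cD) in coeffs[1:]:        # horizontal/vertical/diagonal details
        feats.extend([np.var(cH), np.var(cV), np.var(cD)])
    return np.array(feats)
```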
Proceedings ArticleDOI
16 Oct 2013
TL;DR: This paper presents a method to detect region-duplication forgery under affine transforms, where part of an image is copied, possibly transformed, and pasted somewhere else in the same image, by comparing normalized color histogram (NCH) features of overlapping circular blocks.
Abstract: Region duplication is a common method to produce forgery images, where part of an image is copied and pasted somewhere else in the same image. In order to fit the scene better and leave no visible artifacts, the copied region may be processed by affine transforms before being pasted. Most of the existing methods cannot handle these transforms. This paper presents a method to detect the region-duplication forgery under affine transforms. The image is first filtered and divided into overlapping circular blocks. Then the normalized color histogram (NCH) is extracted as the block feature. Forgery detection is achieved by comparing the NCH features. A new filter is designed to process the initial detection results. The final detection map is obtained after morphological operations. Simulations demonstrate the efficiency of the method.
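A rough sketch of the block-feature step described above: overlapping circular blocks are scanned across the image, a normalized color histogram (NCH) is computed per block, and duplicated regions are then sought by comparing these features. The block radius, stride, bin count, and joint-RGB histogram layout below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def nch_feature(pixels, bins=4):
    """Normalized color histogram (NCH) of a set of RGB pixels: a joint histogram
    over quantized R, G, B values, normalized to sum to one."""
    q = pixels // (256 // bins)                      # quantize each channel
    hist, _ = np.histogramdd(q.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=[(0, bins)] * 3)
    return hist.ravel() / hist.sum()

def circular_block_features(img, radius=8, step=4, bins=4):
    """Slide a circular window over an RGB image (stride `step`, both illustrative)
    and collect (position, NCH feature) pairs for later pairwise comparison."""
    h, w, _ = img.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = (yy ** 2 + xx ** 2) <= radius ** 2        # circular support
    feats = []
    for y in range(radius, h - radius, step):
        for x in range(radius, w - radius, step):
            block = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            feats.append(((y, x), nch_feature(block[mask], bins)))
    return feats
```

Candidate duplicated pairs would be those block pairs whose NCH features are sufficiently close; the paper's filtering and morphological post-processing are not shown here.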

Cited by
Journal ArticleDOI
TL;DR: It is proved that a recursive mean shift procedure converges to the nearest stationary point of the underlying density function and is thus useful for detecting the modes of the density.
Abstract: A general non-parametric technique is proposed for the analysis of a complex multimodal feature space and for delineating arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure: the mean shift. For discrete data, we prove the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density. The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and to the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity-preserving smoothing and image segmentation, are described as applications. In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.

11,727 citations
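The recursive mean shift procedure whose convergence the abstract above establishes can be sketched in a few lines of Python. The Gaussian kernel, bandwidth, and stopping tolerance below are illustrative choices; the point is only that each step moves to the kernel-weighted mean of the data and stops where the shift vector vanishes, i.e. at a stationary point (mode) of the density estimate.

```python
import numpy as np

def mean_shift_mode(x, points, bandwidth=1.0, n_iter=100, tol=1e-6):
    """Recursive mean shift with a Gaussian kernel: starting from x, repeatedly
    move to the kernel-weighted mean of the data points. The sequence converges
    to a stationary point (mode) of the underlying kernel density estimate."""
    x = np.asarray(x, dtype=float)
    points = np.asarray(points, dtype=float)
    for _ in range(n_iter):
        d2 = np.sum((points - x) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)              # Gaussian kernel weights
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:                 # shift vector ~ 0 at a mode
            break
        x = x_new
    return x
```

In the smoothing and segmentation applications described above, the same iteration is run in a joint spatial-range feature space, with the analysis resolution (bandwidth) as the only user-set parameter.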

Book
24 Oct 2001
TL;DR: Digital Watermarking covers the crucial research findings in the field and explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied.
Abstract: Digital watermarking is a key ingredient to copyright protection. It provides a solution to illegal copying of digital material and has many other useful applications such as broadcast monitoring and the recording of electronic transactions. Now, for the first time, there is a book that focuses exclusively on this exciting technology. Digital Watermarking covers the crucial research findings in the field: it explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied. As a result, additional groundwork is laid for future developments in this field, helping the reader understand and anticipate new approaches and applications.

2,849 citations

Proceedings Article
01 Jan 1999

2,010 citations

Posted Content
TL;DR: This paper defines and explores proofs of retrievability (PORs): a POR scheme enables an archive or back-up service to produce a concise proof that a user can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety.
Abstract: In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.

1,783 citations
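To make the POR idea above concrete, here is a minimal spot-checking sketch in Python. It is not the construction proposed in the paper: at setup the verifier MACs a handful of randomly chosen file blocks, and later challenges the prover for those blocks, so the verifier's storage and the challenge cost stay small and essentially independent of the file length. The block size, sample count, and MAC choice are illustrative assumptions.

```python
import hashlib
import hmac
import os
import random

BLOCK = 4096  # illustrative block size in bytes

def keygen():
    """Fresh secret key held by the verifier."""
    return os.urandom(32)

def tag_blocks(key, blocks, sample=16):
    """Verifier-side setup (illustrative spot-checking): remember a few random
    block indices and a MAC of each, so a later challenge costs O(sample)
    regardless of the total file length."""
    idx = random.sample(range(len(blocks)), min(sample, len(blocks)))
    return {i: hmac.new(key, blocks[i], hashlib.sha256).digest() for i in idx}

def prove(blocks, challenge):
    """Prover returns the challenged blocks (a real POR compresses this further)."""
    return {i: blocks[i] for i in challenge}

def verify(key, tags, response):
    """Accept only if every challenged block MACs correctly."""
    return all(
        hmac.compare_digest(tag, hmac.new(key, response.get(i, b""), hashlib.sha256).digest())
        for i, tag in tags.items()
    )

# Usage sketch:
#   blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
#   key = keygen(); tags = tag_blocks(key, blocks)
#   assert verify(key, tags, prove(blocks, tags.keys()))
```

A full POR scheme additionally hides which positions are checked and adds redundancy so that passing the check implies the whole file is recoverable; this sketch only illustrates why retrievability can be verified without downloading F.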