Author

Hyoung Joong Kim

Other affiliations: Kangwon National University
Bio: Hyoung Joong Kim is an academic researcher from Korea University. The author has contributed to research topics including information hiding and digital watermarking. The author has an h-index of 33 and has co-authored 167 publications receiving 4,206 citations. Previous affiliations of Hyoung Joong Kim include Kangwon National University.


Papers
Journal ArticleDOI
TL;DR: This paper presents a reversible (lossless) watermarking algorithm for images that embeds data through prediction errors and, in most cases, requires no location map.
Abstract: This paper presents a reversible or lossless watermarking algorithm for images that, in most cases, requires no location map. The algorithm employs prediction errors to embed data into an image. A sorting technique orders the prediction errors by the magnitude of their local variance. Using sorted prediction errors and, if needed (though rarely), a reduced-size location map allows more data to be embedded into the image with less distortion. The performance of the proposed reversible watermarking scheme is evaluated on different images and compared with four methods: those of Kamstra and Heijmans, Thodi and Rodriguez, and Lee et al. The results clearly indicate that the proposed scheme can embed more data with less distortion.
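As a rough illustration of the mechanism described above, the sketch below embeds bits by expanding prediction errors, visiting pixels in order of increasing local variance. The mean-of-four-neighbours predictor, the variance window, and the omission of the location map and of decoder-side bookkeeping are simplifying assumptions for illustration, not the authors' exact algorithm.

import numpy as np

def embed_pee(image, bits):
    """Embed a list of 0/1 bits into a grayscale uint8 image; border pixels are untouched."""
    img = image.astype(np.int32)
    h, w = img.shape
    cells = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # Simple predictor: mean of the four neighbours (an assumption for this sketch).
            neigh = np.array([img[i-1, j], img[i+1, j], img[i, j-1], img[i, j+1]])
            pred = int(round(neigh.mean()))
            var = float(neigh.var())
            cells.append((var, i, j, pred))
    # Sort by local variance: smooth regions give small prediction errors,
    # so expanding them first yields more capacity at less distortion.
    cells.sort(key=lambda c: c[0])
    marked = img.copy()
    k = 0
    for var, i, j, pred in cells:
        if k >= len(bits):
            break
        e = img[i, j] - pred
        new_val = pred + 2 * e + bits[k]      # expand the prediction error to carry one bit
        if 0 <= new_val <= 255:               # skip pixels that would over/underflow
            marked[i, j] = new_val
            k += 1
    return marked.astype(np.uint8)

Extraction would revisit pixels in the same variance order and invert the expansion; the rarely needed reduced-size location map in the paper is what handles the pixels skipped here because of overflow.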

773 citations

Journal ArticleDOI
TL;DR: The proposed scheme is shown to outperform the original difference expansion scheme by Tian and its improved version by Kamstra and Heijmans; the improvement is made possible by exploiting the quasi-Laplace distribution of the difference values.
Abstract: Reversible data embedding theory has marked a new epoch for data hiding and information security. Being reversible, such schemes allow both the original data and the embedded data to be completely restored. The difference expansion transform is a remarkable breakthrough in reversible data-hiding schemes: it achieves high embedding capacity while keeping distortion low. This paper shows that the difference expansion method, with a simplified location map and a new expandability condition, can achieve more embedding capacity while keeping the distortion at the same level as the original expansion method. The performance of the proposed scheme is shown to be better than the original difference expansion scheme by Tian and its improved version by Kamstra and Heijmans. This improvement is made possible by exploiting the quasi-Laplace distribution of the difference values.
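To make the difference expansion transform concrete, here is a minimal sketch of the standard Tian-style transform on a single 8-bit pixel pair. The paper's simplified location map and new expandability condition are not reproduced; the overflow test shown is the usual one.

def de_embed(x, y, bit):
    """Embed one bit into the pixel pair (x, y); returns (x', y') or None if not expandable."""
    l = (x + y) // 2          # integer average, invariant under embedding
    h = x - y                 # difference
    h_new = 2 * h + bit       # expanded difference carries the bit
    x_new = l + (h_new + 1) // 2
    y_new = l - h_new // 2
    if 0 <= x_new <= 255 and 0 <= y_new <= 255:
        return x_new, y_new
    return None               # in the full scheme, a location map records such pairs

def de_extract(x_new, y_new):
    """Recover the embedded bit and the original pixel pair."""
    l = (x_new + y_new) // 2
    h_new = x_new - y_new
    bit = h_new & 1
    h = h_new // 2            # floor division undoes the expansion
    return bit, (l + (h + 1) // 2, l - h // 2)

For example, de_embed(206, 201, 1) expands the difference 5 into 11 and returns (209, 198), and de_extract(209, 198) recovers the bit 1 and the original pair (206, 201).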

330 citations

Journal ArticleDOI
01 Mar 2009
TL;DR: Experimental results show that the visual quality estimated by the proposed RCGA-ELM closely tracks the mean opinion score, and indicate that the proposed schemes significantly improve the performance of the ELM classifier for image quality assessment under class imbalance.
Abstract: In this paper, we present a machine learning approach to measure the visual quality of JPEG-coded images. The features for predicting perceived image quality are extracted by considering key human visual sensitivity (HVS) factors such as edge amplitude, edge length, background activity, and background luminance. Image quality assessment involves estimating the functional relationship between HVS features and subjective test scores. The quality of the compressed images is estimated without referring to their original images (a 'no-reference' metric). Here, the quality estimation problem is transformed into a classification problem and solved using the extreme learning machine (ELM) algorithm. In ELM, the input weights and bias values are randomly chosen and the output weights are calculated analytically. The generalization performance of the ELM algorithm for classification problems with an imbalance in the number of samples per quality class depends critically on the input weights and bias values. Hence, we propose two schemes, namely the k-fold selection scheme (KS-ELM) and the real-coded genetic algorithm (RCGA-ELM), to select the input weights and bias values such that the generalization performance of the classifier is maximized. Results indicate that the proposed schemes significantly improve the performance of the ELM classifier under imbalanced conditions for image quality assessment. The experimental results show that the visual quality estimated by the proposed RCGA-ELM emulates the mean opinion score very well. The results are compared with the existing JPEG no-reference image quality metric and the full-reference structural similarity image quality metric.
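The mechanism the abstract relies on, the extreme learning machine, can be sketched in a few lines: input weights and biases are drawn at random and only the output weights are solved analytically. The tanh activation, the hidden-layer size, the five quality classes, and the omission of the HVS feature extraction and of the k-fold / genetic weight selection are illustrative assumptions.

import numpy as np

def elm_train(X, y, n_hidden=50, n_classes=5, seed=0):
    """X: (n_samples, n_features) feature matrix; y: integer class labels in 0..n_classes-1."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden-layer output matrix
    T = np.eye(n_classes)[y]                          # one-hot target matrix
    beta = np.linalg.pinv(H) @ T                      # output weights via pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = np.tanh(X @ W + b)
    return np.argmax(H @ beta, axis=1)                # predicted quality class

The paper's KS-ELM and RCGA-ELM essentially search over these random (W, b) draws, keeping the pair that generalizes best on the imbalanced quality classes.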

228 citations

Journal ArticleDOI
TL;DR: Experimental results show that the proposed watermarking algorithm is robust against compression algorithms such as MP3 and AAC, as well as common signal processing manipulation attacks.
Abstract: The paper presents the modified patchwork algorithm (MPA), a statistical technique for audio watermarking in transform domains (not only the discrete cosine transform (DCT), but also the DFT and DWT). The MPA is an enhanced version of the conventional patchwork algorithm and is sufficiently robust to withstand some attacks defined by the Secure Digital Music Initiative (SDMI). Experimental results show that the proposed watermarking algorithm is robust against compression algorithms such as MP3 and AAC, as well as common signal processing manipulation attacks.
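For context, the sketch below shows the classical patchwork idea in a transform domain: two key-selected patches of coefficients are shifted in opposite directions, and the detector tests whether their means differ. The DCT, the mid-band index range (which assumes frames of more than 5000 samples), the embedding strength, and the key are illustrative assumptions; the MPA itself refines this scheme for better robustness.

import numpy as np
from scipy.fft import dct, idct

def patchwork_embed(frame, key=1234, n=200, delta=2.0):
    """Mark one audio frame (1-D float array, more than 5000 samples assumed)."""
    coeffs = dct(frame, norm='ortho')
    rng = np.random.default_rng(key)
    idx = rng.choice(np.arange(1000, 5000), size=2 * n, replace=False)
    a, b = idx[:n], idx[n:]
    coeffs[a] += delta                 # patch A raised
    coeffs[b] -= delta                 # patch B lowered
    return idct(coeffs, norm='ortho')

def patchwork_detect(frame, key=1234, n=200):
    """Keyed test statistic: close to 0 for unmarked audio, close to 2*delta when marked."""
    coeffs = dct(frame, norm='ortho')
    rng = np.random.default_rng(key)
    idx = rng.choice(np.arange(1000, 5000), size=2 * n, replace=False)
    a, b = idx[:n], idx[n:]
    return coeffs[a].mean() - coeffs[b].mean()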

208 citations

Journal ArticleDOI
TL;DR: A new histogram shifting-based RDH scheme for JPEG images is proposed, in which the zero coefficients remain unchanged and only coefficients with values 1 and -1 are expanded to carry message bits, and a block selection strategy based on the number of zero coefficients in each 8 × 8 block can be utilized to adaptively choose DCT coefficients for data hiding.
Abstract: Among the various digital image formats used in daily life, the Joint Photographic Experts Group (JPEG) format is the most popular. Therefore, reversible data hiding (RDH) in JPEG images is important and useful for many applications such as archive management and image authentication. However, RDH in JPEG images is considerably more difficult than in uncompressed images because there is less information redundancy in JPEG images, and any modification in the compressed domain may introduce more distortion in the host image. Furthermore, along with the embedding capacity and fidelity (visual quality) that must be considered for uncompressed images, the storage size of the marked JPEG file should also be considered. In this paper, based on the philosophy behind the JPEG encoder and the statistical properties of discrete cosine transform (DCT) coefficients, we present some basic insights into how to select quantized DCT coefficients for RDH. Then, a new histogram shifting-based RDH scheme for JPEG images is proposed, in which the zero coefficients remain unchanged and only coefficients with values 1 and −1 are expanded to carry message bits. Moreover, a block selection strategy based on the number of zero coefficients in each 8 × 8 block is proposed, which can be utilized to adaptively choose DCT coefficients for data hiding. Experimental results demonstrate that the proposed method easily achieves high embedding capacity and good visual quality. The storage size of the host JPEG file can also be well preserved.
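A minimal sketch of the coefficient-level rule described above, applied to a flat list of quantized AC coefficients: zeros stay put, coefficients of value ±1 are expanded to carry a bit, and larger magnitudes are shifted outward so extraction stays unambiguous. Parsing the JPEG bitstream and the zero-count-based block selection are omitted, and padding unused ±1 positions with 0-bits is a simplification.

def hs_embed_block(ac_coeffs, bits):
    """Embed bits into one block's quantized AC coefficients.
    Returns (marked coefficients, number of payload bits consumed)."""
    out, k = [], 0
    for c in ac_coeffs:
        if c == 0:
            out.append(0)                            # zeros never change (helps preserve file size)
        elif c in (1, -1):
            b = bits[k] if k < len(bits) else 0      # pad with 0 once the payload is used up
            if k < len(bits):
                k += 1
            out.append(c + b if c == 1 else c - b)   # 1 -> 1/2, -1 -> -1/-2
        elif c >= 2:
            out.append(c + 1)                        # shift outward so only expanded 1 can land on 2
        else:
            out.append(c - 1)                        # c <= -2: shift outward on the negative side
    return out, k

def hs_extract_block(marked):
    """Recover the embedded bits and the original coefficients."""
    bits, orig = [], []
    for c in marked:
        if c == 0:
            orig.append(0)
        elif c in (1, 2):
            bits.append(c - 1); orig.append(1)
        elif c in (-1, -2):
            bits.append(-1 - c); orig.append(-1)
        elif c >= 3:
            orig.append(c - 1)
        else:
            orig.append(c + 1)
    return bits, orig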

174 citations


Cited by
Book
24 Oct 2001
TL;DR: Digital Watermarking covers the crucial research findings in the field and explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied.
Abstract: Digital watermarking is a key ingredient to copyright protection. It provides a solution to illegal copying of digital material and has many other useful applications such as broadcast monitoring and the recording of electronic transactions. Now, for the first time, there is a book that focuses exclusively on this exciting technology. Digital Watermarking covers the crucial research findings in the field: it explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied. As a result, additional groundwork is laid for future developments in this field, helping the reader understand and anticipate new approaches and applications.

2,849 citations

Journal ArticleDOI
TL;DR: In this paper, the authors report the current state of theoretical research and practical advances on extreme learning machines (ELM) and provide a comprehensive view of these advances together with future perspectives.

1,289 citations

Journal ArticleDOI
TL;DR: This paper proposes a universal distortion design called universal wavelet relative distortion (UNIWARD) that can be applied for embedding in an arbitrary domain and demonstrates experimentally using rich models as well as targeted attacks that steganographic methods built using UNIWARD match or outperform the current state of the art in the spatial domain, JPEG domain, and side-informed JPEG domain.
Abstract: Currently, the most successful approach to steganography in empirical objects, such as digital media, is to embed the payload while minimizing a suitably defined distortion function. The design of the distortion is essentially the only task left to the steganographer, since efficient practical codes exist that embed near the payload-distortion bound. The practitioner’s goal is to design the distortion to obtain a scheme with high empirical statistical undetectability. In this paper, we propose a universal distortion design called universal wavelet relative distortion (UNIWARD) that can be applied for embedding in an arbitrary domain. The embedding distortion is computed as a sum of relative changes of coefficients in a directional filter bank decomposition of the cover image. The directionality forces the embedding changes into those parts of the cover object that are difficult to model in multiple directions, such as textures or noisy regions, while avoiding smooth regions or clean edges. We demonstrate experimentally, using rich models as well as targeted attacks, that steganographic methods built using UNIWARD match or outperform the current state of the art in the spatial domain, JPEG domain, and side-informed JPEG domain.
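The distortion the abstract describes can be sketched as follows: the cost of turning a cover X into a stego image Y is the sum of relative changes of directional wavelet coefficients. The single-level decimated Daubechies-8 decomposition from PyWavelets used here stands in for the paper's undecimated directional filter bank, and sigma is the usual small stabilizing constant; treat this as an illustration of the idea rather than a faithful UNIWARD implementation.

import numpy as np
import pywt

def uniward_distortion(cover, stego, sigma=1.0):
    """cover, stego: 2-D float arrays of the same shape; returns a scalar cost."""
    _, (cH_x, cV_x, cD_x) = pywt.dwt2(cover, 'db8')
    _, (cH_y, cV_y, cD_y) = pywt.dwt2(stego, 'db8')
    d = 0.0
    for wx, wy in ((cH_x, cH_y), (cV_x, cV_y), (cD_x, cD_y)):
        # Relative change: expensive where |wx| is small (smooth regions, clean edges),
        # cheap in textured or noisy regions, which steers embedding there.
        d += np.sum(np.abs(wx - wy) / (sigma + np.abs(wx)))
    return d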

859 citations

Book ChapterDOI
28 Jun 2010
TL;DR: A complete methodology for designing practical and highly undetectable stegosystems for real digital media, which explains why high-dimensional models might be a problem in steganalysis, and introduces HUGO, a new embedding algorithm for spatial-domain digital images whose performance is contrasted with LSB matching.
Abstract: This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus remain undetectable even for large payloads. The framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.
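The "minimize a suitably defined distortion" principle that HUGO instantiates is commonly simulated with a payload-limited sender: each pixel i gets a cost rho_i, the change probability is p_i = exp(-lam*rho_i) / (1 + exp(-lam*rho_i)), and lam is tuned so the total entropy of the changes matches the payload. The sketch below shows that simulation with binary (flip/no-flip) changes and an arbitrary cost array; HUGO's actual feature-space cost model and its practical coding are not reproduced here.

import numpy as np

def flip_probs(rho, lam):
    """Optimal change probabilities for a cost array rho at parameter lam."""
    e = np.exp(-lam * rho)
    return e / (1.0 + e)

def payload_entropy(p):
    """Expected number of embedded bits for change probabilities p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p)))

def simulate_embedding(rho, payload_bits, iters=60, seed=0):
    """Tune lam by bisection so the capacity matches the payload, then sample a change map."""
    lo, hi = 1e-6, 1e6
    for _ in range(iters):
        lam = np.sqrt(lo * hi)                       # geometric bisection
        if payload_entropy(flip_probs(rho, lam)) > payload_bits:
            lo = lam                                 # too much capacity: raise lam
        else:
            hi = lam
    p = flip_probs(rho, lam)
    return np.random.default_rng(seed).random(np.asarray(rho).shape) < p   # boolean change map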

808 citations
