Author

Mehmet U. Celik

Bio: Mehmet U. Celik is an academic researcher from Philips. The author has contributed to research on topics including watermarking and digital watermarking. The author has an h-index of 18 and has co-authored 45 publications receiving 1332 citations.

Papers
Proceedings ArticleDOI
28 Oct 2007
TL;DR: A new error-resilient privacy-preserving string searching protocol is presented that allows any finite state machine to be executed in an oblivious manner, with a communication complexity that is linear in both the number of states and the length of the input string.
Abstract: Human deoxyribonucleic acid (DNA) sequences offer a wealth of information that reveals, among other things, predisposition to various diseases and paternity relations. The breadth and personalized nature of this information highlights the need for privacy-preserving protocols. In this paper, we present a new error-resilient privacy-preserving string searching protocol that is suitable for running private DNA queries. This protocol checks whether a short template (e.g., a string that describes a mutation leading to a disease), known to one party, is present inside a DNA sequence owned by another party, accounting for possible errors and without disclosing either party's input to the other. Each query is formulated as a regular expression over a finite alphabet and implemented as an automaton. As the main technical contribution, we provide a protocol that allows any finite state machine to be executed in an oblivious manner, with a communication complexity that is linear in both the number of states and the length of the input string.
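
The abstract gives no implementation details; as a plain, non-cryptographic illustration of the automaton view of such a query, the sketch below matches a short template against a DNA sequence while tolerating a bounded number of substitution errors. The function name, the substitution-only error model, and the parameters are assumptions for illustration; the paper's contribution is running this kind of automaton obliviously between two parties, which is omitted here.

```python
# Illustrative (non-private) approximate matching of a short DNA template.
# Hypothetical names; the actual protocol evaluates such an automaton obliviously.

def approx_match(template: str, sequence: str, max_err: int = 1) -> list[int]:
    """Return end positions in `sequence` where `template` occurs
    with at most `max_err` substitution errors."""
    matches = []
    states = set()                               # active states: (chars matched, errors so far)
    for i, symbol in enumerate(sequence):
        states.add((0, 0))                       # a match may start at every position
        next_states = set()
        for matched, errors in states:
            if matched == len(template):
                continue                         # completed matches are not extended
            errors += 0 if template[matched] == symbol else 1
            if errors <= max_err:
                next_states.add((matched + 1, errors))
        states = next_states
        if any(m == len(template) for m, _ in states):
            matches.append(i)                    # template ends at position i
    return matches

print(approx_match("ACGT", "TTACGTAAAGT", max_err=1))   # -> [5, 10]
```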

239 citations

Journal ArticleDOI
TL;DR: In this paper, the authors propose a new construction of a collusion-secure fingerprinting code which is similar to a recent construction by Tardos but achieves shorter code lengths and allows for codes over arbitrary alphabets.
Abstract: Fingerprinting provides a means of tracing unauthorized redistribution of digital data by individually marking each authorized copy with a personalized serial number. In order to prevent a group of users from collectively escaping identification, collusion-secure fingerprinting codes have been proposed. In this paper, we introduce a new construction of a collusion-secure fingerprinting code which is similar to a recent construction by Tardos but achieves shorter code lengths and allows for codes over arbitrary alphabets. We present results for 'symmetric' coalition strategies. For binary alphabets and a false accusation probability $\varepsilon_1$, a code length of $m \approx \pi^2 c_0^2 \ln\frac{1}{\varepsilon_1}$ symbols is provably sufficient, for large $c_0$, to withstand collusion attacks of up to $c_0$ colluders. This improves Tardos' construction by a factor of 10. Furthermore, invoking the Central Limit Theorem in the case of sufficiently large $c_0$, we show that even a code length of $m \approx \frac{1}{2}\pi^2 c_0^2 \ln\frac{1}{\varepsilon_1}$ is adequate. Assuming the restricted digit model, the code length can be further reduced by moving from a binary alphabet to a q-ary alphabet. Numerical results show that a reduction of 35% is achievable for q = 3 and 80% for q = 10.
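
As a quick numerical sanity check of the claimed factors (a back-of-the-envelope sketch: the comparison assumes Tardos' original length of roughly $100\,c_0^2\ln\frac{1}{\varepsilon_1}$ symbols, a commonly cited figure that the abstract itself does not state):

```python
import math

c0, eps1 = 20, 1e-9          # example parameters: 20 colluders, 1e-9 false accusation probability
log_term = math.log(1 / eps1)

m_tardos = 100 * c0**2 * log_term                 # assumed Tardos baseline ~100 c0^2 ln(1/eps1)
m_new    = math.pi**2 * c0**2 * log_term          # provably sufficient length from this paper
m_clt    = 0.5 * math.pi**2 * c0**2 * log_term    # length under the Central Limit Theorem argument

print(f"Tardos baseline : {m_tardos:12,.0f} symbols")
print(f"New construction: {m_new:12,.0f} symbols  (~{m_tardos / m_new:.1f}x shorter)")
print(f"CLT estimate    : {m_clt:12,.0f} symbols  (~{m_tardos / m_clt:.1f}x shorter)")
```

Since $\pi^2 \approx 9.87$, the first ratio comes out to roughly 10, consistent with the abstract's factor-of-10 claim.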

132 citations

Posted Content
TL;DR: A new construction of a collusion-secure fingerprinting code is introduced which is similar to a recent construction by Tardos but achieves shorter code lengths, allows for codes over arbitrary alphabets, and covers 'symmetric' coalition strategies.
Abstract: Fingerprinting provides a means of tracing unauthorized redistribution of digital data by individually marking each authorized copy with a personalized serial number. In order to prevent a group of users from collectively escaping identification, collusion-secure fingerprinting codes have been proposed. In this paper, we introduce a new construction of a collusion-secure fingerprinting code which is similar to a recent construction by Tardos but achieves shorter code lengths and allows for codes over arbitrary alphabets. We present results for 'symmetric' coalition strategies. For binary alphabets and a false accusation probability $\varepsilon_1$, a code length of $m \approx \pi^2 c_0^2 \ln\frac{1}{\varepsilon_1}$ symbols is provably sufficient, for large $c_0$, to withstand collusion attacks of up to $c_0$ colluders. This improves Tardos' construction by a factor of 10. Furthermore, invoking the Central Limit Theorem in the case of sufficiently large $c_0$, we show that even a code length of $m \approx \frac{1}{2}\pi^2 c_0^2 \ln\frac{1}{\varepsilon_1}$ is adequate. Assuming the restricted digit model, the code length can be further reduced by moving from a binary alphabet to a q-ary alphabet. Numerical results show that a reduction of 35% is achievable for q = 3 and 80% for q = 10.

129 citations

Journal ArticleDOI
TL;DR: This work introduces variables in place of Tardos' hard-coded constants, allows for an independent choice of the desired false positive (FP) and false negative (FN) error rates, and studies the statistical properties of the code.
Abstract: Tardos has proposed a randomized fingerprinting code that is provably secure against collusion attacks. We revisit his scheme and show that it has significantly better performance than suggested in the original paper. First, we introduce variables in place of Tardos' hard-coded constants and allow for an independent choice of the desired false positive (FP) and false negative (FN) error rates. Following through Tardos' proofs with these modifications, we show that the code length can be reduced by more than a factor of two in typical content distribution applications where high FN rates can be tolerated. Second, we study the statistical properties of the code. Under some reasonable assumptions, the accusation sums can be regarded as Gaussian-distributed stochastic variables. In this approximation, the desired error rates are achieved by a code length half that of the first approach. Overall, typical FP and FN error rates may be achieved with a code length roughly one fifth of that in the original construction.
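
For readers unfamiliar with the scheme being analysed, the following is a minimal sketch of Tardos-style code generation and accusation scoring, i.e., the "accusation sums" the paper models as Gaussian. The cutoff t, the parameter values, and the majority-vote collusion strategy are simplifying assumptions for illustration, not the paper's optimized constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_code(n_users, m, t=0.01):
    """Tardos-style code: each column i gets a bias p_i drawn from an
    arcsine-like density on [t, 1-t]; user codewords are Bernoulli(p_i) bits."""
    r = rng.uniform(np.arcsin(np.sqrt(t)), np.arcsin(np.sqrt(1 - t)), size=m)
    p = np.sin(r) ** 2
    X = (rng.random((n_users, m)) < p).astype(int)
    return X, p

def accusation_sums(X, p, y):
    """Tardos' (asymmetric) scoring: only columns where the pirate copy y shows
    a '1' count; agreeing adds sqrt((1-p)/p), disagreeing subtracts sqrt(p/(1-p))."""
    g1 = np.sqrt((1 - p) / p)
    g0 = -np.sqrt(p / (1 - p))
    U = np.where(y == 1, np.where(X == 1, g1, g0), 0.0)
    return U.sum(axis=1)

# Tiny demo: 100 users, code length 5000, 4 colluders using a majority-vote strategy.
X, p = generate_code(n_users=100, m=5000)
colluders = [3, 17, 42, 77]
y = (X[colluders].mean(axis=0) >= 0.5).astype(int)      # example collusion strategy
S = accusation_sums(X, p, y)
print("highest-scoring users:", np.sort(np.argsort(S)[-4:]))   # the colluders should dominate
```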

104 citations

Posted Content
TL;DR: The authors show that the Tardos scheme can be used with a code length roughly one fifth of that in the original construction, using a Gaussian approximation for the probability density functions of the accusations.
Abstract: We review the fingerprinting scheme by Tardos and show that it has a much better performance than suggested by the proofs in Tardos' original paper. In particular, the length of the codewords can be significantly reduced. First, we generalize the proofs of the false positive and false negative error probabilities with the following modifications: (1) we replace Tardos' hard-coded numbers by variables and (2) we allow for independently chosen false positive and false negative error rates. It turns out that all the collusion-resistance properties can still be proven when the code length is reduced by a factor of more than 2. Second, we study the statistical properties of the fingerprinting scheme, in particular the average and variance of the accusations. We identify which colluder strategy forces the content owner to employ the longest code. Using a Gaussian approximation for the probability density functions of the accusations, we show that the required false negative and false positive error rates can be achieved with codes that are a factor of 2 shorter than required by the rigorous proofs. Combining the results of these two approaches, we show that the Tardos scheme can be used with a code length roughly one fifth of that in the original construction.
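
A small self-contained experiment (under simplifying assumptions: an arcsine-like bias density with cutoff t, a stand-in pirate copy drawn independently of the innocent users, and arbitrary parameter values) that illustrates the Gaussian-approximation claim by comparing empirical quantiles of innocent-user accusation sums with a fitted normal:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_innocent, t = 2000, 5000, 0.01

r = rng.uniform(np.arcsin(np.sqrt(t)), np.arcsin(np.sqrt(1 - t)), size=m)
p = np.sin(r) ** 2                                   # column biases
y = (rng.random(m) < p).astype(int)                  # stand-in pirate copy, independent of the innocents

X = (rng.random((n_innocent, m)) < p).astype(int)    # innocent users' codewords
U = np.where(y == 1, np.where(X == 1, np.sqrt((1 - p) / p), -np.sqrt(p / (1 - p))), 0.0)
S = U.sum(axis=1)                                    # accusation sums of innocent users

mu, sigma = S.mean(), S.std()
z = {0.90: 1.2816, 0.99: 2.3263, 0.999: 3.0902}      # standard normal quantiles
for q in (0.90, 0.99, 0.999):
    print(f"q={q}: empirical {np.quantile(S, q):7.1f}  vs  normal fit {mu + sigma * z[q]:7.1f}")
```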

87 citations


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
Lixin Luo, Zhenyong Chen, Ming Chen, Xiao Zeng, Zhang Xiong
TL;DR: A novel reversible watermarking scheme using an interpolation technique, which can embed a large amount of covert data into images with imperceptible modification, and can provide greater payload capacity and higher image fidelity compared with other state-of-the-art schemes.
Abstract: Watermarking embeds information into a digital signal such as audio, image, or video. Reversible image watermarking can restore the original image without any distortion after the hidden data is extracted. In this paper, we present a novel reversible watermarking scheme using an interpolation technique, which can embed a large amount of covert data into images with imperceptible modification. Different from previous watermarking schemes, we utilize the interpolation-error, the difference between the interpolation value and the corresponding pixel value, to embed bit '1' or '0' by expanding it additively or leaving it unchanged. Due to the slight modification of pixels, high image quality is preserved. Experimental results also demonstrate that the proposed scheme can provide greater payload capacity and higher image fidelity compared with other state-of-the-art schemes.
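
To make the interpolation-error idea concrete, here is a heavily simplified, hypothetical sketch of embedding and recovering one bit per pixel by expanding a prediction error; it uses a plain 4-neighbour average as the interpolator and the classic e -> 2e + bit expansion rather than the paper's additive rule, and it ignores overflow handling and location maps.

```python
import numpy as np

def interpolate(img, i, j):
    """Hypothetical predictor: average of the 4-neighbourhood
    (the actual scheme uses a more elaborate interpolation)."""
    return int(round((int(img[i-1, j]) + int(img[i+1, j]) +
                      int(img[i, j-1]) + int(img[i, j+1])) / 4))

def embed_bit(img, i, j, bit):
    """Expand the interpolation error e -> 2e + bit and rewrite the pixel."""
    pred = interpolate(img, i, j)
    e = int(img[i, j]) - pred
    img[i, j] = pred + 2 * e + bit          # over/underflow of real pixel ranges is ignored here

def extract_bit(img, i, j):
    """Recover the bit and restore the original pixel value."""
    pred = interpolate(img, i, j)
    e2 = int(img[i, j]) - pred
    bit = e2 & 1                            # also correct for negative errors
    img[i, j] = pred + (e2 - bit) // 2      # undo the expansion
    return bit

# Tiny round-trip demo on a synthetic 5x5 image, embedding at one interior pixel.
img = np.arange(25, dtype=np.int32).reshape(5, 5) * 3
original = img.copy()
embed_bit(img, 2, 2, 1)
assert extract_bit(img, 2, 2) == 1 and np.array_equal(img, original)   # reversible
print("bit recovered, image restored")
```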

645 citations

Patent
15 Nov 2006
TL;DR: Methods and systems are presented for encoding digital watermarks into content signals, including a window identifier for identifying a sample window in the signal, an interval calculator for determining a quantization interval of the sample window, and a sampler for normalizing the sample window to provide normalized samples.
Abstract: Disclosed herein are methods and systems for encoding digital watermarks into content signals. Also disclosed are systems and methods for detecting and/or verifying digital watermarks in content signals. According to one embodiment, a system for encoding of digital watermark information includes: a window identifier for identifying a sample window in the signal; an interval calculator for determining a quantization interval of the sample window; and a sampler for normalizing the sample window to provide normalized samples. According to another embodiment, a system for pre-analyzing a digital signal for encoding at least one digital watermark using a digital filter is disclosed. According to another embodiment, a method for pre-analyzing a digital signal for encoding digital watermarks comprises: (1) providing a digital signal; (2) providing a digital filter to be applied to the digital signal; and (3) identifying an area of the digital signal that will be affected by the digital filter based on at least one measurable difference between the digital signal and a counterpart of the digital signal selected from the group consisting of the digital signal as transmitted, the digital signal as stored in a medium, and the digital signal as played back. According to another embodiment, a method for encoding a watermark in a content signal includes the steps of (1) splitting a watermark bit stream; and (2) encoding at least half of the watermark bit stream in the content signal using inverted instances of the watermark bit stream. Other methods and systems for encoding/decoding digital watermarks are also disclosed.
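
The claim language centres on a quantization interval for a sample window; the snippet below is a generic quantization-index-modulation (QIM) style sketch that embeds one bit per window by quantizing the window mean. It is only loosely inspired by the patent's wording: the window length, step size, and use of the window mean are assumptions, not the patented method.

```python
import numpy as np

STEP = 8.0          # assumed quantization interval (step size)
WIN = 64            # assumed sample-window length

def embed(signal, bits):
    """Embed one bit per window by shifting the window mean onto an even
    (bit 0) or odd (bit 1) multiple of STEP/2: a generic QIM-style scheme."""
    out = signal.astype(float)
    for k, bit in enumerate(bits):
        w = out[k * WIN:(k + 1) * WIN]
        mean = w.mean()
        target = np.round((mean - bit * STEP / 2) / STEP) * STEP + bit * STEP / 2
        w += target - mean                      # move the whole window so its mean hits the target
    return out

def detect(signal, n_bits):
    bits = []
    for k in range(n_bits):
        mean = signal[k * WIN:(k + 1) * WIN].mean()
        d0 = abs(mean - np.round(mean / STEP) * STEP)                          # distance to the bit-0 lattice
        d1 = abs(mean - (np.round((mean - STEP / 2) / STEP) * STEP + STEP / 2)) # distance to the bit-1 lattice
        bits.append(0 if d0 <= d1 else 1)
    return bits

rng = np.random.default_rng(2)
host = rng.normal(0, 100, size=4 * WIN)         # stand-in content signal
payload = [1, 0, 1, 1]
marked = embed(host, payload)
print(detect(marked, len(payload)))             # -> [1, 0, 1, 1]
```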

603 citations

Journal ArticleDOI
TL;DR: A unique watermark is directly embedded into the encrypted images by the cloud server before the images are sent to the query user, so that when an illegal image copy is found, the unlawful query user who distributed it can be traced through watermark extraction.
Abstract: With the increasing importance of images in people's daily life, content-based image retrieval (CBIR) has been widely studied. Compared with text documents, images consume much more storage space. Hence, their maintenance is considered a typical example of cloud storage outsourcing. For privacy-preserving purposes, sensitive images, such as medical and personal images, need to be encrypted before outsourcing, which makes CBIR technologies designed for the plaintext domain unusable. In this paper, we propose a scheme that supports CBIR over encrypted images without leaking the sensitive information to the cloud server. First, feature vectors are extracted to represent the corresponding images. After that, the pre-filter tables are constructed by locality-sensitive hashing to increase search efficiency. Moreover, the feature vectors are protected by the secure kNN algorithm, and image pixels are encrypted by a standard stream cipher. In addition, considering the case that authorized query users may illegally copy and distribute the retrieved images to unauthorized parties, we propose a watermark-based protocol to deter such illegal distributions. In our watermark-based protocol, a unique watermark is directly embedded into the encrypted images by the cloud server before the images are sent to the query user. Hence, when an illegal image copy is found, the unlawful query user who distributed it can be traced by watermark extraction. The security analysis and the experiments show the security and efficiency of the proposed scheme.
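
As an illustration of the pre-filter-table idea, here is a minimal random-hyperplane LSH sketch: bucket the image feature vectors so that a query only needs to be compared against the candidates in its own bucket. The hash length, feature dimension, and bucketing details are assumptions, and the secure-kNN protection and image-encryption layers are omitted entirely.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
DIM, N_BITS = 128, 8                        # assumed feature dimension and hash length

planes = rng.normal(size=(N_BITS, DIM))     # random hyperplanes shared by indexer and querier

def lsh_key(feature):
    """Sign pattern of the feature against the hyperplanes -> bucket key."""
    return tuple((planes @ feature > 0).astype(int))

# Build a pre-filter table: bucket key -> list of image ids.
features = rng.normal(size=(1000, DIM))     # stand-in image feature vectors
table = defaultdict(list)
for img_id, f in enumerate(features):
    table[lsh_key(f)].append(img_id)

# Query: only the candidates in the query's bucket need a (secure) kNN comparison.
query = features[42] + rng.normal(scale=0.01, size=DIM)   # a slightly noisy copy of image 42
candidates = table[lsh_key(query)]
print(len(candidates), 42 in candidates)    # a handful of candidates, almost always including 42
```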

563 citations

Proceedings ArticleDOI
16 Oct 2012
TL;DR: In this paper, the authors provide a provable-security treatment for garbling schemes, endowing them with a versatile syntax and multiple security definitions, including privacy, obliviousness, and authenticity.
Abstract: Garbled circuits, a classical idea rooted in the work of Yao, have long been understood as a cryptographic technique, not a cryptographic goal. Here we cull out a primitive corresponding to this technique. We call it a garbling scheme. We provide a provable-security treatment for garbling schemes, endowing them with a versatile syntax and multiple security definitions. The most basic of these, privacy, suffices for two-party secure function evaluation (SFE) and private function evaluation (PFE). Starting from a PRF, we provide an efficient garbling scheme achieving privacy and we analyze its concrete security. We next consider obliviousness and authenticity, properties needed for private and verifiable outsourcing of computation. We extend our scheme to achieve these ends. We provide highly efficient blockcipher-based instantiations of both schemes. Our treatment of garbling schemes presages more efficient garbling, more rigorous analyses, and more modularly designed higher-level protocols.
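
For readers new to the underlying technique, here is a toy sketch of garbling and evaluating a single AND gate with hash-based encryption, i.e., the classical Yao-style idea the paper formalizes. It is not the authors' optimized blockcipher-based scheme, and the zero-tag trick used to spot the correct row is just one simple choice among several.

```python
import os, hashlib, random

KEYLEN = 16
TAG = b"\x00" * 8                                    # zero tag marks the correctly decrypted row

def prf(k_a: bytes, k_b: bytes, n: int) -> bytes:
    """Hash-based stand-in for the PRF/KDF used to mask a row."""
    return hashlib.sha256(k_a + k_b).digest()[:n]

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

def garble_and_gate():
    """Return wire labels and a shuffled garbled table for an AND gate."""
    labels = {w: (os.urandom(KEYLEN), os.urandom(KEYLEN)) for w in ("a", "b", "out")}
    table = []
    for bit_a in (0, 1):
        for bit_b in (0, 1):
            out_label = labels["out"][bit_a & bit_b]
            pad = prf(labels["a"][bit_a], labels["b"][bit_b], KEYLEN + len(TAG))
            table.append(xor(pad, out_label + TAG))  # encrypt label||tag under the two input labels
    random.shuffle(table)                            # hide which row corresponds to which inputs
    return labels, table

def evaluate(table, label_a: bytes, label_b: bytes) -> bytes:
    """Holding one label per input wire, find the row that decrypts to a valid tag."""
    for ct in table:
        plain = xor(prf(label_a, label_b, len(ct)), ct)
        if plain.endswith(TAG):
            return plain[:KEYLEN]                    # the output wire label
    raise ValueError("no row decrypted correctly")

labels, table = garble_and_gate()
out = evaluate(table, labels["a"][1], labels["b"][1])    # evaluator holds labels for a=1, b=1
print(out == labels["out"][1])                           # -> True (1 AND 1 = 1)
```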

483 citations