Author

Mihai Mitrea

Bio: Mihai Mitrea is an academic researcher from Institut Mines-Télécom. The author has contributed to research on topics including digital watermarking and false alarm, has an h-index of 8, and has co-authored 79 publications receiving 372 citations. Previous affiliations of Mihai Mitrea include Artemis and Politehnica University of Bucharest.


Papers
Journal ArticleDOI
TL;DR: This paper introduces the theoretical framework that extends binary quantization index modulation (QIM) embedding towards multiple-symbol QIM (m-QIM, where m stands for the number of symbols on which the mark is encoded prior to its embedding).
Abstract: This paper introduces the theoretical framework allowing the binary quantization index modulation (QIM) embedding techniques to be extended towards multiple-symbol QIM (m-QIM, where m stands for the number of symbols on which the mark is encoded prior to its embedding). The underlying detection method is optimized with respect to the minimization of the average error probability, under the hypothesis of white, additive Gaussian behavior for the attacks. This way, for prescribed transparency and robustness constraints, the data payload is increased by a factor of log2(m). m-QIM is experimentally validated under the frameworks of the MEDIEVALS French national project and of the SPY ITEA2 European project, related to MPEG-4 AVC robust and semi-fragile watermarking applications, respectively. The experiments are threefold and consider the data payload-robustness-transparency tradeoff. In the robust watermarking case, the main benefit is the increase of the data payload by a factor of log2(m) while keeping fixed robustness (variations lower than 3% of the bit error rate after additive noise, transcoding and StirMark random bending attacks) and transparency (set to an average PSNR of 45 dB and 65 dB for SD and HD encoded content, respectively). These experiments consider 1 h of video content. In the semi-fragile watermarking case, the main advantage of m-QIM is a relative gain of 0.11 in PSNR for fixed robustness (against transcoding), fragility (to content alteration) and data payload. These experiments consider 1 h 20 min of video content.
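To make the embedding rule concrete, below is a minimal numerical sketch of multiple-symbol QIM with dither modulation. It is not the paper's MPEG-4 AVC implementation: the quantization step, the Gaussian host samples and the noise level are illustrative choices, and each symbol simply shifts a uniform quantization lattice by s*delta/m.

```python
import numpy as np

def qim_embed(x, symbols, m, delta):
    """Embed one m-ary symbol per host sample with dithered uniform quantizers.

    Symbol s in {0, ..., m-1} selects the shifted lattice delta*Z + s*delta/m;
    each host sample is moved to the nearest point of its symbol's lattice.
    """
    d = symbols * delta / m                       # symbol-dependent dither
    return np.round((x - d) / delta) * delta + d

def qim_detect(y, m, delta):
    """Minimum-distance detection: return the symbol whose lattice is closest."""
    d = np.arange(m) * delta / m                  # all m candidate dithers
    # nearest point of each candidate lattice to each received sample
    cand = np.round((y[:, None] - d) / delta) * delta + d
    return np.argmin(np.abs(y[:, None] - cand), axis=1)

rng = np.random.default_rng(0)
m, delta, n = 4, 8.0, 10_000
host = rng.normal(0.0, 50.0, n)                   # stand-in for transform coefficients
msg = rng.integers(0, m, n)                       # 2 bits per sample when m = 4
marked = qim_embed(host, msg, m, delta)
attacked = marked + rng.normal(0.0, 0.5, n)       # white additive Gaussian "attack"
print("symbol error rate:", np.mean(qim_detect(attacked, m, delta) != msg))
```

With m = 4 each embedded sample carries log2(4) = 2 bits, which is the payload gain by a factor of log2(m) that the abstract refers to.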

26 citations

Journal ArticleDOI
TL;DR: In this article, the variability of lumen area (LA) and wall area (WA) measurements obtained on two successive MDCT acquisitions using energy-driven contour estimation (EDCE) and full width at half maximum (FWHM) approaches was evaluated.
Abstract: This study aimed to evaluate the variability of lumen (LA) and wall area (WA) measurements obtained on two successive MDCT acquisitions using energy-driven contour estimation (EDCE) and full width at half maximum (FWHM) approaches. Both methods were applied to a database of segmental and subsegmental bronchi with LA > 4 mm2 containing 42 bronchial segments of 10 successive slices that best matched on each acquisition. For both methods, the 95% confidence interval between repeated MDCT was between –1.59 and 1.5 mm2 for LA, and –3.31 and 2.96 mm2 for WA. The values of the coefficient of measurement variation (CV10, i.e., percentage ratio of the standard deviation obtained from the 10 successive slices to their mean value) were strongly correlated between repeated MDCT data acquisitions (r > 0.72; p < 0.0001). Compared with FWHM, LA values obtained using EDCE were higher for LA < 15 mm2, whereas WA values were lower for bronchi with WA < 13 mm2; no systematic EDCE underestimation or overestimation was observed for thicker-walled bronchi. In conclusion, variability between CT examinations and assessment techniques may impair measurements. Therefore, new parameters such as CV10 need to be investigated to study bronchial remodeling. Finally, EDCE and FWHM are not interchangeable in longitudinal studies.
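The CV10 parameter defined above reduces to a one-line computation. A minimal sketch, assuming the sample standard deviation is intended; the area values are illustrative, not data from the study:

```python
import numpy as np

def cv10(areas):
    """CV10: percentage ratio of the standard deviation of the 10 successive
    per-slice measurements to their mean (sample standard deviation assumed)."""
    areas = np.asarray(areas, dtype=float)
    return 100.0 * areas.std(ddof=1) / areas.mean()

# Illustrative lumen-area values (mm2) for one bronchial segment, not study data.
lumen_areas = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7, 12.5, 12.0]
print(f"CV10 = {cv10(lumen_areas):.2f} %")
```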

22 citations

Proceedings ArticleDOI
TL;DR: A new watermarking scheme is advanced, designed to reach the trade-off between transparency and robustness while ensuring a prescribed quantity of inserted information.
Abstract: Watermarking has already established itself as an effective and reliable solution for conventional multimedia content protection (image/video/audio/3D). By persistently (robustly) and imperceptibly (transparently) inserting some extra data into the original content, the illegitimate use of data can be detected without imposing any annoying constraint on a legitimate user. The present paper deals with stereoscopic image protection by means of watermarking techniques. That is, we first investigate the peculiarities of the visual stereoscopic content from the transparency and robustness points of view. Then, we advance a new watermarking scheme designed to reach the trade-off between transparency and robustness while ensuring a prescribed quantity of inserted information. Finally, this method is evaluated on two stereoscopic image corpora (natural images and medical data).

22 citations

Journal Article
TL;DR: This paper presents a method designed to analyse the distribution of the 2D-DCT (bi-dimensional Discrete Cosine Transform) coefficient hierarchy computed over three types of image sequences: colour video, grey-level X-ray CT images and computer-simulated noise images.
Abstract: This paper presents a method designed to analyse the distribution of the 2D-DCT (bi-dimensional Discrete Cosine Transform) coefficient hierarchy computed over three types of image sequences: colour video, grey-level X-ray CT (computerised tomography) images and computer-simulated noise images. This method is a statistical approach which combines in a new way the following four tests: (1) the χ2 (chi-square) test on concordance between experimental data and a theoretical probability density function, (2) the ρ (rho) test on correlation, (3) the Fisher F test on equality between two variances, and (4) the Student t test on equality between two means. Such an approach was compulsory so as to mathematically overcome the dependency existing among successive images in the considered sequences. The results obtained on natural sequences (either video or medical images) are compared to those corresponding to computer-simulated sequences, interesting differences being pointed out and discussed. The overall results may play a central role in a large variety of image/video processing applications: compression, segmentation, retrieval, protection (e.g. cryptography and watermarking). For instance, we successfully applied them to the design of a new robust watermarking method for colour video.
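A rough sketch of how the four tests can be combined on per-frame 2D-DCT coefficients, assuming NumPy/SciPy. The simulated frame sequences, the choice of coefficient and the equiprobable binning used for the chi-square step are illustrative stand-ins for the corpora and modelling choices of the paper:

```python
import numpy as np
from scipy import stats
from scipy.fft import dctn

rng = np.random.default_rng(1)
# Two simulated "image sequences" (frames of Gaussian noise) standing in for
# the video / X-ray CT / simulated corpora analysed in the paper.
seq_a = rng.normal(128.0, 20.0, (64, 32, 32))
seq_b = rng.normal(128.0, 25.0, (64, 32, 32))

# Track one 2D-DCT coefficient (here the (1, 2) coefficient) across the frames.
coef_a = np.array([dctn(f, norm="ortho")[1, 2] for f in seq_a])
coef_b = np.array([dctn(f, norm="ortho")[1, 2] for f in seq_b])

# (1) Chi-square test of concordance with a fitted model, via equiprobable bins.
edges = np.quantile(coef_a, np.linspace(0.0, 1.0, 6))        # 5 bins
observed, _ = np.histogram(coef_a, bins=edges)
expected = np.full(observed.size, coef_a.size / observed.size)
chi2_stat, p_chi2 = stats.chisquare(observed, expected)

# (2) Correlation (rho) test between the two coefficient series.
rho, p_rho = stats.pearsonr(coef_a, coef_b)

# (3) Fisher F test on the equality of the two variances (two-sided p-value).
F = coef_a.var(ddof=1) / coef_b.var(ddof=1)
dfa, dfb = coef_a.size - 1, coef_b.size - 1
p_f = 2.0 * min(stats.f.cdf(F, dfa, dfb), stats.f.sf(F, dfa, dfb))

# (4) Student t test on the equality of the two means.
t_stat, p_t = stats.ttest_ind(coef_a, coef_b, equal_var=True)

print(p_chi2, p_rho, p_f, p_t)
```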

17 citations

Proceedings ArticleDOI
TL;DR: A novel software architecture based on BiFS (Binary Format for Scenes, MPEG-4 Part 11), in which the graphical content is parsed, converted and binary encoded into the BiFS format on the server side.
Abstract: Under the framework of the FP7 European MobiThin project, the present study addresses the issue of remote display representation for mobile thin clients. The main issue is to design a compression algorithm for heterogeneous content (text, graphics, image and video) with low-complexity decoding. As a first step in this direction, we propose a novel software architecture based on BiFS (Binary Format for Scenes, MPEG-4 Part 11). On the server side, the graphical content is parsed, converted and binary encoded into the BiFS format. This content is then streamed to the terminal, where it is played on a simple MPEG player. The viability of this solution is validated by comparing it to the most intensively used wired solutions, e.g. VNC (Virtual Network Computing).
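The server-side pipeline described above (parse, convert, binary encode, stream) can be summarised as a skeleton. Every helper below is a hypothetical placeholder, not an actual BiFS encoder or MobiThin API; only the ordering of the stages is taken from the abstract.

```python
# Skeleton of the server-side pipeline described above. Every helper is a
# hypothetical placeholder, not an actual BiFS encoder or MobiThin API.
from dataclasses import dataclass, field
from typing import Iterable, List

@dataclass
class SceneUpdate:
    """One batch of graphical primitives captured from the remote desktop."""
    text: List[str] = field(default_factory=list)
    shapes: List[dict] = field(default_factory=list)
    images: List[bytes] = field(default_factory=list)

def capture_updates() -> Iterable[SceneUpdate]:
    """Hypothetical hook intercepting the server's drawing commands (parse step)."""
    raise NotImplementedError

def to_bifs(update: SceneUpdate) -> bytes:
    """Hypothetical conversion of primitives into BiFS nodes, binary encoded."""
    raise NotImplementedError

def stream_to_client(payload: bytes) -> None:
    """Hypothetical streaming step towards the thin client's MPEG player."""
    raise NotImplementedError

def server_loop() -> None:
    # Parse -> convert -> binary encode -> stream, as in the architecture above.
    for update in capture_updates():
        stream_to_client(to_bifs(update))
```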

16 citations


Cited by
Book
01 Jan 2012
TL;DR: This is probably the most popular design of experiments (DOE) book; its fifth edition (5E), reviewed here, follows the fourth edition by only four years and has been reorganized to reflect modern DOE practice.
Abstract: This is probably the most popular design of experiments (DOE) book. It is unquestionably the leading DOE book in industry. So I am excepting it from the usual disdain I show for the fifth edition (5E) of any book, even though it follows the fourth edition (4E) by only four years. See Grice (2000) for a report on the 4E. Very likely this book bears little resemblance to the first edition way back in the 1970s. The author has taken particular care with this latest edition to reorganize the book to be consistent with modern DOE practice rather than classical DOE presentation. The primary change has been to move the chapters on regression modeling and response surface methods forward in the book. In the 4E these were the last two chapters in the book. In the 5E they follow the chapters on factorial and fractional factorial designs. Three chapters on “other designs” now close out the book. This certainly gives the book a better flow for industry short courses. Perhaps these will simply be referenced as arcane methods in the sixth edition! Other production values in the book also continue to improve. There are many high-quality graphical displays. There is much use of printouts from Minitab and Design-Expert. A student version of Design-Expert comes with the book. There is also an instructor’s CD-ROM with supplemental material to make the book suitable for more advanced courses. There is the now-essential Web site that has more information to help students and instructors. Perhaps if I were teaching DOE for master’s-level statisticians, the recent book by Dean and Voss (1999), reviewed by Amidan (2000), would be my choice. However, for most other audiences this is the complete DOE book. Industrial statisticians should get a copy of the new version and ensure its exposure to the relevant people in their organization.

355 citations

Journal ArticleDOI
TL;DR: This paper presents a review of the digital video watermarking techniques in which their applications, challenges, and important properties are discussed, and categorizes them based on the domain in which they embed the watermark.
Abstract: The illegal distribution of a digital movie is a common and significant threat to the film industry. With the advent of high-speed broadband Internet access, a pirated copy of a digital video can now be easily distributed to a global audience. A possible means of limiting this type of digital theft is digital video watermarking whereby additional information, called a watermark, is embedded in the host video. This watermark can be extracted at the decoder and used to determine whether the video content is watermarked. This paper presents a review of the digital video watermarking techniques in which their applications, challenges, and important properties are discussed, and categorizes them based on the domain in which they embed the watermark. It then provides an overview of a few emerging innovative solutions using watermarks. Protecting a 3D video by watermarking is an emerging area of research. The relevant 3D video watermarking techniques in the literature are classified based on the image-based representations of a 3D video in stereoscopic, depth-image-based rendering, and multi-view video watermarking. We discuss each technique, and then present a survey of the literature. Finally, we provide a summary of this paper and propose some future research directions.

181 citations

Book ChapterDOI
01 Jan 1993
TL;DR: In this paper a type of channel with side information is studied and its capacity determined.
Abstract: In certain communication systems where information is to be transmitted from one point to another, additional side information is available at the transmitting point. This side information relates to the state of the transmission channel and can be used to aid in the coding and transmission of information. In this paper a type of channel with side information is studied and its capacity determined.
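For reference, the capacity result usually associated with this setting (causal side information available at the transmitter, in Shannon's formulation) is commonly written as below; the notation is ours, not the abstract's, and this is only a hedged summary of the standard statement.

```latex
C \;=\; \max_{p(t)} I(T;Y), \qquad t : \mathcal{S} \to \mathcal{X}, \quad X = T(S),
```

where the maximization runs over distributions on the strategies t mapping each channel state to a channel input, and Y is the channel output induced by X = T(S).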

171 citations

Journal ArticleDOI
01 Apr 2012 - Lung
TL;DR: This review focuses on CT quantification techniques of COPD disease components and their current status and role in phenotyping COPD.
Abstract: Chronic obstructive pulmonary disease (COPD) is a heterogeneous disease that is characterized by chronic airflow limitation. Unraveling of this heterogeneity is challenging but important, because it might enable more accurate diagnosis and treatment. Because spirometry cannot distinguish between the different contributing pathways of airflow limitation, and visual scoring is time-consuming and prone to observer variability, other techniques are sought to start this phenotyping process. Quantitative computed tomography (CT) is a promising technique, because current CT technology is able to quantify emphysema, air trapping, and large airway wall dimensions. This review focuses on CT quantification techniques of COPD disease components and their current status and role in phenotyping COPD.

109 citations

Journal ArticleDOI
TL;DR: A robust and secure video steganographic algorithm in the discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains, based on the multiple object tracking (MOT) algorithm and error-correcting codes, is proposed.
Abstract: Over the past few decades, the art of secretly embedding and communicating digital data has gained enormous attention because of the technological development in both digital content and communication. Imperceptibility, hiding capacity, and robustness against attacks are the three main requirements that any video steganography method should take into consideration. In this paper, a robust and secure video steganographic algorithm in the discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains, based on the multiple object tracking (MOT) algorithm and error-correcting codes, is proposed. The secret message is preprocessed by applying both Hamming and Bose-Chaudhuri-Hocquenghem (BCH) codes to encode the secret data. First, the motion-based MOT algorithm is applied to the host videos to distinguish the regions of interest in the moving objects. Then, the data hiding process is performed by concealing the secret message in the DWT and DCT coefficients of all motion regions in the video, depending on the foreground masks. Our experimental results illustrate that the suggested algorithm not only improves the embedding capacity and imperceptibility but also enhances its security and robustness by encoding the secret message and withstanding various attacks.
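The error-correcting preprocessing step lends itself to a small illustration. The sketch below uses a (7,4) Hamming code in plain NumPy as a representative choice; the paper also uses BCH codes, and its actual code parameters and embedding pipeline are not reproduced here.

```python
import numpy as np

# Generator and parity-check matrices of the (7,4) Hamming code in systematic
# form; a representative error-correcting code, not the paper's exact choice.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def hamming_encode(bits4):
    """Encode 4 message bits into a 7-bit codeword (systematic form)."""
    return (np.asarray(bits4) @ G) % 2

def hamming_decode(word7):
    """Correct at most one bit error via the syndrome, then return the 4 data bits."""
    word7 = np.asarray(word7).copy()
    syndrome = (H @ word7) % 2
    if syndrome.any():
        # the column of H equal to the syndrome locates the single flipped bit
        err = int(np.argmax((H.T == syndrome).all(axis=1)))
        word7[err] ^= 1
    return word7[:4]

msg = np.array([1, 0, 1, 1])
cw = hamming_encode(msg)
cw_noisy = cw.copy()
cw_noisy[5] ^= 1                    # simulate one bit flipped by an attack
print(hamming_decode(cw_noisy), "==", msg)
```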

94 citations