Author

Mejdi Trimeche

Bio: Mejdi Trimeche is an academic researcher from Nokia. The author has contributed to research in topics including Pixel and Image restoration. The author has an h-index of 14 and has co-authored 31 publications receiving 1,221 citations. Previous affiliations of Mejdi Trimeche include Tampere University of Technology.

Papers
Journal ArticleDOI
TL;DR: A signal-dependent noise model, which gives the pointwise standard deviation of the noise as a function of the expectation of the pixel raw-data output, is composed of a Poissonian part, modeling the photon sensing, and a Gaussian part, for the remaining stationary disturbances in the output data.
Abstract: We present a simple and usable noise model for the raw-data of digital imaging sensors. This signal-dependent noise model, which gives the pointwise standard deviation of the noise as a function of the expectation of the pixel raw-data output, is composed of a Poissonian part, modeling the photon sensing, and a Gaussian part, for the remaining stationary disturbances in the output data. We further explicitly take into account the clipping of the data (over- and under-exposure), faithfully reproducing the nonlinear response of the sensor. We propose an algorithm for the fully automatic estimation of the model parameters given a single noisy image. Experiments with synthetic images and with real raw-data from various sensors prove the practical applicability of the method and the accuracy of the proposed model.
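As a rough illustration of the heteroskedastic model described above, here is a minimal NumPy sketch of Poissonian-Gaussian raw-data noise with clipping. The parameter values a, b and the [0, 1] clipping range are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def simulate_raw_noise(y, a=0.01, b=1e-4, rng=None):
    """Simulate clipped, signal-dependent raw-sensor noise.

    y : ideal (noise-free) pixel expectations, normalized to [0, 1]
    a : Poissonian (photon-shot) gain, so the variance grows as a * y
    b : variance of the stationary Gaussian component
    The observed value z has standard deviation sqrt(a * y + b) and is
    clipped to [0, 1] to mimic over- and under-exposure.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.sqrt(a * y + b)            # pointwise standard deviation
    z = y + sigma * rng.standard_normal(y.shape)
    return np.clip(z, 0.0, 1.0)           # sensor clipping (nonlinear response)

# Noise grows with intensity, reflecting the Poissonian part of the model.
y = np.linspace(0.0, 1.0, 5)
print(simulate_raw_noise(y))
```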

789 citations

Patent
03 Oct 2007
TL;DR: In this article, the authors present a method, apparatus and software product for enhancing a dynamic range of an image with a multi-exposure pixel pattern taken by an image sensor of a camera for one or more color channels, wherein a plurality of groups of pixels of the image sensor have different exposure times.
Abstract: The specification and drawings present a new method, apparatus and software product for enhancing the dynamic range of an image with a multi-exposure pixel pattern taken by an image sensor of a camera for one or more color channels, wherein a plurality of groups of pixels of the image sensor have different exposure times (e.g., pre-selected or adjusted by a user through a user interface using viewfinder feedback, or adjusted by a user through a user interface after taking and storing the RAW image, etc.). Processing of the captured image to construct an enhanced image for each of the one or more color channels can be performed using a weighted combination of exposure times of pixels having different pre-selected exposure times according to a predetermined criterion.
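Below is a rough sketch of the kind of weighted exposure combination the abstract describes, assuming a simple two-exposure row-interleaved pattern and illustrative weights; the actual pixel layout and the patent's predetermined criterion are not specified here.

```python
import numpy as np

def fuse_dual_exposure(raw, t_short, t_long):
    """Fuse a raw frame whose rows alternate between two exposure times.

    raw     : 2-D array; even rows exposed for t_short, odd rows for t_long
              (an illustrative pattern, not necessarily the patent's layout)
    Returns a per-pixel radiance estimate on the half-resolution grid,
    combining the two exposure groups with weights that favor
    well-exposed (unclipped) long-exposure samples.
    """
    short = raw[0::2, :]
    long_ = raw[1::2, :]

    # Normalize each group by its exposure time to a common radiance scale.
    rad_short = short / t_short
    rad_long = long_ / t_long

    # Simple confidence weights: trust the long exposure unless it is clipped.
    w_long = np.where(long_ < 0.95, 1.0, 0.0)
    w_short = 1.0 - 0.5 * w_long

    return (w_short * rad_short + w_long * rad_long) / (w_short + w_long)

frame = np.random.default_rng(0).uniform(0.0, 1.0, (8, 8))
print(fuse_dual_exposure(frame, t_short=1 / 500, t_long=1 / 60).shape)
```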

138 citations

Patent
08 Apr 2010
TL;DR: In this article, an apparatus comprising a processing unit configured to receive information related to available camera views of a 3D scene, request a synthetic view which is different from any available camera view and determined by the processing unit and receive media data comprising video data associated with the synthetic view.
Abstract: In accordance with an example embodiment of the present invention, an apparatus comprising a processing unit configured to receive information related to available camera views of a three dimensional scene, request a synthetic view which is different from any available camera view and determined by the processing unit and receive media data comprising video data associated with the synthetic view.

57 citations

Proceedings ArticleDOI
01 Oct 2006
TL;DR: A new method of motion blur identification that relies on the availability of two differently exposed image shots of the same scene to identify the point spread function (PSF) corresponding to the motion blur that may affect the longer-exposed image shot.
Abstract: In this paper we introduce a new method of motion blur identification that relies on the availability of two differently exposed image shots of the same scene. The proposed approach exploits the difference in the degradation models of the two images in order to identify the point spread function (PSF) corresponding to the motion blur that may affect the longer-exposed image shot. The algorithm is demonstrated through a series of experiments that reveal its ability to identify the motion blur PSF even in the presence of heavy degradations of the two observed images.
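For context, a common frequency-domain baseline for this two-exposure setup treats the short-exposure shot as a sharp but noisy reference and estimates the blur kernel by regularized division; the sketch below shows only that baseline, not the identification algorithm proposed in the paper.

```python
import numpy as np

def estimate_blur_psf(long_exposure, short_exposure, psf_size=15, eps=1e-2):
    """Estimate a motion-blur PSF from a blurred/sharp exposure pair.

    long_exposure  : blurred but low-noise shot (long exposure)
    short_exposure : sharp but noisy shot of the same scene (short exposure),
                     already scaled to the same brightness
    Returns a psf_size x psf_size kernel h such that long ~= h * short.
    """
    L = np.fft.fft2(long_exposure)
    S = np.fft.fft2(short_exposure)
    # Wiener-style regularized division avoids amplifying noise at
    # frequencies where the sharp reference has little energy.
    H = (L * np.conj(S)) / (np.abs(S) ** 2 + eps)
    h = np.fft.fftshift(np.real(np.fft.ifft2(H)))
    # Crop around the center and renormalize to unit sum.
    cy, cx = h.shape[0] // 2, h.shape[1] // 2
    r = psf_size // 2
    kernel = np.clip(h[cy - r:cy + r + 1, cx - r:cx + r + 1], 0, None)
    return kernel / (kernel.sum() + 1e-12)
```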

55 citations

Patent
09 Jul 2004
TL;DR: In this article, a method is presented for improving the image quality of a digital image captured with an imaging module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics and consists of at least one colour component; degradation information of each colour component is used for obtaining a degradation function.
Abstract: This invention relates to a method for improving image quality of a digital image captured with an imaging module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics, the image consisting of at least one colour component. In the method, degradation information of each colour component of the image is found and is used for obtaining a degradation function. Each colour component is restored by said degradation function. The image is unprocessed image data, and the degradation information of each colour component can be found by a point-spread function. The invention also relates to a device, to a module, to a system, to a computer program product and to a program module.
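A minimal sketch of per-colour-channel restoration with known degradation functions is shown below, using plain Wiener deconvolution as a stand-in for the restoration step; the PSFs and the regularization constant are illustrative assumptions, not parameters given in the patent.

```python
import numpy as np

def wiener_deconvolve_channel(channel, psf, k=1e-2):
    """Restore one colour channel given its point-spread function (PSF)."""
    H = np.fft.fft2(psf, s=channel.shape)   # channel-specific degradation function
    C = np.fft.fft2(channel)
    restored = np.conj(H) * C / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(restored))

def restore_raw_image(raw_rgb, psfs, k=1e-2):
    """Restore each colour component with its own degradation function.

    raw_rgb : H x W x 3 array of unprocessed image data
    psfs    : list of three 2-D PSF kernels, one per colour component
    """
    return np.stack(
        [wiener_deconvolve_channel(raw_rgb[..., c], psfs[c], k) for c in range(3)],
        axis=-1,
    )
```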

50 citations


Cited by
Book
01 Jan 2009

8,216 citations

Patent
12 Nov 2013
TL;DR: In this paper, cell phones and other portable devices are equipped with a variety of technologies by which existing functionality can be improved and new functionality provided, including visual search capabilities and determining appropriate actions responsive to different image inputs.
Abstract: Cell phones and other portable devices are equipped with a variety of technologies by which existing functionality can be improved, and new functionality can be provided. Some relate to visual search capabilities, and determining appropriate actions responsive to different image inputs. Others relate to processing of image data. Still others concern metadata generation, processing, and representation. Yet others relate to coping with fixed focus limitations of cell phone cameras, e.g., in reading digital watermark data. Still others concern user interface improvements. A great number of other features and arrangements are also detailed.

2,033 citations

01 Jan 2004
TL;DR: A new image database, TID2008, for evaluation of full-reference visual quality assessment metrics is described, and Mean Opinion Scores (MOS) for this database have been obtained as a result of more than 800 experiments.
Abstract: In this paper, a new image database, TID2008, for evaluation of full-reference visual quality assessment metrics is described. It contains 1700 test images (25 reference images, 17 types of distortions for each reference image, 4 different levels of each type of distortion). Mean Opinion Scores (MOS) for this database have been obtained as a result of more than 800 experiments. During these tests, observers from three countries (Finland, Italy, and Ukraine) have carried out about 256000 individual human quality judgments. The obtained MOS can be used for effective testing of different visual quality metrics as well as for the design of new metrics. Using the designed image database, we have tested several known quality metrics. The designed test image database is freely available for downloading and utilization in scientific investigations.
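Databases like TID2008 are typically used by correlating a metric's predictions with the MOS; the short sketch below shows that evaluation with hypothetical arrays standing in for real metric outputs and MOS values.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical values: one quality-metric score and one MOS per test image.
metric_scores = np.array([0.92, 0.81, 0.67, 0.45, 0.30, 0.88, 0.52])
mos = np.array([6.1, 5.4, 4.2, 3.0, 2.1, 5.9, 3.5])

# Rank-order (Spearman) and linear (Pearson) correlation with MOS are the
# usual figures of merit when testing a metric on such a database.
srocc, _ = spearmanr(metric_scores, mos)
plcc, _ = pearsonr(metric_scores, mos)
print(f"SROCC = {srocc:.3f}, PLCC = {plcc:.3f}")
```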

1,069 citations

Proceedings ArticleDOI
15 Jun 2019
TL;DR: CBDNet as discussed by the authors proposes to train a convolutional blind denoising network with a more realistic noise model and real-world noisy-clean image pairs to improve the generalization ability of deep CNN denoisers.
Abstract: While deep convolutional neural networks (CNNs) have achieved impressive success in image denoising with additive white Gaussian noise (AWGN), their performance remains limited on real-world noisy photographs. The main reason is that their learned models easily overfit the simplified AWGN model, which deviates severely from the complicated real-world noise model. In order to improve the generalization ability of deep CNN denoisers, we suggest training a convolutional blind denoising network (CBDNet) with a more realistic noise model and real-world noisy-clean image pairs. On the one hand, both signal-dependent noise and the in-camera signal processing pipeline are considered to synthesize realistic noisy images. On the other hand, real-world noisy photographs and their nearly noise-free counterparts are also included to train our CBDNet. To further provide an interactive strategy to rectify denoising results conveniently, a noise estimation subnetwork with asymmetric learning to suppress under-estimation of the noise level is embedded into CBDNet. Extensive experimental results on three datasets of real-world noisy photographs clearly demonstrate the superior performance of CBDNet over state-of-the-art methods in terms of quantitative metrics and visual quality. The code has been made available at https://github.com/GuoShi28/CBDNet.
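The asymmetric learning mentioned in the abstract penalizes under-estimation of the noise level more heavily than over-estimation; the NumPy sketch below illustrates such a loss with an illustrative weighting constant (CBDNet's exact formulation is given in the paper and repository linked above).

```python
import numpy as np

def asymmetric_noise_loss(sigma_pred, sigma_true, alpha=0.3):
    """Penalize under-estimated noise levels more than over-estimated ones.

    sigma_pred : per-pixel noise-level estimates from the subnetwork
    sigma_true : reference noise levels used during training
    alpha      : asymmetry factor in (0, 0.5); under-estimation is weighted
                 by (1 - alpha), over-estimation by alpha
    """
    err = sigma_pred - sigma_true
    weights = np.where(err < 0, 1.0 - alpha, alpha)
    return float(np.mean(weights * err ** 2))

sigma_true = np.full((4, 4), 0.10)
print(asymmetric_noise_loss(sigma_true - 0.05, sigma_true))  # under-estimate: larger loss
print(asymmetric_noise_loss(sigma_true + 0.05, sigma_true))  # over-estimate: smaller loss
```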

745 citations

Proceedings ArticleDOI
18 Jun 2018
TL;DR: This paper proposes a systematic procedure for estimating ground truth for noisy images that can be used to benchmark denoising performance for smartphone cameras and shows that CNN-based methods perform better when trained on the authors' high-quality dataset than when trained using alternative strategies, such as low-ISO images used as a proxy for ground truth data.
Abstract: The last decade has seen an astronomical shift from imaging with DSLR and point-and-shoot cameras to imaging with smartphone cameras. Due to the small aperture and sensor size, smartphone images have notably more noise than their DSLR counterparts. While denoising for smartphone images is an active research area, the research community currently lacks a denoising image dataset representative of real noisy images from smartphone cameras with high-quality ground truth. We address this issue in this paper with the following contributions. We propose a systematic procedure for estimating ground truth for noisy images that can be used to benchmark denoising performance for smartphone cameras. Using this procedure, we have captured a dataset - the Smartphone Image Denoising Dataset (SIDD) - of ~30,000 noisy images from 10 scenes under different lighting conditions using five representative smartphone cameras and generated their ground truth images. We used this dataset to benchmark a number of denoising algorithms. We show that CNN-based methods perform better when trained on our high-quality dataset than when trained using alternative strategies, such as low-ISO images used as a proxy for ground truth data.
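The actual SIDD ground-truth procedure involves careful registration, defective-pixel handling, and intensity correction; the sketch below shows only the core idea of suppressing noise by averaging many registered captures of a static scene, with synthetic data used for illustration.

```python
import numpy as np

def estimate_ground_truth(burst):
    """Very simplified ground-truth estimate from repeated static captures.

    burst : N x H x W array of registered noisy captures of the same scene.
    Averaging N independent captures reduces the noise standard deviation
    by roughly a factor of sqrt(N); the real SIDD procedure additionally
    handles misalignment, clipped pixels, and intensity differences.
    """
    return burst.mean(axis=0)

rng = np.random.default_rng(0)
clean = rng.uniform(0.2, 0.8, (32, 32))
burst = clean + 0.05 * rng.standard_normal((150, 32, 32))
estimate = estimate_ground_truth(burst)
print(float(np.abs(estimate - clean).mean()))  # far below the 0.05 noise level
```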

552 citations