
Showing papers on "High dynamic range published in 2014"


Journal ArticleDOI
TL;DR: This paper presents a dynamic and active pixel vision sensor (DAVIS) which addresses the loss of absolute intensity information in event-based sensors by outputting asynchronous DVS events and synchronous global shutter frames concurrently.
Abstract: Event-based dynamic vision sensors (DVSs) asynchronously report log intensity changes. Their high dynamic range, sub-ms latency, and sparse output make them useful in applications such as robotics and real-time tracking. However, they discard absolute intensity information, which is useful for object recognition and classification. This paper presents a dynamic and active pixel vision sensor (DAVIS) which addresses this deficiency by outputting asynchronous DVS events and synchronous global shutter frames concurrently. The active pixel sensor (APS) circuits and the DVS circuits within a pixel share a single photodiode. Measurements from a 240×180 sensor array of 18.5 μm² pixels fabricated in a 0.18 μm 6M1P CMOS image sensor (CIS) technology show a dynamic range of 130 dB with 11% contrast detection threshold, minimum 3 μs latency, and 3.5% contrast matching for the DVS pathway, and a 51 dB dynamic range with 0.5% FPN for the APS readout.
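The DVS pathway above signals an event whenever a pixel's log intensity changes by more than a contrast threshold. The snippet below is a rough, frame-based sketch of that event-generation model, not the DAVIS circuit itself (which operates asynchronously in analog hardware); the 11% threshold is the only value taken from the abstract, and the frame sequence is synthetic.

```python
# Frame-based approximation of DVS event generation: emit (t, x, y, polarity)
# whenever the log intensity change since the last event at a pixel exceeds
# the contrast threshold.
import numpy as np

def dvs_events(frames, contrast_threshold=0.11, eps=1e-3):
    ref = np.log(frames[0] + eps)              # per-pixel reference log intensity
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame + eps)
        diff = log_i - ref
        fired = np.abs(diff) >= contrast_threshold
        ys, xs = np.nonzero(fired)
        events.extend((t, x, y, int(np.sign(diff[y, x]))) for y, x in zip(ys, xs))
        ref[fired] = log_i[fired]              # reset reference where events fired
    return events

# Toy input: a bright square moving across a synthetic 240x180 sensor.
frames = np.full((10, 180, 240), 0.1)
for t in range(10):
    frames[t, 80:100, 10 * t:10 * t + 20] = 1.0
print(len(dvs_events(frames)), "events")
```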

735 citations


Proceedings ArticleDOI
01 Jan 2014
TL;DR: This work shows for the first time that an event stream, with no additional sensing, can be used to track accurate camera rotation while building a persistent and high quality mosaic of a scene which is super-resolution accurate and has high dynamic range.
Abstract: An event camera is a silicon retina which outputs not a sequence of video frames like a standard camera, but a stream of asynchronous spikes, each with pixel location, sign, and precise timing, indicating when individual pixels record a threshold log intensity change. By encoding only image change, it offers the potential to transmit the information in a standard video at a vastly reduced bitrate, with the added advantages of very high dynamic range and temporal resolution. However, event data calls for new algorithms, and in particular we believe that algorithms which incrementally estimate global scene models are best placed to take full advantage of its properties. Here, we show for the first time that an event stream, with no additional sensing, can be used to track accurate camera rotation while building a persistent and high quality mosaic of a scene which is super-resolution accurate and has high dynamic range. Our method involves parallel camera rotation tracking and template reconstruction from estimated gradients, both operating on an event-by-event basis and based on probabilistic filtering.

234 citations


Journal ArticleDOI
TL;DR: In this article, the design and characterization of a multipurpose 64 × 32 CMOS single-photon avalanche diode (SPAD) array is presented, which is fabricated in a high-voltage 0.35-μm CMOS technology and consists of 2048 pixels, each combining a very low noise (100 cps at 5-V excess bias) 30μm SPAD, a prompt avalanche sensing circuit, and digital processing electronics.
Abstract: We report on the design and characterization of a multipurpose 64 × 32 CMOS single-photon avalanche diode (SPAD) array. The chip is fabricated in a high-voltage 0.35-μm CMOS technology and consists of 2048 pixels, each combining a very low noise (100 cps at 5-V excess bias) 30-μm SPAD, a prompt avalanche sensing circuit, and digital processing electronics. The array not only delivers two-dimensional intensity information through photon counting in either free-running (down to 10-μs integration time) or time-gated mode, but can also perform smart light demodulation with in-pixel background suppression. The latter feature enables phase-resolved imaging for extracting either three-dimensional depth-resolved images or decay lifetime maps, by measuring the phase shift between a modulated excitation light and the reflected photons. Pixel-level memories enable fully parallel processing and global-shutter readout, preventing motion artifacts (e.g., skew, wobble, motion blur) and partial exposure effects. The array is able to acquire very fast optical events at high frame rates (up to 100,000 fps) and at the single-photon level. Low-noise SPADs ensure high dynamic range (up to 110 dB at 100 fps) with a peak photon detection efficiency of almost 50% at 410 nm. The SPAD imager provides different operating modes, thus enabling both time-domain applications, such as fluorescence lifetime imaging (FLIM) and fluorescence correlation spectroscopy, and frequency-domain applications, such as frequency-domain FLIM and lock-in 3-D ranging for automotive vision and lidar.

164 citations


Journal ArticleDOI
TL;DR: Experimental results show that the first technique can effectively improve the measurement accuracy of diffuse objects with LRR, the second is capable of measuring objects with weak specular reflection (WSR, e.g., shiny plastic surfaces), and the third can precisely inspect surfaces with strong specular reflection.

128 citations


Proceedings ArticleDOI
23 Jun 2014
TL;DR: Results of the space-time saliency method on a benchmark dataset show it is state-of-the-art, and the benefits of the approach to HFR-to-LFR time-mapping over more direct methods are demonstrated in a user study.
Abstract: We describe a new approach for generating regular-speed, low-frame-rate (LFR) video from a high-frame-rate (HFR) input while preserving the important moments in the original. We call this time-mapping, a temporal analogy to spatial tone-mapping from high dynamic range to low dynamic range. Our approach makes these contributions: (1) a robust space-time saliency method for evaluating visual importance, (2) a re-timing technique to temporally resample based on frame importance, and (3) temporal filters to enhance the rendering of salient motion. Results of our space-time saliency method on a benchmark dataset show it is state-of-the-art. In addition, the benefits of our approach to HFR-to-LFR time-mapping over more direct methods are demonstrated in a user study.
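As a sketch of the re-timing idea (contribution 2 above), the toy below resamples frames uniformly in cumulative-importance space, so salient moments receive more output frames; the per-frame importance values are synthetic and the paper's space-time saliency model is not reproduced.

```python
# Importance-driven temporal resampling: pick input frame indices so that each
# output frame covers an equal share of cumulative saliency.
import numpy as np

def retime(importance, n_out):
    w = np.asarray(importance, dtype=float) + 1e-6
    cdf = np.cumsum(w) / w.sum()
    targets = (np.arange(n_out) + 0.5) / n_out
    return np.searchsorted(cdf, targets)

# 220 HFR frames with a short salient burst in the middle.
importance = np.concatenate([np.ones(100), 5 * np.ones(20), np.ones(100)])
print(retime(importance, 24))   # output indices cluster around frames 100-119
```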

127 citations


Journal ArticleDOI
TL;DR: A ghost-free high dynamic range (HDR) image synthesis algorithm using a low-rank matrix completion framework, called RM-HDR, which can often provide significant gains in synthesized HDR image quality over state-of-the-art approaches.
Abstract: We propose a ghost-free high dynamic range (HDR) image synthesis algorithm using a low-rank matrix completion framework, which we call RM-HDR. Based on the assumption that irradiance maps are linearly related to low dynamic range (LDR) image exposures, we formulate ghost region detection as a rank minimization problem. We incorporate constraints on moving objects, i.e., sparsity, connectivity, and priors on under- and over-exposed regions into the framework. Experiments on real image collections show that the RM-HDR can often provide significant gains in synthesized HDR image quality over state-of-the-art approaches. Additionally, a complexity analysis is performed which reveals computational merits of RM-HDR over recent advances in deghosting for HDR.
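The key assumption (exposure-compensated LDR images of a static scene are approximately low rank, so moving objects appear as sparse deviations) can be illustrated with a crude rank-1 residual test. This is only a stand-in for the paper's rank-minimization formulation with sparsity and connectivity constraints, not the RM-HDR algorithm.

```python
# Flag ghost candidates as pixels that deviate from a rank-1 fit of the
# exposure-normalized LDR stack.
import numpy as np

def ghost_mask(ldr_stack, exposures, residual_threshold=0.1):
    K, H, W = ldr_stack.shape
    D = (ldr_stack / exposures[:, None, None]).reshape(K, -1).T   # (H*W, K)
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    rank1 = s[0] * np.outer(U[:, 0], Vt[0])       # low-rank (static scene) part
    residual = np.abs(D - rank1)                  # "sparse" (moving object) part
    return (residual.max(axis=1) > residual_threshold).reshape(H, W)

# Toy data: three exposures of a static gradient plus a blob in one exposure only.
exposures = np.array([0.25, 1.0, 4.0])
base = np.tile(np.linspace(0.05, 0.2, 64), (64, 1))
stack = np.stack([np.clip(base * e, 0, 1) for e in exposures])
stack[1, 20:30, 20:30] = 1.0
print(ghost_mask(stack, exposures).sum(), "ghost pixels flagged")
```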

120 citations


Journal ArticleDOI
TL;DR: A multi-polarization fringe projection (MPFP) imaging technique that eliminates saturated points and enhances the fringe contrast by selecting the proper polarized channel measurements is proposed.
Abstract: Traditional fringe-projection three-dimensional (3D) imaging techniques struggle to estimate the shape of high dynamic range (HDR) objects where detected fringes are of limited visibility. Moreover, saturated regions of specular reflections can completely block any fringe patterns, leading to lost depth information. We propose a multi-polarization fringe projection (MPFP) imaging technique that eliminates saturated points and enhances the fringe contrast by selecting the proper polarized channel measurements. The developed technique can be easily extended to include measurements captured under different exposure times to obtain more accurate shape rendering for very HDR objects.
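A minimal sketch of the channel-selection idea: per pixel, keep the brightest polarization channel that is not saturated, so fringes stay visible on both dark and specular regions. The saturation threshold and the fallback rule are assumptions, and the fringe analysis and multi-exposure extension are not shown.

```python
# Select, per pixel, the brightest unsaturated polarization channel.
import numpy as np

def select_polarization_channel(channels, saturation_level=0.98):
    stack = np.asarray(channels, dtype=float)        # (P, H, W) polarized captures
    masked = np.where(stack < saturation_level, stack, -np.inf)
    best = masked.max(axis=0)
    # Fall back to the dimmest channel where every channel is saturated.
    return np.where(np.isfinite(best), best, stack.min(axis=0))

channels = np.random.rand(4, 64, 64)
channels[0, :10, :10] = 1.0          # simulate a saturated specular patch
print(select_polarization_channel(channels).max())
```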

109 citations


Journal ArticleDOI
TL;DR: A physiological inverse tone mapping algorithm inspired by properties of the Human Visual System (HVS): it first imitates the retina response and deduces a locally adaptive form; then it estimates the local adaptation luminance at each point in the image; finally, the LDR image and local luminance are applied to the inverted local retina response to reconstruct the dynamic range of the original scene.
Abstract: The mismatch between Low Dynamic Range (LDR) content and High Dynamic Range (HDR) displays has prompted research on inverse tone mapping algorithms. In this paper, we present a physiological inverse tone mapping algorithm inspired by properties of the Human Visual System (HVS). It first imitates the retina response and deduces a locally adaptive form; it then estimates the local adaptation luminance at each point in the image; finally, the LDR image and the local luminance are applied to the inverted local retina response to reconstruct the dynamic range of the original scene. Good performance and high visual quality were validated on 40 test images. Comparison with several existing inverse tone mapping methods demonstrates the conciseness and efficiency of the proposed algorithm.
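A hedged sketch of the pipeline the abstract outlines: estimate a local adaptation luminance, then invert a retina-like response to expand the LDR luminance. The Naka-Rushton-style response and the Gaussian estimate of local adaptation used below are assumptions for illustration, not the authors' exact formulation.

```python
# Invert a retina-like response R = H / (H + L_adapt) to expand LDR luminance.
import numpy as np
from scipy.ndimage import gaussian_filter

def inverse_tone_map(ldr_luminance, sigma_spatial=15.0, eps=1e-4):
    L = np.clip(ldr_luminance, eps, 1.0 - eps)
    L_adapt = gaussian_filter(L, sigma_spatial) + eps   # local adaptation luminance
    return L_adapt * L / (1.0 - L)                      # solve R = H/(H+La) for H

img = np.random.rand(128, 128)
print(inverse_tone_map(img).max())
```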

105 citations


Patent
07 Feb 2014
TL;DR: An image encoder and an image grading unit are proposed to allow graders to make optimal-looking content of HDR scenes for various rendering displays, along with a new saturation processing strategy useful in the newly emerging high dynamic range image handling technology.
Abstract: To allow graders to make optimal-looking content of HDR scenes for various rendering displays, we invented an image encoder (202) comprising: an input (240) for a high dynamic range input image (M_HDR); an image grading unit (201) arranged to allow a human color grader to specify a color mapping from a representation (HDR_REP) of the high dynamic range input image, defined according to a predefined accuracy, to a low dynamic range image (Im_LDR) by means of a human-determined color mapping algorithm, and arranged to output data specifying the color mapping (Fi(MP_DH)); and an automatic grading unit (203) arranged to derive a second low dynamic range image (GT_IDR) by applying an automatic color mapping algorithm to one of the high dynamic range input image (M_HDR) or the low dynamic range image (Im_LDR). We also describe an interesting new saturation processing strategy useful in the newly emerging high dynamic range image handling technology.

86 citations


Journal ArticleDOI
TL;DR: This paper proposes a comprehensive approach that can be used to improve the demodulation linearity of microwave DRSs, such that detailed time-domain motion information ranging from micro-scale to large scale can be accurately reconstructed.
Abstract: Miniaturized Doppler radar sensors (DRSs) for noncontact motion detection are a hot topic in the microwave community. Previously, small-scale physiological signals such as human respiration and heartbeat rates were the primary interest of study. In this paper, we propose a comprehensive approach that can be used to improve the demodulation linearity of microwave DRSs, such that detailed time-domain motion information ranging from micro-scale to large scale can be accurately reconstructed. Experiments show that, based on a digital-IF receiver architecture, dynamic dc offset tracking, and the extended differentiate and cross-multiply arctangent algorithm, the displacement and velocity of both micrometer-scale vibration of a tuning fork and meter-scale human walking can be accurately recovered. Our work confirms that substantial time-domain motion information is carried by the signals backscattered from moving objects. Retrieval of such information using DRSs can potentially be used in a wide range of healthcare and biomedical applications, such as motion pattern recognition and bio-signal measurements.
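A sketch of the extended differentiate-and-cross-multiply (DACM) step mentioned above, which accumulates phase increments from I/Q derivatives and so avoids arctangent wrapping; displacement follows from x = lambda * phi / (4 * pi). The 24 GHz carrier and the test motion are made-up values, and the digital-IF front end and dc-offset tracking are not modeled.

```python
# DACM demodulation: accumulate d(phase) = (I*dQ - Q*dI) / (I^2 + Q^2).
import numpy as np

def dacm_demodulate(I, Q):
    dI, dQ = np.diff(I), np.diff(Q)
    increment = (I[1:] * dQ - Q[1:] * dI) / (I[1:] ** 2 + Q[1:] ** 2)
    return np.concatenate([[0.0], np.cumsum(increment)])

# Synthetic test: 1 mm, 2 Hz vibration seen by a 24 GHz radar, sampled at 1 kHz.
fs, wavelength = 1000.0, 3e8 / 24e9
t = np.arange(0, 2, 1 / fs)
x = 1e-3 * np.sin(2 * np.pi * 2 * t)                  # displacement (m)
phase = 4 * np.pi * x / wavelength
I, Q = np.cos(phase), np.sin(phase)
x_rec = dacm_demodulate(I, Q) * wavelength / (4 * np.pi)
print("max reconstruction error (m):", np.abs(x_rec - x).max())
```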

79 citations


Journal ArticleDOI
TL;DR: The saliency-aware weighting and the proposed filter are applied to design a new local tone-mapping algorithm for HDR images such that both extreme light and shadow regions can be reproduced on conventional low dynamic range displays.
Abstract: Visual saliency aims to predict the attentional gaze of observers viewing a scene, and it is thus in high demand for tone mapping of high dynamic range (HDR) images. In this paper, novel saliency-aware weighting and edge-aware weighting are introduced for HDR images. They are incorporated into an existing guided image filter to form a perceptually guided image filter. The saliency-aware weighting and the proposed filter are applied to design a new local tone-mapping algorithm for HDR images such that both extreme light and shadow regions can be reproduced on conventional low dynamic range displays. In particular, the proposed filter is applied to decompose the luminance of the input HDR image into a base layer and a detail layer. The saliency-aware weighting is then adopted to design a saliency-aware global tone mapping for the compression of the base layer. The proposed filter preserves sharp edges in the base layer better than the existing guided filter. Halo artifacts are thus significantly reduced in the tone-mapped image. Moreover, the visual quality of the tone-mapped image, especially in attention-salient regions, is improved by the saliency-aware weighting.
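A compact sketch of the base/detail tone-mapping structure described above, using the standard (unweighted) guided filter of He et al. as a stand-in for the paper's perceptually guided filter, and a plain log-domain compression of the base layer instead of the saliency-aware global curve.

```python
# Decompose log luminance into base + detail with a guided filter, compress the
# base layer, keep the detail layer, and recombine.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    size = 2 * r + 1
    mean_I, mean_p = uniform_filter(I, size), uniform_filter(p, size)
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def tone_map(hdr_luminance, compression=0.4):
    log_l = np.log10(hdr_luminance + 1e-6)
    base = guided_filter(log_l, log_l)       # edge-preserving base layer
    detail = log_l - base                    # detail layer is left untouched
    out = 10 ** (compression * (base - base.max()) + detail)
    return np.clip(out, 0.0, 1.0)

ldr = tone_map(np.random.rand(128, 128) * 1e4)   # toy HDR luminance in, LDR out
print(ldr.min(), ldr.max())
```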

Proceedings ArticleDOI
06 Mar 2014
TL;DR: This paper presents the pixel and 2GS/s signal paths in a state-of-the-art Time-of-Flight (ToF) sensor suitable for use in the latest Kinect sensor for Xbox One.
Abstract: Interest in 3D depth cameras has been piqued by the release of the Kinect motion sensor for the Xbox 360 gaming console. This paper presents the pixel and 2GS/s signal paths in a state-of-the-art Time-of-Flight (ToF) sensor suitable for use in the latest Kinect sensor for Xbox One. ToF cameras determine the distance to objects by measuring, at each pixel, the round-trip travel time of amplitude-modulated light from the source to the target and back to the camera. ToF technology provides an accurate, high-resolution, low-motion-blur, wide-field-of-view (FoV), high-dynamic-range depth image, as well as an ambient-light-invariant brightness image (active IR), that meets the highest quality requirements for 3D motion detection.
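For amplitude-modulated continuous-wave ToF, the round-trip-time principle reduces to recovering a phase shift and converting it to distance. The generic four-phase calculation below is illustrative only; the 50 MHz modulation frequency and the tap arrangement are assumptions, not the Kinect sensor's actual parameters.

```python
# Generic four-phase ToF depth recovery from correlation samples at 0/90/180/270 deg.
import numpy as np

C = 3e8          # speed of light (m/s)
F_MOD = 50e6     # assumed modulation frequency; unambiguous range = C / (2*F_MOD) = 3 m

def tof_depth(c0, c1, c2, c3, f_mod=F_MOD):
    phase = np.arctan2(c1 - c3, c0 - c2) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)

# Round-trip check: simulate the four taps for a target at 1.2 m.
true_depth = 1.2
phi = 4 * np.pi * F_MOD * true_depth / C
taps = [np.cos(phi - k * np.pi / 2) for k in range(4)]
print(tof_depth(*taps))    # ~1.2
```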

Journal ArticleDOI
TL;DR: This paper proposes a new method for detail enhancement and noise reduction of high dynamic range infrared images that is significantly better than methods based on histogram equalization (HE) and achieves a better visual effect than bilateral-filter-based methods.

Journal ArticleDOI
TL;DR: The general detector and ASIC design, as well as the results of prototype characterization measurements, are presented for the SwissFEL detector, a 48 × 48 pixel prototype produced in UMC 110 nm technology.
Abstract: The SwissFEL, a free electron laser (FEL) based next-generation X-ray source, is being built at PSI. An XFEL poses several challenges to detector development: in particular, the single photon counting readout, a successful scheme in the case of synchrotron sources, cannot be used. At the same time, the data quality of photon counting systems, i.e., the low noise and the high dynamic range, is essential from an experimental point of view. Detectors with these features are under development for the EU-XFEL in Hamburg, with the PSI SLS Detector group being involved in one of these efforts (AGIPD). The pulse train time structure of the EU-XFEL machine forces the need for in-pixel image storage, resulting in pixel pitches in the 200 μm range. Since the SwissFEL is a 100 Hz repetition rate machine, this constraint is relaxed. For this reason, PSI is developing a 75 μm pitch pixel detector that, thanks to its automatic gain switching technique, will achieve single photon resolution and a high dynamic range. The detector is modular, with each module consisting of a 4 × 8 cm² active sensor bump-bonded to 8 readout ASICs (Application Specific Integrated Circuits), connected to a single printed circuit readout board with 10 GbE link capabilities for data download. We have designed and tested a 48 × 48 pixel prototype produced in UMC 110 nm technology. In this paper we present the general detector and ASIC design as well as the results of the prototype characterization measurements.

Patent
30 Jun 2014
TL;DR: This patent describes a method of encoding a high dynamic range image, comprising the steps of: inputting pixel colors of an input high dynamic range image, wherein the pixel colors have information of a luminance and a chromaticity; and applying an inverse of a mapping function to derive a luma code (v) of the luminance of a pixel color, which mapping function is predetermined as comprising a first partial function defined as (I), in which rho is a tuning constant and v is the luma code corresponding to a luminance to be encoded.
Abstract: To enable better encoding of the high dynamic range images that are now starting to appear, for use in full high dynamic range technical systems (containing an HDR display, e.g. in an HDR grading application for an HDR movie), we invented a method of encoding a high dynamic range image, comprising the steps of: - inputting pixel colors of an input high dynamic range image, wherein the pixel colors have information of a luminance and a chromaticity; - applying an inverse of a mapping function to derive a luma code (v) of the luminance of a pixel color, which mapping function is predetermined as comprising a first partial function defined as (I), in which rho is a tuning constant and v is the luma code corresponding to a luminance to be encoded, and a second partial mapping defined as L = Lm·P^gamma, in which Lm is a peak luminance of a predefined reference display and gamma is a constant which is preferably equal to 2.4; and - outputting a matrix of pixels having a color encoding comprising the luma codes.

Journal ArticleDOI
TL;DR: A fast, high dynamic range digital fringe projection technique is proposed to achieve effective 3D measurement of shiny workpieces with dense point clouds, improving the speed of precise 3D measurement of shiny surfaces.

Journal ArticleDOI
TL;DR: The proposed algorithm is directed at making visible, through high dynamic range (HDR) compression, any contrast appearing across a dynamic range that exceeds display or printing capabilities, while preserving the nature of the image structure, detail, and lighting, and avoiding the introduction of discontinuities in illumination or image artifacts.

Patent
04 Aug 2014
TL;DR: In this patent, a base layer compression technique is used that analyzes the details and compresses the base layer accordingly, to provide space at the top of the intensity scale where the details are displayed, thus generating output images that are visually better than images generated using conventional techniques.
Abstract: Methods, apparatus, and computer-readable storage media for tone mapping High Dynamic Range (HDR) images. An input HDR image is separated into luminance and color. Luminance is processed to obtain a base layer and a detail layer. The base layer is compressed according to a non-linear remapping function to reduce the dynamic range, and the detail layer is adjusted. The layers are combined to generate output luminance, and the output luminance and color are combined to generate an output image. A base layer compression technique may be used that analyzes the details and compresses the base layer accordingly, to provide space at the top of the intensity scale where the details are displayed, and thus generate output images that are visually better than images generated using conventional techniques. User interface elements may be provided via which a user may control one or more parameters of the tone mapping method.
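A hedged sketch of the headroom idea in the abstract: measure the detail amplitude, then compress the base layer so that base plus detail still fits under the display maximum. The gamma-curve remapping and the percentile rule below are assumptions, not the patent's actual functions.

```python
# Compress the base layer to leave headroom for the detail layer.
import numpy as np

def compress_base_with_headroom(base, detail, display_max=1.0, gamma=0.6):
    headroom = np.percentile(np.abs(detail), 99)             # room needed for details
    target_peak = max(display_max - headroom, 0.1 * display_max)
    remapped = (base / base.max()) ** gamma * target_peak    # non-linear remapping
    return np.clip(remapped + detail, 0.0, display_max)

base = np.linspace(0.0, 10.0, 256).reshape(16, 16)           # toy base layer
detail = 0.05 * np.random.randn(16, 16)                      # toy detail layer
print(compress_base_with_headroom(base, detail).max())
```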

Proceedings ArticleDOI
02 May 2014
TL;DR: The proposed method makes it possible to reconstruct an irradiance image by simultaneously estimating saturated and under-exposed pixels and denoising existing ones, showing significant improvements over existing approaches.
Abstract: Building high dynamic range (HDR) images by combining photographs captured with different exposure times presents several drawbacks, such as the need for global alignment and motion estimation in order to avoid ghosting artifacts. The concept of spatially varying pixel exposures (SVE) proposed by Nayar et al. makes it possible to capture a very large range of exposures in a single shot while avoiding these limitations. In this paper, we propose a novel approach to generate HDR images from a single shot acquired with spatially varying pixel exposures. The proposed method relies on the assumption that the distribution of patches in an image is well represented by a Gaussian Mixture Model. Drawing on a precise modeling of the camera acquisition noise, we extend the piecewise linear estimation strategy developed by Yu et al. for image restoration. The proposed method reconstructs an irradiance image by simultaneously estimating saturated and under-exposed pixels and denoising existing ones, showing significant improvements over existing approaches.
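For concreteness, the sketch below simulates only the spatially varying exposure (SVE) acquisition that such a method inverts: a regular pattern of per-pixel exposure levels, followed by noise and clipping. The GMM/piecewise linear estimator itself is not reproduced, and the 2x2 pattern, exposure values, and noise level are assumptions.

```python
# Forward model of single-shot SVE capture: per-pixel exposure pattern, noise, clipping.
import numpy as np

def sve_capture(irradiance, exposures=(1.0, 0.5, 0.25, 0.125), read_noise=0.01,
                rng=np.random.default_rng(0)):
    H, W = irradiance.shape
    idx = (np.arange(H)[:, None] % 2) * 2 + (np.arange(W)[None, :] % 2)   # 2x2 tiling
    pattern = np.array(exposures)[idx]
    noisy = irradiance * pattern + read_noise * rng.standard_normal((H, W))
    return np.clip(noisy, 0.0, 1.0), pattern

irr = np.random.rand(64, 64) * 4.0       # toy irradiance exceeding any single exposure
capture, pattern = sve_capture(irr)
print(capture.shape, np.unique(pattern))
```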

Journal ArticleDOI
TL;DR: The advantages of a novel wide dynamic range hard X-ray detector are demonstrated for (ptychographic) coherent X-ray diffractive imaging.
Abstract: Coherent (X-ray) diffractive imaging (CDI) is an increasingly popular form of X-ray microscopy, mainly due to its potential to produce high-resolution images and the lack of an objective lens between the sample and its corresponding imaging detector. One challenge, however, is that very high dynamic range diffraction data must be collected to produce both quantitative and high-resolution images. In this work, hard X-ray ptychographic coherent diffractive imaging has been performed at the P10 beamline of the PETRA III synchrotron to demonstrate the potential of a very wide dynamic range imaging X-ray detector (the Mixed-Mode Pixel Array Detector, or MM-PAD). The detector is capable of single photon detection, detecting fluxes exceeding 1 × 10⁸ 8-keV photons pixel⁻¹ s⁻¹, and framing at 1 kHz. A ptychographic reconstruction was performed using a peak focal intensity on the order of 1 × 10¹⁰ photons µm⁻² s⁻¹ within an area of approximately 325 nm × 603 nm. This was done without need of a beam stop and with a very modest attenuation, while 'still' images of the empty beam far-field intensity were recorded without any attenuation. The treatment of the detector frames and CDI methodology for reconstruction of non-sensitive detector regions, partially also extending the active detector area, are described.

Proceedings ArticleDOI
14 Jul 2014
TL;DR: A substantially different approach to TMO design is proposed: instead of using any pre-defined systematic computational structure for tone mapping, the operator navigates in the space of all images, searching for the image that optimizes TMQI.
Abstract: An active research topic in recent years is to design tone mapping operators (TMOs) that convert high dynamic range (HDR) to low dynamic range (LDR) images, so that HDR images can be visualized on standard displays. Nevertheless, most existing work has been done in the absence of a well-established and subject-validated image quality assessment (IQA) model, without which fair comparisons and further improvement are difficult. Recently, a tone mapped image quality index (TMQI) was proposed, which has been shown to correlate well with subjective evaluations of tone mapped images. Here we propose a substantially different approach to TMO design, where instead of using any pre-defined systematic computational structure (such as image transformation or contrast/edge enhancement) for tone mapping, we navigate in the space of all images, searching for the image that optimizes TMQI. The navigation involves an iterative process that alternately improves the structural fidelity and statistical naturalness of the resulting image, which are the two fundamental building blocks of TMQI. Experiments demonstrate the superior performance of the proposed method.
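The search idea can be illustrated with a toy hill-climbing loop over images. The quality score below is a crude stand-in (a correlation-based fidelity proxy plus a simple naturalness prior), not TMQI, and random perturbations replace the paper's alternating structural-fidelity and statistical-naturalness updates.

```python
# Navigate in image space: keep perturbations of the tone-mapped image that raise
# a (stand-in) quality score combining fidelity and naturalness terms.
import numpy as np

def fidelity(ldr, hdr_log):
    return np.corrcoef(ldr.ravel(), hdr_log.ravel())[0, 1]   # structure proxy

def naturalness(ldr, target_mean=0.5, target_std=0.2):
    return -abs(ldr.mean() - target_mean) - abs(ldr.std() - target_std)

def navigate(hdr, iters=200, step=0.02, seed=0):
    rng = np.random.default_rng(seed)
    hdr_log = np.log1p(hdr)
    ldr = (hdr_log - hdr_log.min()) / (np.ptp(hdr_log) + 1e-9)  # initial tone mapping
    score = fidelity(ldr, hdr_log) + naturalness(ldr)
    for _ in range(iters):
        candidate = np.clip(ldr + step * rng.standard_normal(ldr.shape), 0.0, 1.0)
        new_score = fidelity(candidate, hdr_log) + naturalness(candidate)
        if new_score > score:
            ldr, score = candidate, new_score
    return ldr, score

print(navigate(np.random.rand(32, 32) ** 4 * 1000.0)[1])
```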

Journal ArticleDOI
TL;DR: A framework is presented that utilizes two cameras to realize a spatial exposure bracketing, for which the different exposures are distributed among the cameras, and which enables the use of more complex camera setups with different sensors and provides robust camera responses.
Abstract: To overcome the dynamic range limitations in images taken with regular consumer cameras, several methods exist for creating high dynamic range (HDR) content. Current low-budget solutions apply a temporal exposure bracketing which is not applicable for dynamic scenes or HDR video. In this article, a framework is presented that utilizes two cameras to realize a spatial exposure bracketing, for which the different exposures are distributed among the cameras. Such a setup allows for HDR images of dynamic scenes and HDR video due to its frame by frame operating principle, but faces challenges in the stereo matching and HDR generation steps. Therefore, the modules in this framework are selected to alleviate these challenges and to properly handle under- and oversaturated regions. In comparison to existing work, the camera response calculation is shifted to an offline process and a masking with a saturation map before the actual HDR generation is proposed. The first aspect enables the use of more complex camera setups with different sensors and provides robust camera responses. The second one makes sure that only necessary pixel values are used from the additional camera view, and thus, reduces errors in the final HDR image. The resulting HDR images are compared with the quality metric HDR-VDP-2 and numerical results are given for the first time. For the Middlebury test images, an average gain of 52 points on a 0-100 mean opinion score is achieved in comparison to temporal exposure bracketing with camera motion. Finally, HDR video results are provided.
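A minimal sketch of the saturation-map masking step: radiance estimates from the second (already warped and differently exposed) view are used only where the reference view is under- or over-saturated. The thresholds, the hard replacement, and the toy data are assumptions; stereo matching and offline response calibration are not shown.

```python
# Merge two aligned, differently exposed views using a saturation map of the reference.
import numpy as np

def merge_with_saturation_mask(ref, other, exp_ref, exp_other, lo=0.02, hi=0.98):
    rad_ref = ref / exp_ref                       # rough radiance estimates
    rad_other = other / exp_other
    saturated = (ref < lo) | (ref > hi)           # saturation map of the reference view
    return np.where(saturated, rad_other, rad_ref)

radiance = np.random.rand(64, 64) * 2.0           # toy scene radiance
ref = np.clip(radiance * 1.0, 0, 1)               # long exposure: highlights clip
other = np.clip(radiance * 0.25, 0, 1)            # short exposure: highlights survive
merged = merge_with_saturation_mask(ref, other, 1.0, 0.25)
print(np.abs(merged - radiance).max())            # near zero on this toy scene
```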

Journal ArticleDOI
TL;DR: The recent results of a 20 GHz bandwidth, high-performance spectrum monitoring system with the additional capability of broadband direction finding demonstrate the potential for spatial-spectral systems to be the practical choice for solving demanding signal processing problems in the near future.
Abstract: Many storage and processing systems based on spectral holeburning have been proposed that access the broad bandwidth and high dynamic range of spatial-spectral materials, but only recently have practical systems been developed that exceed the performance and functional capabilities of electronic devices. This paper reviews the history of the proposed applications of spectral holeburning and spatial-spectral materials, from frequency-domain optical memory to microwave photonic signal processing systems. The recent results of a 20 GHz bandwidth, high-performance spectrum monitoring system with the additional capability of broadband direction finding demonstrate the potential for spatial-spectral systems to be the practical choice for solving demanding signal processing problems in the near future.

Journal ArticleDOI
TL;DR: This paper proposes an HDR imaging approach using a coded electronic shutter, which captures a scene with row-wise varying exposures in a single image and enables a direct extension of the dynamic range of the captured image without using multiple images, by photometrically calibrating rows with different exposures.
Abstract: Typical high dynamic range (HDR) imaging approaches based on multiple images have difficulties in handling moving objects and camera shakes, suffering from the ghosting effect and the loss of sharpness in the output HDR image. While a variety of solutions exist for resolving such limitations, most existing algorithms are susceptible to complex motions, saturation, and occlusions. In this paper, we propose an HDR imaging approach using a coded electronic shutter, which can capture a scene with row-wise varying exposures in a single image. Our approach enables a direct extension of the dynamic range of the captured image without using multiple images, by photometrically calibrating rows with different exposures. Due to the concurrent capture of multiple exposures, misalignments of moving objects are naturally avoided, with a significant reduction in the ghosting effect. To handle the issues of under-/over-exposure, noise, and blur, we present a coherent HDR imaging process where these problems are resolved one by one at each step. Experimental results with real photographs, captured using a coded electronic shutter, demonstrate that our method produces high quality HDR images without ghosting and blur artifacts.
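A toy version of the row-wise photometric calibration idea: each row has its own exposure time, so dividing the (linearized) row by that time gives a radiance estimate, and clipped samples can be filled in from rows with other exposures. The alternating exposure pattern, thresholds, and the column-average fill-in are assumptions, not the paper's calibration and deblurring pipeline.

```python
# Row-wise exposure calibration for a coded-shutter capture, with a crude fill-in
# for clipped samples.
import numpy as np

def rows_to_radiance(image, row_exposures, lo=0.02, hi=0.98):
    radiance = image / row_exposures[:, None]
    valid = (image > lo) & (image < hi)
    col_avg = np.where(valid, radiance, 0).sum(axis=0) / np.maximum(valid.sum(axis=0), 1)
    return np.where(valid, radiance, col_avg[None, :])

H, W = 64, 64
exposures = np.where(np.arange(H) % 2 == 0, 1.0, 0.25)   # alternating long/short rows
scene = np.tile(np.linspace(0.1, 2.0, W), (H, 1))        # toy radiance
capture = np.clip(scene * exposures[:, None], 0, 1)
print(np.abs(rows_to_radiance(capture, exposures) - scene).max())   # near zero
```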

Journal ArticleDOI
TL;DR: The study reveals that the proposed approach for phase estimation from noisy reconstructed interference fields in digital holographic interferometry, using an unscented Kalman filter, outperforms existing methods at lower SNR values (especially in the range 0-20 dB).
Abstract: In this work, we introduce a novel approach for phase estimation from noisy reconstructed interference fields in digital holographic interferometry using an unscented Kalman filter. Unlike conventionally used unwrapping algorithms and piecewise polynomial approximation approaches, this paper proposes, for the first time to the best of our knowledge, a signal tracking approach for phase estimation. The state space model derived in this approach is inspired by the Taylor series expansion of the phase function as the process model, and polar to Cartesian conversion as the measurement model. We have characterized our approach by simulations and validated the performance on experimental data (holograms) recorded under various practical conditions. Our study reveals that the proposed approach, when compared with various phase estimation methods available in the literature, outperforms them at lower SNR values (especially in the range 0-20 dB). It is also demonstrated with experimental data that the proposed approach is a better choice for estimating rapidly varying phase with high dynamic range and noise.

Proceedings ArticleDOI
TL;DR: The results of the subjective experiment demonstrate that the preference of an average viewer increases logarithmically with the maximum luminance level at which HDR content is displayed, with 4000 cd/m² being the most attractive option.
Abstract: High dynamic range (HDR) imaging is able to capture a wide range of luminance values, closer to what the human eye can perceive. However, for capture and display technologies, it is important to answer the question of the significance of higher dynamic range for user preference. This paper answers this question by investigating the added value of higher dynamic range via a rigorous set of subjective experiments using a paired comparison methodology. Video sequences at four different peak luminance levels were displayed side-by-side on a Dolby Research HDR RGB backlight dual modulation display (aka 'Pulsar'), which is capable of reliably displaying video content at 4000 cd/m² peak luminance. The results of the subjective experiment demonstrate that the preference of an average viewer increases logarithmically with the maximum luminance level at which HDR content is displayed, with 4000 cd/m² being the most attractive option.

Journal ArticleDOI
TL;DR: Compared with other well-established methods, the proposed gradient-domain-based visualization method shows significantly better performance in terms of dynamic range compression, while enhancing details and avoiding common artifacts such as halos, gradient reversal, haze, or saturation.

Journal ArticleDOI
03 Apr 2014 - Leukos
TL;DR: This study investigates whether high dynamic range imaging (HDRI) can accurately capture the luminance of a single light-emitting diode (LED) chip within a luminaire.
Abstract: This study investigates whether high dynamic range imaging (HDRI) can accurately capture the luminance of a single light-emitting diode (LED) chip within a luminaire. Two conventional methods of determining the luminance of a single LED chip, namely using a luminance meter with a close-up lens and deriving luminance from illuminance measurements, source area, and distance, are compared to HDRI measurements. The results show that HDRI using a Canon EOS 7D camera, fitted with a 28–105 mm lens and a neutral density filter (with less than 1% transmittance) and combined in Photosphere, compares very well to a luminance value determined with goniophotometer measurements and calculations. This provides confidence in the ability of HDRI to capture the luminance of a single LED chip.
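The second conventional method above follows from the inverse-square relation for a small source viewed head-on: luminous intensity I = E * d^2, and luminance L = I / A. A worked example with made-up numbers (not measurements from the study):

```python
# Luminance of a small source from an illuminance reading at a known distance.
E = 50.0            # illuminance at the meter (lux), assumed
d = 0.5             # distance from the LED chip to the meter (m), assumed
A = 1e-6            # emitting area of the chip (m^2), e.g. 1 mm x 1 mm, assumed
L = E * d ** 2 / A  # luminance (cd/m^2)
print(f"{L:.3e} cd/m^2")   # 1.25e+07 cd/m^2
```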

Proceedings ArticleDOI
TL;DR: The challenges and capabilities are described of an implementation, known as “Dolby Vision,” of a high dynamic range layered codec approach for high-efficiency video coding (HEVC) Main10 4K content for top-tier over-the-top/video on demand (OTT/VOD) movie distributors.
Abstract: This paper describes the challenges and capabilities of an implementation, known as “Dolby Vision,” of a high dynamic range layered codec approach for high-efficiency video coding (HEVC) Main10 4K content for top-tier over-the-top/video on demand (OTT/VOD) movie distributors. While Dolby Vision is already designed to support both AVC (H.264) and HEVC (H.265), the current tools are designed for batch-mode testing with reference-level encoders that generally do not meet the performance and robustness demands of an automated production system. To create a viable production system, particular attention must be paid to optimization and parallelization of the HEVC pipeline to create the final dual-layer streams. The production high dynamic range (HDR) HEVC encoder must also be integrated into the client encoder system application infrastructure, preserving the client's robustness, testability, and quality requirements. The final streams must be tested in a similarly performant decoder platform for manual and automated test and verification. Because Dolby Vision is a novel technology that requires specialized hardware for viewing, careful consideration must be given to the visual verification process, with a solution that offers powerful viewing tools and defect testability. Finally, the automated decoder verification must test both the backwards-compatible base layer and the high dynamic range enhancement layer. This paper covers these issues and the technical solutions that were implemented to address them.

Proceedings ArticleDOI
22 Jun 2014
TL;DR: Hybrid data acquisition incorporating sinusoidally modulated phase shifting reduces measurement noise to the 0.1 nm/√Hz level, extending coherence scanning interferometry to super-polished surfaces.
Abstract: Advances in the implementation of coherence scanning interferometry have dramatically extended the range of application for this well-known technique. New data acquisition and data processing methods significantly improve dynamic range, enabling measurements of steeply sloped surfaces usually considered beyond the reach of high-NA objectives. Hybrid data acquisition incorporating sinusoidally modulated phase shifting reduces measurement noise to the 0.1 nm/√Hz level, extending the technique to super-polished surfaces.