
Showing papers on "High dynamic range published in 2015"


Journal ArticleDOI
TL;DR: A 512 × 424 time-of-flight (TOF) depth image sensor designed in a TSMC 0.13 μm LP 1P5M CMOS process, suitable for use in Microsoft Kinect for XBOX ONE, shows wide depth range of operation, small accuracy error, very low depth uncertainty, and very high dynamic range.
Abstract: We introduce a 512 × 424 time-of-flight (TOF) depth image sensor designed in a TSMC 0.13 μm LP 1P5M CMOS process, suitable for use in Microsoft Kinect for XBOX ONE. The 10 μm × 10 μm pixel incorporates a TOF detector that operates using the quantum efficiency modulation (QEM) technique at modulation frequencies of up to 130 MHz, achieving a modulation contrast of 67% at 50 MHz and a responsivity of 0.14 A/W at 860 nm. The TOF sensor includes a 2 GS/s 10 bit signal path, which serves the high ADC bandwidth requirements of a system that performs many ADC conversions per frame. The chip also comprises a clock generation circuit featuring a programmable phase and frequency clock generator with 312.5 ps phase step resolution derived from a 1.6 GHz oscillator. An integrated shutter engine and a programmable digital micro-sequencer allow extremely flexible multi-gain/multi-shutter and multi-frequency/multi-phase operation. All chip data is transferred over two 4-lane MIPI D-PHY interfaces with a total of 8 Gb/s input/output bandwidth. The reported experimental results demonstrate a wide depth range of operation (0.8–4.2 m), small accuracy error (<1%), very low depth uncertainty (<0.5% of actual distance), and very high dynamic range (>64 dB).
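
As an aside on the depth figures above, the link between modulation frequency and usable depth range follows from the standard continuous-wave TOF relation; a minimal Python sketch (the constant and formulas are textbook CW-TOF relations, not taken from the paper):

```python
import math

# Unambiguous depth range of a continuous-wave TOF sensor: the returned
# modulation phase wraps every 2*pi, so depth aliases beyond c / (2 * f_mod).
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(f_mod_hz):
    """Maximum depth measurable without phase wrapping, in metres."""
    return C / (2.0 * f_mod_hz)

def depth_from_phase(phase_rad, f_mod_hz):
    """Depth implied by a measured phase shift (single frequency, no unwrapping)."""
    return (phase_rad / (2.0 * math.pi)) * unambiguous_range(f_mod_hz)

# At 50 MHz the unambiguous range is about 3 m, one reason multi-frequency
# operation is needed to cover the reported 0.8-4.2 m depth span.
print(round(unambiguous_range(50e6), 3))   # 2.998
print(round(unambiguous_range(130e6), 3))  # 1.153
```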

204 citations


Journal ArticleDOI
TL;DR: A rank minimization algorithm is presented which simultaneously aligns LDR images and detects outliers for robust HDR generation and is evaluated systematically and qualitatively with results from the state-of-the-art HDR algorithms using challenging real world examples.
Abstract: This paper introduces a new high dynamic range (HDR) imaging algorithm which utilizes rank minimization. Assuming the camera responds linearly to scene radiance, input low dynamic range (LDR) images captured with different exposure times exhibit a linear dependency and form a rank-1 matrix when the intensities of corresponding pixels are stacked together. In practice, misalignment caused by camera motion, the presence of moving objects, saturation, and image noise break the rank-1 structure of the LDR images. To address these problems, we present a rank minimization algorithm which simultaneously aligns LDR images and detects outliers for robust HDR generation. We evaluate the performance of our algorithm systematically using synthetic examples and qualitatively compare our results with those of state-of-the-art HDR algorithms on challenging real-world examples.
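
The rank-1 observation is easy to verify numerically; a small NumPy sketch (synthetic radiances and exposure times are illustrative, not the paper's data):

```python
import numpy as np

# With a linear camera response, pixel intensity = radiance * exposure_time,
# so stacking each exposure as a column gives a rank-1 matrix (outer product).
rng = np.random.default_rng(0)
radiance = rng.uniform(0.1, 10.0, size=1000)      # scene radiance per pixel
exposures = np.array([1.0, 4.0, 16.0])            # relative exposure times

stack = np.outer(radiance, exposures)             # ideal LDR stack, one column per shot
print(np.linalg.matrix_rank(stack))               # 1

# Saturation clips the longest exposure and breaks the rank-1 structure,
# which is the kind of outlier the paper's rank minimization must handle.
clipped = np.minimum(stack, 40.0)
print(np.linalg.matrix_rank(clipped))             # > 1
```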

181 citations


Journal ArticleDOI
TL;DR: The main contribution is toward improving the frequency-based pooling in HDR-VDP-2 to enhance its objective quality prediction accuracy by formulating and solving a constrained optimization problem and thereby finding the optimal pooling weights.
Abstract: With the emergence of high-dynamic range (HDR) imaging, the existing visual signal processing systems will need to deal with both HDR and standard dynamic range (SDR) signals. In such systems, computing the objective quality is an important aspect in various optimization processes (e.g., video encoding). To that end, we present a newly calibrated objective method that can tackle both HDR and SDR signals. As it is based on the previously proposed HDR-VDP-2 method, we refer to the newly calibrated metric as HDR-VDP-2.2. Our main contribution is toward improving the frequency-based pooling in HDR-VDP-2 to enhance its objective quality prediction accuracy. We achieve this by formulating and solving a constrained optimization problem and thereby finding the optimal pooling weights. We also carried out extensive cross-validation as well as verified the performance of the new method on independent databases. These indicate clear improvement in prediction accuracy as compared with the default pooling weights. The source codes for HDR-VDP-2.2 are publicly available online for free download and use.

170 citations


Journal ArticleDOI
TL;DR: Numerical and subjective experiments demonstrate that the proposed algorithm consistently produces better quality tone mapped images even when the initial images of the iteration are created by the most competitive TMOs.
Abstract: Tone mapping operators (TMOs) aim to compress high dynamic range (HDR) images to low dynamic range (LDR) ones so as to visualize HDR images on standard displays. Most existing TMOs were demonstrated on specific examples without being thoroughly evaluated using well-designed and subject-validated image quality assessment models. A recently proposed tone mapped image quality index (TMQI) made one of the first attempts at objective quality assessment of tone mapped images. Here, we propose a substantially different approach to TMO design. Instead of using any predefined systematic computational structure for tone mapping (such as analytic image transformations and/or explicit contrast/edge enhancement), we directly navigate in the space of all images, searching for the image that optimizes an improved TMQI. In particular, we first improve the two building blocks of TMQI, the structural fidelity and statistical naturalness components, leading to a TMQI-II metric. We then propose an iterative algorithm that alternately improves the structural fidelity and statistical naturalness of the resulting image. Numerical and subjective experiments demonstrate that the proposed algorithm consistently produces better quality tone mapped images even when the initial images of the iteration are created by the most competitive TMOs. Meanwhile, these results also validate the superiority of TMQI-II over TMQI. (Partial preliminary results of this work were presented at ICASSP 2013 and ICME 2014.)
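
The navigate-toward-the-best-score idea can be illustrated at toy scale by searching a one-parameter tone curve for the highest quality score; the score below is a made-up proxy (correlation-based fidelity times a mid-tone naturalness term), not TMQI or TMQI-II:

```python
import numpy as np

# Toy stand-in for the paper's idea: instead of fixing a tone curve in
# advance, pick the curve that maximizes a quality score. The score is a
# hypothetical proxy: structural fidelity ~ correlation with log radiance,
# naturalness ~ mean close to mid-gray, combined multiplicatively.
rng = np.random.default_rng(1)
hdr = rng.lognormal(mean=0.0, sigma=2.0, size=5000)   # synthetic HDR luminances

def tone_map(hdr, gamma):
    x = hdr / hdr.max()
    return x ** gamma

def score(ldr, hdr):
    fidelity = np.corrcoef(ldr, np.log(hdr))[0, 1]
    naturalness = 1.0 - abs(ldr.mean() - 0.5)
    return fidelity * naturalness

gammas = np.linspace(0.05, 1.0, 96)
best = max(gammas, key=lambda g: score(tone_map(hdr, g), hdr))
print(round(best, 2))
```

The real algorithm searches the far larger space of all images and alternates between the two score components; this sketch only shows why an optimization view can beat any single fixed curve.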

133 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed a high dynamic range (HDR) camera system capable of providing hemispherical sky imagery from the circumsolar region to the horizon at a high spatial, temporal, and radiometric resolution.
Abstract: To facilitate the development of solar power forecasting algorithms based on ground-based visible-wavelength remote sensing, we have developed a high dynamic range (HDR) camera system capable of providing hemispherical sky imagery from the circumsolar region to the horizon at high spatial, temporal, and radiometric resolution. The University of California, San Diego Sky Imager (USI) captures multispectral, 16 bit, HDR images as fast as every 1.3 s. This article discusses the system design and operation in detail, provides a characterization of the system dark response and photoresponse linearity, and presents a method to evaluate noise in high dynamic range imagery. The system is shown to have a radiometrically linear response to within 5% in a designated operating region of the sensor. Noise for HDR imagery is shown to be very close to the fundamental shot noise limit. The complications of directly imaging the sun and the impact on solar power forecasting are also discussed. The USI has performed reliably in a hot, dry environment, a tropical coastal location, several temperate coastal locations, and in the Great Plains of the United States.

83 citations


Proceedings ArticleDOI
24 Apr 2015
TL;DR: It is shown that with limited bit depth, very high radiance levels can be recovered from a single modulus image with the newly proposed unwrapping algorithm for natural images.
Abstract: This paper presents a novel framework to extend the dynamic range of images, called Unbounded High Dynamic Range (UHDR) photography, with a modulo camera. A modulo camera can theoretically capture unbounded radiance levels by keeping only the least significant bits. We show that, even with limited bit depth, very high radiance levels can be recovered from a single modulus image with our newly proposed unwrapping algorithm for natural images. We can also obtain an HDR image with details equally well preserved at all radiance levels by merging a minimal number of modulus images. Synthetic experiments and experiments with a real modulo camera show the effectiveness of the proposed approach.
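
The unwrapping idea can be sketched in one dimension: for a signal that varies smoothly relative to the wrap period, rollovers appear as large jumps between neighbours and can be undone greedily. A toy version; the paper's algorithm for 2-D natural images is considerably more involved:

```python
import numpy as np

# A modulo camera keeps only radiance % 2**bits. For a signal that varies
# smoothly relative to the wrap period, rollovers show up as large jumps
# between neighbouring samples and can be undone greedily.
MOD = 256  # 8-bit wrap

def unwrap_1d(wrapped, mod=MOD):
    diffs = np.diff(wrapped.astype(np.int64))
    # A drop of more than half the period means a forward wrap occurred;
    # a rise of more than half the period means a backward wrap.
    rolls = np.where(diffs < -mod // 2, 1, np.where(diffs > mod // 2, -1, 0))
    return wrapped + mod * np.concatenate([[0], np.cumsum(rolls)])

signal = (np.linspace(0, 30, 400) ** 2).astype(np.int64)   # smooth, peaks near 900
wrapped = signal % MOD                                      # what the camera records
recovered = unwrap_1d(wrapped)
print(np.array_equal(recovered, signal))   # True
```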

80 citations


Proceedings ArticleDOI
19 Mar 2015
TL;DR: A TDC architecture is presented which combines the two step iterated TCSPC process of time-code generation, followed by memory lookup, increment and write, into one parallel direct-to-histogram conversion.
Abstract: Time-correlated single photon counting (TCSPC) is a photon-efficient technique to record ultra-fast optical waveforms found in numerous applications such as time-of-flight (ToF) range measurement (LIDAR) [1], ToF 3D imaging [2], scanning optical microscopy [3], diffuse optical tomography (DOT) and Raman sensing [4]. Typical instrumentation consists of a pulsed laser source, a discrete detector such as an avalanche photodiode (APD) or photomultiplier tube (PMT), a time-to-digital converter (TDC) card and an FPGA or PC to assemble and compute histograms of photon time stamps. Cost and size restrict the number of channels of TCSPC hardware. Having few detection and conversion channels, the technique is limited to processing optical waveforms with low intensity, with less than one returned photon per laser pulse, to avoid pile-up distortion [4]. However, many ultra-fast optical waveforms exhibit high dynamic range in the number of photons emitted per laser pulse. Examples are signals observed at close range in ToF with multiple reflections, diffuse reflected photons in DOT or local variations in fluorescent dye concentration in microscopy. This paper provides a single integrated chip that reduces conventional TCSPC pile-up mechanisms by an order of magnitude through ultra-parallel realizations of both photon detection and time-resolving hardware. A TDC architecture is presented which combines the two-step iterated TCSPC process of time-code generation, followed by memory lookup, increment and write, into one parallel direct-to-histogram conversion. The sensor achieves 71.4 ps resolution over an 18.85 ns dynamic range, with 14 GS/s throughput. The sensor can process 1.7 Gphoton/s and generate 21k histograms/s (with 4.6 μs readout time), each capturing a total of 1.7 kphotons in a 1 μs exposure.
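
The pile-up mechanism the chip addresses can be reproduced in a few lines: classical TCSPC records at most one photon (in effect the earliest) per laser pulse, so above roughly one photon per pulse the histogram skews early. A seeded simulation; the waveform and rates are illustrative, not from the paper:

```python
import numpy as np

# Classical TCSPC records at most one photon per laser pulse - in practice
# the earliest one - so at >1 photon/pulse the recorded histogram skews
# toward early times ("pile-up"). Exponential-decay waveform as an example.
rng = np.random.default_rng(42)
pulses = 20000
mean_photons_per_pulse = 3.0     # well above the <1 photon/pulse guideline
tau = 5.0                        # ns, fluorescence-like decay constant

recorded, all_photons = [], []
for _ in range(pulses):
    n = rng.poisson(mean_photons_per_pulse)
    if n == 0:
        continue
    times = rng.exponential(tau, size=n)
    all_photons.extend(times)        # what an ideal parallel detector would see
    recorded.append(times.min())     # detector dead after the first photon

true_mean = np.mean(all_photons)     # close to tau = 5 ns
piled_mean = np.mean(recorded)       # pulled toward zero by pile-up
print(round(true_mean, 2), round(piled_mean, 2))
```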

79 citations


Journal ArticleDOI
TL;DR: In this article, the in situ beampattern of the MWA antenna tile relative to that of the reference antenna is measured using power ratio measurements, which cancels the variation of satellite flux or polarization with time.
Abstract: Detection of the fluctuations in a 21 cm line emission from neutral hydrogen during the Epoch of Reionization in thousand hour integrations poses stringent requirements on calibration and image quality, both of which necessitate accurate primary beam models. The Murchison Widefield Array (MWA) uses phased-array antenna elements which maximize collecting area at the cost of complexity. To quantify their performance, we have developed a novel beam measurement system using the 137 MHz ORBCOMM satellite constellation and a reference dipole antenna. Using power ratio measurements, we measure the in situ beampattern of the MWA antenna tile relative to that of the reference antenna, canceling the variation of satellite flux or polarization with time. We employ angular averaging to mitigate multipath effects (ground scattering) and assess environmental systematics with a null experiment in which the MWA tile is replaced with a second-reference dipole. We achieve beam measurements over 30 dB dynamic range in beam sensitivity over a large field of view (65% of the visible sky), far wider and deeper than drift scans through astronomical sources allow. We verify an analytic model of the MWA tile at this frequency within a few percent statistical scatter within the full width at half maximum. Toward the edges of the main lobe and in the sidelobes, we measure tens of percent systematic deviations. We compare these errors with those expected from known beamforming errors.

57 citations


Proceedings ArticleDOI
13 Jul 2015
TL;DR: This paper addresses ego-motion estimation for an event-based vision sensor using a continuous-time framework to directly integrate the information conveyed by the sensor.
Abstract: Event-based vision sensors, such as the Dynamic Vision Sensor (DVS), do not output a sequence of video frames like standard cameras, but a stream of asynchronous events. An event is triggered when a pixel detects a change of brightness in the scene. An event contains the location, sign, and precise timestamp of the change. The high dynamic range and temporal resolution of the DVS, which is on the order of microseconds, make this a very promising sensor for high-speed applications, such as robotics and wearable computing. However, due to the fundamentally different structure of the sensor's output, new algorithms that exploit the high temporal resolution and the asynchronous nature of the sensor are required. In this paper, we address ego-motion estimation for an event-based vision sensor using a continuous-time framework to directly integrate the information conveyed by the sensor. The DVS pose trajectory is approximated by a smooth curve in the space of rigid-body motions using cubic splines and is optimized according to the observed events. We evaluate our method using datasets acquired from sensor-in-the-loop simulations and onboard a quadrotor performing flips. The results are compared to ground truth, showing the good performance of the proposed technique.

55 citations


Journal ArticleDOI
TL;DR: A novel all-elastomer MEMS tactile sensor with high dynamic force range is presented in this paper, where conductive elastomeric capacitors formed from electrodes of varying heights enable robust sensing in both shear and normal directions.
Abstract: A novel all-elastomer MEMS tactile sensor with high dynamic force range is presented in this work. Conductive elastomeric capacitors formed from electrodes of varying heights enable robust sensing in both shear and normal directions without the need for multi-layered assembly. Sensor geometry has been tailored to maximize shear force sensitivity using multi-physics finite element simulations. A simple molding microfabrication process is presented to rapidly create the sensing skins with electrode gaps of 20 μm and sensor spacing of 3 mm. Shear force resolution was found to be as small as 50 mN and tested up to a range of 2 N. Normal force resolution was found to be 190 mN with a tested range of 8 N. Single load and multiload tests were conducted and the sensor exhibited the intended behavior with low deviation between trials. Spatial tests were conducted on a sensor array and a spatial resolution of 1.5 mm was found.

54 citations


Journal ArticleDOI
TL;DR: AGIPD (Adaptive Gain Integrating Pixel Detector) is a hybrid pixel X-ray detector developed by a collaboration between Deutsches Elektronen-Synchrotron, Paul-Scherrer-Institut, the University of Hamburg and the University of Bonn, and is now being manufactured.
Abstract: AGIPD (Adaptive Gain Integrating Pixel Detector) is a hybrid pixel X-ray detector developed by a collaboration between Deutsches Elektronen-Synchrotron (DESY), Paul-Scherrer-Institut (PSI), the University of Hamburg and the University of Bonn. The detector is designed to comply with the requirements of the European XFEL. The radiation-tolerant Application Specific Integrated Circuit (ASIC) offers the following highlights: high dynamic range, spanning from single-photon sensitivity up to 10^4 12.5 keV photons, achieved through dynamic gain switching among 3 possible gains of the charge-sensitive preamplifier. To store the image data, the ASIC incorporates 352 analog memory cells per pixel, which also store 3 voltage levels corresponding to the selected gain. It is operated in random-access mode at a 4.5 MHz frame rate. Data acquisition takes place during the 99.4 ms between bunch trains. The AGIPD has a pixel area of 200 × 200 μm² and uses a 500 μm thick silicon sensor. The architecture principles were proven in different experiments and the ASIC was characterized with a series of development prototypes. The mechanical concept was developed in close contact with the XFEL beamline scientists and is now being manufactured. A first single-module system was successfully tested at APS.

Journal ArticleDOI
TL;DR: Simulation and experimental results show that the proposed technique can accurately measure the 3D profile of objects with wide variation in their optical reflectivity.
Abstract: In this paper, a new approach to enhance the dynamic range of a fringe projection system for measuring the 3D profile of objects with wide variation in their optical reflectivity is proposed. The high dynamic range fringe images are acquired by recursively controlling the intensity of the projection pattern at the pixel level based on feedback from the reflected images captured by the camera. A four-step phase-shifting algorithm combined with a quality-guided algorithm is used to obtain the unwrapped phase map of the object from the acquired high dynamic range fringe images. Simulation and experimental results show that the proposed technique can accurately measure the 3D profile of objects with wide variation in their optical reflectivity.
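
The four-step phase-shifting recovery mentioned above is standard and easy to sketch; this reproduces the textbook formula only, not the paper's full pipeline (which adds quality-guided unwrapping and adaptive projection intensity):

```python
import numpy as np

# Standard four-step phase shifting: capture I_k = A + B*cos(phi + k*pi/2)
# for k = 0..3, then recover the wrapped phase per pixel as
#   phi = atan2(I3 - I1, I0 - I2)
# since I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi).
phi_true = np.linspace(-np.pi + 0.01, np.pi - 0.01, 500)   # 1-D wrapped phase map
A, B = 0.5, 0.4                                            # offset and modulation

frames = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_rec = np.arctan2(frames[3] - frames[1], frames[0] - frames[2])

print(np.allclose(phi_rec, phi_true))   # True
```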

Journal ArticleDOI
TL;DR: High-dynamic-range technology aims at capturing, distributing, and displaying a range of luminance and color values that better correspond to what the human eye can perceive.
Abstract: High-dynamic-range (HDR) technology has attracted a lot of attention recently, especially in commercial trade shows such as the Consumer Electronics Show, the National Association of Broadcasters Show, the International Broadcasting Convention, and Internationale Funkausstellung Berlin. However, a great deal of mystery still surrounds this new evolution in digital media. In a nutshell, HDR technology aims at capturing, distributing, and displaying a range of luminance and color values that better correspond to what the human eye can perceive. Here, the term luminance stands for the photometric quantity of light arriving at the human eye, measured in candela per square meter, or nits. Color refers to all the weighted combinations of spectral wavelengths, expressed in nanometers (nm), emitted by the sun that are visible to the human eye (see Figure 1). The human eye can perceive a dynamic range of over 14 orders of magnitude (i.e., the difference in powers of ten between the highest and lowest luminance value) in the real world through adaptation. However, at a single adaptation time, the human eye can only resolve up to five orders of magnitude, as illustrated in Figure 2. Dynamic range denotes the ratio between the highest and lowest luminance value. As reported in Table 1, there are different interpretations of dynamic range, depending on the application. For instance, in photography, dynamic range is measured in terms of f-stops, which correspond to the number of times that the light intensity can be doubled.
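
The dynamic-range conventions mentioned above differ only in logarithm base; a tiny converter using the standard definitions (not specific to this article):

```python
import math

# Dynamic-range conventions differ only in log base:
#   orders of magnitude = log10(ratio)
#   photographic f-stops = log2(ratio)  (number of doublings of intensity)
def orders_of_magnitude(ratio):
    return math.log10(ratio)

def f_stops(ratio):
    return math.log2(ratio)

# Five orders of magnitude (the eye at a single adaptation state, per the
# abstract) expressed as f-stops, and 14 stops expressed as orders:
print(round(f_stops(1e5), 1))                  # 16.6
print(round(orders_of_magnitude(2 ** 14), 1))  # 4.2
```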

Reference EntryDOI
15 Jun 2015
TL;DR: A broad review of HDR methods and technologies is offered, with an introduction to the fundamental concepts behind the perception of HDR imagery and image and video quality metrics suitable for HDR content.
Abstract: High dynamic range (HDR) images and video contain pixels, which can represent much greater range of colors and brightness levels than that offered by existing, standard dynamic range images. Such “better pixels” greatly improve the overall quality of visual content, making it appear much more realistic and appealing to the audience. HDR is one of the key technologies of the future imaging pipeline, which will change the way the digital visual content is represented and manipulated. This article offers a broad review of the HDR methods and technologies with an introduction to fundamental concepts behind the perception of HDR imagery. It serves as both an introduction to the subject and a review of the current state of the art in HDR imaging. It covers the topics related to capture of HDR content with cameras and its generation with computer graphics methods; encoding and compression of HDR images and video; tone mapping for displaying HDR content on standard dynamic range displays; inverse tone mapping for upscaling legacy content for presentation on HDR displays; the display technologies offering HDR range; and finally image and video quality metrics suitable for HDR content. Keywords: high dynamic range imaging; tone mapping

Journal ArticleDOI
TL;DR: In this article, an optomechanical accelerometer with high dynamic range, high bandwidth and readout noise levels below m s−2 was presented for on-site reference calibrations and autonomous navigation.
Abstract: We present an optomechanical accelerometer with high dynamic range, high bandwidth and readout noise levels below m s−2/. The straightforward assembly and low cost of our device make it a prime candidate for on-site reference calibrations and autonomous navigation. We present experimental data taken with a vacuum-sealed, portable prototype and deduce the achieved bias stability and the accuracy of the sensitivity. Additionally, we present a comprehensive model of the device physics that we use to analyze the fundamental noise sources and accuracy limitations of such devices.

Patent
01 Oct 2015
TL;DR: A display device including a content receiving unit configured to receive a high dynamic range image and an image processing unit configured to detect a first region whose luminance value is equal to or greater than a reference value within the high dynamic range image and perform tone mapping on the image of the first region.
Abstract: A display device, including a content receiving unit configured to receive a high dynamic range image, an image processing unit configured to detect a first region whose luminance value is equal to or greater than a reference luminance value within the high dynamic range image and perform tone mapping on an image of the first region based on feature information of the image of the first region, and a display unit configured to display a low dynamic range image on which the tone mapping is performed.

Patent
19 Mar 2015
TL;DR: A method of encoding a high dynamic range image (M_HDR) by converting it into a low dynamic range image (LDR_o) through normalization to a [0, 1] luma axis, a gamma function, a RHO-parameterized tone mapping, and an arbitrary monotonically increasing tone mapping, with the function shapes emitted as metadata so that a receiver can reconstruct the HDR image.
Abstract: A method of encoding a high dynamic range image (M_HDR), comprising the steps of: converting the high dynamic range image into a low luminance dynamic range image (LDR_o) by applying: a) normalization of the high dynamic range image to a luma axis scale of [0, 1], giving a normalized high dynamic range image whose normalized colors have normalized luminances (Yn_HDR); b) calculation of a gamma function on the normalized luminances, giving gamma-converted luminances (xg); c) application of a first tone mapping giving lumas (v), defined as ** Formula **, with RHO having a predetermined value; and d) application of an arbitrary monotonically increasing tone mapping function that maps the lumas to output lumas (Yn_LDR) of the lower dynamic range image (LDR_o); emitting, in an image signal (S_im), an encoding of the pixel colors of the lower luminance dynamic range image (LDR_o); and emitting, in the image signal (S_im), values encoding the function shapes of the preceding color conversions as metadata, or values for their inverse functions, which allow a receiver to reconstruct a reconstructed high dynamic range image (Rec_HDR) from the low luminance dynamic range image (LDR_o), where RHO, or a value that is a function of RHO, is included in the metadata.


Journal ArticleDOI
TL;DR: Experimental results show that the proposed filter de-noises the noisy image while preserving important image features such as edges and corners, outperforming previous methods.
Abstract: With the development of modern image sensors enabling flexible image acquisition, single shot high dynamic range (HDR) imaging is becoming increasingly popular. In this work, we capture single shot HDR images using an imaging sensor with spatially varying gain/ISO. This allows all incoming photons to be used in the imaging. Previous methods for single shot HDR capture use spatially varying neutral density (ND) filters, which waste incoming light. The main technical contribution of this work is an extension of previous HDR reconstruction approaches for single shot HDR imaging based on local polynomial approximations (Kronander et al., Unified HDR reconstruction from raw CFA data, 2013; Hajisharif et al., HDR reconstruction for alternating gain (ISO) sensor readout, 2014). Using a sensor noise model, these works deploy a statistically informed filtering operation to reconstruct HDR pixel values. However, instead of using a fixed filter size, we introduce two novel algorithms for adaptive filter kernel selection. Unlike previous work using adaptive filter kernels (Signal Process Image Commun 29(2):203–215, 2014), our algorithms are based on analyzing the model fit and the expected statistical deviation of the estimate under the sensor noise model. Using an iterative procedure, we can then adapt the filter kernel according to the image structure and the statistical image noise. Experimental results show that the proposed filter de-noises the noisy image while preserving important image features such as edges and corners, outperforming previous methods. To demonstrate the robustness of our approach, we have used input images from raw sensor data captured with a commercial off-the-shelf camera. To further analyze our algorithm, we have also implemented a camera simulator to evaluate different gain patterns and noise properties of the sensor.

Journal ArticleDOI
TL;DR: This paper presents a high dynamic range CMOS image sensor that implements an in-pixel content-aware adaptive global tone mapping algorithm during image capture, achieving a high frame rate that allows real-time high dynamic range video.
Abstract: This paper presents a high dynamic range CMOS image sensor that implements an in-pixel content-aware adaptive global tone mapping algorithm during image capture operation. The histogram of the previous frame of an auxiliary image, which contains time stamp information, is employed as an estimate of the probability of illuminations impinging on pixels in the present frame. The compression function of illuminations, namely the Tone Mapping Curve (TMC), is calculated using this histogram. A QCIF-resolution proof-of-concept prototype has been fabricated using a 0.35 µm opto-flavored standard technology. The sensor is capable of mapping scenes with a maximum intra-frame dynamic range of 151 dB (25 bits/pixel in linear representation) by compressing them to only 7 bits/pixel, while keeping visual quality in details and contrast. The in-pixel, on-the-fly, fully parallel tone mapping achieves a high frame rate, allowing real-time HDR video (120 dB @ 30 fps).
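
A software analogue of a histogram-driven global tone-mapping curve can be sketched as histogram equalization of log-luminance compressed to 7 bits; this is a generic stand-in, not the chip's in-pixel algorithm:

```python
import numpy as np

# Much-simplified software analogue of a histogram-driven global TMC:
# spend output codes where the illumination histogram has mass, i.e. map
# each pixel through the cumulative distribution of log-luminance and
# quantize the result to 7 bits/pixel.
rng = np.random.default_rng(7)
lum = rng.lognormal(mean=0.0, sigma=3.0, size=10000)   # wide-DR scene, linear units

log_lum = np.log2(lum)
hist, edges = np.histogram(log_lum, bins=256)
cdf = np.cumsum(hist) / hist.sum()                     # monotone tone curve
tmc = np.interp(log_lum, edges[1:], cdf)               # apply the TMC per pixel
out = np.clip((tmc * 127).astype(np.uint8), 0, 127)    # 7 bits/pixel

print(out.min(), out.max())
```

As a consistency check on the abstract's numbers: 25 bits/pixel in linear representation corresponds to 20*log10(2^25) ≈ 150.5 dB, matching the quoted 151 dB intra-frame dynamic range.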

Journal ArticleDOI
TL;DR: An active inductor with high linearity and high dynamic range, including a minimum number of components, is presented.
Abstract: In this paper, an active inductor (AI) with high linearity and high dynamic range, comprising a minimum number of components, is presented. The AI is composed of a single transistor and a passive compensation network; the latter allows control of the values of both the inductance and the series resistance.


Journal ArticleDOI
TL;DR: In this article, a high dynamic range microwave power sensor compatible with the gallium arsenide monolithic microwave integrated circuit process is presented, which consists of a thermoelectric power sensor and a capacitive power sensor for low and high power detection.
Abstract: A high dynamic range microwave power sensor compatible with the gallium arsenide monolithic microwave integrated circuit process is presented. The power sensor consists of a thermoelectric power sensor and a capacitive power sensor for low and high power detection, respectively. To improve the dynamic range and optimise the impedance matching characteristic, the curled cantilever beam is utilised and the slot width of the coplanar waveguide transmission line is modified. The measured return loss is lower than –25.5 dB at 8–12 GHz. The output of the power sensor shows good linearity with the incident radio frequency power. For incident power from 0.1 to 100 mW, the sensitivities obtained with the thermoelectric power sensor are about 0.0842, 0.0752 and 0.0701 mV/mW at 8, 10 and 12 GHz, respectively. For incident power from 100 to 400 mW, the sensitivities measured with the capacitive power sensor are about 0.0400, 0.0301 and 0.199 fF/mW at 8, 10 and 12 GHz, respectively.


Journal ArticleDOI
TL;DR: An architecture design of a hardware accelerator capable of expanding the dynamic range of low dynamic range images to their 32-bit high dynamic range counterparts is presented, obtaining state-of-the-art performance in both implementations.
Abstract: In this paper, an architecture design of a hardware accelerator capable of expanding the dynamic range of low dynamic range images to their 32-bit high dynamic range counterparts is presented. The processor implements on-the-fly calculation of edge-preserving bilateral filtering and the luminance average, to elaborate a full-HD (1920 × 1080 pixels) image in 16.6 ms (60 frames/s) on field-programmable logic (FPL), processing the incoming pixels in streaming order, without frame buffers. In this way, the design avoids the use of external DRAM and can be tightly coupled with acquisition devices, enabling the implementation of smart sensors. The processor complexity can be configured with different area/speed ratios to meet the requirements of different target platforms, from FPLs to ASICs, obtaining state-of-the-art performance in both implementations.

Journal ArticleDOI
26 Oct 2015
TL;DR: This work proposes a new empirical model of local adaptation, that predicts how the adaptation signal is integrated in the retina, based on psychophysical measurements on a high dynamic range (HDR) display, and employs a novel approach to model discovery.
Abstract: The visual system constantly adapts to different luminance levels when viewing natural scenes. The state of visual adaptation is the key parameter in many visual models. While the time-course of such adaptation is well understood, little is known about the spatial pooling that drives the adaptation signal. In this work we propose a new empirical model of local adaptation that predicts how the adaptation signal is integrated in the retina. The model is based on psychophysical measurements on a high dynamic range (HDR) display. We employ a novel approach to model discovery, in which the experimental stimuli are optimized to find the most predictive model. The model can be used to predict the steady state of adaptation, as well as conservative estimates of the visibility (detection) thresholds in complex images. We demonstrate the utility of the model in several applications, such as perceptual error bounds for physically based rendering, determining the backlight resolution for HDR displays, measuring the maximum visible dynamic range in natural scenes, simulation of afterimages, and gaze-dependent tone mapping.
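A common way to realise such spatial pooling, shown here purely as an assumption-laden sketch, is a Gaussian-weighted geometric mean of luminance around the gaze position. The window shape and size below are illustrative; the paper's fitted pooling model may differ.

```python
import numpy as np

def adaptation_luminance(lum, gaze_xy, sigma_deg=1.0, ppd=30.0):
    """Estimate local adaptation luminance at a gaze position.

    Pools log-luminance with a Gaussian window whose std is `sigma_deg`
    visual degrees, at `ppd` pixels per degree, then exponentiates
    (i.e., a weighted geometric mean of luminance in cd/m^2).
    """
    h, w = lum.shape
    gx, gy = gaze_xy
    sigma_px = sigma_deg * ppd
    ys, xs = np.mgrid[0:h, 0:w]
    wgt = np.exp(-((xs - gx)**2 + (ys - gy)**2) / (2 * sigma_px**2))
    wgt /= wgt.sum()
    return float(np.exp((wgt * np.log(np.maximum(lum, 1e-6))).sum()))
```

On a uniform field the estimate reduces to the field luminance itself, which is the expected sanity check for any pooling model.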

Journal ArticleDOI
TL;DR: The proposed approach generates over- and under-exposed images by making use of a novel adaptive histogram separation scheme and utilizes a fuzzy logic based approach at the fusion stage which takes visibility of the input pixels into account.
Abstract: In this work, a high dynamic range (HDR) image generation method using a single input image is presented. The proposed approach generates over- and under-exposed images by making use of a novel adaptive histogram separation scheme. Thus, it becomes possible to eliminate the ghosting effects which generally occur when several input images containing camera/object motion are utilized in HDR imaging. Additionally, it is proposed to utilize a fuzzy logic based approach at the fusion stage which takes visibility of the input pixels into account. Since the proposed approach is computationally light-weight, it is possible to implement it on mobile devices such as smart phones and compact cameras. Experimental results show that the proposed approach is able to provide ghost-free and improved HDR performance compared to the existing methods.
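The general idea of synthesising bracketed exposures from one image and fusing them by per-pixel visibility can be sketched as follows. The fixed gamma curves stand in for the paper's adaptive histogram separation, and the Gaussian well-exposedness weights stand in for its fuzzy membership functions; all parameter values are demo choices.

```python
import numpy as np

def pseudo_exposures(img, gamma_over=0.5, gamma_under=2.0):
    """Make over-/under-exposed versions of one 8-bit image via gamma curves."""
    x = img.astype(np.float32) / 255.0
    return x ** gamma_over, x ** gamma_under     # brightened, darkened

def fuse(exposures, sigma=0.2):
    """Weighted fusion favouring mid-tone (well-exposed) pixels.

    Gaussian 'well-exposedness' weights around 0.5 play the role of
    fuzzy visibility membership functions.
    """
    stack = np.stack(exposures)                  # (N, H, W[, C]) in [0, 1]
    lum = stack if stack.ndim == 3 else stack.mean(-1)
    w = np.exp(-((lum - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-8
    w /= w.sum(0)
    if stack.ndim == 4:
        w = w[..., None]
    return (w * stack).sum(0)
```

Because both pseudo-exposures come from one shot, every pixel is perfectly aligned across the stack, which is precisely why this scheme cannot produce ghosting.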

Journal ArticleDOI
TL;DR: A multiframe high dynamic range (HDR) monocular vision system to improve the imaging quality of a traditional CMOS/charge-coupled device (CCD)-based vision system for advanced driver assistance systems (ADASs).
Abstract: In this paper, we propose a multiframe high dynamic range (HDR) monocular vision system to improve the imaging quality of traditional CMOS/charge-coupled device (CCD)-based vision systems for advanced driver assistance systems (ADASs). Conventional CMOS/CCD image sensors are confined to a limited dynamic range, which impairs the imaging quality under environments undesirable for ADAS (e.g., strong contrast between bright and dark areas, strong sunlight, headlights at night, and so on). Contrary to current HDR video solutions relying on expensive specially designed sensors, we implement a multiframe HDR algorithm that enables one common CMOS/CCD sensor to capture HDR video. Key parts of the realized HDR vision system are: 1) circular exposure control; 2) latent image calculation; and 3) exposure fusion. We have successfully realized a prototype of the monocular HDR vision system and mounted it on our SetCar platform. The effectiveness of this technique is demonstrated by our experimental results, while its bottleneck is the processing time. Further development of the proposed method could yield a low-cost HDR vision system for ADAS.
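The "latent image calculation" step, i.e., merging differently exposed frames into one radiance estimate, can be sketched as below. A simple gamma camera response and a triangle weighting are assumed in place of the system's calibrated response curve; both are standard stand-ins rather than the paper's exact formulation.

```python
import numpy as np

def latent_radiance(frames, exposure_times, gamma=2.2):
    """Merge differently exposed 8-bit frames into one latent radiance map.

    Each frame's per-pixel radiance estimate (inverse response divided by
    exposure time) is averaged with a triangle weight that de-emphasises
    under- and over-exposed samples.
    """
    num, den = 0.0, 0.0
    for frame, t in zip(frames, exposure_times):
        z = frame.astype(np.float32) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0) + 1e-4   # triangle weighting
        radiance = (z ** gamma) / t               # invert response, normalise by exposure
        num = num + w * radiance
        den = den + w
    return num / den
```

With a circular exposure schedule (short/medium/long, repeating), each new frame replaces the oldest of its exposure class, so a fresh latent image is available at the sensor's native frame rate.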

Patent
26 Dec 2015
TL;DR: In this paper, the authors described a method, an apparatus, a system and at least one machine readable medium to generate standard dynamic range videos from high dynamic range videos, the method comprising the steps of: applying an inverse gamma correction; applying a matrix multiplication that converts a color space; stretching a luminance range based at least in part on one or more stretching factors; and applying a forward gamma correction.
Abstract: Techniques are described for a method, an apparatus, a system and at least one machine readable medium to generate standard dynamic range videos from high dynamic range videos, the method comprising the steps of: applying an inverse gamma correction; applying a matrix multiplication that converts a color space; stretching a luminance range based at least in part on one or more stretching factors; and applying a forward gamma correction.
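The four claimed steps map naturally onto a few lines of array code. In this sketch the colour matrix (BT.709 RGB to XYZ), the gamma exponent and the default stretching factor are placeholder choices, since the patent leaves them open.

```python
import numpy as np

# Illustrative BT.709 RGB -> XYZ matrix; the patent does not fix a colour space.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]], dtype=np.float32)

def hdr_to_sdr(hdr_rgb, gamma=2.4, stretch=None):
    """Sketch of the patent's four steps on a float HDR frame.

    1) inverse gamma (to linear light), 2) colour-space matrix,
    3) luminance-range stretch, 4) forward gamma.
    """
    linear = hdr_rgb ** gamma                         # 1) inverse gamma correction
    xyz = linear @ RGB_TO_XYZ.T                       # 2) colour-space conversion
    if stretch is None:
        stretch = 1.0 / max(xyz[..., 1].max(), 1e-6)  # 3) fit peak luminance (Y) to 1.0
    xyz = np.clip(xyz * stretch, 0.0, 1.0)
    return xyz ** (1.0 / gamma)                       # 4) forward gamma correction
```

Deriving the stretching factor from the frame's own peak luminance is one plausible reading of "one or more stretching factors"; a broadcast system would more likely carry the factor as metadata.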

Proceedings ArticleDOI
26 May 2015
TL;DR: Evaluations of HDR streams reconstructed from SDR videos and metadata, both compressed by the HEVC standard, show that the single HDR approach is largely preferred over the SDR counterpart.
Abstract: High Dynamic Range (HDR) imaging is capable of delivering a wider range of luminance and color gamut compared to Standard Dynamic Range (SDR), offering to viewers a visual quality of experience close to that of real-life. In this study, we evaluate the quality of coded original HDR streams and HDR streams reconstructed from SDR videos and metadata, both compressed by the HEVC standard. Our evaluations have shown that the single HDR approach is largely preferred over the SDR counterpart.