Topic

High dynamic range

About: High dynamic range is a research topic. Over the lifetime, 4280 publications have been published within this topic receiving 76293 citations. The topic is also known as: HDR.


Papers
Journal ArticleDOI
TL;DR: In this article, the A-Projection algorithm is applied to the Low Frequency Array (LOFAR), a precursor of the Square Kilometer Array, to deal with non-unitary station beams and non-diagonal Mueller matrices.
Abstract: The high sensitivities and large fields of view targeted by the new generation of interferometers require dynamic ranges of order ~1:10^6 to 1:10^8 in the case of the Square Kilometer Array. The main problem is the calibration and correction of the Direction Dependent Effects (DDE) that can affect the electromagnetic field (antenna beams, ionosphere, Faraday rotation, etc.). As shown earlier, A-Projection is a fast and accurate algorithm that can potentially correct for any given DDE in the imaging step. With its very wide field of view, low operating frequency (~30-250 MHz), long baselines, and complex station-dependent beam patterns, the Low Frequency Array (LOFAR) is certainly the most complex SKA precursor. In this paper we present several implementations of A-Projection applied to LOFAR that can deal with non-unitary station beams and non-diagonal Mueller matrices. The algorithm is designed to correct for all the DDE, including individual antenna effects, projection of the dipoles on the sky, beam forming, and ionospheric effects. We describe a few important algorithmic optimizations related to LOFAR's architecture that allow us to build a fast imager. Based on simulated datasets we show that A-Projection can give dramatic dynamic range improvements for both phased array beams and ionospheric effects. We will use this algorithm for the construction of the deepest extragalactic surveys, comprising hundreds of days of integration.

132 citations
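The core idea behind A-Projection, that a direction-dependent complex beam multiplies the sky during observation and imaging must weight by its conjugate, can be illustrated with a toy 1-D direct-Fourier sketch. Everything below (beam shape, baseline count, source position) is illustrative and not taken from the paper:

```python
import numpy as np

# Toy 1-D sketch of the beam-correction idea behind A-Projection.
rng = np.random.default_rng(0)

n_dir = 64
l = np.linspace(-0.5, 0.5, n_dir)        # direction cosines
sky = np.zeros(n_dir); sky[20] = 1.0     # a single unit point source

A = np.exp(-(l / 0.4) ** 2) * np.exp(1j * 0.5 * l)   # assumed complex station beam
u = rng.uniform(-40, 40, 200)            # baseline coordinates [wavelengths]

# Forward model: visibilities of the beam-attenuated sky
K = np.exp(-2j * np.pi * np.outer(u, l))             # Fourier kernel
vis = K @ (A * sky)

# Naive dirty image (ignores the beam) vs. beam-corrected image: weighting by
# conj(A) and normalizing by |A|^2 restores the source's true flux.
dirty = (K.conj().T @ vis).real / len(u)
corrected = (np.conj(A) * (K.conj().T @ vis)).real / (len(u) * np.abs(A) ** 2)

# The corrected image recovers unit flux at the source; the naive image is
# attenuated by the beam there.
print(np.argmax(np.abs(corrected)), round(corrected[20], 3))
```

In the real algorithm this conjugate-beam weighting is applied as a convolution in the uv-domain during gridding rather than as an image-plane multiplication, which is what makes per-baseline, time-variable beams tractable.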

Proceedings ArticleDOI
03 Nov 2004
TL;DR: A new model of eye adaptation based on physiological data is presented, which can function either as a static local tone mapping operator for a single high dynamic range image, or as a temporal adaptation model taking into account time elapsed and intensity of preadaptation for a dynamic sequence.
Abstract: In the real world, the human eye is confronted with a wide range of luminances, from bright sunshine to low night light. Our eyes cope with this vast range of intensities by adaptation: changing their sensitivity to be responsive at different illumination levels. This adaptation is highly localized, allowing us to see both dark and bright regions of a high dynamic range environment. In this paper we present a new model of eye adaptation based on physiological data. The model, which can be easily integrated into existing renderers, can function either as a static local tone mapping operator for a single high dynamic range image, or as a temporal adaptation model taking into account time elapsed and intensity of preadaptation for a dynamic sequence. We finally validate our technique with a high dynamic range display and a psychophysical study.

131 citations
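The general mechanism behind such physiological models can be sketched as a Naka-Rushton style photoreceptor response driven by an adaptation level that follows the stimulus with an exponential time course. The constants below (exponent, time constant) are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def response(lum, sigma, n=0.73):
    # R/R_max = L^n / (L^n + sigma^n): compresses luminance into [0, 1)
    return lum ** n / (lum ** n + sigma ** n)

def adapt(sigma_prev, lum, dt, tau=0.5):
    # Adaptation level drifts toward the current luminance with time constant tau
    return sigma_prev + (lum - sigma_prev) * (1.0 - np.exp(-dt / tau))

# A sudden step from dim (1 cd/m^2) to bright (1000 cd/m^2) light:
sigma = 1.0
out = []
for step in range(50):
    sigma = adapt(sigma, 1000.0, dt=0.1)
    out.append(response(1000.0, sigma))

# The initial response overshoots, then decays toward 0.5 as sigma
# converges to the new luminance, mimicking the eye settling in.
print(round(out[0], 3), round(out[-1], 3))
```

A static local operator falls out of the same response function by taking sigma from a spatial neighborhood of each pixel instead of from the temporal history.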

Proceedings ArticleDOI
18 May 2008
TL;DR: A fully asynchronous, time-based image sensor, which is characterized by high temporal resolution, low data rate, near complete temporal redundancy suppression, high dynamic range, and low power consumption is proposed.
Abstract: In this paper we propose a fully asynchronous, time-based image sensor, which is characterized by high temporal resolution, low data rate (near complete temporal redundancy suppression), high dynamic range, and low power consumption. Autonomous pixels asynchronously communicate the detection of relative changes in light intensity, and the time from change detection to the threshold crossing of a photocurrent integrator, so encoding the instantaneous pixel illumination shortly after the time of a detected change. The chip is being implemented in a standard 0.18 µm CMOS process and measures less than 10×8 mm² at 304×240 pixel resolution.

130 citations
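The time-based encoding described above can be sketched in a few lines: after a change is detected, the pixel integrates photocurrent until a fixed charge threshold is crossed, so the crossing time is inversely proportional to illumination and the readout recovers intensity from the timestamp alone. The threshold charge and photocurrents below are illustrative, not chip specifications:

```python
import math

Q_THRESH = 1e-12        # assumed integrator threshold charge [C]

def encode(photocurrent):
    # Brighter pixel -> larger current -> earlier threshold crossing
    return Q_THRESH / photocurrent          # crossing time [s]

def decode(t_cross):
    # Readout recovers illumination from the crossing time alone
    return Q_THRESH / t_cross               # photocurrent [A]

bright, dim = 1e-9, 1e-12                   # 1 nA vs 1 pA photocurrents
t_b, t_d = encode(bright), encode(dim)
print(t_b < t_d)                            # True: bright pixels fire first
print(math.isclose(decode(t_b), bright))    # True: the encoding is invertible
```

Because time can be measured over many decades while a voltage swing cannot, this scheme is what gives such sensors their high dynamic range; here the two test pixels already span a 1000:1 intensity ratio.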

Patent
06 Feb 2013
TL;DR: In this paper, the authors present array cameras and imager arrays configured to capture high dynamic range light field image data, along with methods of capturing such data in accordance with embodiments of the invention.
Abstract: Array cameras and imager arrays configured to capture high dynamic range light field image data and methods of capturing high dynamic range light field image data in accordance with embodiments of the invention are disclosed. Imager arrays in accordance with many embodiments of the invention include multiple focal planes with associated read out and sampling circuitry. The sampling circuitry controls the conversion of the analog image information into digital image data. In certain embodiments, the sampling circuitry includes an Analog Front End (AFE) and an Analog to Digital Converter (ADC). In several embodiments, the AFE is used to apply different amplification gains to analog image information read out from pixels in a given focal plane to provide increased dynamic range to digital image data generated by digitizing the amplified analog image information. The different amplification gains can be applied in a predetermined manner or on a pixel by pixel basis.

129 citations
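The multi-gain idea in the abstract can be sketched as follows: read the same analog signal through different AFE gains, keep the highest-gain sample that did not saturate, and divide by its gain to recover the scene value. The gain values and the 10-bit ADC are illustrative assumptions, not values from the patent:

```python
GAINS = [8.0, 1.0]                 # high gain for shadows, low gain for highlights
FULL_SCALE = 1023                  # assumed 10-bit ADC

def adc(analog, gain):
    # Amplify, digitize, and clip at the converter's full scale
    return min(int(analog * gain), FULL_SCALE)

def hdr_sample(analog):
    for g in GAINS:                              # prefer the highest gain...
        code = adc(analog, g)
        if code < FULL_SCALE:                    # ...that did not saturate
            return code / g                      # normalize back to input scale
    return FULL_SCALE / GAINS[-1]                # everything clipped: report max

print(hdr_sample(50.0))    # resolved with the fine 8x gain
print(hdr_sample(900.0))   # 8x would clip, so it falls back to the 1x path
```

The high-gain path quantizes shadows 8 times more finely while the low-gain path extends headroom, which is exactly the dynamic range extension the sampling circuitry is claimed to provide.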

Proceedings ArticleDOI
07 Sep 2017
TL;DR: A novel, accurate tightly-coupled visual-inertial odometry pipeline for event cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions, such as high-speed motion or high dynamic range scenes.
Abstract: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. We propose a novel, accurate tightly-coupled visual-inertial odometry pipeline for such cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions, such as high-speed motion or high dynamic range scenes. The method tracks a set of features (extracted on the image plane) through time. To achieve that, we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure, yielding motion-compensated event frames. We then combine these feature tracks in a keyframe-based, visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera's 6-DOF pose, velocity, and IMU biases. The proposed method is evaluated quantitatively on the public Event Camera Dataset [19] and significantly outperforms the state-of-the-art [28], while being computationally much more efficient: our pipeline can run much faster than real-time on a laptop and even on a smartphone processor. Furthermore, we demonstrate qualitatively the accuracy and robustness of our pipeline on a large-scale dataset, and an extremely high-speed dataset recorded by spinning an event camera on a leash at 850 deg/s.

128 citations
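The motion-compensation step described above can be sketched in miniature: events in a spatio-temporal window are warped back to a common reference time using the estimated image-plane motion, so events fired by the same edge pile up into a sharp frame. The real pipeline estimates the motion jointly with scene structure; the known constant 2 px/s flow below is a toy assumption:

```python
import numpy as np

H = W = 32
flow = np.array([2.0, 0.0])        # assumed image-plane velocity [px/s]

# Toy events: a vertical edge at x=10 observed at t=0 fires again at t=1
# after moving 2 px to the right (x=12).
events = [(10, y, 0.0) for y in range(H)] + [(12, y, 1.0) for y in range(H)]

def event_frame(events, t_ref, compensate):
    frame = np.zeros((H, W))
    for x, y, t in events:
        if compensate:             # warp the event back to the reference time
            x = x - flow[0] * (t - t_ref)
            y = y - flow[1] * (t - t_ref)
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < W and 0 <= yi < H:
            frame[yi, xi] += 1
    return frame

blurred = event_frame(events, t_ref=0.0, compensate=False)
sharp = event_frame(events, t_ref=0.0, compensate=True)

# Without compensation the edge smears over two columns; with it, one.
print((blurred.sum(axis=0) > 0).sum(), (sharp.sum(axis=0) > 0).sum())
```

Feature tracking then runs on these sharpened frames, which is what lets the method keep stable tracks during the high-speed motion where a standard camera would blur.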


Network Information
Related Topics (5)
Pixel
136.5K papers, 1.5M citations
88% related
Image processing
229.9K papers, 3.5M citations
86% related
Convolutional neural network
74.7K papers, 2M citations
83% related
Feature extraction
111.8K papers, 2.1M citations
83% related
Image segmentation
79.6K papers, 1.8M citations
82% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    122
2022    263
2021    164
2020    243
2019    238
2018    262