A wide field, high dynamic range, stereographic viewer
Frequently Asked Questions (14)
Q2. How many transparencies do the authors need to use?
Since the authors wish to produce images with a dynamic range in excess of 10,000:1, the authors need to use two transparencies, layered one on top of the other.
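Why two layers suffice: the optical densities of stacked transparencies add, so their dynamic ranges multiply. A minimal sketch of this arithmetic, assuming an illustrative per-layer range of 100:1 (the per-layer figure is an assumption, not stated above):

```python
import math

# Stacked transparencies: optical densities add, so dynamic ranges multiply.
def density(dynamic_range):
    # Optical density is the log10 of the attenuation ratio.
    return math.log10(dynamic_range)

def stacked_range(range_a, range_b):
    # Total density D_a + D_b corresponds to a range of range_a * range_b.
    return 10 ** (density(range_a) + density(range_b))
```

Two hypothetical 100:1 layers therefore reach 10,000:1 when stacked.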
Q3. Why is the red image plane larger than the blue?
Due to chromatic aberration in the LEEP ARV-1 optics, it is best to precorrect the image by scaling the red image plane proportionally more than the blue, so that the red is about 1.5% larger than the blue, with the green in between.
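A small sketch of those precorrection factors, assuming blue is left unscaled and green sits exactly halfway between red and blue (the halfway placement is an assumption; the text only says "in between"):

```python
# Per-channel magnification precorrecting lateral chromatic aberration.
# The 1.5% red-vs-blue difference comes from the text; placing green
# exactly halfway between them is an illustrative assumption.
RED_OVER_BLUE = 1.015

def channel_scales():
    blue = 1.0
    red = blue * RED_OVER_BLUE     # red plane ~1.5% larger than blue
    green = (red + blue) / 2.0     # green "in between"
    return {"red": red, "green": green, "blue": blue}
```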
Q4. How do the authors reduce the resolution of one layer?
To avoid problems with transparency alignment and ghosting, the authors reduce the resolution of one layer using a Gaussian blur function.
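As a rough sketch of that blurring step, a pure-Python separable Gaussian filter (the 3σ kernel radius and edge clamping are implementation assumptions, not details from the text):

```python
import math

def gaussian_kernel(sigma, radius=None):
    # Normalized 1-D Gaussian weights; radius defaults to ~3 sigma.
    if radius is None:
        radius = max(1, int(3 * sigma))
    weights = [math.exp(-(i * i) / (2 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur_1d(row, kernel):
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), len(row) - 1)  # clamp at the edges
            acc += w * row[j]
        out.append(acc)
    return out

def gaussian_blur(image, sigma):
    # Separable: blur every row, then every column.
    kernel = gaussian_kernel(sigma)
    rows = [blur_1d(row, kernel) for row in image]
    cols = [blur_1d(col, kernel) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```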
Q5. What is the way to achieve the full dynamic range in a single image?
Although the film recorder the authors used is capable of producing nearly 1000:1 at the limit, the bottom reaches of this range have fairly large intensity steps.
Q6. Why do the authors convert to gray in the scaling layer?
Because the dynamic range of the individual color channels is not important for perception, the authors convert to gray in the scaling layer to simplify their calculations.
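A sketch of such a gray/color split, assuming a Rec. 709 gray conversion and a square-root split of luminance between the layers (both are illustrative choices, not the paper's prescription):

```python
def luminance(rgb):
    r, g, b = rgb
    # Rec. 709 gray weights -- an assumed conversion for illustration.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def split_layers(rgb):
    # A gray scaling layer (one value per pixel) times a color detail
    # layer reconstructs the original pixel. Taking the square root of
    # luminance so each layer carries roughly half the log range is an
    # illustrative choice.
    scale = luminance(rgb) ** 0.5
    detail = tuple(c / scale for c in rgb)
    return scale, detail

def recombine(scale, detail):
    return tuple(scale * c for c in detail)
```

The scaling layer is a single gray value, so only the detail layer carries color, which is what simplifies the calculations.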
Q7. Why is it helpful to use the viewer in a darkened room?
It is helpful to use the viewer in a darkened room if the scene being reproduced is very dim, as stray light can otherwise enter from the sides, obscure the view, and adversely affect viewer adaptation.
Q8. What is the view vector for x, y, and z?
v̂_p = v̂_x·x + v̂_y·y + v̂_z·√(1 − x² − y²)   (1)
Since each image covers only 120° rather than 180°, only the corners of the image are perpendicular to the principal view axis.
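A small sketch of equation (1), assuming (x, y) are normalized fisheye image coordinates and v̂_x, v̂_y, v̂_z form an orthonormal camera basis (the basis convention is an assumption about the setup):

```python
def view_vector(x, y, vx, vy, vz):
    # Equation (1): view direction for normalized fisheye image
    # coordinates (x, y); vx, vy, vz is an orthonormal camera basis.
    z = (max(0.0, 1.0 - x * x - y * y)) ** 0.5
    return tuple(vx[i] * x + vy[i] * y + vz[i] * z for i in range(3))
```

With an orthonormal basis the result is a unit vector, as the hat notation implies.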
Q9. What did the authors do to improve the quality of the image?
Although the authors could have used a longer focal length on their camera and thus captured better resolution in the chart image, the authors needed to capture as much of the surround as possible in order to have proper adaptation in the HDR viewer.
Q10. What did the authors find difficult in the quantitative measurement process?
The quantitative measurement process presented some challenges, as there are no luminance probes with sufficient spatial resolution and freedom from stray light to measure the very large gradients the authors produce in the viewer.
Q11. How does one achieve a stereoscopic view?
For stereoscopic viewing, one must capture or render two images, and this is done by simply shifting the real or virtual camera by the average interocular distance, which is approximately 2.5” (6.4 cm).
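The shift can be sketched as offsetting the camera along its unit "right" axis. Splitting the 6.4 cm separation symmetrically about the original position is an assumption; the text only fixes the total distance between the two views:

```python
IPD_CM = 6.4  # average interocular distance, ~2.5 inches

def stereo_eye_positions(camera_pos, right_vec, ipd=IPD_CM):
    # Offset the real or virtual camera half the interocular distance
    # each way along its unit "right" axis to get the two eye positions.
    half = ipd / 2.0
    left = tuple(p - half * r for p, r in zip(camera_pos, right_vec))
    right = tuple(p + half * r for p, r in zip(camera_pos, right_vec))
    return left, right
```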
Q12. How many light steps are needed to avoid banding?
The useful range, where the intensity steps stay below the visible threshold needed to avoid banding artifacts, is closer to 100:1.
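The step count behind such a figure can be sketched with a multiplicative quantization model, assuming a ~1% Weber fraction as the visibility threshold (the 1% value is an illustrative assumption):

```python
import math

def steps_needed(dynamic_range, weber_fraction=0.01):
    # Number of multiplicative quantization steps so that each step
    # stays below the just-noticeable contrast. A ~1% Weber fraction
    # is an assumed threshold for illustration.
    return math.ceil(math.log(dynamic_range) / math.log(1.0 + weber_fraction))
```

At that threshold even a 100:1 range needs roughly 463 distinct levels, more than a single 8-bit channel provides, which illustrates why banding limits the usable range.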
Q13. What is the problem with a wide-field display?
The challenge of achieving the necessary resolution and bandwidth, although significant, may be met with today’s PC graphics hardware.
Q14. How is the angle from the principal view axis mapped in the image?
In a hemispherical fisheye projection, the distance from the center of the image is equal to the sine of the angle from the principal view axis.
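This sine mapping and its inverse as a one-line sketch, assuming the image radius is normalized so that a 90° angle lands at radius 1:

```python
import math

def fisheye_radius(theta):
    # Hemispherical fisheye: normalized distance from the image center
    # equals the sine of the angle from the principal view axis.
    return math.sin(theta)

def view_angle(r):
    # Inverse mapping: normalized image radius back to view angle.
    return math.asin(r)
```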