
Gyorgy Denes

Researcher at University of Cambridge

Publications: 12
Citations: 803

Gyorgy Denes is an academic researcher at the University of Cambridge. His research focuses on rendering (computer graphics) and computer science more broadly. He has an h-index of 5, having co-authored 9 publications that have received 502 citations.

Papers
Journal ArticleDOI

HDR image reconstruction from a single exposure using deep CNNs

TL;DR: This paper addresses the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure, and proposes a deep convolutional neural network (CNN) specifically designed to address the challenges of predicting HDR values.
Journal ArticleDOI

FovVideoVDP: a visible difference predictor for wide field-of-view video

TL;DR: FovVideoVDP is a video difference metric that models the spatial, temporal, and peripheral aspects of perception. It is derived from psychophysical studies of the early visual system, modeling spatio-temporal contrast sensitivity, cortical magnification, and contrast masking.
Journal ArticleDOI

A perceptual model of motion quality for rendering with adaptive refresh-rate and resolution

TL;DR: A perceptual visual model is proposed that predicts the quality of motion given an object's velocity and the predictability of its motion. The paper also demonstrates an on-the-fly motion-adaptive rendering algorithm that adjusts the refresh rate of a G-Sync-capable monitor based on a given rendering budget and observed object motion.
Journal ArticleDOI

Temporal Resolution Multiplexing: Exploiting the limitations of spatio-temporal vision for more efficient VR rendering

TL;DR: The paper explores the parameter space of the proposed technique and demonstrates that its perceived quality is indistinguishable from full-resolution rendering, making it an attractive alternative to reprojection and to reducing the resolution of all frames.