
Li-Heng Chen

Researcher at University of Texas at Austin

Publications: 16
Citations: 328

Li-Heng Chen is an academic researcher at the University of Texas at Austin. He has contributed to research on video quality and artificial neural networks, has an h-index of 5, and has co-authored 13 publications receiving 182 citations. His previous affiliations include National Taiwan University.

Papers
Journal ArticleDOI

Image Quality Assessment Using Human Visual DOG Model Fused With Random Forest

TL;DR: Experimental results show that a random forest regression model trained on the proposed DOG features correlates closely with the human visual system (HVS) and remains robust when tested on publicly available databases.
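The pipeline summarized above (multi-scale Difference-of-Gaussians features regressed to quality scores with a random forest) can be sketched as follows. This is a minimal illustration, not the paper's exact feature design: the scale choices, pooling statistics, and the synthetic training data are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestRegressor

def dog_features(img, sigmas=(1, 2, 4, 8)):
    """Pool multi-scale Difference-of-Gaussians responses into a feature vector.

    The sigma ladder and the mean/std pooling are illustrative choices only.
    """
    feats = []
    for s1, s2 in zip(sigmas[:-1], sigmas[1:]):
        dog = gaussian_filter(img, s1) - gaussian_filter(img, s2)
        feats.extend([dog.mean(), dog.std()])  # simple spatial pooling
    return np.array(feats)

# Synthetic stand-in data: random "images" with placeholder quality scores
# (a real experiment would use distorted images and subjective MOS labels).
rng = np.random.default_rng(0)
X = np.stack([dog_features(rng.random((64, 64))) for _ in range(50)])
y = rng.random(50)  # placeholder MOS values in [0, 1]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
pred = model.predict(X[:1])
```

The random forest handles the nonlinear mapping from band-pass feature statistics to perceived quality without requiring feature normalization.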
Proceedings ArticleDOI

A Comparative Evaluation Of Temporal Pooling Methods For Blind Video Quality Assessment

TL;DR: A large-scale comparative evaluation assesses the capabilities and limitations of multiple temporal pooling strategies for blind VQA of user-generated videos, and an ensemble pooling model is proposed on top of the high-performing temporal pooling models.
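Temporal pooling collapses per-frame quality predictions into one video-level score; different poolers weight poor frames differently, and an ensemble can average several strategies. A toy sketch, assuming the specific poolers and the trivial averaging ensemble as illustrative choices (not the paper's evaluated models):

```python
import numpy as np

def mean_pool(scores):
    # Arithmetic mean: treats all frames equally.
    return float(np.mean(scores))

def harmonic_pool(scores, eps=1e-8):
    # Harmonic mean: emphasizes low-quality frames, matching the intuition
    # that brief severe degradations dominate perceived video quality.
    s = np.asarray(scores, dtype=float)
    return float(len(s) / np.sum(1.0 / (s + eps)))

def ensemble_pool(scores, poolers):
    # Toy ensemble: average the outputs of several pooling strategies.
    return float(np.mean([p(scores) for p in poolers]))

frame_scores = [0.9, 0.8, 0.2, 0.85]  # hypothetical per-frame predictions
video_score = ensemble_pool(frame_scores, [mean_pool, harmonic_pool])
```

Because the harmonic mean is pulled toward the worst frames, it scores this clip lower than the arithmetic mean, and the ensemble lands between the two.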
Journal ArticleDOI

ProxIQA: A Proxy Approach to Perceptual Optimization of Learned Image Compression

TL;DR: A proxy network, termed ProxIQA, mimics a perceptual model while serving as a loss layer of a learned compression network, yielding bitrate reductions of as much as 31% over MSE optimization at a matched perceptual quality (VMAF) level.
Journal ArticleDOI

Learning to Distort Images Using Generative Adversarial Networks

TL;DR: This work trains a conditional generative adversarial network (cGAN) to learn four kinds of realistic distortions and experimentally demonstrates that the learned model can reproduce the perceptual characteristics of several distortion types.
Journal ArticleDOI

Perceptual Video Quality Prediction Emphasizing Chroma Distortions

TL;DR: In this paper, a subjective experiment measured video quality under both luma and chroma distortions, introduced in isolation and together; subjective scores were collected from 34 subjects in a controlled environmental setting.
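Applying a distortion to chroma in isolation, as the study above does, requires a color space that separates luma from chroma. A minimal sketch assuming a full-range BT.601 RGB-to-YCbCr conversion and Gaussian blur as the example distortion (illustrative choices, not the paper's stimulus pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rgb_to_ycbcr(rgb):
    # Full-range BT.601 conversion; Cb/Cr are shifted to center on 0.5.
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    ycc = rgb @ m.T
    ycc[..., 1:] += 0.5
    return ycc

def blur_chroma(ycc, sigma=2.0):
    # Distort only the Cb/Cr planes, leaving the luma plane untouched,
    # so the chroma distortion can be studied in isolation.
    out = ycc.copy()
    for c in (1, 2):
        out[..., c] = gaussian_filter(out[..., c], sigma)
    return out

rng = np.random.default_rng(0)
rgb = rng.random((32, 32, 3))  # synthetic stand-in for a video frame
ycc = rgb_to_ycbcr(rgb)
distorted = blur_chroma(ycc)
```

Because only channels 1 and 2 are filtered, the luma plane of `distorted` is bit-identical to the original, which is exactly the property needed for isolating chroma degradations.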