Joe Yuchieh Lin
Researcher at University of Southern California
Publications - 11
Citations - 582
Joe Yuchieh Lin is an academic researcher at the University of Southern California. His research focuses on video quality and subjective video quality assessment. He has an h-index of 8 and has co-authored 10 publications receiving 349 citations.
Papers
Proceedings ArticleDOI
MCL-JCV: A JND-based H.264/AVC video quality assessment dataset
Haiqiang Wang, Weihao Gan, Sudeng Hu, Joe Yuchieh Lin, Lina Jin, Longguang Song, Ping Wang, Ioannis Katsavounidis, Anne Aaron, C.-C. Jay Kuo, +9 more
TL;DR: Experimental results demonstrate that the proposed JND analysis performed in the difference domain, called the D-method, achieves a lower BIC (Bayesian information criterion) value than the previously proposed G-method.
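The model comparison described above relies on the standard BIC formula. As a hedged illustration (the function name and values here are hypothetical, not taken from the paper), a lower BIC indicates a better trade-off between fit and model complexity:

```python
import numpy as np

def bic(log_likelihood, k, n):
    """Bayesian information criterion: k * ln(n) - 2 * ln(L).
    k = number of model parameters, n = number of samples.
    Lower values indicate a preferred model."""
    return k * np.log(n) - 2.0 * log_likelihood

# Hypothetical comparison: two models with equal fit, different complexity.
# The simpler model (fewer parameters) gets the lower, i.e. better, score.
simple = bic(log_likelihood=-100.0, k=3, n=50)
complex_ = bic(log_likelihood=-100.0, k=5, n=50)
```

In the paper's setting, the D-method and G-method would each be fitted to the JND data and the one with the lower BIC preferred.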
Journal ArticleDOI
MCL-V
TL;DR: A high-definition video quality assessment (VQA) database is presented that captures two typical distortion types in video services, namely "compression" and "compression followed by scaling".
Journal ArticleDOI
Statistical Study on Perceived JPEG Image Quality via MCL-JCI Dataset Construction and Analysis
Lina Jin, Joe Yuchieh Lin, Sudeng Hu, Haiqiang Wang, Ping Wang, Ioannis Katsavounidis, Anne Aaron, C.-C. Jay Kuo, +7 more
TL;DR: A new image quality database, MCL-JCI, is constructed and analyzed; it is observed that viewers can differentiate only a finite number of quality levels, so the perceived quality plot is a stair function of the coding bit rate.
Journal ArticleDOI
A ParaBoost Method to Image Quality Assessment
TL;DR: An ensemble method for full-reference image quality assessment (IQA) based on the parallel boosting (ParaBoost) idea is proposed and shown to outperform existing IQA methods by a significant margin.
Proceedings ArticleDOI
A fusion-based video quality assessment (FVQA) index
TL;DR: This work studies the visual quality of streaming video and proposes a fusion-based video quality assessment (FVQA) index, in which fusion coefficients learned from training video samples in the same group are used to predict the quality of a test video.
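The fusion idea above can be sketched in miniature. This is not the paper's actual FVQA pipeline (which groups videos by content and uses its own component metrics); it is a minimal least-squares sketch, assuming a single group, hypothetical function names, and synthetic metric scores, showing how fusion coefficients could be learned from training samples and applied to new ones:

```python
import numpy as np

def learn_fusion_weights(scores, mos):
    """Learn fusion coefficients by ordinary least squares.
    scores: (n_videos, n_metrics) matrix of individual metric scores.
    mos: (n_videos,) subjective mean opinion scores for training videos."""
    X = np.column_stack([scores, np.ones(len(scores))])  # append a bias term
    w, *_ = np.linalg.lstsq(X, mos, rcond=None)
    return w

def fused_quality(scores, w):
    """Predict the fused quality index for videos with the learned weights."""
    X = np.column_stack([scores, np.ones(len(scores))])
    return X @ w

# Synthetic demo: the true quality is an exact linear combination of
# two metric scores, so least squares recovers the fusion weights.
rng = np.random.default_rng(0)
train_scores = rng.random((20, 2))
train_mos = 2.0 * train_scores[:, 0] + 0.5 * train_scores[:, 1] + 1.0
w = learn_fusion_weights(train_scores, train_mos)
```

In practice the component scores would come from separate quality metrics, and a per-group model would be selected for each test video.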