Yang Hua
Researcher at Queen's University Belfast
Publications - 86
Citations - 1821
Yang Hua is an academic researcher at Queen's University Belfast. The author has contributed to research topics including Computer science and Deep learning, has an h-index of 16, and has co-authored 69 publications receiving 1240 citations. Previous affiliations of Yang Hua include the French Institute for Research in Computer Science and Automation and Panasonic.
Papers
Proceedings ArticleDOI
IEGAN: Multi-Purpose Perceptual Quality Image Enhancement Using Generative Adversarial Network
TL;DR: Presents IEGAN, a versatile framework capable of inferring photo-realistic natural images for both artifact removal and super-resolution simultaneously, and proposes a new loss function combining a reconstruction loss, a feature loss, and an edge loss counterpart.
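The combined objective described above can be sketched as a weighted sum of the three terms. This is a minimal illustrative sketch, not the paper's implementation: the weights, the first-difference edge detector, and the toy 1-D "images" and feature vectors below are all assumptions.

```python
# Sketch of a combined perceptual loss in the spirit of IEGAN's objective:
# reconstruction + feature + edge terms. Weights and inputs are illustrative.

def l1_loss(a, b):
    """Mean absolute error between two equal-length flat sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def edge_map(img):
    """First differences as a crude edge detector on a 1-D signal."""
    return [img[i + 1] - img[i] for i in range(len(img) - 1)]

def combined_loss(pred, target, feat_pred, feat_target,
                  w_rec=1.0, w_feat=0.1, w_edge=0.5):
    rec = l1_loss(pred, target)                       # pixel reconstruction loss
    feat = l1_loss(feat_pred, feat_target)            # feature (perceptual) loss
    edge = l1_loss(edge_map(pred), edge_map(target))  # edge-consistency loss
    return w_rec * rec + w_feat * feat + w_edge * edge

pred = [0.1, 0.5, 0.9, 0.4]
target = [0.0, 0.5, 1.0, 0.5]
loss = combined_loss(pred, target, [0.2, 0.3], [0.2, 0.4])
```

In practice the feature term would compare activations from a pretrained network rather than raw vectors; the structure of the weighted sum is the same.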
Proceedings ArticleDOI
Object-Adaptive LSTM Network for Visual Tracking
TL;DR: This paper proposes a novel object-adaptive LSTM network that effectively exploits sequence dependencies and dynamically adapts to temporal object variations by constructing an intrinsic model of object appearance and motion, and develops an efficient strategy for proposal selection.
Proceedings ArticleDOI
Analysis of the packet transferring in L2CAP layer of Bluetooth v2.x+EDR
Yang Hua, Yuexian Zou +1 more
TL;DR: This paper proposes a packet selection strategy in the baseband layer to achieve the maximum throughput for L2CAP basic mode under different channel SNRs, and applies it to transfer L2CAP PDUs by 3-DH5 baseband packet for L2CAP retransmission mode.
Posted Content
Instance Cross Entropy for Deep Metric Learning
TL;DR: This work proposes instance cross entropy (ICE), which measures the difference between an estimated instance-level matching distribution and its ground-truth one, and rescales samples' gradients to control the differentiation degree over training examples instead of truncating them by sample mining.
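An ICE-style objective of this kind can be sketched as a cross entropy between a softmax over an anchor's similarities to candidate instances and a ground-truth matching distribution. This is an illustrative sketch only: the one-hot target, the temperature value, and the toy similarity scores are assumptions, not the paper's settings.

```python
import math

def softmax(scores, temperature=1.0):
    """Convert raw similarity scores into a probability distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def instance_cross_entropy(similarities, target_index, temperature=0.1):
    """Cross entropy between the estimated instance-level matching
    distribution and a one-hot ground truth on the true match."""
    probs = softmax(similarities, temperature)
    return -math.log(probs[target_index])

# Anchor most similar to the true match (index 0) gives a low loss;
# ranking the true match last gives a high loss.
loss_good = instance_cross_entropy([0.9, 0.2, 0.1], target_index=0)
loss_bad = instance_cross_entropy([0.1, 0.2, 0.9], target_index=0)
```

Because every candidate contributes through the softmax normalizer, gradients are naturally rescaled across examples rather than hard-truncated by sample mining, which is the behavior the summary describes.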
Posted Content
Ranked List Loss for Deep Metric Learning
TL;DR: In this article, the authors propose a ranked list loss to address the shortcoming that existing methods incorporate only a fraction of data points when building the similarity structure, ignoring useful examples and making the structure less informative.
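The idea of incorporating all non-trivial data points, rather than a single mined pair or triplet, can be sketched with margin-based terms over a query's full positive and negative lists. This is a hedged sketch of a ranked-list-style loss: the boundary values `alpha` and `margin` and the toy distances are illustrative assumptions, not the paper's formulation.

```python
# Sketch of a ranked-list-style loss: every non-trivial positive and
# negative of the query contributes, instead of one mined pair/triplet.

def ranked_list_loss(pos_dists, neg_dists, alpha=1.2, margin=0.4):
    """Pull positives inside the boundary (alpha - margin) and push
    negatives beyond the boundary alpha; average the violations."""
    pos_terms = [max(0.0, d - (alpha - margin)) for d in pos_dists]
    neg_terms = [max(0.0, alpha - d) for d in neg_dists]
    pos_loss = sum(pos_terms) / len(pos_terms) if pos_terms else 0.0
    neg_loss = sum(neg_terms) / len(neg_terms) if neg_terms else 0.0
    return pos_loss + neg_loss

# One positive and one negative violate their boundaries, so the loss is
# positive; a well-separated query incurs zero loss.
loss = ranked_list_loss(pos_dists=[0.5, 1.0], neg_dists=[0.9, 1.5])
```

Averaging over all violating examples, rather than keeping only the hardest one, is what makes the learned similarity structure more informative in the sense the summary describes.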