
Zijian Zhang

Researcher at Leibniz University of Hanover

Publications: 22
Citations: 169

Zijian Zhang is an academic researcher from Leibniz University of Hanover. The author has contributed to research in the topics of Computer science & Synchronization (computer science). The author has an h-index of 3 and has co-authored 9 publications receiving 53 citations.

Papers
Journal ArticleDOI

Dissonance Between Human and Machine Understanding

TL;DR: In this paper, the authors present a large-scale crowdsourcing study that reveals and quantifies the dissonance between human and machine understanding, through the lens of an image classification task, and seek to answer the following questions: which (well-performing) complex ML models are closer to humans in their use of features to make accurate predictions? How does task difficulty affect the feature selection capability of machines in comparison to humans? Are humans consistently better at selecting features that make image recognition more accurate?
Proceedings ArticleDOI

Explain and Predict, and then Predict Again

TL;DR: This article proposes using multi-task learning in the explanation generation phase, effectively trading off explanation and prediction losses, and then applying a second prediction network to just the extracted explanations to optimize task performance.
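The two-phase setup described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function and parameter names (`multitask_loss`, `extract_explanation`, `alpha`, `threshold`) are hypothetical, and the toy loss values stand in for real model losses.

```python
# Hypothetical sketch of the "explain and predict, then predict again" idea.
# Phase 1: a model is trained on a weighted sum of explanation and prediction
# losses, with `alpha` controlling the trade-off between the two objectives.
# Phase 2: a separate predictor sees only the tokens the explainer selected.

def multitask_loss(expl_loss, pred_loss, alpha=0.5):
    """Weighted trade-off between explanation and prediction losses (phase 1)."""
    return alpha * expl_loss + (1.0 - alpha) * pred_loss

def extract_explanation(tokens, token_scores, threshold=0.5):
    """Keep only tokens the explainer scored as evidence (the rationale)."""
    return [t for t, s in zip(tokens, token_scores) if s >= threshold]

# Toy usage: combine phase-1 losses, then restrict phase-2 input to the rationale.
loss = multitask_loss(expl_loss=0.8, pred_loss=0.4, alpha=0.5)
rationale = extract_explanation(
    ["the", "movie", "was", "brilliant"], [0.1, 0.2, 0.1, 0.9]
)
```

The second prediction network would then be trained on `rationale` alone, which is what ties task performance to explanation quality.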
Posted ContentDOI

FaxPlainAC: A Fact-Checking Tool Based on EXPLAINable Models with HumAn Correction in the Loop

TL;DR: FaxPlainAC is a tool that gathers user feedback on the output of explainable fact-checking models: whether the input fact is judged true or not, along with the supporting/refuting evidence considered by the model.